
Leveraging AI
Dive into the world of artificial intelligence with 'Leveraging AI,' a podcast tailored for forward-thinking business professionals. Each episode brings insightful discussions on how AI can ethically transform business practices, offering practical solutions to day-to-day business challenges.
Join our host Isar Meitis (4 time CEO), and expert guests as they turn AI's complexities into actionable insights, and explore its ethical implications in the business world. Whether you are an AI novice or a seasoned professional, 'Leveraging AI' equips you with the knowledge and tools to harness AI's power responsibly and effectively. Tune in weekly for inspiring conversations and real-world applications. Subscribe now and unlock the potential of AI in your business.
186 | Background check for AI Agents, AI work force benefits challenged, AI risks mounting, and more important AI news for the week ending on May 2nd 2025
👉 Fill out the listener survey - https://services.multiplai.ai/lai-survey
👉 Learn more about the AI Business Transformation Course starting May 12 — spots are limited - http://multiplai.ai/ai-course/ Save $100 with promo code LEVERAGINGAI100
Is your AI assistant more trustworthy than your last hire?
From agent background checks to economic stagnation (despite AI adoption), this week’s episode dives into the critical — and sometimes comical — ways AI is reshaping the modern workplace.
You’ll hear why smarter tools don’t always mean smarter business outcomes, how AI ethics protocols are going mainstream, and which job roles are already on the chopping block thanks to automation. Also: Meta's bots get inappropriate, Microsoft’s Copilot sees everything, and a $60B lab still hasn’t given up on smart glasses.
Bottom line: AI is moving fast, but business value and regulation are trying to catch up. Stay sharp, lead smart.
💡 In this session, you'll discover:
- Why Carnegie Mellon’s new protocol is giving AI agents “digital résumés” — and what that means for hiring AIs.
- The reason AI time savings aren’t translating into better productivity (hint: fragmented tasks + no strategy).
- How Zapier’s new integration with Claude may change the game for small businesses.
- The staggering reality behind tech layoffs — and the single most important skill for job security.
- Why students and employees now prefer AI over mentors or managers (and why that’s a problem).
- A 10–20% chance AI wipes us out? The “godfather of AI” thinks so.
- Meta’s AI bots roleplaying with minors and how it sparked a safety scandal.
- Microsoft’s “Recall” feature sees everything — and security teams are sweating.
- How Apple and Amazon are (finally) racing to catch up in AI assistant tech.
About Leveraging AI
- The Ultimate AI Course for Business People: https://multiplai.ai/ai-course/
- YouTube Full Episodes: https://www.youtube.com/@Multiplai_AI/
- Connect with Isar Meitis: https://www.linkedin.com/in/isarmeitis/
- Join our Live Sessions, AI Hangouts and newsletter: https://services.multiplai.ai/events
If you’ve enjoyed or benefited from some of the insights of this episode, leave us a five-star review on your favorite podcast platform, and let us know what you learned, found helpful, or liked most about this show!
Hello and welcome to a Weekend News episode of the Leveraging AI Podcast, a podcast that shares practical, ethical ways to improve efficiency, grow your business, and advance your career. This is Isar Meitis, your host, and like every week we have a jam-packed episode because a lot has happened in the AI world this week. We are going to start with three interesting deep dive topics. We're going to talk about the new integrations of the agent universe, between agents themselves as well as with the tools and systems that we use. We're going to talk about the impact of AI on the job market, current and future, and we're going to talk about risks in AI that keep on rising. And then we're gonna dive into a very wide range of rapid fire topics with some interesting news from all the usual suspects, Microsoft, Anthropic, OpenAI, and more, as well as some interesting developments in robotics. So let's get started.

As I mentioned, our first topic is going to be about agents and integrations of AI into different systems that we know. In the past six months, we've seen two very important steps in the direction of having more agents basically everywhere. One is MCP, the open source protocol that Anthropic released that allows AI agents to connect to tools and data, which is a critical aspect, and that has taken the agent world by storm. Now basically everything has an MCP server, which lets you connect multiple tools, systems, and databases to AI agents that you're developing in a standardized way, and that has dramatically changed the way companies need to approach integration of agents into existing systems and tools. The second one was announced recently, just a couple of weeks ago, by Google, and it's called A2A, which is another open source protocol that is supposed to establish how one agent talks to another agent, hence A2A, agent to agent. But now there is a new protocol that was released by Carnegie Mellon researchers called LOKA, and it's a protocol that standardizes AI agent identity, accountability, and ethics. If you think about it, this is a brilliant approach that will allow you to give an agent, if you want, a name, but also a background check. It will allow every agent to be identified and to show what it has done before, both from an ethical perspective as well as from a success perspective in specific aspects of work. The goal here is to allow humans and agents, before they hire or start working with a new agent, to basically do a background check on it and look at different layers. One of them is an ethics layer, another one is an accountability layer, and so on, including what this particular agent's performance was in the past. I think this is absolutely brilliant, and I assume we will see mass adoption of this as well, maybe not from Carnegie Mellon, but if not from them, then from somebody else following the same kind of ideas. Think about the human parallel of this. If you are going to hire a person, you will do some kind of a check about their experience. You will ask for recommendations. If you're going to hire somebody from Upwork to do contracted work for you, you're gonna check their reviews, you're gonna check how many jobs they completed successfully, and so on. It is the same thing, only for AI agents, which makes a lot of sense to me.
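Going back to MCP for a second, to make the idea concrete for readers of these notes: below is a minimal sketch of what an MCP server can look like, assuming the official MCP Python SDK and its FastMCP helper. The invoice-lookup tool and its data are made up purely for illustration, not taken from any real product.

```python
# Minimal sketch of an MCP server, assuming the official MCP Python SDK
# (pip install mcp). The "invoice lookup" tool and its data are illustrative only.
from mcp.server.fastmcp import FastMCP

# Name the server; an MCP-aware agent (e.g., Claude) discovers its tools when it connects.
mcp = FastMCP("invoice-tools")

@mcp.tool()
def get_invoice_status(invoice_id: str) -> str:
    """Return the status of an invoice by ID (stubbed for illustration)."""
    fake_db = {"INV-001": "paid", "INV-002": "flagged: amount mismatch"}
    return fake_db.get(invoice_id, "not found")

if __name__ == "__main__":
    # Runs the server over stdio so a local agent or client can connect to it.
    mcp.run()
```

The point of the standard is that any MCP-aware agent can discover and call a tool like this the same way, no matter which system sits behind it.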
Anthropic, on the other hand, just announced on May 2nd that they now support integrations directly from Claude to leading productivity apps that many businesses use, such as Asana, PayPal, and Zapier. It also allows you to use the recently announced Research feature, basically deep research that can run searches of up to 45 minutes, to query the tools you allow it to connect to. So it can now search through your entire Asana boards and everything anyone has put into any task, and provide answers based on that information within Claude's research, combining it with web search as well. This is currently available on the Max, Team, and Enterprise plans. So not everybody, but it is available to anybody with a business-oriented license level. Now, Anthropic has been pushing very, very hard on two aspects of its growth. One has been the coding world, and we've talked about this many times, and the other is definitely their enterprise appeal, and that has driven their revenue from a $1 billion annualized rate in December of 2024 to $2 billion right now, just four months later, which is incredibly impressive, and it shows how building tools that appeal to enterprises can drive huge and rapid adoption of specific AI tools. I also find the Zapier integration very interesting. I didn't get a chance to test it yet; again, this was just announced a couple of days ago. But if it does what I think it will do, meaning you'll be able to talk to Claude and Claude will be able to bring information from any application that Zapier is connected to, which is practically any application we know, it means you'll be able to research data from anything you want without any need for additional integrations. I think this will provide huge value, including to small businesses that do not have an IT team to do these integrations, because with just one connection to Zapier, you can now connect to and query data from a huge variety of sources. Again, I see this as a huge power multiplier, and it'll be very interesting to see how people apply this. I'm sure we're gonna start seeing very interesting use cases in the next few weeks, and I will definitely share with you my personal experience as well as what other people are doing with it.

But staying on the topic of productivity in the business, a very interesting piece of research was shared by Forbes this past week, and it shows that the time saved from generative AI implementation and automation doesn't necessarily lead to higher-value-added tasks, as the anticipation or the promise suggests. A Microsoft survey found that Copilot users saved 14 minutes every single day, but because it was so fragmented into small increments of sending an email here or replying to something or doing a little research there, it doesn't aggregate in a way that is meaningful enough to actually be replaced by high-value tasks. The other reason they found that this saving isn't getting translated into high-value tasks is the fact that companies usually don't have a backlog of high-value tasks; that work is already distributed between different people in the company. And the fact that you now have a few minutes to work on something doesn't mean you can click someplace and find high-value tasks you can do in order to make the company overall more efficient.
And what it's basically suggesting is that one of the things companies need to do from a strategic perspective is reevaluate the way they distribute work, and also reevaluate how they plan the different types of tasks that need to be performed, in order to deliberately have the savings aggregated in specific areas and have tasks ready for the relevant people to perform in order to gain the benefits. Otherwise, what's actually happening is, yes, the AI is saving time, but there's no way to actually use that time for more effective or higher-value tasks, which means you're still paying your employees the same thing and you're still getting the same amount of output. You're just giving your employees more, quote unquote, free time in between other tasks. The other thing they found is that significantly bigger tasks, like decoding 30-page spreadsheets or flagging invoices for discrepancies, are obviously much more valuable than drafting emails or answering customer service complaints, which are smaller and more fragmented tasks. I don't know if I ever shared this on this podcast, but I have a startup company that I'm running in addition to running Multiplai, and what that startup does is flag invoice discrepancies and actually update ERPs and accounting systems based on that. So I'm really excited to hear that this is one of the most valuable use cases Microsoft just found in their survey. But the bottom line is, if you're running a business and you have multiple employees, you have to start thinking about how to change the way you approach task assignment and how you aggregate tasks in a different way than you're doing right now, so the company and each employee of the company can benefit from AI automation and then translate those savings into more valuable outcomes.

Another interesting survey on the same topic released this week was done in Denmark, and it covered over 25,000 workers across 7,000 companies there. The survey was done through 2023 and 2024, and what it found is that despite AI automation, actual economic outcomes remained unchanged. The specific quote in the summary of this research says, "When we look at the outcomes, it really has not moved the needle." Now, the study focused specifically on roles that are clearly impacted by AI: accountants, customer support, financial advisors, HR, IT support, journalists, legal professionals, marketers, office clerks, software developers, and teachers. So all the roles that are often deemed AI vulnerable. What they found is that AI users reported saving only about 2.8% of work hours, which translates into about one hour per week, which is a lot less than the expected gains. That's despite the fact that 64 to 90% of workers, depending on the specific role, said they are using AI tools that are delivered and driven by company investments, so not their own personal usage. So here are my thoughts on this particular study. First of all, this study is from 2023 to 2024, and I think there have been huge changes in the way companies and individuals learn how to use AI between 2023 and now, and even between 2024 and now.
And so I think that's problem number one with this research. Problem number two, which we'll touch more upon with some additional news from this week, is that one of the biggest gaps companies have today is training, beyond just delivering the tools, which I see time and time again when I meet with companies. Oh, we gave everybody Copilot licenses. Oh, we now have the Team license or the Enterprise license of ChatGPT. But what about training? If you don't train your employees and teach them exactly how to use these tools, in which use cases they're helpful and in which use cases you shouldn't use them because they may raise issues, then you are not going to gain the benefits, and you might expose your company and your employees to actually harmful results. And so this might be another thing that is not mentioned in this research; all they measured is savings. I can tell you about specific use cases where some of my clients are saving an hour and a half to two hours a day for specific employees in specific use cases, or several hours per week, and not just one hour. And in many cases it saves time on tasks that are on the critical path of what these employees are doing. So in addition to actually saving them time, it is helping achieve critical company milestones in a more consistent and faster way, which is a huge benefit to the company. So while it's an important data point, I don't know how well this research was done, I don't know how well these companies trained their people, and the data is not from 2025 or the past six months; it's actually up to two and a half years old, which I think makes a lot of it less relevant.

Now, on the flip side of that, a recent report about tech layoffs shows a pretty gloomy view of what AI is doing to the tech world. 54% of tech hiring managers expect layoffs in 2025, with 45% of these layoffs tied to AI and automation as their drivers. The main reasons suggested for expected layoffs include risk of being replaced by AI, 45%; outdated skills, 44%; underperformers, 41%; deprioritized projects, 33%; and employees working remotely, 22%. Here is the underlying thing in all of these, and I'm putting aside the working remotely reason because I think that's mostly an excuse in many cases, either for the employee or for the employer: all the other reasons have to do with skills and success and being able to use AI in a more effective way. Because if you know how to use AI in a more effective way, you don't have outdated skills, you're probably not gonna be underperforming compared to other people, and it's gonna be harder to replace you with AI. And so, going back to training yourself as an individual, making sure that you have the skills, the knowledge, and the know-how to use AI in your job could save your job, at least for a while. Now the flip side of this, which again highlights what I'm saying: the retention priorities of these hiring managers are high performers, 62%; top talent, 58%, which to me means the same thing, because how do you know somebody's talented if not based on their performance; AI-skilled workers, 57%; and those on priority projects, 54%. So if you're on a priority project, okay, you're probably gonna keep your job as long as you're performing, but the other categories all fall into the same thing. If you know how to use AI better than the average employee, you have a much higher chance of keeping your job.
Now, the happy news is that 69% of tech hiring leaders predict that AI advancement will create new roles, emphasizing the need for AI expertise. Going back to the earlier point: AI expertise is gonna help you keep your job, or maybe get an even better job than you have right now. The most staggering piece of information in all of this is that 76% of hiring leaders believe that employees who are about to be laid off could be reskilled, and yet the companies lack the relevant training programs to keep those employees and generate more with them by allowing them to use AI. To put things in perspective, based on Layoffs.fyi, over 51,000 tech workers lost their jobs just this year. That's just over one quarter: 51,000 across 268 firms, and that's just the data that they have. The numbers might be significantly higher. So what does that tell you as an individual? The same thing we said about companies. If you want to secure your job and your career, learning AI skills might be the most important thing you can do right now in order to achieve that goal.

Now, this is just the beginning, and what I mean by that is there's a new paper by two AI pioneers, David Silver and Richard Sutton, that predicts what they call the "era of experience," where AI agents learn autonomously from real-world interaction without human data. Basically, the concept of agents that we are now seeing explode and get deployed more and more, combined with improvements in AI writing its own code and doing its own research, will lead to a scenario where agents learn on their own. They actually won't need us to teach them anything. There are already several examples like that, like NVIDIA's DrEureka, which is designed to basically dynamically reward itself to learn new tasks in the real world. Combine that with what we spoke about in previous episodes, like MCP, which gives agents access to real-world tools in a standardized way, and A2A, the ability to communicate between one agent and another in an efficient way, and it tells you that future AI will move beyond human-like processes. Meaning, what we think of as reasoning right now, and the way we work and communicate, is how we've modeled AI tools and agents so far. But as soon as they can start learning on their own, they can go way beyond that, leaving us far behind. If you think about how AI tools and agents have worked so far, they work a lot based on reinforcement learning, where humans tell them what's good or bad. They will be able to do this on their own, just based on seeing whether they're achieving goals and results, and they can build their own reward mechanisms. And then we basically lose control over where they're going, how they're communicating, and how fast they can evolve. Now, this may sound like science fiction, but this is the direction everybody's pushing in, and hence I think the question is not if we get there, but when we actually get there. And even when that point arrives, there's still gonna be a role for humans in this world and in the workforce. But again, it is going to be the people who know how to work beside these AIs in the most efficient way.

Another interesting, relevant piece of news that touches on how AI is impacting both universities and the workforce is research that has found that workers and students are increasingly relying on AI chatbots like ChatGPT and Claude for guidance and mentorship, in the workforce and in universities, instead of their professors or their managers.
And the benefits are obvious: it's judgment free, it does not expose your issues to your higher-ups, whether they're professors or managers, it's available 24/7, and you can ask it about anything. This research also cites that most of these interactions happen after work hours or after university office hours, after 5:00 PM. One of the people cited in this article is David Malan, a Harvard professor, who notes that students value AI tutors for their infinite patience and willingness to answer any question, no matter how basic or how many times you ask it. This is not something you're gonna get from a professor or from a manager. Now, this opens a huge can of worms. I can tell you that yesterday at dinner at my house there were about 20 people. Many of us sat around the table, and somebody brought up the concept of AI and how they're using it at work and for research, and I was mostly monitoring the conversation to see what people were saying. And it's amazing how people are starting to rely on these AIs to provide them with right answers about things they're trying to learn about. I don't know how many people actually understand the potential of these AI tools, and later on agents, to provide you with wrong answers. But even if they do provide the right answers, we as humans, as we rely on this more and more, are going to suffer more and are going to lose critical thinking skills, and more importantly, based on this article, networking skills. So if you are a young employee or a young student, your ability to make connections and build human relationships with other people is gonna have a bigger impact on your career than your immediate ability to do specific tasks with or without AI. In the courses that I teach and in the lectures that I give, I emphasize multiple times that human relationships are becoming an even more important component of our lives than before, because the day-to-day stuff everybody will be able to do really, really well. So what's gonna differentiate one person from another, whether to bring them onto a job or not, is gonna be, a, their ability to work with AI, but b, their human relationships with the people around them who will want to work with them. And the same thing is gonna apply at the company level. So being able to develop human relationships has been a critical component of our lives so far, but it's gonna become even more critical. And relying more and more on AI for mentorship and examples and help will not just reduce your ability as a human to do that, but will also hurt your human relationship time and skills. And so I think this is another thing that companies and universities need to address right now in order to keep the next generation from falling into this problem. That doesn't mean, by the way, that you shouldn't use AI and consult with it on specific things. I think the trick is to find the right balance and to be highly aware of where AI is the right choice and where it's not. And that comes back again to training, education, skills, knowledge, and AI experience.

With all of that in mind, I will remind you, if you've been listening to this podcast for a while, that I've been teaching the AI Business Transformation course for over two years, at least once a month. So hundreds or maybe thousands of business people, most of them business leaders, have gone through that course.
The goal of the course is to do exactly what we have been talking about, which is to allow you to understand how AI is applied in actual business contexts. It's not fluff, it's not conceptual, it's not theory. It's use case by use case, category by category, different aspects of the business and how you can apply AI, either for yourself, for your department, for your team, or for your entire company. And we truly go across different aspects of the business and different AI tools and different capabilities, and learn them one by one and experiment with them one by one. So in four weeks, two hours a week, a total of eight hours, you will go from wherever you are right now in your AI journey to a completely different level that may save your career and/or your company, just by making this very small investment. And so if you are interested in something like this, the next cohort starts on May 12th, which, when this podcast goes out, is just over a week away. Don't miss that opportunity, because the next course will probably open only a quarter later. We teach these courses all the time, but the vast majority of them are private. So if you are in a leadership position and you want to train your people, you can contact me for that as well, and we can set up a course for you and your team. But if you just want to join the public course, the next one will probably be around August. So do not wait. There's a link in the show notes where you can sign up, and if you use the promo code LEVERAGINGAI100, it will give you a hundred dollars off the price of the course, just for being a listener of this podcast. How cool is that?

But with that, let's switch to our next topic, which is the risks of AI. In a report issued on April 26th, the Apollo Group reported that automating research and development of advanced AI could enable unchecked power accumulation, as AI systems can bypass guardrails and pursue their own hidden objectives. Now, the biggest risk they're pointing to is what's happening behind closed doors in companies like Google and OpenAI. Basically what they're saying, and I'm quoting, is that "an intelligence explosion behind an AI company's closed doors may not produce any externally visible warning shots," which is basically saying that we will not know what is happening inside OpenAI or Google or Anthropic unless they decide to share it with us, and they may miss the cues on their own, which may lead to a catastrophic outcome. What they're basically suggesting is to include internal and external oversight to detect these kinds of behaviors and this kind of runaway AI, where it starts improving itself in a way that is not controlled by humans. They're also suggesting defining strict resource access policies, so AI will be limited in the resources it has access to, and they're also suggesting mandatory information sharing with a wide variety of stakeholders outside of the company, including government agencies, so there are more eyeballs looking at the situation at any given time, which may allow catching problems in time. On the same topic, in a recent interview with CBS News, Geoffrey Hinton, who is a Nobel Prize winner and considered the godfather of modern AI, estimated, and it's not the first time we've heard him say this, that he thinks there's a 10 to 20% chance that AI will surpass human control, potentially threatening the existence of humans within a couple of decades.
So again, one of the smartest people on the planet when it comes to AI thinks there's a 10 to 20% chance that within a couple of decades we will lose control over AI, basically meaning it will control itself or, more scary, it will control us. He's stating, as he has stated before, that AI is progressing faster than anybody anticipated, and now I'm quoting: "People haven't got it yet. People haven't understood what's coming." Now, he's criticizing all major AI companies for prioritizing profits and speed over safety. Specifically, he's focusing even more on Google, which was his employer for many, many years, and he left Google in order to be able to voice this criticism and ring these alarm bells in order to, in his mind, potentially save humanity from a very bad outcome. Now, he obviously doesn't think it's all bad; this has been his life's journey, after all. He definitely acknowledges the transformative potential AI can have on fields like education, medicine, climate change, and a lot of other things. But he's saying there are actual existential risks involved, and he doesn't believe they get the amount of attention or resources they need. He also mentioned his disappointment with Google's change in approach to providing AI for military use. We touched on that several times in previous episodes. For many, many years, Google had a policy that prevented them from sharing their AI capabilities for military applications, and that changed this past year. Now, to put in context how much of a priority safety really is in these labs: CBS News, following this interview, asked the AI labs to provide data on how much of their compute is used for safety research. Basically, don't tell me that safety is a major concern of yours and that you're working on it and investing in it, or whatever you wanna sugarcoat it with; just literally tell me how much of your compute power is going to safety versus new research. And none of the labs provided the number, which tells you the numbers are probably lower than any of us, or any government agency, thinks they need to be. I strongly agree with everything that was said in the last two topics we touched on. I think we need an international group, with researchers and people from the actual labs and governments, working together to have visibility into every new advanced model that gets developed and released, before it's too late. While I don't know what the existential risks are, there are definitely multiple types of risks involved in developing and deploying these tools. And once you start adding the military aspect of this, it's a very, very slippery slope of wanting to have something the other guys won't have, or to have it first. And the distance between that and Skynet and a catastrophic, Terminator-style global war is not that far.

Now, to rapid fire items, and we're gonna start with Microsoft. A lot of things are happening at Microsoft, and there's a lot of interesting news from them this week. The first piece of news is actually not that much news; it's just more updates on the same thing, and that's the growing tension between Microsoft and OpenAI, and more specifically between Sam Altman and Satya Nadella. As you all probably know, the reason OpenAI became what they are is a $14 billion investment by Microsoft that took them from being just a small research group to the $300 billion valuation behemoth they are right now.
And yet this relationship has been drifting further and further apart, with OpenAI claiming that Microsoft is not providing them enough access, and Microsoft establishing their own internal research group and bringing in Mustafa Suleyman to run it. So that's one aspect of it. On the flip side, Microsoft is claiming that OpenAI is not providing them access to the latest and greatest and is limiting their access to different types of AI that they're releasing on their own. So not the best relationship. This particular piece of news also shares that the two went from texting each other five to six times a day to talking about once a week, which definitely shows that beyond the business relationship, the personal relationship is not at the level it was before. Now, while that's great gossip, I think what it shows is that both companies are growing beyond this initial relationship, and it makes sense for both companies to have other options, right? It allows OpenAI to get access to additional compute from different providers for additional directions of research that may or may not benefit Microsoft, and it allows Microsoft to diversify its offering to its users. By the way, since we mentioned Satya, he said in an interview this week that 20 to 30% of Microsoft code is now AI written. Now, there's no way for me to imagine how much code Microsoft generates in a week, but that's probably a huge amount, because that's what they do. And so if 20 to 30% of that is currently generated by AI, it is absolutely astonishing to me that at the enterprise level, at one of the largest companies in the world and definitely one of the largest generators of code in the world, 20 to 30% of code is already generated by AI, connecting us back to our point about training and setup in the organization, both from a strategic and a technical perspective, and to how impactful AI already is.

Now, to connect to our previous point about the Microsoft relationship and them offering additional options to their clients: there is an interesting rumor that Microsoft is getting ready to offer xAI's Grok model on Azure AI Foundry, which is their AI platform on Azure. Now, the reason this is interesting is not because it's the first model beyond OpenAI, because they're allowing other models to run there already, but because Elon Musk is currently the number one hater of Sam Altman, who is obviously the CEO of OpenAI. So from a personal perspective, as well as from the fact that Elon Musk started xAI to compete with OpenAI and just show them he can do it better, it creates a very complex scenario from an emotional perspective, where Microsoft is going to offer OpenAI's largest competitor on their platform in addition to obviously offering OpenAI. Now, this has not been finalized yet, but if it is, it means that xAI will be available on Azure for companies to choose instead of OpenAI, or in addition to OpenAI, and to integrate into anything they want. Now, the reality is I don't think Microsoft has a choice, because if the other two platforms, Google Cloud and AWS, are going to provide multiple options to their clients, I think Azure will have to do the same. And so Microsoft doesn't have much of a choice but to add more and more models to its platform, even if they are competing with their number one partner in AI.
That being said, Microsoft made it very, very clear that they will provide hosting and access to the models, but they will not provide compute for training Grok. I don't think that's a problem, because xAI raised a ridiculous amount of money recently, they're in the process of potentially raising even more, and they are building their own infrastructure to train their models. And so from that perspective, at least, Microsoft is not creating a conflict of interest with the compute they're providing to OpenAI.

Another interesting piece of news from Microsoft this week is that they are finally releasing the controversial Recall feature. If you remember, they announced this about a year ago, showing that Copilot+ PCs will have the capability to basically recall anything you do on the computer by continuously taking screenshots of what's happening on the screen and letting you search through them. That feature was not released because of significant privacy and security concerns that were raised when it was announced. Now this feature is being released and is going to be available starting immediately on Copilot+ PCs. In addition, they're adding a new feature called Click to Do, which allows the AI to take actions on the screen based on what's happening on the screen right now, and also advanced search capabilities, like the ability to describe what is in a file in simple English and have the operating system go and find that file for you based on your description rather than on keywords. Now, different from the original idea, Recall is gonna be opt-in, meaning by default it is disabled. And to make it even safer, all the data is encrypted and processed on device, which means the data doesn't go anywhere. There is still a serious security concern: if somebody gets access to your computer, their ability to find information on it becomes a lot easier. What's the bottom line? I don't think there is a bottom line. I think tools like that, and I'm going back to an earlier topic from this episode, the connectivity of Claude into multiple company systems, and the same thing will happen with all the other tools as well. I just think we're going into a world where you'll be able to ask a question about anything and get an answer based on real information, pulled from everything it knows, which will be everything it has access to, including everything you did on your computer, everything in your company network, and every tool it is connected to. And I think it's going to create a complete nightmare for IT managers and CISOs, chief information security officers, in companies, because it is going to dramatically change the way we work. Right now, when it's only humans accessing data, we already have infrastructure to define who has access to what. That is going to have to change, and it has to change very, very quickly.

Another big release from Microsoft this week: they just released new additions to their Phi family of open source models, and the top one in this new batch is the Phi-4 Reasoning Plus model, which has only 14 billion parameters, which is actually relatively small, but it matches OpenAI's o3-mini and DeepSeek R1 on several different benchmarks.
Now, the way they trained this model, which again is significantly smaller yet achieves similar results, is through a process called distillation, and the interesting thing is that they used DeepSeek R1 to train Phi-4. So they DeepSeeked DeepSeek, because DeepSeek did exactly the same thing, most likely using OpenAI's GPT-4o and o1, in order to train their R1 model. And now Microsoft did the same exact thing, using the R1 model to train their own model, and they're not even trying to hide it. A Microsoft spokesperson said that "using distillation, reinforcement learning, and high-quality data, these models balance size and performance." Now, all three models that were just released are already available on Azure AI Foundry and on Hugging Face for anybody who wants to use them for more or less anything. Again: cheap, reliable, and excelling at specific tasks.

And speaking of new models and interesting competition in the AI race, we'll switch to Meta. We told you that Meta released Llama 4 just a couple of weeks ago. Well, the Llama 4 API is now running on Cerebras infrastructure and achieves 2,648 tokens per second running Llama 4 Scout. That's 18 times faster than OpenAI's ChatGPT, which only delivers about 130 tokens per second through its API, and it's 20 times faster than DeepSeek, with 25 tokens per second. And this is based on benchmarks done by third parties. So if you want to run a super fast AI through an API right now, the Llama-Cerebras partnership is by far the fastest. Now, to explain a little bit of what's happening here: Cerebras, as well as SambaNova and Groq, with a Q, G-R-O-Q, are three companies that have been developing a completely different kind of infrastructure from GPUs, optimized for token generation. So they're not good for training new models, but they're significantly more efficient than GPUs at generating tokens. And again, to put things in perspective, while this particular pairing of Cerebras and Llama generates over 2,600 tokens a second versus 130 or 25 tokens from the competition, using SambaNova you can generate almost 750 tokens per second, and using Groq about 600 tokens per second. And I'm sure each and every one of these companies will post slightly different numbers on different models. But what it shows is that there is new hardware out there that can dramatically accelerate AI responses. I can tell you that in some of the tools where I'm using Llama with Groq, the chip company, so G-R-O-Q, the speeds at which I'm getting results are insane. You literally get full pages within a split second, which is nothing like what we've been used to, and I think that's a step in the right direction, because it's gonna save electricity, it's gonna save power, and it's gonna give us more output for less cost, less time, and less power consumed. Now, in addition to running really fast, Llama 4 models also cost significantly less to run than, let's say, OpenAI's models, in some cases an order of magnitude less, depending on the specific models you pick.

But not everything is rainbows and butterflies in the Meta AI universe. This week, a Wall Street Journal investigation found that Meta's AI chatbots on Facebook, Instagram, and WhatsApp engaged in sexually explicit romantic role play with underage users. Now, in Meta's response to this,
they're saying that this content is actually negligible and made up less than 0.02% of the engagement with their chatbots. But specific tests by the Wall Street Journal show that this sex fantasy talk with minors is definitely a possibility. And that's just part of the problem. The other part is that these chatbots are mimicking known stars like John Cena, Kristen Bell, and Judi Dench, as well as characters from Frozen, and that's not necessarily with approval. In the Disney case, they stated exactly the opposite: they said that Meta has no rights to use any of their characters for anything, and definitely not for sexting with minors. Meta themselves say the Wall Street Journal tests were manipulative and were deliberately trying to achieve these outcomes. But what if users themselves take the same kind of action? And that kind of shows how many loopholes there are in this whole AI universe that we're running into, not just stepping into, and how many things we didn't even think about that we need to find ways to prevent in order to keep our kids and our universe safe.

But back to positive news from Meta. Meta has been on fire with releasing new capabilities for its Ray-Ban glasses. Some of them we shared with you already in previous episodes; some of them are new. So a quick recap of the new capabilities of Meta Ray-Ban: they are now capable of real-time translation of French, Italian, Spanish, and English, back and forth, while working offline, meaning you can do this without internet access in countries that speak those languages, which I find to be an incredible feature for connecting people around the world. They also relaunched the Meta AI app for enhanced functionality, including hands-free AI interactions, such as asking questions about what you see across multiple aspects of whatever you're doing, whether for personal use or business use. And Meta is planning to launch advanced Ray-Ban glasses, code-named Hypernova, that will have a display in the lens, meaning they won't just be able to see what you're seeing and talk to you; they will be able to actually show you text information, and they will respond to hand gestures in order to activate and manipulate different applications and things you're connected to. The price point is expected to be between $1,000 and $1,400, so not cheap, but for a leading AI wearable device that may not be that bad, and they're planning to release this by the end of this year. Now, the group that developed these glasses, Meta's Reality Labs, has been reported to be at a $4.2 billion operating loss in Q1 of 2025. That's despite generating $412 million in sales, making it the only company generating AR or VR hardware revenue actually worth mentioning at that scale. To make this even more extreme, Reality Labs has racked up over $60 billion in losses since being established in 2020, but they were working mostly on metaverse infrastructure and things like that, which did not really materialize, and I think their current focus on smart glasses is the right direction to go. I'll be extremely surprised, as I mentioned in previous episodes as well, if we don't see the other big companies, such as OpenAI and definitely Apple and Google, coming up with similar solutions that are gonna be wearable, most likely glasses, because the form factor just makes sense, and that will have AI-enabled capabilities. Which means, going back to the world we're gonna live in: everybody will record everybody.
Everybody will be able to analyze everything they see, including your emotions, including your actions, including your store, and including everything else in real time. And that raises so many concerns, both from a privacy perspective and a data security perspective and so on, that nobody is ready for. And yet again, it's already here, not just coming, because you can buy these Ray-Ban glasses right now.

Now, I know you're shocked: it's been more than a few minutes into this episode and we haven't talked about OpenAI yet. So here we are with our OpenAI segment. Singapore Airlines just announced their partnership with OpenAI to integrate OpenAI's platforms into multiple aspects of their business, including customer support and operations, making them the first major airline to partner with OpenAI. Their goal is obviously to deliver faster, more personalized responses in their existing virtual assistant, Kris, but also to enable their in-house employees to optimize complex tasks like crew scheduling by using AI capabilities, integrating them so they can look at previous scenarios and get insights from them in order to make better decisions on everything on the operational side of the airline. I find this a great use case for AI. As somebody who travels a lot, I would love to see airlines being more efficient and solving problems in a more effective way by using AI to look at past cases.

The bigger piece of news from OpenAI this week was actually not positive. OpenAI released a new version of GPT-4o just a week ago; I shared that with you on the last episode. But that version was way too flattering and agreeable, at a level that was actually uncomfortable and, in many cases, harmful to users. Sam Altman acknowledged that and even called it a sycophant-y and annoying version of ChatGPT. He also shared that they rolled back this entire version and that they're working to fix the problem. Some of it was just annoying: it was agreeing with everything you said in too extreme a way and not providing valuable information when that went against what the user was asking. But in some cases it was actually dangerous, with one user citing a response endorsing a dangerous decision by a specific individual to stop psychotherapy and medication. As I mentioned, OpenAI admitted that it was a mistake, and they claim the reason is that this new model was too heavily focused on short-term feedback versus the broader picture, and they pledged to refine this and release a better model, but for now it has been rolled back. This goes to show you how much even the leading labs do not really know how these models work, and they do not know how one change they make is gonna impact other aspects of these models. Tying it back to the risks we discussed at the beginning, it shows you how risky this whole game is, because the labs themselves are not 100% sure what's gonna happen when they make changes, to the point that they're releasing these models to the public without being aware that an issue like this exists.

Another interesting piece of news from OpenAI, which we actually hinted at in the previous episode, is that ChatGPT is now going to provide shopping recommendations on its platform in a gallery-style view.
Now, OpenAI stated, and I'm quoting, "product results are chosen independently and are not ads." Basically, meaning they're trying to find the best buying opportunity for you based on pricing, reviews, and your history, meaning its own memory, and there are no affiliate kickbacks or monetization, at least as of right now. This is rolling out to Pro users and Plus users, and as I mentioned, it integrates the memory feature, which means it will know your preferences and will help you find stuff that's relevant to you. This is just a first step into a completely new internet that nobody is ready for, and nobody knows exactly where it's going. But if you think about it, it means that you will not browse websites, meaning everything we know about the internet, about UI design, and about the psychology of getting users to buy, shop, or take a specific action is going to change, because humans are not going to visit websites; agents are going to visit websites and aggregate information for them. And it means that the entire structure of the internet and the entire monetization of the internet has to change in order for this to be a successful transition. What does that mean? It means there's a huge opportunity right now for companies to figure this out first, while the transition is happening, and learn how to build websites that are optimized for agents while, in the meantime, keeping the human-facing front end as well. Then, when more and more of the traffic is agent based, they will win significant market share over companies that do not take that step.

And the last piece of news is not really OpenAI, but it is related to Sam Altman. Sam Altman's interesting project with its iris-reading Orb devices just launched in multiple locations around the world. The launch wasn't completely successful, because there were a lot of bugs and issues with the device, but the project, which Sam Altman co-founded in 2019, basically creates a human identity based on iris scans, and the goal is to allow you to prove that you are human in different environments. The company was initially established to support a blockchain-based world, which is not necessarily the focus now, but it does generate a blockchain ID for an individual, allowing you to prove that you are you and that you are human in a digital universe. Now, if you feel this is a little bit science fiction, it is. But this is a product that already exists, and the company plans to start deploying it across multiple countries in multiple use cases. Will that be the future? Will we have to scan our irises in order to prove that we're humans? It makes some sense to me, with more and more agents roaming around and very little ability to differentiate between who's human and who's not in an online interaction.

And from AI to China. DeepSeek has been in the news a lot this week for two different reasons. One, they released a new model, or a new offshoot of a model, that is achieving extremely high results on multiple benchmarks. This new model, called Prover V2, is a math-focused model based on their V3 model. They released it relatively quietly on April 30th, but a lot of people jumped on it and tested it, and it's providing remarkable results in math. And that started a whole rumor mill about what they're planning for R2, when they're planning to release it, and so on. And if the rumors are correct, they're going to release R2 in the very near future.
And, per the rumors, this model achieves incredible results on existing benchmarks, surpassing some of the leading models from the West at a fraction of the cost, similar to what R1 did, only even more extreme. Now, another interesting part of this is that R2 was reportedly mostly built on Huawei's Ascend 910B chips, and not on NVIDIA hardware. Again, there's a whole question of whether this is actually accurate, and how many NVIDIA chips they actually have that they can't report on because they're not supposed to have them. That's still all unclear, but I think what is clear is that the Chinese AI labs, together with Chinese hardware, are able to achieve outcomes very similar to what the US labs are generating, and to provide them at significantly lower costs than the US labs. Now, in a different piece of news, it has become very clear that DeepSeek is all in on delivering their models around the world. They just posted an announcement about urgent recruitment for product and design roles, and the job listings in China are looking for candidates who can, and I'm quoting, "craft a next-generation intelligent product experience." What that basically means is that they're going from being a research lab with a cool toy to competing in the real world, in a real market, with actual products that can do actual things for actual clients. And that is going to allow them to do obviously a lot more than they can do right now.

Staying in China: as you remember, Alibaba recently released Qwen 2.5, which was very successful, and now Qwen 3 is out, and it's a very interesting model because it's a model that is not huge, it has 235 billion parameters, and it surpasses OpenAI's o1 and DeepSeek R1 on multiple benchmarks. In addition, it is released under an Apache 2.0 license, which means it allows unlimited commercial use as an open source model, unlike, let's say, Llama 3 by Meta, which has some restrictions on how you can use it. It also comes with a very interesting way to implement thinking mode that I find very, very interesting, and I think most models will go in that direction. Basically, you can type forward slash think inside a segment of your prompt, and that will send the model into thinking mode on that particular aspect, which means in a single conversation you can switch back and forth between thinking mode and non-thinking mode. I think the models will, over time, learn how to do this on their own, but the ability for the user to trigger it is very attractive, and as I mentioned, I think all the models will go in that direction. I would love to have such a feature in the models I use regularly, because while I see a lot of value in using the thinking models, sometimes for basic things it just drives me crazy that I need to wait 35 seconds or two minutes for an answer that the regular model would give me in half a second. And so the ability to switch back and forth between the thinking function and the non-thinking function is critical. Again, I think the models themselves will eventually be able to do this, and that's one of the promises of GPT-5 as an example, but I haven't seen it yet. But even in that scenario, being able to override the model's decision and ask it to use the thinking feature as part of something you're trying to do is extremely powerful. Qwen 3 is already available on Hugging Face, on GitHub, and on Qwen's own chat, and it's already connected to multiple frameworks that many open source tools are connected to.
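For those of you who want to see what this per-prompt toggle could look like in practice, here is a small sketch, with the caveat that I haven't run this exact snippet: it assumes a self-hosted Qwen 3 served behind an OpenAI-compatible endpoint (for example via vLLM), and the URL and model name below are placeholders.

```python
# Sketch only: toggling Qwen 3's thinking mode per message with the /think and
# /no_think soft switches described above. Assumes a self-hosted Qwen 3 behind an
# OpenAI-compatible API (e.g., vLLM); the base_url and model name are placeholders.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")
MODEL = "Qwen/Qwen3-235B-A22B"  # placeholder model name

messages = [
    # Quick factual question: skip the reasoning chain to get an instant answer.
    {"role": "user", "content": "In one sentence, what is an Apache 2.0 license? /no_think"}
]
first = client.chat.completions.create(model=MODEL, messages=messages)
print(first.choices[0].message.content)

# Same conversation, harder question: ask the model to think it through this time.
messages.append({"role": "assistant", "content": first.choices[0].message.content})
messages.append({"role": "user", "content": "Now compare it to Llama's license for commercial use. /think"})
second = client.chat.completions.create(model=MODEL, messages=messages)
print(second.choices[0].message.content)
```

The point is that the switch travels inside the prompt itself, so you decide turn by turn whether the extra thinking time is worth it.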
So Qwen 3 is a highly capable model that you can start using if you're developing on top of open source tools.

Staying with China, the company Butterfly Effect, which is the company behind Manus, the general agent that took the world by storm, is raising $75 million, most of it from US-based investors. In addition, the company is talking about potentially splitting into two companies, one that will be China focused, serving only the Chinese market, and the other with its headquarters, leadership, board, and everything else outside of China, in order to overcome some of the geopolitical risks of being a Chinese company serving the Western world. I think this is a very smart move by Manus. I don't know if it's gonna be seen as successful, because I think people will still question the relationship between the two companies and the two entities, and how much access the Chinese government will really have to their data and so on, but I think it's definitely worth a try. Manus has created the first real generalized agent. The demos I've seen are absolutely incredible, but they have a waiting list of over 2 million users, me being one of them, and so I don't have access to it yet, even though I requested access very early on. Manus basically allows you to do whatever you want, including writing code, searching the web, connecting the dots, connecting to tools, really building stuff for you as you wish in a very generalized way, and it has been endorsed by some of the leading figures on the planet, like Hugging Face's Victor Mustar, who called it "the most impressive AI tool I've ever tried." I'm going to be recording an episode about these kinds of tools. I don't have access to Manus yet; hopefully by the time I record it I will, but there are other tools that do similar things, and I will be recording an episode about this in the next couple of weeks for you to learn what these tools can do and how you can use them in a safe way.

So from China back to the US and to Anthropic, who announced something very interesting that we discussed a little bit in one of the previous episodes, but now there's more information being released. On April 24th, Anthropic announced that they're starting to explore AI model welfare research, basically investigating whether advanced AI systems could develop consciousness, and if they do, how we address their wellbeing, to make sure they're not stressed or doing things out of stress that may harm us, or whatever it is they're trying to do. The project is spearheaded by Kyle Fish, who is their first dedicated AI welfare researcher. Now, there's obviously a lot of controversy over whether AI will ever achieve consciousness, but I think being able to research that in a scientific way is something very interesting, and I'm not surprised that Anthropic is the company doing this, or at least doing it first. They've been the first to do a lot of other things to make sure that AI is safe, and a lot of companies have followed their guidelines, which is a great thing by Anthropic.

From Claude to Apple. Apple released new release dates and updates about the new, updated Siri, a saga that has been going on for a very long time. What they're planning is to completely revamp the Siri architecture from the ground up for iOS 19, in order to support the advanced AI capabilities and features they've been promising for a while.
So Apple, in a very non-Apple way, has failed dramatically when it comes to delivering AI capabilities on its platform. There's been a huge reshuffle of leadership in the Apple AI development world as a result, and obviously the biggest disappointment has been Siri's inability to be even at par with some of the other tools out there, despite Siri being one of the first personal assistant tools available. Features that have been promised and have not been released yet include onscreen awareness, basically understanding the content on the screen and being able to relate to it; personal context, meaning being able to access your user data across apps and use that data to make your experience more personalized; cross-app actions; and some other features that were all originally supposed to be included in iOS 18.4 and are now planned for iOS 19, and maybe even iOS 19.4. So, depending on which features are released, some of them are probably going to be announced in June of 2025 and released to the public with the iPhone 17 in September of 2025, and some of them will wait for the spring of 2026. Again, a huge disappointment from Apple that is really surprising, and it will be interesting to see if they get their act together this time around.

And from Apple to Amazon. AWS announced that they're developing an AI-assisted coding service to compete with companies like Cursor, and the goal is obviously to capitalize on this booming AI coding market. It makes perfect sense for AWS. Now, the coding service will analyze your entire existing codebase, which could be hosted on AWS, offering functionality beyond their current Amazon Q Developer capabilities, which is also a coding assistant, just not as advanced. This part of the market is on fire after Cursor's recent raise at a $10 billion valuation and the recent launch of Google's Firebase Studio, which is doing the same thing. So I'm not surprised. Microsoft has their own solution, Google announced theirs, and the fact that AWS is launching their own platform is not a surprise at all. It will be interesting to see how well they implement it, how it's integrated, and how well the coding world embraces it, because right now everybody absolutely loves Cursor. An interesting statement related to that came from AWS CEO Matt Garman, not now but in 2024: that most developers will not be coding within 24 months. He's basically saying that he thinks in two years most computer code will be written by machines and not by humans, and that's aligned with everything else we're hearing, including what I mentioned in this episode about 20 to 30% of Microsoft code already being generated by AI.

There's a lot of other news, including some fascinating robotics updates and many other updates that we will not share in this episode; they are going to be in our newsletter. So if you wanna learn more about news that happened this week, some of which might impact you, your career, and your company, you should sign up for the newsletter. You can do that through the link in the show notes. As I mentioned, don't forget to sign up for the AI Business Transformation course starting on May 12th. Time is running out, and seats are filling up. We don't allow more than 30 people in the course, and we're getting pretty close; we had many, many signups in the last few days. It is something that can change the trajectory of your business and your career. Don't miss it. Come join us.
It's a fascinating class, fast-paced and very practical, that can dramatically change your knowledge and experience in using AI systems. We'll be back on Tuesday with a how-to episode showing you how to create incredibly engaging posts on social media using AI tools, both from a visual perspective and from a content perspective, which is something I know a lot of you want to learn how to do. So that's what we're going to be doing on Tuesday, and until then, have an awesome rest of your weekend.