
Leveraging AI
Dive into the world of artificial intelligence with 'Leveraging AI,' a podcast tailored for forward-thinking business professionals. Each episode brings insightful discussions on how AI can ethically transform business practices, offering practical solutions to day-to-day business challenges.
Join our host Isar Meitis (4-time CEO), and expert guests as they turn AI's complexities into actionable insights, and explore its ethical implications in the business world. Whether you are an AI novice or a seasoned professional, 'Leveraging AI' equips you with the knowledge and tools to harness AI's power responsibly and effectively. Tune in weekly for inspiring conversations and real-world applications. Subscribe now and unlock the potential of AI in your business.
180 | Will AI get out of control by mid 2027? Super-Agent is showing off amazing results, Google AI is taking center stage, and many more important AI news for the week ending on April 12, 2025
Could your job—or your entire company—be replaced by a million AI-powered robots by 2027?
This week’s episode of Leveraging AI is packed with breakthroughs that business leaders can’t afford to ignore. From Google's jaw-dropping AI showcase at Cloud Next to sobering predictions of superintelligence spinning out of control, we cover the AI headlines shaking up the business world.
Plus, we unpack the rise of autonomous agents that can not only execute tasks—but build other agents on the fly. Sound like sci-fi? It’s not. It’s this week’s news.
Whether you're planning your next AI strategy or simply trying to stay ahead, this episode is your edge.
In this session, you'll discover:
- Why Google’s Gemini ecosystem may quietly be overtaking OpenAI and Anthropic
- The chilling prediction that AI will become uncontrollable by mid-2027
- What "Super Agents" are—and why they're rewriting how work gets done
- Why top business leaders are requiring AI use before approving headcount (yes, really)
- How AI agents like Ava and Aria are coming for sales jobs… and outperforming humans
- The truth about alignment risks, and why AI may soon be thinking 50x faster than you
- The latest on ChatGPT memory, Grok 3, and Meta's mind-blowing 10M token context window
- Why Microsoft is stepping away from bleeding edge models—and why that may be genius
- How companies like Block are saving 8–10 hours per engineer per week using AI agents
- The bold new moves from Canva, Amazon, and Shopify that signal what’s coming next
💡 Ready to go deeper? Check out the AI Business Transformation Course starting May 12 — and use code LEVERAGINGAI100 for $100 off.
About Leveraging AI
- The Ultimate AI Course for Business People: https://multiplai.ai/ai-course/
- YouTube Full Episodes: https://www.youtube.com/@Multiplai_AI/
- Connect with Isar Meitis: https://www.linkedin.com/in/isarmeitis/
- Join our Live Sessions, AI Hangouts and newsletter: https://services.multiplai.ai/events
If you’ve enjoyed or benefited from some of the insights of this episode, leave us a five-star review on your favorite podcast platform, and let us know what you learned, found helpful, or liked most about this show!
Hello and welcome to a weekend news edition of the Leveraging AI Podcast, a podcast that shares practical, ethical ways to leverage AI to improve efficiency, grow your business, and advance your career. This is Isar Meitis, your host, and every week I'm hoping there's gonna be some slowdown, a week where we won't have a lot to report, but this is not that week. This week there is an explosion of stuff for me to share with you. Our main topics are going to be everything that Google shared at Cloud Next 2025, the big event they just held this week. We're going to talk about a doom-and-gloom projection of where AI is going by 2027. We're going to talk about major progress with AI agents, and then we're going to dive into multiple rapid-fire pieces of news, including a lot of new and interesting releases from the biggest players and small ones as well. So let's get started. Google held its Cloud Next event this past week, and as expected, it was huge, packed with AI-related announcements. If you go back a year and a half on this podcast, to when Google's response to the ChatGPT moment was, I'll be gentle, embarrassing, I said that it's not a good idea to bet against Google in the AI space, because they have everything they need in order to win this race. They have the capital, they have the data, they have the human capacity, they have and can attract the best minds on the planet. They have the compute capabilities, the distribution, the tools, everything you need in order to be successful in this space. And they're the ones who invented the transformer that actually led to this entire revolution, so they have years and years of research from before even the ChatGPT moment. So it was very obvious to me back then that they would potentially take the leading position, and it seems that this is the direction it's going. If you've been watching the Chatbot Arena leaderboard in the past few months, you see that Google is at number one, and sometimes number one and number two, most of the time. You know that they now have the best video generation tool with Veo, and so on and so forth. And I think slowly they will take over more and more components and more and more capabilities in which they're going to be the leading force in the world, and they will have the ability to combine them all together into unified environments. And this is more or less what we've seen at Cloud Next. It was a spectacular presentation of their ability to combine multiple capabilities together, all under the Gemini umbrella. I'm gonna go very quickly over some of the things they shared during the conference. First of all, they introduced Ironwood, which is their seventh-generation TPU, their AI chip, now built specifically for inference, which will allow test-time compute, meaning these reasoning models, to run more effectively. It is going to be available on Google Cloud later this year. Compared to the prior generation, it offers five times more peak compute capacity and six times the high-bandwidth memory capacity, and overall they're saying it's producing 10x faster results on every process it runs. That is obviously very impressive and potentially presents an alternative to Nvidia GPUs. But they've also introduced the whole creative side of this, which was really interesting and impressive. They introduced Lyria, which is a new addition to Vertex AI. That's their new music generation model.
So now, in combination with Imagen and Veo 2, which by the way is getting new capabilities, the ability to generate music allows creators in Vertex to create anything they want: voice, images, videos, speech, music, basically any creative thing you wanna produce, you can now produce in one environment within Vertex. This is nothing like anybody else has, and that is obviously very powerful. They're claiming that some of the introduction videos they had in the transitions to each of the speakers were done with the new version of Veo 2, and it's absolutely mind-blowing. It's worth watching the first few seconds of every presentation that was done there just to see what was done. I'm sure that required a lot of editing and post-production and stuff like that, but even if just components of this were produced with Veo 2, it's incredible. They're introducing a lot of new tools to Gemini in Workspace, bringing more and more capabilities into Docs and Sheets and Meet and Chat, and all the stuff that is available in Workspace. I must admit I'm using more and more of Gemini within the different tools. I'm just praying for the day that it all comes together, and it's not gonna be Gemini for Docs and Gemini for Sheets and Gemini for Drive, but actually just Gemini that will know everything that I'm doing, including my emails and everything. I guess we'll have to wait a little longer for that. I really thought we'd be there by now, but I guess that's more complex than I thought. They also introduced some additions and improvements to Agentspace, which is their platform that allows companies and enterprises to develop agents. They announced that they have a growing AI agent marketplace where you can go in and see what partners of Google have developed and made available, and you can buy agents from different people. This is a new vector of the economy, if you want: companies will develop these agents, and you'll be able to go and buy them as an enterprise and use them within the Google ecosystem. They also unveiled what they call the Agent Development Kit, or ADK, versus an SDK in traditional software, which is an open source framework for building agents while maintaining control over their behavior. And they also introduced A2A, or Agent2Agent, which is a new open source protocol that gives agents the ability to talk and collaborate with other agents regardless of the vendor. So if you think about the huge success of MCP in the past few months, and we talked about this multiple times, it became a huge success because it is open source and it allows agents to connect to multiple data sources. Well, MCP is from Anthropic; this new framework from Google is targeting the ability of agents to talk to other agents. With the amount of hunger there is for this kind of capability right now, I assume this will catch on at the same speed as MCP did for connecting to data and tools.
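To make the agent-to-agent idea a bit more concrete, here is a minimal sketch of the general pattern: one agent fetches another agent's published "card" describing its capabilities, then hands it a task over HTTP. This is an illustration of the concept only, not the official A2A spec; the endpoint paths, field names, and skill IDs below are my own illustrative assumptions.

```python
# Toy illustration of the agent-to-agent pattern behind Google's A2A protocol.
# The endpoint and field names are illustrative assumptions, not the real spec.
import json
import urllib.request

def discover_agent(base_url: str) -> dict:
    """Fetch a remote agent's 'card': a JSON document describing who the
    agent is and which skills it offers, regardless of who built it."""
    with urllib.request.urlopen(f"{base_url}/.well-known/agent.json") as resp:
        return json.load(resp)

def send_task(base_url: str, task_text: str) -> dict:
    """Hand a task to a remote agent. The JSON-RPC-style envelope here is a
    simplified stand-in for the protocol's real task messages."""
    payload = json.dumps({
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tasks/send",
        "params": {"message": {"role": "user",
                               "parts": [{"text": task_text}]}},
    }).encode()
    req = urllib.request.Request(
        f"{base_url}/a2a",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Usage: one agent delegating a sub-task to another vendor's agent.
# card = discover_agent("https://agents.example-vendor.com")
# if any(skill["id"] == "report-writing" for skill in card.get("skills", [])):
#     result = send_task("https://agents.example-vendor.com",
#                        "Draft a Q2 summary report")
```

The point of a shared, open protocol is exactly this: the calling agent never needs vendor-specific code, only the agreed-on card format and task envelope.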
They also shared a new unified security solution called Google Unified Security, which brings together all their security capabilities for their products: threat intelligence, security operations, cloud security, and secure enterprise browsing. So they're building a whole suite of security tools around their AI infrastructure, which makes perfect sense if they want to see this successfully deployed across multiple enterprises. And they're expanding access to their WAN solution, their wide area network, from being just for internal use across Google facilities to being available to enterprises, for significantly faster capabilities, services, and search across everything Google. So bottom line, Google is showcasing why they are the leading developer of AI solutions in the world and how their AI cloud solutions are going to be at the forefront of AI capabilities. That's obviously a fierce competition with AWS and Azure, so it'll be interesting to follow, but definitely this was a very clear statement by Google on how they see themselves in this race. And from all this exciting news: every time I see one of these events, I always think of Skynet from the Terminator movies. There seem to be people who agree with me, who actually know a lot more than me, and who have taken the time to put down how they see the trajectory of AI moving forward and what its impact on the world might be. Two AI experts, Scott Alexander, the author of Astral Codex Ten, and Daniel Kokotajlo, a former OpenAI researcher, have created a website called AI 2027, which shows a timeline from now till 2027 and what they think might happen during that timeframe. Now, I found it through the Dwarkesh Podcast. For those of you who don't know, Dwarkesh Patel is a brilliant interviewer who has a podcast where he has access to basically anyone in the world, and he does a lot of in-depth technical interviews with a lot of interesting people. So he brought these two people for an interview on his podcast. I'm warning you about two things: one, it's a three-hour interview, so even at 1.5x speed you need quite a long drive; and two, it's very scary and uneasy to listen to, and in some cases a little too technical. But I will summarize most of it for you. So they're trying to portray how superintelligence will develop and what its impact on the world might be. The key concept they're talking about is that once we hit superintelligence and AI agents can build AI agents, or AI systems can build new AI systems, we're getting to an intelligence explosion: basically a self-sustaining feedback loop that keeps generating better and better AI, completely out of our control. And they're claiming we're gonna get to that point around mid-2027. Now, the way they've done this is they have identified several milestones that are required in order to get to that point, and they try to estimate the time it will take to reach each and every one of these milestones. Hence the website, which is called AI 2027. And you can go and look at it. It's actually really, really cool. Extremely well done, beautiful graphics and charts, and it has a timeline on the left, and as you're scrolling you can kind of see where you are in the timeline and how we might hit these milestones. In addition to the agentic world, they're also claiming that about a year after hitting superintelligence, the superintelligence itself will allow us to build production capacity to generate a million robots per month. That sounds completely sci-fi to me, but a lot of things in AI sound completely sci-fi to me, which doesn't necessarily make them wrong.
A million robots per month means that very, very quickly you can take over more or less any task and any job in the world, because you're generating 12 million robots every single year, and each and every one of these robots can do the job of at least one person. That shows you how quickly it's going to change the world as we know it. Now, they're also saying that once we hit the point where these AI tools can run 50 times faster than humans at cognitive tasks, what it basically means is that they can do in one week the work a human does in an entire year. That's basically what 50 times faster means, and you can spin up as many of these as you want, as long as you have the compute power. So the combination of these two things shows you how quickly things can get out of hand. This basically means that even if it does not accelerate beyond that point, and it just gets to the 50x speed, in one year it can generate the innovation humans would generate in 50 years. That is obviously something we cannot even grasp, and they're saying we're gonna get to that point in 2027, two years from today. Now, one of the main concerns they're raising is what they're calling an alignment crisis: basically, that our ability to align, or if you want, control and understand, the superintelligence is not going to work. There's actually a really fascinating conversation about this by Yuval, who is definitely a doomer when it comes to AI. He's saying that in the relationship between us and AI right now, we're the adult and the AI is a three-year-old, and we can control it, we can try to understand what it's doing, and we can easily identify when it's trying to fool us. But he's saying that in the very near future it will be the other way around: the AI will be the adult and we will be the three-year-old in our ability to think and understand, and it will be able to manipulate us in multiple ways that we can't even grasp, just like an adult can easily manipulate a three-year-old. And so this is where we're going, and this is why I think the research we talked about several times on this podcast, by Anthropic, trying to understand the inner workings of the AI's quote-unquote black box, is so critical for us to be safe from AI in the future. The question is: is that enough? And is it moving ahead fast enough compared to the speed at which AI is moving? Now, they talk a lot in this timeline about the AI arms race between the US and other nations, mostly China. And they're saying that the incentive to get there first incentivizes cutting corners and generates significant safety risks, especially in alignment research. The scenario also talks about China stealing some US secrets to accelerate their side of the process, specifically what they're calling Agent-2. So there are these different levels of agents that you can get to, and by stealing Agent-2, they can accelerate their development. I must admit that from everything I'm seeing from China recently, I don't necessarily think they need to steal anything from the US. They're very, very close behind, if behind at all, the US development of AI capabilities. But either way, this is not painting a positive picture of where the AI race is going. That being said, they themselves admit that they're not sure this is the scenario that's going to happen.
Even Scott Alexander himself is saying, and I'm quoting: "I'm not completely convinced that we don't get something like alignment by default," meaning the models may just behave, simply because that's the way they turn out. But the thing is, we don't know. And the truth is nobody knows, and that's the really scary part about this whole thing. They're also discussing the potential nationalization of AI development, and noting that there's a very serious problem in the decision of whether it should be nationalized or run by private companies. And now I'm quoting Daniel Kokotajlo: "I would summarize the situation as the governments lack the expertise and the companies lack the right incentives." And so it is a terrible situation, and I tend to agree. So what is the bottom line? The bottom line is this is one possible scenario, right? I don't know, and nobody knows, if that's the scenario that's going to happen. We might have full control. Everything might move a lot slower. There might be hurdles that will slow it down, whether from governments or from people or even from specific industries. But the reality is it's a possible scenario, and that possible scenario is coming in two years, meaning they're claiming that within two years from today, by mid-2027, AI will be able to spin up new AI that is so powerful that we have no ability to control it. And it will also be able to generate physical AI robots at a speed that nobody thought would be possible at that point in time. My goal here is obviously not to scare you, but just to paint potential versions of this future. There's obviously the flip side of that: if you listen to Dario Amodei, he paints a future of abundance where everything is good and we solve world hunger and eliminate diseases and so on and so forth. Where is the future heading? I'm not sure. I must admit I am probably more on the doom side, with everything that I'm seeing, the speed at which I'm seeing things move, and the lack of control. Do I think it's as doom and gloom as AI 2027 presents? I don't think so. But I really do think that we all need to work together to educate each other and to find ways to control this before it's too late.
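Before we move on, it's worth spelling out the simple arithmetic behind these claims, plus a deliberately crude toy model of the intelligence-explosion feedback loop the authors describe. The 50x and million-robots figures are the ones from the scenario above; the 25% compounding rate and six-month cycle in the loop are made-up inputs for illustration, not AI 2027's actual methodology.

```python
# Back-of-the-envelope math behind the AI 2027 discussion above.
# The 50x speedup and million-robots-per-month figures come from the episode;
# the compounding loop at the bottom is a deliberately crude illustration of a
# self-improvement feedback loop, not the authors' actual model.

WEEKS_PER_YEAR = 52

# 1) An AI doing cognitive work 50x faster compresses a human work-year:
speedup = 50
print(f"One human work-year takes the AI ~{WEEKS_PER_YEAR / speedup:.1f} weeks")
# => ~1.0 week, i.e. roughly 50 years of innovation per calendar year.

# 2) A million robots per month adds up fast:
robots_per_month = 1_000_000
print(f"Robots after one year: {robots_per_month * 12:,}")  # 12,000,000

# 3) Toy feedback loop: assume each AI 'generation' makes research 25% faster,
# and faster research shortens the next generation's arrival. The 25% rate and
# the 6-month baseline cycle are made-up inputs, purely for illustration.
speed, months_elapsed, base_cycle_months = 1.0, 0.0, 6.0
for generation in range(1, 9):
    months_elapsed += base_cycle_months / speed  # faster research, shorter cycle
    speed *= 1.25                                # each generation compounds
    print(f"Gen {generation}: month {months_elapsed:5.1f}, "
          f"research speed {speed:.2f}x")
```

Even with these mild made-up numbers, the cycle time shrinks every generation, which is the whole point of calling it an explosion.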
We talk a lot on this podcast about the importance of AI training. Multiple research studies from leading companies have shown that this is the number one factor in the success of AI deployment in businesses large and small. I'm excited to announce that we just opened registration for our spring cohort of the AI Business Transformation course. I've been teaching this course for two years, starting in April of 2023, and hundreds or maybe thousands of business leaders have gone through the course. We had people in the recent cohort, which ended in February, from India, the Emirates, several different countries in Europe, South Africa, many places in the US, Canada, and even Hawaii. So regardless of where you are in the world, this could be a great opportunity for you. In previous courses, we had people as far away as Australia and New Zealand, at weird hours of the day, but still getting a lot of value from this course. The course is four sessions of two hours each, spread over four weeks, on Mondays at noon Eastern time, starting on May 12th. If you are looking for ways to accelerate your personal knowledge and career, or to change the trajectory of your team or your entire business, this is the right course for you. It is really a game changer, and within four weeks, only eight hours plus some homework, you will dramatically change your understanding of how to use AI in a business. We give multiple hands-on examples and use cases, teach you the tools and the processes for how to use them, and end with a detailed blueprint for how to actually implement AI successfully business-wide. So if this is interesting to you, go and check the link in the show notes. You can open your phone and click on it right now and go check all the information about the course. And because you are a listener of this podcast, you can get a hundred dollars off the price of the course with promo code LEVERAGINGAI100. I would love to see you join our course in May. And now back to the episode.
Now, along those lines of doing this in a safer way, DeepMind released a 140-page paper discussing the safe development of AGI. They're predicting that AGI could arrive by 2030, and that it could support drug discovery and personalized learning, democratize innovation, enable small organizations to tackle big challenges, and address multiple big problems in the world. Those of you who have been following Demis Hassabis, the CEO and founder of DeepMind, know that his goal in life is to solve really big problems for the human race and the world. And the goal of this paper, which is called "An Approach to Technical AGI Safety and Security," is very, very clear: telling the world, this is where we're going, but this is how it needs to be done right in order to reduce the risks. The risks they're highlighting are misuse, so basically people using AI to do bad things in the world; misalignment, basically what we just talked about, when the AI decides to do its own thing and goes rogue instead of listening to the humans and what we want it to do; and accidents, basically situations where AI generates negative outputs by mistake. They're pushing for more robust training on the safety side, monitoring at a different level than we have right now, and human-in-the-loop checks to prevent harm at every step along the way. They're also calling for strong regulation, because, and I'm quoting Shane Legg, DeepMind's Chief AGI Scientist: "This will be a very powerful technology, and it can and should be regulated." And they're urging global cooperation on that front. The problem with this is that they're claiming we have time until 2030, while other experts are claiming this may be coming this year or next year, or, as we just heard, superintelligence by 2027. So the timeline they're predicting might not be aggressive enough to stop whatever needs to be stopped, if it needs to be stopped, in time. To make things even more interesting, or complicated, or scary, depending on your point of view, a new MIT study published on April 9th found that AI models lack a coherent value system. Now, that's good and bad, because on one hand, lacking a value system sounds really scary, but on the other hand, the research found that they also lack human-like priorities, like self-preservation, which is something that was actually assumed they do have. They researched multiple models from Meta, Google, Mistral, OpenAI, and Anthropic, testing them across multiple views to see whether they are aligned with human values and other guidelines, and here is what they found. I'm quoting Stephen Casper, an MIT doctoral student who is one of the co-authors: "For me, my biggest takeaway is to now have an understanding of models as not really being systems that have some sort of stable, coherent set of beliefs and preferences." In other words, despite all the training and the alignment and everything we're trying to do, they do not have the same core values that drive humans to make decisions and take action. And that is a very scary thing, especially at the level of acceleration that's happening right now. Now, let's combine this with everything happening in the agent world. There is a lot of agent news, and we've talked many times about how 2025 is the year of AI agents.
We talked last week about Emergence AI and their new announcement that their system now allows AI agents to build AI agents on the fly. So basically, you give it a task, it understands what the task requires, and then it creates sub-agents that will do the subtasks that need to be done in order to complete the main task you gave it. Their CEO, Satya Nitta, said the following: "This is a step towards AI that evolves itself." This is literally what they're saying they're doing. They're not trying to hide it; from their perspective, that is the goal. Combine that with multiple new AI agents running in browsers and controlling them, and we'll give several different examples in this episode. The first one comes from Opera, the browser company. They just introduced Opera Browser Operator. It's another AI browser controller, but very different from OpenAI's Operator and Anthropic's Claude solution. This one is built into the actual browser. OpenAI's and Claude's tools are actually taking screenshots every second and trying to analyze each screenshot to see what's happening, in order to decide what to do next and where to move the mouse. Opera's is built into the browser itself, which allows it to be significantly faster and more accurate in everything it does. Now, there are good and bad sides to the fact that it's not image-based but actually looking at the browser. As I mentioned, it will run faster, with significantly less lag, and it also reduces vulnerabilities, because the browser actually understands what's happening. But it still opens the door to complete misuse of your browser to do, well, anything the agent decides to do.
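To visualize the difference between the two approaches, here's a simplified sketch of a single step of each control loop. The object and method names are hypothetical stand-ins, not Opera's or OpenAI's actual APIs; the point is only where each agent gets its view of the page.

```python
# Simplified contrast between screenshot-driven and browser-native agents.
# All objects/methods here are hypothetical stand-ins, not real vendor APIs.

def screenshot_agent_step(model, capture_screen, mouse, keyboard, goal):
    """OpenAI Operator / Claude-style loop: the model only ever sees pixels,
    so every step is capture -> vision inference -> simulated input."""
    image = capture_screen()                       # slow: render + encode
    action = model.decide(goal=goal, image=image)  # model guesses page layout
    if action.kind == "click":
        mouse.click(action.x, action.y)            # pixel coordinates may drift
    elif action.kind == "type":
        keyboard.type(action.text)

def native_agent_step(model, browser, goal):
    """Opera Browser Operator-style loop: the agent lives inside the browser,
    so it can read the page's element tree (DOM) directly instead of guessing."""
    dom = browser.current_page_dom()               # fast, exact structure
    action = model.decide(goal=goal, dom=dom)      # reasons over real elements
    browser.element(action.selector).perform(action.kind, action.payload)
    # Less lag and fewer mis-clicks -- but the same direct access is also what
    # makes misuse scarier: the agent can drive anything the browser can reach.
```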
Staying on the topic of agents and their potential impact: a new and interesting startup called Artisan is developing an AI sales agent. That's what they're developing. They just raised a $25 million Series A, but the big thing that made them really well known is that their ads literally say, "Stop hiring humans." That's what the ad says. It's obviously very controversial, but the goal and the direction are very, very clear: they want to replace human salespeople. They're claiming that their flagship product, an agent called Ava, now hallucinates in only one email out of every 10,000 it answers. That's significantly less than humans, right? So you might say, oh, it still hallucinates, but it's one in every 10,000. Well, people make mistakes when they write emails as well, and I'm sure they make more than one mistake in every 10,000. And they're now coming out with new AI agents, one called Aaron for inbound messages and the other called Aria for meeting management. And the three are obviously going to work together in tandem to do, well, everything salespeople need to do. Let's keep going. A new release comes from Palo Alto-based Genspark, which just launched what they call Super Agent, an AI system that autonomously handles tasks, doing basically anything in the browser. They are essentially doing what Manus from China is doing, and they were even able to score higher on the GAIA benchmark, which tests these autonomous agents: Manus scored 86%, and they scored 87.8%. For those of you who don't know what Manus is, we've talked about this before: these are fully open, do-everything agents that are really general in their purpose. They're not geared towards being a sales agent or anything else specific. And they can do everything from browsing, to writing reports, to creating webpages, to writing software, combining all of it in a multi-step process. I've seen two mind-blowing demos this week of things that Genspark can do, and it literally blew my mind. In one of them, it created a website that lets you enter specific information and get outputs for specific vendors of a specific company. And this tool doesn't work in seconds, like we're used to from ChatGPT; it sometimes takes hours, but it can spin up these amazing solutions. The other demo was a request to create an ebook that can be used as a lead magnet for a specific target audience. And it went and created this amazing ebook. It's beautiful, it's designed well, it has graphs and charts and the right color scheme, and it's very visually appealing. It looks completely professionally produced, which means it looks like the data is real. I don't think anybody will test it, which is part of the problem, because these tools are now going to generate these really high-end products, and I really doubt anybody will go back and test whether the data behind them is actually accurate. Because what the tool did in 55 minutes may take you two days to verify, you just won't do it, and you'll just believe the information that's in there. And that is another very scary aspect of this whole thing. But this is where we are going with agents. This Super Agent connects to nine different large language models and 80-plus tools that it can use to do what it does. And to tell you how far behind the government is, connecting the dots to what was said by the authors of AI 2027: at the ASU+GSV Summit this week, which is potentially the biggest education summit in the world, the US Secretary of Education repeatedly called artificial intelligence "A1" instead of AI, meaning she can't even consistently say the term correctly when speaking about it. Now, she might have been excited and simply making a mistake, but she made that mistake multiple times. I've used the combination of the two letters "AI" probably thousands of times in the past two years, and I did not say "A1" even once. So what does that tell us, or hint at? It tells us that the people driving some of the biggest and most important decisions of our generation, including how AI will be integrated into the education system, which is a huge opportunity and a huge risk, do not understand basic concepts of what AI is and what its implications might be, while industry is running full steam ahead and keeps accelerating every single day. So where do I stand on this? As I said, on the scale between complete optimism and complete doom, I'm probably 75% toward the doomer side, but I still think we have a chance to get this right if everybody understands where we're going and we work together to educate, push, and voice our concerns to the right people, to potentially slow this madness down and get to the positive results while reducing the risks. This is one of the main reasons I'm doing this podcast: to help educate you so you can make your own decisions about where you think this is going, and take your own actions to help get to the right outcome rather than the wrong one. Okay, and now to rapid fire. There's a lot of rapid-fire news to talk about. First of all, Meta just released Llama 4. This could have been a full item by itself, probably an entire episode by itself.
They actually released two models, called Llama 4 Scout and Llama 4 Maverick. These models are fully multimodal, meaning they have text and video and images and audio all in one seamless model. It's not calling sub-models; it's actually all in one and the same model, which gives it a lot of powerful capabilities. But maybe the most incredible thing they released is that Llama 4 has a 10-million-token context window. For those of you who don't know what a context window is, it's the amount of data that can stay consistently in one chat; beyond the context window, the AI starts getting confused, forgetting things it's talking about, and basically losing coherent thought about the thing you were discussing. The largest we had so far was the experimental version of Gemini 2.0 and then Gemini 2.5, not the ones released in the regular Gemini app, but the versions you can get in Google AI Studio, with 2 million tokens. And now Llama 4 has a 10-million-token context window. This is 5x the largest context window out there, and 50x the next best model, which is Claude at 200,000 tokens; ChatGPT only has a 128,000-token context window. So to give you an idea of the amount of data, this is roughly 15,000 pages. Think about a big book: a big book has about 400 pages. This can fit 15,000 pages of text in a single chat without losing the ability to retrieve, understand, and reason against that data. That changes literally everything, because companies can put their entire data set, or their entire software codebase, in the context window and still work with it in a coherent way. Like all the Llama models, they are open source, meaning they give developers cost-effective alternatives, as well as the ability to massage and edit the actual language model and how it works. And Meta also previewed what they call Llama 4 Behemoth, which, as the name suggests, is a 2-trillion-parameter model they're currently working on. The main goal of it is to allow training smaller models significantly faster. As you probably remember, we've said several times that Meta committed $65 billion to AI infrastructure in 2025 alone, and it is very, very clear that they're all in on their AI efforts. Llama 4 is a very good testament to where that money is going. I personally cannot wait to test it out, and I think this long context window is going to be very appealing to many enterprises, especially since it's open source and you can run it on your own servers. Another interesting announcement from this week: on April 9th, xAI, Elon Musk's AI company, launched the API for Grok 3. I've been using Grok 3 for a while now. I really like it. I actually use it to produce this news episode every single week, but it did not have an API connection. Now it does, and it's priced at $3 per million input tokens and $15 per million output tokens. They also released Grok 3 Mini, at 30 cents per million input tokens and 50 cents per million output tokens. At this price point, they are aligned with Anthropic's Claude 3.7 and slightly more expensive than Google's Gemini 2.5 Pro. Users on X noted that the API has a 131,000-token context window limit versus xAI's claimed 1 million tokens; there has been no comment from xAI on this so far. I will keep you updated on where that stands, but that's another API that is now available for anybody to use to develop solutions. We're gonna talk about Gemini 2.5 Pro later in this episode.
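To put these context-window and pricing numbers in perspective, here's the quick back-of-the-envelope math. All the token counts and prices are the ones quoted in this episode; the tokens-per-page conversion is simply what the 15,000-pages claim implies, and the 100,000-token example workload is my own arbitrary assumption.

```python
# Sanity math on the context-window and API-pricing numbers discussed above.
# Token counts and prices are from this episode; the example workload is arbitrary.

LLAMA4_CONTEXT = 10_000_000   # tokens, per Meta's Llama 4 announcement
GEMINI_CONTEXT = 2_000_000    # Gemini 2.5 in Google AI Studio
CLAUDE_CONTEXT = 200_000
CHATGPT_CONTEXT = 128_000

# The ~15,000 pages claim implies roughly 667 tokens per page:
print(f"{LLAMA4_CONTEXT / 15_000:.0f} tokens/page implied")
print(f"Llama 4 vs Gemini:  {LLAMA4_CONTEXT / GEMINI_CONTEXT:.0f}x")   # 5x
print(f"Llama 4 vs Claude:  {LLAMA4_CONTEXT / CLAUDE_CONTEXT:.0f}x")   # 50x
print(f"Llama 4 vs ChatGPT: {LLAMA4_CONTEXT / CHATGPT_CONTEXT:.0f}x")  # ~78x

def api_cost(input_tokens: int, output_tokens: int,
             in_price: float, out_price: float) -> float:
    """Cost in dollars, given per-million-token input/output prices."""
    return (input_tokens / 1e6) * in_price + (output_tokens / 1e6) * out_price

# Example workload: a 100,000-token prompt producing a 5,000-token answer.
print(f"Grok 3:      ${api_cost(100_000, 5_000, 3.00, 15.00):.3f}")  # $0.375
print(f"Grok 3 Mini: ${api_cost(100_000, 5_000, 0.30, 0.50):.3f}")   # ~$0.033
```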
Now, in addition to all the big news from Google at Cloud Next that we started with, Google also announced that they are going to support MCP in the Gemini universe. I'm quoting Demis Hassabis, the CEO of Google DeepMind: "MCP is a good protocol and it's rapidly becoming an open standard for the AI agentic era." So MCP, I think, is more or less a done deal. Everybody's gonna use MCP; it is taking over the AI world, and now Gemini will be able to connect to it as well. As you may remember, just in late March, OpenAI announced that they're going to support MCP. So now you have the three leading AI development companies in the world, at least in the Western hemisphere, Anthropic, who released MCP, then OpenAI, and now Google, all supporting this architecture. Now, Google, who just recently released Gemini 2.5 Pro, is offering it with no rate limit and pricing it at $1.25 per million input tokens for prompts up to 200,000 tokens, and $10 per million output tokens. That's cheaper than both Claude's and OpenAI's APIs. They have slightly higher pricing above 200,000 tokens, at $2.50 for input and $15 for output, but it's still competitive, and it allows you to use its 1-million-token context window, which until this week was the largest context window available through an API. And now, as I mentioned, Llama 4 has a much bigger context window. That being said, Google has said several times in the past that in their testing they're running it with 10 million tokens, so it'll be interesting to see whether they open up that limitation as well, just to stay competitive with Llama. They're pushing this very aggressively, mostly to developers, so I assume they will try to stay ahead in the context window race as well. The other thing Google announced this week that is really important and interesting: as of April 9th, Google Deep Research is running on Gemini 2.5 Pro Experimental, which is their most advanced model. And I must admit that in my recent tests it's actually showing better results. I was not too impressed with Google's Deep Research before, and I think the new version gives it a big benefit, because now a reasoning model is running behind it, and not just any reasoning model, but the best-performing reasoning model in the world right now based on the Chatbot Arena. Human raters have preferred Deep Research results coming from Gemini at a 2:1 ratio compared to OpenAI's results. What does that tell you? It tells you that we're going to get better and better research capabilities for cheaper and cheaper from these companies, and I think we'll get very quickly to the point where it won't matter which one you choose. Actually, it already doesn't matter which one you choose: it's going to do better research than most humans at a fraction of the time. They've also added a cool feature where you can now create a podcast from the deep research output, similar to the feature that exists in NotebookLM. So you can listen to the output of the research, which you could have done anyway just by copying the output and dumping it into NotebookLM, but it's cool to have it in one unified tool. As we mentioned at the beginning, Google is all in on taking over the AI world, and they're taking a lot of the right steps in the right direction. In an interesting development when it comes to new model releases, OpenAI announced on April 4th that they are going to release o3 and o4-mini in a couple of weeks.
So if you remember, when they released o3-mini, they said it was going to be the last standalone reasoning model, and that the next one was going to be GPT-5, which would merge all the different models into one universe, where the model itself will know what to do and when to do it. But that's apparently more complicated than they thought. Sam Altman said the following on X: "We are going to be able to make GPT-5 much better than we originally thought." He mentioned integration challenges and demand concerns, and hence he's saying GPT-5 is only coming in a few months. So to stay in this crazy race, they're going to release o3 and o4-mini sometime in the next few weeks as their most advanced reasoning models. Another interesting announcement from OpenAI: on April 10th, they rolled out a memory feature, meaning ChatGPT can now reference past conversations to tailor responses to your particular individual needs. This is already available for all Pro and Plus subscribers, and probably Enterprise as well. It is a setting called "reference saved memories" that allows ChatGPT to use prior chats in order to give you better and more personalized answers across all the different modalities, so text, voice, and image generation, potentially reducing the amount of instructions and information you need to give ChatGPT in order to get consistent results. For some of us, like me, that might be very problematic, because I use the same ChatGPT account to support multiple clients and the different businesses that I'm running, so I think it thinks I'm schizophrenic. But for most people, who use ChatGPT wearing the same hat most of the time, this is going to be extremely helpful. This feature is not available in the UK, EU, Iceland, Liechtenstein, Norway, and Switzerland due to different regulatory issues. So if you are in one of these countries, I apologize; you will have to wait, or may never get that capability. Still on OpenAI: they have been in discussions to potentially acquire io Products. For those of you who don't know, io Products is a company founded by Jony Ive, the legendary Apple designer who designed many of the favorite products we all use and love from Apple. Some of his financing came from Sam Altman himself. So OpenAI is now potentially looking to acquire the startup for half a billion dollars, $500 million, in order to accelerate their development of AI-powered physical, wearable products and not just stay in the software world. This news also mentioned that they're considering a partnership instead of an acquisition. It will be very interesting to see where this goes; based on the fact that Sam Altman is a major investor in that company, and based on the fact that OpenAI just raised $40 billion that will allow them to invest in whatever they want, I think it is very likely that this will move forward one way or the other. And now to some interesting developments in the battle between OpenAI and Elon Musk. Twelve former OpenAI employees filed a brief on April 11th supporting Elon Musk's lawsuit to block OpenAI's transition from a nonprofit to a for-profit, arguing that it betrays the company's mission to prioritize humanity's safety over profits. In their brief, those 12 ex-employees claim that OpenAI's nonprofit governance is vital to ensuring that artificial general intelligence benefits everyone and not just shareholders.
They're also raising safety concerns, warning that the for-profit model could prioritize profits over safety. Now, if you remember, I shared with you in the past that several nonprofit and labor groups also joined the claim against OpenAI and their transition to a for-profit entity. And a significant portion of the money OpenAI raised, including the very recent $40 billion round, depends on their ability to convert to a for-profit entity, in the case of the recent raise, by the end of this year. So this is a very short timeframe for them to do this, or they lose tens of billions of dollars, which I assume they don't want to lose. So this battle continues. On the flip side, probably because of that urgency, OpenAI just filed a countersuit against Elon Musk on April 9th, accusing him of a malicious campaign to sabotage its business. It was filed in a California federal court, and it seeks to block Musk from further, and I'm quoting, "unlawful and unfair action" to prevent them from doing the conversion and being competitive in the AI race. The lawsuit cites emails showing Musk once pushed for OpenAI to become a for-profit organization with him as CEO, contradicting his current stand against its commercialization, not to mention the fact that he's now running a lab that competes with them and that is commercializing AI. I don't know where this is going to go, but definitely a lot more gasoline is being poured onto that fire. And the last interesting piece of news from OpenAI is that free-tier users can now generate images with the ChatGPT image generator. They are capped on how many they can generate every single day, but they at least have access to this very useful feature that took the world by storm and added millions of users to the platform in just one week. From OpenAI to ex-OpenAI: Thinking Machines Lab, the company founded by ex-OpenAI CTO Mira Murati, is apparently aiming to raise $2 billion, double its initial target of $1 billion, which may make it the largest seed round in history. And they're going to do this despite the fact that they have no product and no revenue in the near future. They're raising this amount of money just because of the elite team they were able to put together, including Mira Murati herself and researchers like Bob McGrew, the former OpenAI Chief Research Officer, and Alec Radford, one of the innovators behind some of OpenAI's breakthroughs. So do I think this makes any sense? No. This is similar to what Ilya Sutskever is doing, again, another huge name in the AI research field, who already raised $2 billion. So how much return do I think investors are going to see from a $2 billion investment in a company that has no clear product or roadmap to revenue? I don't know. I think it's complete insanity. But this is the world we live in: you bring the right names into a company, and apparently you can raise any amount of money you want. And from OpenAI to their current, and maybe soon-to-be former, AI partner, Microsoft. In an interview with CNBC, Mustafa Suleyman, the CEO of Microsoft AI, shared that their focus is going to be developing what he calls "off-frontier" AI models. Basically, he's saying that they're not going to try to chase OpenAI and develop better models than them, but rather stay at the level of models that are three to six months behind the frontier, while custom-tailoring them to specific business use cases.
That being said, he said, and I'm quoting: "It's absolutely mission critical that, long term, we are able to do AI self-sufficiently at Microsoft." But he also noted that the OpenAI partnership remains vital, at least until 2030. What does that tell us? It tells us that Microsoft cares more about actual business use cases for the people who use Microsoft products than it cares about being at the tip of the spear. I think that makes perfect sense, and it aligns nicely with the recent news about them stopping the development of several new data centers. I think it's a very smart decision by Microsoft to focus on actual use cases. This is what I do in my courses, and this is what I do with my clients: it's never about having the best model, it's about having the best implementation of a model to solve an actual problem you have in your business, and being able to accelerate or do better at specific things. Hence, I think Microsoft is making a good decision in this particular aspect. Connecting the dots back to the beginning of this episode and the ability to take actions within a browser: Microsoft just unveiled Copilot Actions, which enables the AI copilot to perform web-based tasks like booking tickets and making reservations. Surprisingly, their first partners are actually more on the B2C side, including Booking.com, Expedia, Kayak, Tripadvisor, Skyscanner, Vrbo, OpenTable, and 1-800-Flowers.com. So they're focusing on travel, dining, gifting, et cetera, versus enterprise applications. I assume that's where they'll go next, and I assume this is the path of least resistance and least risk. I don't know how many people will jump on that, but it will be interesting to follow. Either way, Copilot has this capability as of now. Now, if you compare that to two of the tools we've already talked about in this episode, Manus from China and Genspark from California, this is a pretty limited capability, because it's restricted to browsing and engaging with these specific websites, where those other two, more general agents can do, well, everything a human could do and a lot more, because they can write code and do things most humans cannot do. But it aligns well with what Microsoft said: they want to address specific tasks and specific needs rather than be at the tip of the spear. Staying on Microsoft: they released an interesting research study that found that AI models, including top-of-the-line Claude 3.7 and OpenAI's o3-mini, are really bad at solving software bugs. So while they're getting better and better at writing code, both of these models solved less than half of the 300 debugging tasks in the SWE-bench benchmark. The reason they claim this is happening is that there's no good way to train them for it. It's relatively easy to train them to write code, because you can just show them code that works. But when it comes to understanding what's not working, what they cannot replicate is the thinking process of experienced developers, because that data doesn't exist. It happens in the developer's brain, and there's no step-by-step record to train them on. And they're claiming that the development of such data is necessary in order to take that next step in debugging software. Staying in this universe of debugging and writing code: GitHub announced on April 4th that Copilot code review is now available to all paid subscribers.
It was previously in preview, and now any paid user can get a code review on the fly as they're developing code, or on a scheduled cadence. It currently covers languages like C, C++, Kotlin, and Swift, with more coming soon. So if you are a developer in the GitHub environment, you'll be able to run code reviews on your own with AI, without waiting for your team lead or somebody else to do them. I assume this is going to be just a preliminary step in most companies, followed by an actual human code review, but it will definitely accelerate the code development process in every company using this tool. And now to Amazon. Amazon just launched Nova Sonic, which is a full suite of voice AI capabilities. It includes speech-to-speech, text-to-speech, speech-to-text, literally anything you want, all in one unified AI model. They're claiming that this unified environment can model not just what humans say, but also how they say it, so it understands the nuances of how people speak and can respond accordingly. It's priced about 80% lower than GPT-4o's voice capabilities through the API, and it's already available through the Amazon Bedrock API. This is obviously a big advancement that will enable anybody who wants to create voice agents in healthcare, customer service, et cetera, to build them through an API that, per its creators, is very, very powerful and cheaper. And from Amazon to Anthropic. Anthropic just launched Claude Max, a premium subscription with a $100-per-month tier and a $200-per-month tier, directly competing with OpenAI's ChatGPT Pro tier, which is $200 a month. The $100 tier offers five times higher usage limits than the $20 Claude plan, and the $200 tier offers 20x higher usage limits compared to the $20 tier. It is an interesting approach, because OpenAI's Pro tier claims unlimited access to the most advanced capabilities. They're also promising that subscribers to these higher tiers will get first dibs on new models and features as they're released, such as the upcoming voice mode. And from Anthropic themselves to a very good testament to the code-writing capabilities of Anthropic's models: Block, the company behind Square and the very successful Cash App, is deploying more and more of its Claude-powered Goose coding agents. They're saying that 4,000 of its 10,000 employees are now going to be using it, double the number from just one month ago. They're also claiming that every engineer using it is saving eight to ten hours every single week. If you do quick math on 4,000 people at even just eight hours a week, that means they're saving, or if you want, accelerating their development by 32,000 engineering hours, roughly 800 work-weeks of capacity, every single week. That is a very, very big claim to make, and again, it shows you how fast this world is moving: they doubled from 2,000 people to 4,000 people in a month.
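Here is that quick math spelled out. The 4,000 engineers and the eight-hours-per-week figure come from Block's claim as reported; the 40-hour work-week used for the conversion is my own assumption.

```python
# The quick math behind Block's Goose claim, spelled out.
# 4,000 engineers and 8 hours/week are from the episode;
# the 40-hour work-week conversion is an assumption.

engineers = 4_000
hours_saved_per_week = 8        # low end of the claimed 8-10 hours
hours_per_work_week = 40        # assumed

total_hours = engineers * hours_saved_per_week        # 32,000 hours/week
engineer_weeks = total_hours / hours_per_work_week    # 800 engineer-weeks
engineer_years = engineer_weeks / 52                  # ~15 engineer-years

print(f"{total_hours:,} engineering hours saved per week")
print(f"= {engineer_weeks:,.0f} engineer-weeks of work, every week")
print(f"= roughly {engineer_years:.0f} engineer-years of capacity, every week")
```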
What Goose allows you to do, in addition to writing code if you're an engineer, is spin up mini applications on your own without knowing how to write code. You tell it what you need to do in your work and what can help you do it faster, and it will create that mini application for you, without the need to write code. I must admit that this week I did a lot of work in Replit, and I'm gonna record a separate episode about it, but I was blown away by how capable Replit is at creating applications without me writing any code. I don't know how to write code, and as I mentioned, I'm gonna record a whole separate episode about this, but I totally see how any employee who can spin up whatever tools, capabilities, and applications they need in order to do their job faster can dramatically change the way companies work, as long as it maintains the defined guardrails and the safety and security components that need to be there for an enterprise-level solution. Now back specifically to Block: they aim to save 30% of employee time by the end of 2025, shifting the focus from coding to innovating, per them, as they try to push more customer-facing AI products and not just internal capabilities. Now, if you can save 30% of your entire company's employee time and you cannot grow by at least 35%, it usually means one thing: you are going to let people go. If they can grow faster because of that, that is fantastic, and I really hope that's the direction this is going to go. Staying on the same topic of AI's impact on the workforce and jobs in the world: Shopify CEO Tobi Lütke now mandates that employees prove tasks cannot be done by AI before requesting new hires. In a memo that was shared on X, Lütke wrote the following: "Before asking for more headcount and resources, teams must demonstrate why they cannot get what they want done using AI." He also stated that using AI is a fundamental expectation for all 8,100 employees of the company in their daily work. Now, Shopify's headcount dropped from 8,300 to 8,100 at the end of 2024, after a 10% cut in 2022 and a 20% cut in 2023. So you can see a very clear trend line of reducing the number of people who work at the company while pushing a more AI-focused approach. I think this is going to happen in most companies around the world, and as I've mentioned many times before, I don't think we are ready for that as a society or as an economy. When many, many people in high-paying jobs become unemployed, the economy as a whole stops, and then the fact that specific companies can, in theory, make more money becomes irrelevant. Because if nobody's going to buy things, because they don't have money, there are going to be fewer Shopify stores, and then there's less of Shopify, and this whole effort will actually yield negative results for them and for everybody else. But I don't see this stopping. I literally see this as an inevitable future that we'll have to deal with and figure out as humans on this planet. There is a lot more news we couldn't get to this week; it will be available in our newsletter, and there's a link in the show notes for you to sign up to learn even more stuff that we couldn't get to in this episode. But I will end with one last release that is very interesting to me, and that is from Canva. Canva, the visual design tool, has announced Visual Suite 2.0 at their Canva Create event. They're integrating things that were never part of Canva before, including the ability to write code and create applications, and to create spreadsheets with analysis and data presentation from those spreadsheets. They presented features called Magic Insights and Magic Formulas, which basically allow you to dump your information into Canva and generate amazing visualizations to present your data properly. It does not require any coding skills; you just speak your ideas and it will generate them for you. This is a very interesting move by Canva.
If you remember, I said that the move by OpenAI, with its ability to now generate images and designs and anything else you want, will dramatically hurt Canva. So this is Canva growing in more directions than just design, knowing that design alone is at risk. That's a very interesting and smart move by Canva. I don't know how many people are going to use Canva as a spreadsheet tool or a coding tool, but they have 240 million active users. So they have a huge user base that may actually shift, or at least some of them may shift, to doing everything in Canva, versus doing some of it in Canva, some of it in Google, and some of it in Microsoft. A very interesting move, and it will be very interesting to see where it goes. That's it for this week. Don't forget, our next AI Business Transformation course cohort starts on May 12th. If you are looking for a structured way to accelerate your practical AI knowledge, which can dramatically impact your career, your company, or your team, you should join this cohort. We run the open cohorts only once a quarter, so the next one will probably happen in September, and you do not want to wait that long. Keep on exploring AI, keep on sharing what you learn, and if you're enjoying this podcast, share it with other people and give us a review on Apple Podcasts or Spotify, and have an awesome rest of your weekend.