
Leveraging AI
Dive into the world of artificial intelligence with 'Leveraging AI,' a podcast tailored for forward-thinking business professionals. Each episode brings insightful discussions on how AI can ethically transform business practices, offering practical solutions to day-to-day business challenges.
Join our host Isar Meitis (4-time CEO) and expert guests as they turn AI's complexities into actionable insights and explore its ethical implications in the business world. Whether you are an AI novice or a seasoned professional, 'Leveraging AI' equips you with the knowledge and tools to harness AI's power responsibly and effectively. Tune in weekly for inspiring conversations and real-world applications. Subscribe now and unlock the potential of AI in your business.
Leveraging AI
170 | AGI and ASI are coming and we are NOT READY, Is GPT 4.5 better than 4o? And more AI news for the week ending on March 8th, 2025
Are we truly on the brink of Artificial General Intelligence (AGI)? Or are we underestimating how unprepared we are for what’s coming?
In this episode of Leveraging AI, we dive into the latest AI breakthroughs, the geopolitical arms race for AI supremacy, and the government’s struggle to keep up. With insights from top AI policy advisors and newly released research from Eric Schmidt, Dan Hendrycks, and Alexandr Wang, we break down what’s at stake and why business leaders must pay attention now.
Key Takeaways from This Episode:
- The government knows AGI is coming—but is it doing enough to prepare?
- China vs. the U.S.: The high-stakes battle for AI dominance and what it means for global power.
- Superintelligence strategy: How AI safety experts see the future (and why it sounds like a sci-fi thriller).
- AI’s impact on business & jobs: Will automation lead to mass displacement or massive opportunities?
- AI agents, humanoid robots & the changing internet: Why businesses need to rethink their digital strategies NOW.
Links & Resources Mentioned:
🎙️ Listen to the full Ben Buchanan interview here: https://open.spotify.com/episode/6u7lNXxdUISGU0TUsO3k3l?si=gNaCKJFdQHWDrxgekRPOBQ
📑 Download the “Superintelligence Strategy” paper: https://www.nationalsecurity.ai/
About Leveraging AI
- The Ultimate AI Course for Business People: https://multiplai.ai/ai-course/
- YouTube Full Episodes: https://www.youtube.com/@Multiplai_AI/
- Connect with Isar Meitis: https://www.linkedin.com/in/isarmeitis/
- Free AI Consultation: https://multiplai.ai/book-a-call/
- Join our Live Sessions, AI Hangouts and newsletter: https://services.multiplai.ai/events
If you’ve enjoyed or benefited from some of the insights of this episode, leave us a five-star review on your favorite podcast platform, and let us know what you learned, found helpful, or liked most about this show!
Hello and welcome to a weekend news episode of the Leveraging AI Podcast, the podcast that shares practical, ethical ways to leverage AI to improve efficiency, grow your business, and advance your career. This is Isar Meitis, your host, and we have a very interesting week this week. There haven't been any huge releases, even though there are some follow-ups and some announcements, but not as much as has been happening in the past few weeks. But we did have a few really big, thought-provoking things released in the past few days, and it's gonna be very interesting to talk about them. We're going to talk about how ready or unready our world and our government are for what is coming. We're gonna do that with the help of an interview with Ben Buchanan, who was the advisor for artificial intelligence in the Biden administration, as well as a very interesting paper released by Eric Schmidt, the ex-Google CEO, Dan Hendrycks, the AI safety expert, and Alexandr Wang, the CEO of Scale AI. So two different inputs that will give us a lot of interesting things to think about. And a long list of rapid-fire items, including the release of GPT-4.5 to every paid member. So let's get started. We'll start, as I mentioned, with Ben Buchanan, the former special advisor for artificial intelligence to the Biden administration, who was interviewed on the Ezra Klein Show. And I'm going to start with a quote from Ezra Klein himself, because I think it's brilliant and it will help us set the stage. "For the past couple of months I've been having a strange experience where person after person, independent of each other, from AI labs, from government, has been coming to me and saying it's really about to happen. We're really about to get to artificial general intelligence. And what they mean is what they have believed for a long time: that we are on a path to creating transformational artificial intelligence, capable of doing basically anything a human being can do behind a computer, but better than most human beings at doing it. And before, they thought maybe it would take 10 or 15 years, but now they believe it's coming inside of two or three years, inside of Donald Trump's second term. And they believe it because of the products they're releasing right now. They believe it because of what they're seeing inside the places they work. I think they're right. If you've been telling yourself this isn't coming, I really think you need to question that. It's not Web3, it's not vaporware. A lot of what we're talking about is already here right now, and I think we are on the cusp of an era in human history that is unlike any of the eras that we have had before. We're not prepared, in part because it's not clear what it would mean to prepare. We don't know what it would look like, what it will feel like. We don't know how labor markets will respond. We don't know which country is going to get there first. We don't know what it will mean for war. We don't know what it will mean for peace. And as much as there is so much else going on in the world to cover, I think there's a good chance that when we look back at this era in human history, this will have been the thing that mattered. This will have been the event horizon, the thing that the world before it and the world after it were just different worlds." So this is a quote straight out of the beginning of that episode, the interview with Ben Buchanan. This is, again, Ezra Klein talking about what he's seeing. And I agree with him.
I think we are on the verge of a very dramatic transformation in human history. Society, government, politics, economy, everything we know is going to change dramatically. The podcast episode itself is called "The Government Knows AGI Is Coming." I highly recommend listening to the whole thing. It's about an hour long, it's really interesting, and they touch on multiple points; I'll mention just a few. Buchanan stresses the urgency, basically saying that it's very, very clear that the leading forces driving AI forward are certain that AGI is just around the corner, and that they are loudly shouting to anybody who will listen that it's coming and nobody is ready. Another big topic that Buchanan talks about is the competition with China, basically stating that achieving AGI or ASI first will grant profound economic, military, and intelligence capabilities that will be very, very critical to who comes out on top in this race. Now, another interesting point that he mentions that relates to that: he's saying that basically every previous technological revolution we had was initially funded by the US Department of Defense. This has to do with the internet, various types of communication, and almost anything you can think of. The DOD was involved in funding, and hence in understanding, and also in controlling to an extent. And this is the first time that the Department of Defense is not driving the bus and is not involved in the process, and hence the US government has a lot less control over, and understanding of, what is happening, which increases the risk of this falling into the wrong hands or going in the wrong direction. Now, I don't necessarily agree that the government is the only way to go, but it is very clear that, as an example, from a security perspective the leading labs are not even nearly secure when it comes to vulnerability to cyber attacks and people stealing their secrets. We've seen several examples of that in the past. They try to address it, but it's not their top priority, and if this were a DOD project, or even a DOD-monitored project, it probably would have had significantly higher levels of security. They also talk about the tension between safety and opportunity, and as we've seen in many different cases, including the speech from our vice president that we talked about, in the current administration the opportunity, and maybe the risk that the other side will get there first, is taking the driver's seat instead of the safety of what might happen if we get this wrong, which is different from the previous administration. Like me, Buchanan also believes that AI will likely have a substantial impact on labor markets, most likely driving significant mass displacement. Now, he's saying that there's no clear path for the government right now to do anything about it. He's talking about how slow governments in general, and the US government in particular, are in responding to changes in tech. And in this particular case, because the change is so fast and also so profound in its impact on everything, he's claiming that the US government is not equipped in its current setup to handle these changes and provide any meaningful control over the outcome. Now, I don't know if I'm excited or terrified by what's happening right now with DOGE and Elon Musk and his process to eliminate different government overheads, which, in general, I'm very much for.
I think when it comes to the push forward with AI, I personally think they're going beyond the optimum point, and there's gonna be not enough control over what US companies can even release. That being said, the race with China is very obvious. I don't know if it's winner-takes-all, but it's very clear that whoever gets to advanced AI first will have significant benefits. And Buchanan also talks about potential partnerships with China and attempts to collaborate at least on AI safety, while still maintaining strong export controls and trying to keep China behind as much as possible. Bottom line for this particular interview: (a) go listen to it, I'm gonna put a link in the show notes; (b) it is very, very clear that the government is not in control of what's happening in AI right now, and that the current administration is pushing forward as fast as possible to be ahead of China while throwing safety to the back seat. Will that backfire? I guess we are all going to find out. The other thing that is very, very clear is that AGI is just around the corner and that nobody is ready for that. The second really interesting piece that I found this week is called "Superintelligence Strategy," and it was written, as I mentioned, by three people such that even if only one of them were involved, it would be worth reading: Dan Hendrycks, who's an AI safety expert, Eric Schmidt, the former Google CEO, and Alexandr Wang, the CEO of Scale AI. All three are globally known experts on the topic of technology and AI specifically. And they released a very detailed paper that talks about what happens on the road to, and once we achieve, superintelligence. What is superintelligence? AI systems that are smarter than humans in all domains, in some cases significantly smarter than humans. And they're saying that this could happen within this coming decade. So by 2035, we may have superintelligence available to anybody who can achieve it. They're saying that this is basically the next Cold War, and controlling and owning these kinds of capabilities, and making sure that the other side knows that you control them and that you can generate damage to their side, will be the new nuclear weapons. They're talking about a concept called Mutual Assured AI Malfunction, or MAIM as an acronym, and they define it as a nation's ability to cripple rival AI by cyber attacks or physical strikes on data centers, to avoid being outgunned in this race. Now, they're taking the position of a winner-takes-all scenario, where the first country that gets to superintelligence could dominate economically, militarily, and in other aspects, leaving everybody else behind, especially if they can use that in order to slow down the progress of others. This makes the race obviously even more fierce than before, at least on a country level, if not necessarily on a company level. This is a very long document, and what I suggest to all of you, and again, I will put the link in the show notes, is to download the PDF and upload it to NotebookLM, and then either listen to the summary or create yourself a summary document in NotebookLM and read that, and then go back to the original document to read the specific pieces that you are more interested in. But it's very, very clear that the top minds in the world think we're on a relatively short final to getting AGI, and then a slightly longer final to achieving ASI.
Artificial superintelligence. And again, nobody's ready for the implications, and yet everybody wants to run there faster just to make sure they're there first. Now, since we're speaking a lot about the race with China, let's do a quick summary of what's happening in China right now. We all remember the DeepSeek moment from just a month and a half ago, when DeepSeek released V3 and then R1, claiming that they did it significantly cheaper. I don't know if they trained it significantly cheaper, but they're definitely running it significantly cheaper, so they're allowing you to use a very powerful AI for significantly less than you can with US companies. They're claiming, by the way, a theoretical profit margin of around 545% if all their users were paid users. That's obviously very far from the case, but they did some very quick math that says daily operations at their current volume cost $87,000, and if all their members were paying members, they could make $562,000 every single day (see the quick back-of-the-envelope sketch at the end of this segment). That's really interesting, other than the fact that it's a completely hypothetical number and most of their users are not paying users. But what it gives you is the understanding that it's possible today to create very powerful AI systems that actually run profitably, versus most of the leading labs right now that lose billions of dollars every single year. Now, while the dust hasn't fully settled on DeepSeek, we have some new models coming out. We talked about Qwen from Alibaba when it was released just a few days after R1. Well, they have now released their thinking, reasoning model, called QwQ-32B. The interesting thing about this model is that it's a thinking model trained on top of Qwen 2.5. It has significantly fewer parameters than many of the other reasoning models, but it's achieving the same level of benchmarks. So it's a similar approach to what we've seen with R1, where a smaller, significantly cheaper model can achieve comparable outputs to things like DeepSeek R1 and o1-mini from OpenAI. This, first of all, puts Alibaba on the global map with its capabilities, and it's driving a significant push forward across entire industries in China, because now they have models like R1 and like the new Qwen to integrate into new processes and new tools and new products that they're developing, without needing to rely on US and other foreign AI capabilities. That's not the final news from China. There's a new force growing virally in China right now, a company and a product called Manus. And Manus, spelled M-A-N-U-S, is a new agent tool that, per the reports from China, combines the capabilities of deep research, plus ChatGPT Operator, plus Claude's computer use, all combined into one. Their website says they're focused on real-world complex tasks, and they give the examples of creating and executing an itinerary for a trip to Japan, providing in-depth analysis of Tesla stock, creating interactive courses for middle school teachers, comparing different insurance policies, and so on. So day-to-day use cases aimed either at the business market or at the personal market, where an AI reasoning model and a deep research model, combined with the ability to take action in a browser, are all combined into one. This is something that hasn't been released in the West yet, and I assume it is coming in 2025, maybe soon, and it will change a lot of what we know.
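Circling back to the DeepSeek numbers above, here is a quick back-of-the-envelope sketch of that claimed theoretical margin, using the rounded daily figures quoted in this episode rather than DeepSeek's exact published numbers:

```python
# Back-of-the-envelope version of DeepSeek's theoretical-profit claim,
# using the rounded figures quoted in this episode (illustrative only).
daily_cost = 87_000                   # estimated daily inference cost in USD
theoretical_daily_revenue = 562_000   # revenue if every user were a paying user

daily_profit = theoretical_daily_revenue - daily_cost
profit_margin = daily_profit / daily_cost  # profit relative to cost

print(f"Theoretical daily profit: ${daily_profit:,}")
print(f"Theoretical profit margin: {profit_margin:.0%}")  # ~546% on these rounded numbers
```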
And if you are running a business, any business that depends on web traffic, SEO, or online marketing for a significant portion of its revenue, you need to start thinking about this, because it means that more and more of the traffic coming to your website is gonna be agentic rather than human. Meaning the way the website needs to be built changes: it needs a completely different backend to appeal to and appease agents rather than humans (a hypothetical sketch of what that might look like follows at the end of this segment). And the front end that we invested so much in, in order to understand our customers, what they think, what their pain points are, how to communicate with them, and how to attract them, will become less and less relevant. And that is a complete shift from everything we've learned since the introduction of the internet. But it's a shift that's coming, it's coming really fast, and it's gonna have profound implications for anybody who's running an online business, from giant companies all the way to small startups and even solopreneurs. Another big thing that we're seeing from China right now is Huawei's new chipset, the Ascend 910C, which is reportedly the first chipset from China that is starting to get close to NVIDIA's H100s. They're still talking about 40 to 60% of the H100's capacity, but they're planning to start churning them out at a hundred thousand units by the end of this year, which now makes it a significant option for Chinese companies to power their AI ambitions. Now, this comes in tandem with another piece of news saying that Chinese companies are able to get their hands on the latest and greatest NVIDIA chips. So despite the US export controls, it is clear that Chinese firms right now are acquiring top-of-the-line Blackwell AI chips, and they're buying them through buyers in Malaysia, Taiwan, and Vietnam that are legally purchasing them and illegally reselling them, or at least portions of them, to Chinese companies. While Huawei is claiming that they're catching up to NVIDIA, NVIDIA's CEO himself is basically saying that their new GB200 chips are 60 times faster in token generation than the Chinese competitors. That's obviously very significant, and generating tokens is becoming a more and more important part of this race, because all these reasoning models, compared to the previous generation of models, require significantly more test-time compute when generating tokens, because the reasoning part actually happens when tokens are generated rather than in the training part of the process. And so being able to do that makes a very, very big difference to the output, and NVIDIA chips are still much superior to their Chinese counterparts on that particular aspect. And so the combination of China being able to make chips that achieve roughly half the capacity for training, and maybe complementing that by illegally getting their hands on NVIDIA chips for inference, will allow China to develop advanced AI capabilities. And the US export controls need to find other ways to monitor and control this in order to sustain the gap, which is very critical based on the first two topics we talked about in this episode. Now, if you think that AGI and ASI are the only two things we need to worry about when it comes to how our world is gonna look, let's switch gears and talk about humanoid robots. We've been talking about this a lot in previous episodes, but Figure AI, which is one of the leading robotics companies in the world right now, just raised $1.5 billion in funding, pushing their valuation to $39.5 billion.
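Circling back to the point above about agentic traffic needing a different backend, here is a minimal, hypothetical sketch of one way a site might serve the same data twice: a human-facing HTML page and a structured JSON endpoint an agent can parse directly. The framework choice, routes, and field names are all illustrative assumptions, not an established standard.

```python
# Minimal sketch (Flask) of serving the same catalog data two ways:
# an HTML page for humans and a structured JSON endpoint for AI agents.
# All routes, names, and fields here are illustrative assumptions.
from flask import Flask, jsonify, render_template_string

app = Flask(__name__)

PRODUCTS = [
    {"sku": "A-100", "name": "Espresso Machine", "price_usd": 349.0, "in_stock": True},
    {"sku": "B-200", "name": "Burr Grinder", "price_usd": 129.0, "in_stock": False},
]

@app.route("/products")
def products_for_humans():
    # Human-facing page: branding, persuasion, and visuals would normally live here.
    return render_template_string(
        "<h1>Our Coffee Gear</h1>"
        "<ul>{% for p in products %}<li>{{ p.name }} - ${{ p.price_usd }}</li>{% endfor %}</ul>",
        products=PRODUCTS,
    )

@app.route("/api/products")
def products_for_agents():
    # Agent-facing endpoint: no layout, just unambiguous structured facts
    # that a shopping or research agent can parse and act on directly.
    return jsonify({"products": PRODUCTS, "currency": "USD"})

if __name__ == "__main__":
    app.run(port=8000)
```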
This is a huge leap from their $2.6 billion valuation in 2024, just a year ago. Now, if you remember, we discussed that they already have several robots being tested in BMW factories, working on assembly lines, and they're planning, according to CEO Brett Adcock, to mass produce a hundred thousand robots in the near future, all built on their in-house Helix AI platform. Staying on the humanoid robots topic, many of you have probably seen, and if not, go check it out and I'll put a link in the show notes, that Unitree's G1, which is their smallest model, has several kung fu videos released online that are absolutely insane. I literally watched this video I don't know how many times to see if it's real, or maybe AI generated, or computerized, but it's actually a robot doing kung fu, even doing a roundhouse kick, kicking a stick that a person is holding. This level of dexterity and control is incredible, and it just shows you how capable these robots are getting right now. Now, to put things in perspective, this G1 robot is already being sold, and its price tag is $16,000. So while it's not a cheap toy, it's something that many people can afford to put in their house. Now, I wouldn't put one in my house right now, because I don't know what it might do to my kids or other things in my house, but once there are clear controls on the safety side of things, paying $16,000 to have a robotic assistant that can do any chore you want done in the house or in the yard is not a crazy amount of money. And when you think about businesses, there's no reason for a human barista, there's no reason for people packing packages in logistics centers, there's no reason for human cleaning services. All of these things can be replaced by robots at a price point that even right now is reasonable. If you take this 2, 3, 5, 10 years out, the prices are gonna be significantly lower, the safety is going to be significantly higher, and these robots will be able to take blue-collar jobs left and right on almost every task we know. Now, to take that to the next level, UBTech, which is a Chinese company that makes humanoid robots, just showed an amazing demo of their S1 robots working together, collaborating on complex factory tasks. So dozens of these humanoid robots are working together on a single assembly line, doing assembly, inspections, sorting, all of that, powered by a new system they're calling the BrainNet framework, which allows a super-brain of all these robots to talk to one another and lets them collaborate in a very effective way. If that sounds really scary to you, I'm on the same page with you, because if you take that to the Terminator scenario, we are in very big trouble. If you take that to the factory scenario, we're only in trouble because of the question I mentioned before: what are we gonna do with all the factory workers around the world once you have millions of these robots that can do the work significantly more effectively than any person, and they do not require breaks, and they do not take vacations, and they do not have medical issues or medical insurance, and all they need is to be charged and maintained every now and then, for significantly less money than any human employee at a significantly higher output?
Where does that lead us, and how quickly will we get there? Those are two questions I don't think anybody can answer, but I think we'll need to start finding answers within the next three to five years, probably, when these robots start to mature and are manufactured at high scale. Now let's switch to rapid-fire items. As I mentioned, the first item is that GPT-4.5 has landed in every paying member's ChatGPT account. So if you have the $20-a-month plan or the Teams account, you can now use GPT-4.5. We talked about it last week. This is the largest model ever by OpenAI, maybe the largest model ever, period. And in theory, its strength is in EQ: it feels more human and it's a better conversational partner than the previous models. I must admit that in my original tests, I couldn't see a significant difference between GPT-4o and GPT-4.5. I actually stumbled upon a very interesting post on X by Andrej Karpathy. Karpathy was part of the founding OpenAI team, led AI at Tesla, and is now independent, sharing very interesting things about everything AI, because he has very in-depth experience. He did a survey on X, basically a mini chatbot arena, where he gave the same five prompts to GPT-4o and GPT-4.5 and then showed the results to users on X, asking them to vote on which option they preferred. On four out of the five prompts (80%), respondents preferred GPT-4o over the new GPT-4.5. Which basically comes to show you that to see the benefits of 4.5, you need something that is very nuanced and that aligns with its strengths. I remind you, and we talked about this in depth last week, that on the token side, if you're using it through the API, 4.5 is roughly 15x more expensive than GPT-4o, without any very clear benefit (a rough cost sketch follows at the end of this segment). So I think almost nobody is using it through the API right now. And on a personal note, I must admit, like I said, I couldn't find any big differences until yesterday, when I was trying to develop a new custom GPT. When I develop custom GPTs, I usually start with a regular chat with ChatGPT, then I try to finesse the process, and then I sometimes ask ChatGPT to help me write the instructions for the custom GPT. But in this case, I was testing 4.5 just to see how it would do on that particular task. The task has to do with analyzing the core values and the why behind companies based on their websites. And then, after I was able to do it successfully in the chat, I took the same exact instructions and put them into a custom GPT. Now, it's unclear what is running behind custom GPTs. Some people say it's GPT-4 Turbo, some say it's GPT-4o; I couldn't find an answer I'm a hundred percent certain of, but I am certain it's not running GPT-4.5, because the differences on that particular task were very extreme. GPT-4.5 was very good at actually reading between the lines and understanding the why of a company and the core values of a company based on what they write on their website. And GPT-4o, or basically whatever runs the custom GPT, did an okay job, but not even close to the same level. So there is something there. And again, it's the first time I've seen it, and I guess it really has to do with these kinds of tasks. If you have tasks that are more emotional and conceptual than practical and factual, then GPT-4.5 will do a better job. And for everything else, you probably won't be able to tell the difference, and hence it's not worth switching.
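To make the API price gap above concrete, here is a rough cost sketch for a fixed workload at two per-million-token prices. The prices and workload numbers are placeholders for illustration, not figures from OpenAI's pricing page, so plug in the current published rates before relying on it.

```python
# Rough cost comparison between two models for the same workload.
# Prices are illustrative placeholders; check the provider's current pricing.
def monthly_cost(requests_per_day, tokens_per_request, price_per_million_tokens):
    tokens_per_month = requests_per_day * tokens_per_request * 30
    return tokens_per_month / 1_000_000 * price_per_million_tokens

workload = dict(requests_per_day=2_000, tokens_per_request=1_500)

cheap_model_cost = monthly_cost(**workload, price_per_million_tokens=10)     # e.g. a GPT-4o-class rate
premium_model_cost = monthly_cost(**workload, price_per_million_tokens=150)  # e.g. a GPT-4.5-class rate

print(f"Cheaper model: ${cheap_model_cost:,.0f}/month")
print(f"Premium model: ${premium_model_cost:,.0f}/month "
      f"({premium_model_cost / cheap_model_cost:.0f}x more)")
```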
Staying on the topic of OpenAI and pricing: in an article in The Information on March 6th, there was news that OpenAI is reportedly planning to charge up to $20,000 per month for specialized AI agents designed to support PhD-level research. They're obviously not gonna release this to the general public. They haven't even mentioned exactly who it's going to be released to, based on what criteria, and when, but it's very obvious that they're seeing in their labs stuff that is gonna be worth more than $20,000 a month to the right entities. I can definitely see big research organizations, whether in academia or in large corporations doing research across different fields, as well as obviously the Department of Defense, paying these kinds of amounts without even blinking to get the benefits on the other end. So it just shows you that beyond what we're using day-to-day, and people who are considering whether they should pay the $20 a month for specific tools (and the answer is yes, pay the $20 a month, probably for multiple tools, because if you know what you're doing and have the right use cases, it will pay back very, very quickly), there's a huge market for significantly more expensive, more specialized tools out there, and they're coming. Another cool piece of news from OpenAI has to do with Sora. Sora is their video generation model that everybody was waiting for for about a year and that they finally released in December of 2024. And now they are planning to integrate some of it into ChatGPT. So Sora right now runs on the Sora website, not within ChatGPT itself, but if you have a paid account, you can still use it. Well, based on a Discord office-hours session that happened at the end of February, they're planning to bring Sora into ChatGPT, where you'll be able to have a chat and generate videos on the fly right there. It will not have some of the fancy capabilities that the Sora website has, it will be less sophisticated, but for most users, it's probably gonna be enough. The other big benefit of having it within ChatGPT versus just on its standalone website is obviously the fact that it understands the context of what you're working on. This is one of the main reasons I now like generating images in Gemini more than in Midjourney. I still think Midjourney as a tool provides you more control and more capabilities, but it lacks context. It does not understand what you're working on, and you cannot iterate in the same way that you can within a chat. So having the ability to do the same thing with video, where the video model understands what you're working on, your customers, the project you are in, the previous steps that you took, the images and why they're there, and the thing you're trying to achieve, will be huge for video creation as well. So I'm personally really excited. They also mentioned that they're bringing the ability to generate images with the Sora engine, which will replace DALL-E 3, which I must admit is a big disappointment compared to everything else that is out there now. It was pretty cool when it came out, but it's completely irrelevant right now because it's not even close to the abilities of some of the other tools.
So if the integration of Sora into ChatGPT also brings the same level of quality that we see from other tools, that will be huge, and it will really make ChatGPT the all-in-one go-to tool for any content creator, because you'll be able to create text, analyze text, understand the context, understand your clients, have memory, generate images, and generate video all within a single chat. I can't wait. By the way, if you are in Europe, you now have access to the Sora website, which you didn't have before. So the EU, UK, Switzerland, Norway, Liechtenstein, and Iceland all now have access, as of the end of February, to Sora. So if you have a paid account, you can go and check it out. And the last big piece of news about OpenAI is that on March 4th, a federal judge in San Francisco denied Elon Musk's bid to halt OpenAI's shift from nonprofit to for-profit. We covered this case in many of the previous sessions, and it wasn't very clear where it was going. Well, now it is clear OpenAI can move forward, at least on this particular case, with their transition from a nonprofit to a for-profit, which will allow them to keep the huge amount of funding and the $300 billion valuation that came with it, because they were able, at least in California, to move ahead and make the switch to the new structure. Now, nobody knows yet exactly how that's gonna look, and there are a lot of other hurdles, but at least from this lawsuit's perspective, they're in the clear. Now, from OpenAI to one of the other giants in the AI space: Anthropic. Anthropic just finalized a $3.5 billion Series E raise at a jaw-dropping $61.5 billion valuation, per Bloomberg. That's up from only a $16 billion valuation a year ago, so 16 billion to 61.5 billion in just one year. I personally really like Claude. I still use it a lot. I must admit it has lost a lot of its edge when it comes to its ability to sound more human; I think GPT-4o and GPT-4.5 are dramatically closing the gap, and I really like the outputs from Grok as well. That being said, Anthropic hasn't released a major model in over a year. They released 3.7, but not a major new generation. And it will be very interesting to see what comes out when they release Claude 4, which I assume is in the near future. In an interview this past week, Dario Amodei, their CEO, said that very serious things are coming in late 2025, and he specifically referred to coding. Right now, Claude is the top code writer out of all these models, and it's the one most people are using to drive their coding platforms. And apparently it's gonna get even better; Dario said in the interview that it's gonna be matching top human coders by 2026. So think about the best programmers you know, if you're in the software world, and think about having as many of those as you want on each aspect of coding. In the software world, that's mind-blowing to think about. Most of my career has been spent running software companies, so just the concept of that is very confusing. But if you think about it, there are already more than a few companies with very few people, 20 to 30 people, that achieved over a billion-dollar valuation in this past year because they can write code that fast. So if you're starting a new business and you are using these new capabilities, you can do things that just were not doable before, and you can bootstrap a startup to profitability in less than a year.
Which questions the whole concept of venture capital and how the new software world is even gonna be run, compared to everything we've known until today. Now, speaking of Claude and its new capabilities, I told you last week that Anthropic released a cool new tool called Claude Code, which allows you to, well, write code. Well, apparently it had a bug that did some very serious damage to a few different users. Apparently an auto-update command messed up the permissions on critical system files, basically breaking root-directory and superuser access and messing up the software and the computers it was running on, and several GitHub users have complained loudly about this. Anthropic has shipped a fix for that, but it shows you the risk of running these systems autonomously. One of the reasons I don't let any of these autonomous agents run on my computer is exactly that: the fear that if it goes wrong, I may get locked out of everything, and I'm not willing to take that risk right now. The only way I'm testing some of these tools is on a virtual machine, not on my own computer, just to see what is possible and what they can do. And I've mentioned this several times on the show: I think until we have a way to control what these tools can do and how to keep them in a box, they will see very slow deployment around the world, definitely in the enterprise or anything that's critical to people or companies. And from these two giants to maybe the biggest giant: Google is about to release Project Astra into the wild. We saw demos of Astra a year ago, where their Gemini AI models can see the world and hear the world. Basically, you can open your camera and your microphone on your phone, show it what's around you, and have a conversation about it with the AI. That is an incredibly powerful capability, and in their original demo, they also showed it with a pair of glasses. So think about combining this with something you can wear all day long, which sees everything that you see, hears everything that you hear, and can participate as you need in helping you understand the world around you, solving things that you're seeing, addressing different topics, identifying things around you, and so on and so forth. This is the first step toward us being cyborgs, but the capabilities are endless and really interesting, and the rumor is this might replace the Google Assistant and will be a completely different way for us to work with assistants, because instead of just listening to our voice, they can listen to our voice and see everything that we see, initially through the phone and later on through whatever wearable device comes with it. Add to that the fact that the Google Gemini memory feature is now available to free users as well, and you understand that, in addition to the fact that Google is gonna collect every piece of information about us, including what we see and not just where we are, which apps we use, and what we browse, it will give them a huge amount of information they can keep in order to benefit from and monetize one way or another. It's unclear to me exactly how, if there aren't gonna be ads in search, but I'm sure they will find a way. From our perspective, the benefit is the ability to have a fully personalized AI that understands our entire personal and business universe and can provide us the most contextual and relevant answers in the shortest amount of time, because it will understand us and will know exactly where we live and what we're doing.
Now, on one hand, that's very exciting. On the other hand, it's crazy scary, and as I mentioned in many other episodes, a lot of rules and regulations do not really align with what that means, because it means you're recording everything that you're seeing, which, from a privacy perspective, is a very questionable thing. I don't know if I want anybody around me recording everything that I'm doing, but I think that's the world we're walking into, and it's going to be very interesting how it's addressed, including from a company-policy perspective: are people allowed to bring their phones or their smart glasses into their workplace or not, both in terms of privacy and in terms of data security? There are a lot of open questions that I don't think anybody has answers to, going back to our very first segment in this episode about the world we're walking into very, very quickly and how we are not prepared to handle it. Google also revealed plans to add Gemini to Android Auto, which is their car-integrated control system. The first initial demo was actually done on a phone and not in an actual car, which shows you it's in early development and hasn't been deployed anywhere yet, and there were mixed results. Some of the things it could do were done very, very well, like a very conversational way to figure out the weather, or what specific landmarks you're driving next to, or even playing your favorite songs on Spotify. But it wasn't really good at planning trips or listing relevant restaurants in the area. So it's still a work in progress, but it makes perfect sense to me. There have been similar rumors about Grok being integrated into Tesla cars. As somebody who now really likes Grok and drives a Tesla, I'm very curious to see how that's gonna work out. But the direction is very, very clear: we're going to have AI embedded into everything we know, and the longer we wait, the more things will have this AI. So think about putting something in your oven and asking the oven to make sure it comes out perfect. And it will, because it can actually watch and know what the instructions are for that particular thing and understand the recipe. It understands what you're trying to do. It understands when you want to come home and entertain people, because it's following you around on your phone or in your conversations or in your text messages, and it will make sure the thing comes out perfectly. So we're not there yet, but I think that's the direction we're going. Now, Google also just unveiled PlanGEN, which is a new multi-agent framework that allows you to connect multiple agents and run them in a way where these agents complement each other. So there's a constraint agent that pinpoints problems and nails down the specifics, there's a verification agent that checks the plan's quality, and there's a selection agent that picks the best algorithms and processes in order to complete specific tasks (a minimal sketch of this pattern follows at the end of this segment). And that framework can now be used by anybody who wants to use it. Staying in the agent world, Amazon just announced that they're starting a new group within AWS focusing on agentic AI, and the idea is to allow people to use AWS to automate everyday tasks. AWS CEO Matt Garman said in an internal memo that this could be, and I'm quoting, "the next multi-billion dollar business" for AWS. So they're obviously taking this very seriously, and I a hundred percent agree.
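Before we move on, since PlanGEN describes a concrete pattern, a constraint agent, a verification agent, and a selection agent wrapped around candidate plans, here is a minimal, hypothetical sketch of that loop. Every function here is a stub standing in for an LLM call; the names and scoring are my own placeholders, not Google's actual API.

```python
# Minimal sketch of a PlanGEN-style multi-agent loop: constraints -> candidate
# plans -> verification -> selection. Each "agent" is a stub standing in for
# an LLM call; names and scoring are illustrative assumptions, not Google's API.
from dataclasses import dataclass

@dataclass
class Plan:
    description: str
    steps: list[str]

def constraint_agent(task: str) -> list[str]:
    # In the real framework, this agent would extract task-specific constraints.
    return ["must finish in under 5 steps", "must not require external tools"]

def generate_candidates(task: str, constraints: list[str]) -> list[Plan]:
    # Stand-in for an LLM proposing several candidate plans under the constraints.
    return [
        Plan("direct approach", ["parse input", "compute answer", "format output"]),
        Plan("verbose approach", ["parse", "research", "draft", "review", "revise", "format"]),
    ]

def verification_agent(plan: Plan, constraints: list[str]) -> float:
    # Scores how well a plan satisfies the constraints (higher is better).
    score = 1.0
    if "must finish in under 5 steps" in constraints and len(plan.steps) >= 5:
        score -= 0.5
    return score

def selection_agent(scored: list[tuple[Plan, float]]) -> Plan:
    # Picks the highest-scoring candidate plan.
    return max(scored, key=lambda pair: pair[1])[0]

def plan(task: str) -> Plan:
    constraints = constraint_agent(task)
    candidates = generate_candidates(task, constraints)
    scored = [(p, verification_agent(p, constraints)) for p in candidates]
    return selection_agent(scored)

print(plan("summarize a quarterly report").description)  # -> "direct approach"
```

In the real framework, each of those stubs would be a separate model call, and the loop could iterate until the verification agent is satisfied with the chosen plan.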
This is probably the biggest business opportunity of our lifetime, and maybe of anybody's lifetime to this point: creating and running agents that can basically replace humans across multiple aspects of what we do day to day. We know other companies are already in that process, so Salesforce, Google, and Microsoft are all in the field of creating platforms that will allow companies to create and run agents in their environments. Staying on the topic of agents, but switching to another company: Meta has announced that Llama 4 will be able to drive AI agents. Meta's Chief Product Officer, Chris Cox, announced that Llama 4, the next iteration of their open source model, will be able to power AI agents. And their agents, like any other agents, he stated, will have reasoning skills and be capable of multi-step tasks like browsing the web and taking actions, as well as doing multiple other things. Basically, agents like everybody else is building, but on an open source platform running Llama in the backend. Now, staying on Meta, their next interesting move is that they stated they're planning to release a standalone Meta AI app in Q2 of 2025, which is basically the quarter we're in right now, that will rival ChatGPT, DeepSeek, Grok, et cetera. This is obviously a very important move. We see all the crazy numbers Meta reports for their daily users, but that's obviously very confusing, because anybody who's using the search bar in any of Meta's properties, Facebook, Instagram, and WhatsApp, is technically using their AI, and I think that's what they're counting, which is not really a fair comparison to how ChatGPT or Grok count their users. So that is going to change: they're coming out with a new standalone app. This app probably comes out because of the huge success of DeepSeek and Grok when they were released, both going to the number one spot for downloaded apps in the US and top spots in the world. So I think Meta wants to capitalize on the same thing. Do I see the same promise of success for Meta AI as I see for the other ones? I don't know, but it will give people more options to choose from. This announcement led to a funny response from Sam Altman, who basically wrote on X, okay, fine, maybe we'll do a social app. So obviously they won't, but it is very clear that there's more and more competition for OpenAI, at least on the application side. They're still the 800-pound gorilla, they're still ahead, they still have way more users than anybody else, 300 million weekly users to be exact, and growing all the time. But there are definitely other companies getting a lot of attention. And as I mentioned before, I personally have started using Grok a lot since Grok 3 was released, and I know a lot of other people who really like the new Grok model. Now, we spoke about the agent capabilities from China and about developing more sophisticated agent platforms. Well, now, something very practical that you can start using tomorrow: Opera, the Norway-based browser company (for those of you who don't know them, they've been around for a very long time, probably over 30 years), is releasing a new version with a feature called Browser Operator that runs natively on the device. So it's keeping your data local, it's faster, and it's skipping the whole "let's screenshot every few frames to see what's happening" approach.
It actually runs within the browser itself, and it can take multiple actions like booking tickets, planning trips, and stuff like that. So I think this is coming everywhere. This is just the first one that runs locally in the browser, but I think they're all gonna be that way, so basically every browser will have agents built into the browser itself, not sharing your data while allowing you to enjoy this new world of agents that can do research and take actions for you a lot faster than you can on your own. And then the last topic for today is something that has been spreading virally, which is a new company called Sesame that released two AI voice models, a female one and a male one, that sound and behave a lot more human than the somewhat scripted voice capabilities from ChatGPT and Google and so on. It stops, it hesitates, it chuckles, it shares its feelings during conversations. And I must admit, I don't necessarily like it, not because I'm scared of the direction, but because it just sounds too much like the teenage daughter I already have, and I love my daughter to death, I think she's amazing, but I don't necessarily want the same kind of sassy approach from a voice assistant. That being said, it shows how far we have come, from having almost no voice capabilities a year ago to having these sassy, strong-personality agents that can have serious conversations with us on any topic we want. And I think that is the user interface of the future. We will not use keyboards, we will not use mice. We will just allow agents to see the world around us, including the screens that we're looking at, by sharing the screen as well as viewing it through the different wearables we're going to wear, and we'll have a conversation with them through voice. And then maybe later on, if Elon Musk has his way, it will be connected straight to our brains. But that's for a whole different episode. This Tuesday, we are coming out with an episode that I think you're going to absolutely love. We are comparing the different deep research tools, and we're gonna talk about how to use deep research effectively, what the different tools available out there are, what the differences between them are, and how to make sure that the information you're getting is actually correct and real and not made up. So all of that is coming on Tuesday. If you like this podcast and you're enjoying it, please pull up your phone right now, click on the share button, and share it with three to five people you know who can benefit from it. I'm sure each and every one of you knows a bunch of people who can benefit from this, and all I'm asking is five seconds of your time in which you can help me and help them. So open it, click the share button, share it with a few people, and while you're there, if you can give us a review on your favorite platform, whether it's Spotify or Apple Podcasts, I would really appreciate that. Keep on exploring AI, keep sharing what you find, connect with me on LinkedIn and tell me what you think about this podcast, and have an awesome rest of your weekend.