
Leveraging AI
Dive into the world of artificial intelligence with 'Leveraging AI,' a podcast tailored for forward-thinking business professionals. Each episode brings insightful discussions on how AI can ethically transform business practices, offering practical solutions to day-to-day business challenges.
Join our host Isar Meitis (4-time CEO), and expert guests as they turn AI's complexities into actionable insights, and explore its ethical implications in the business world. Whether you are an AI novice or a seasoned professional, 'Leveraging AI' equips you with the knowledge and tools to harness AI's power responsibly and effectively. Tune in weekly for inspiring conversations and real-world applications. Subscribe now and unlock the potential of AI in your business.
Leveraging AI
191 | The craziest week in AI news history 🤯 Microsoft, Google, Anthropic and OpenAI major announcements, and more important AI news for the week ending on May 23rd, 2025
👉 Fill out the listener survey - https://services.multiplai.ai/lai-survey
👉 Learn more about the AI Business Transformation Course starting May 12 — spots are limited - http://multiplai.ai/ai-course/
Is your business ready for the AI-powered arms race of the century?
This wasn’t just another week in tech — it was the week. Microsoft revealed its plan to automate the enterprise. Google’s scrambling to catch up with AI-first everything. OpenAI just spent $6.5 billion to build devices that could replace your phone — and your keyboard. If your company still thinks of AI as "optional," you're already behind.
Here’s the real takeaway:
We're not watching a tech trend — we’re watching the restructuring of how work, business, and even the internet itself will function in the coming months (not years).
Recommendation:
If you lead a business and want to stay competitive, you need to understand what happened this week — because your competitors definitely will.
In this session, you'll discover:
- Microsoft’s bold AI strategy: enterprise-wide agents that can replace whole departments
- The new AI web protocol (NLWeb) that may kill traditional websites
- Google’s AI tab in Search, Gemini updates, and why Project Astra could make you feel like Iron Man
- Claude Opus 4's jaw-dropping 7-hour autonomous coding run — and what that means for your dev team
- The $6.5B bet OpenAI made with Jony Ive — and what it reveals about your hardware future
- Klarna’s 40% workforce reduction powered by AI (with revenue still rising)
- Why Satya Nadella is building AI to replace himself (no, seriously)
- Smart glasses arms race: Meta, Apple, Google — who’s going to own your face?
- The scary-cool future of wearable AI, brain chips, and what it might mean for your kids
🎥 Watch the full conversation between Sam Altman and Jony Ive here:
https://www.youtube.com/watch?v=W09bIpc_3ms
About Leveraging AI
- The Ultimate AI Course for Business People: https://multiplai.ai/ai-course/
- YouTube Full Episodes: https://www.youtube.com/@Multiplai_AI/
- Connect with Isar Meitis: https://www.linkedin.com/in/isarmeitis/
- Join our Live Sessions, AI Hangouts and newsletter: https://services.multiplai.ai/events
If you’ve enjoyed or benefited from some of the insights of this episode, leave us a five-star review on your favorite podcast platform, and let us know what you learned, found helpful, or liked most about this show!
Jony recently gave me one of the prototypes of the device for the first time to take home, and I've been able to live with it, and I think it is the coolest piece of technology that the world will have ever seen.
Hello, and welcome to the craziest weekend news episode of the Leveraging AI podcast so far. This is Isar Meitis, your host. The quote you just heard comes from none other than Sam Altman, and when he says something is the coolest piece of tech he's ever seen, well, we should be paying attention. Now, I know I promised you last week that we'd dive into a specific topic in this deep dive. However, this week had the most big, impactful announcements in the history of AI news, at least in the lifetime of this podcast, or maybe since the day the first ChatGPT launched to the world. And to be fair, they're going to be a lot more impactful and profound on our day-to-day lives, and definitely on our business lives, in the immediate and near future. So we have a lot to talk about in the deep dive, and then with the time left, we'll add some rapid-fire items, because there's a lot to talk about over there as well. The rest will be in our newsletter, which you can sign up for via the link in the show notes. But as I mentioned, because there's a lot of exciting stuff to talk about, let's get started.

Before we dive in, I would like to apologize for my stuffy nose. I've been fighting some serious allergies in the last 48 hours, so I hope that doesn't bother you too much. I hope both you and I will get used to it in three sentences, and then everything will be okay.

There were two really big and important events this week. The first one was Microsoft Build and the other was Google I/O. Both companies are obviously 100% all-in on AI, but there were two very different vibes and focuses in these two events. Microsoft focused a lot more on the enterprise, and Google focused a lot more on personal use and small businesses, which makes sense because these are the audiences they mostly cater to.

Let's start with Microsoft and what they announced at Microsoft Build. The list is very long, but I want to start at a high level, and the high level is very, very obvious: Microsoft is building an end-to-end enterprise AI strategy that will integrate everything in the enterprise, beginning to end, from customer service, to data research, to management and leadership research, code writing, deployments, and application development, and maybe even the web itself, around a unified AI strategy. I must admit that from that perspective, their presentation was significantly more impressive than Google's. Not taking anything away from what Google announced, and I'll get to that in a minute, but as a cohesive, clear strategy addressing their exact target market, they did an incredible job.

So what are the things that they announced? Well, first of all, they announced what they're calling the open agentic web. They are planning, as I mentioned, to unify everything that we know, both in our personal lives as well as in businesses, around agents. They started with the burning-hot topic of writing code. So there was a lot about Copilot Studio and Copilot in VS Code, and about open-sourcing Copilot in VS Code. Their new Copilot for code writing is supposed to be an end-to-end agent, going, to quote Satya Nadella, "from a pair programmer to a peer programmer." The goal is that you can assign tasks and bugs to it, chat with it back and forth as if it is a member of your development team, and it will be able to do these tasks, long or short, and be part of the entire development process, versus just delivering some snippets.
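For the technically curious, here is roughly what that "peer programmer" loop looks like under the hood. This is a minimal, illustrative sketch of the plan-edit-test pattern these coding agents run, not Microsoft's actual implementation; `call_model` and `apply_patch` are hypothetical placeholders you would wire to a real model and a real repository.

```python
import subprocess

def call_model(prompt: str) -> str:
    """Hypothetical placeholder for an LLM call; wire to your model provider."""
    raise NotImplementedError

def apply_patch(patch: str) -> None:
    """Hypothetical placeholder that would write the proposed change to disk."""
    raise NotImplementedError

def run_tests() -> tuple[bool, str]:
    """Run the project's test suite and capture its output."""
    result = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
    return result.returncode == 0, result.stdout + result.stderr

def agent_loop(task: str, max_iterations: int = 5) -> bool:
    """Plan, edit, test, repeat: the basic shape of an autonomous coding agent."""
    context = f"Task: {task}\n"
    for _ in range(max_iterations):
        patch = call_model(context + "\nPropose a code change as a unified diff.")
        apply_patch(patch)
        passed, log = run_tests()
        if passed:
            return True  # done: hand the branch back for human code review
        context += f"\nTests failed:\n{log}\nRevise the change."
    return False
```

The difference between a "pair" and a "peer" programmer is essentially how many of those iterations the agent can run unattended before a human has to step back in.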
They also introduced, or actually re-shared since they introduced it before, Notebooks, which does the same things NotebookLM does, but is connected to everything in your Microsoft 365 suite and beyond, meaning you can drop in whatever data you want and create specific notebooks that also integrate everything else they shared.

They shared a bunch of agents that are already available in Microsoft Copilot: a Researcher agent that can search the web and enterprise data for any information that you want, and an Analyst agent, which can take raw data and create detailed reports including analysis, forecasting, and so on. Basically everything an analyst does, again from external data and internal data as well. There is going to be an agent store where you can hire agents for multiple tasks, and if you are a developer of agents, you can post them to the store, either internally for your company to use or for anybody to use. That will be a whole new business sector of companies and individuals developing agents. Agents will also be members in Teams, so whatever agent you assign a task to, you can chat with it in Teams as if it's a regular member of your company or your team.

They announced Copilot Tuning, meaning you can now fine-tune agents and the way they work around your company's data, style, tone, policies, et cetera. And you can replace the models running behind agents; you're not tied to just OpenAI. You can literally use any model they have in the backend, which more and more includes Grok from Elon Musk, the arch-nemesis of Sam Altman, who was Microsoft's closest partner so far. We mentioned that previously in one of the episodes not too long ago, but now it's actually live. So you have access to multiple language models in the backend that you can swap as you wish, during or after the creation of agents. They announced a new platform for multi-agent orchestration, which will allow delivering significantly more complex tasks through multiple agents working together. They increased observability, the ability to see what the agents are doing across multiple aspects, so we have more control at an enterprise level.

They announced Entra Agent ID, an agent directory and access control, and the goal is to be able to provision who can see what through agents, which is one of the biggest challenges that has stopped deployment of agents across enterprises. Because once you connect agents or chats to multiple data points, how do you keep data access the way it was before? This Entra infrastructure is supposed to solve that. They also introduced Foundry Local. Foundry is their platform on Azure that allows you to run and do everything in AI; well, now you can do this locally on a Mac or a PC with normal capabilities and develop AI capabilities on your own machine or on a small local server. They announced the release of Windows AI Foundry, which is the infrastructure they've used internally to develop all these tools and is now available to anybody to use as an infrastructure for development. They announced support for MCP on Windows, so you can now connect everything in Windows and in Office 365 to various MCP servers, connecting them to other platforms, tools, and data from multiple sources. And they announced that Defender, their platform that supports IT security, is now also going to cover Foundry, meaning all of your AI infrastructure, tools, applications, agents, and so on will be covered and protected by Defender.
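Since MCP comes up here and throughout the episode, here is what the idea looks like in practice: a minimal tool server built with the open-source MCP Python SDK. The server name and the inventory tool are made-up examples; the point is that any MCP-capable client, whether Windows, Claude, or something else, can discover and call the tool.

```python
# A minimal Model Context Protocol (MCP) server exposing one tool.
# Requires the official Python SDK: pip install "mcp[cli]"
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("inventory-demo")  # hypothetical server name

@mcp.tool()
def check_stock(sku: str) -> str:
    """Return stock status for a product SKU (toy in-memory example)."""
    stock = {"A100": 12, "B200": 0}
    count = stock.get(sku)
    if count is None:
        return f"Unknown SKU {sku}"
    return f"{sku}: {count} units in stock"

if __name__ == "__main__":
    # Runs over stdio, so an MCP-capable client can connect and call the tool.
    mcp.run()
```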
And they announced digital twins, which will allow you to develop digital twins: similar to the concepts you can build with Nvidia's tools, you can now build them on a Microsoft platform.

And there was one more announcement that was just one line item in this whole crazy list, but I think for most of us, people who are not in large enterprises and do not run large enterprises, it's going to be the most profound. They announced NLWeb, which is an agentic application layer for websites. In other words, if you think about how the web works right now, everything runs on the HTTP protocol; that's how the internet works. A single protocol was created that allows everybody to access websites. Well, NLWeb is supposed to be that for agents. Think about preparing your website for the agentic world with just a few lines of code connecting it to NLWeb, which will then allow agents to seamlessly read, see, and engage with your website. As I mentioned multiple times on this podcast, I think the web as we know it today will cease to exist sometime in the next few years, because we'll see less and less human traffic going to websites and more and more agent traffic. What that looks like, nobody knows, but NLWeb sounds like a step in the right direction of defining a standard way in which agents can integrate and engage with websites.
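Nobody outside Microsoft has the full spec in hand yet, so treat the following as a toy illustration of the idea rather than actual NLWeb code: a natural-language "ask" endpoint sitting on top of structured data a site already has, so an agent can query the site instead of scraping its HTML. The endpoint name, request shape, and response shape are all assumptions.

```python
# Toy sketch of the NLWeb idea: a natural-language "ask" endpoint over site data.
# Not the actual NLWeb spec; endpoint name and response shape are assumptions.
# Requires: pip install fastapi uvicorn
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

PRODUCTS = [  # structured data the site already has (think schema.org markup)
    {"name": "Trail Runner 2", "price": 129.0, "in_stock": True},
    {"name": "Road Glide", "price": 99.0, "in_stock": False},
]

class Ask(BaseModel):
    query: str

@app.post("/ask")
def ask(body: Ask) -> dict:
    """Let an agent query the site in natural language instead of scraping HTML."""
    q = body.query.lower()
    hits = [p for p in PRODUCTS if p["in_stock"]] if "stock" in q else PRODUCTS
    return {"query": body.query, "results": hits}
```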
Microsoft also announced Microsoft Discovery for the scientific domain. Overall, as I mentioned, they are building the entire end-to-end of everything agents and AI, from infrastructure, data, and AI platforms to the apps and agents on top of all of that, and they're providing a unified solution to do this. This is obviously not going to be something that somebody can do in their garage; this is for large enterprises. But the presentation, as I mentioned, was very impressive, cohesive, and sounds extremely powerful.

At the same exact time, 750 miles away, Google had Google I/O, which introduced a huge range of features that are going to be added, or are already added, to the Google environment, all announced in this one event. By the way, that's something I think we're going to see more and more, and some of the Microsoft stuff was the same: these companies are going to release often and release quickly, every time they have something that is ready. The big events will then be more of a unified introduction of how everything works together, versus the actual release of the different tools. Some things will still be released at the big events, but mostly they will ship as they're ready, because the competition just forces them to do that.

So the Google event, as I mentioned, focused mostly on small businesses and personal usage, which is most of their audience, and Google, in the same way, is integrating AI into everything Google. The biggest piece of news from Google is that there's now going to be a Google AI tab in Google Search. If you think about how this whole thing evolved: Google wrote the paper that started the generative AI craziness, the "Attention Is All You Need" paper, back in 2017, but the first company to actually come out with something significant was OpenAI, followed by a lot of other companies, and Google was scrambling in the beginning. I said very early on in this podcast that I think Google will take the prime spot, because they have everything they need in order to be successful. They have more data than everyone. They have more compute than everyone. They have more distribution than everyone. They literally have more talent. They literally have everything they need in order to be ahead.

Their biggest problem was the business model. Google's business model depends on billions upon billions of dollars of revenue from ads on Google Search, and they couldn't risk that, or at least they had to delay it until the point where they had no choice. And that point is now. As it becomes very clear that people are going to stop searching on Google and start using whatever their favorite tool is, whether it's Perplexity or ChatGPT or anything else, Google knows they have to protect their turf, and they're introducing an AI tab in Search that gives you AI-based answers instead of links, or actually somewhat of a combination of the two, but with a focus on AI results. It's already available in Google Search. I must admit I was not highly impressed: in the two times I tried it, it crashed and didn't work, and actually showed me just regular Google results in a different user interface. So I don't think it's fully baked yet, but I have zero doubt they will figure it out, because as I mentioned, they don't have a choice.

So what other things did Google announce? They announced a new version of Gemini 2.5 Pro, where even the existing version outperforms most rivals on most things. They've been topping the Chatbot Arena for a while now, and they also ranked number one on the WebDev Arena for code writing, at least until the next thing we're going to talk about. It is definitely a very solid tool; I really like Gemini 2.5 and do more and more with it. It is also now the number one performer on the Humanity's Last Exam benchmark. They also introduced a new mode called Deep Think, a new reasoning mode used for more advanced research techniques that can evaluate multiple hypotheses and boost accuracy on complex queries.

They introduced Gemini Canvas, which is absolutely fantastic. One of the main reasons I've stuck with mostly ChatGPT over Gemini or Claude in the past few months is that the ChatGPT Canvas is just the best collaboration tool there is with AI, at least so far. Now Gemini has similar features, which closes the gap on the capabilities OpenAI had in their platform. The idea is that you can engage and partner with the AI in a shared document on the right side of the screen, for creating either documents or code; it has tone control and a lot of other things you can do in a much easier user interface, and you can export to Google Docs with a single click, which is really cool.

They're now including Audio Overviews, so the feature that was previously part of NotebookLM is now available in Gemini. I already use it in combination with Deep Research, and it's absolutely fantastic. As an example, before meetings with potential clients, I give Gemini Deep Research the task of researching the company, the people I'm going to meet, their history, what they're focusing on, and what they're currently doing with AI, and then I turn it into a podcast that I can listen to in the car, or while walking the dog, preparing myself for the meeting. They also introduced a cool capability to update and make changes to images, whether AI-generated or actual photos you took and uploaded: you can remove the background, add elements, and make other changes, and it can do it in 45 different languages.
And then there are three huge announcements of platforms being upgraded to a much higher level. One is Google's video generator, Veo. Veo 2 was very impressive, maybe the best out there, but Veo 3 is nothing short of incredible. In addition to updating and upgrading everything, so you get higher resolution, higher consistency, and more styles, Veo 3 knows how to add background sound, sound effects, and voice conversations, all generated by the AI from text prompts alone. One caveat: right now, Veo 3 only has text prompting. You don't have image-to-video prompting, which is a disadvantage; I'm sure that is going to get resolved. Also, extending videos, which is a really cool feature Veo has, actually works with Veo 2. So if you create a video with Veo 3 and you want to extend it, which again is awesome, it's going to extend it with Veo 2. Again, I'm sure these things will be resolved sometime in the immediate future.

They also announced Flow, a creator suite that includes text and Gemini capabilities, as well as Imagen 3, their latest image generation model, together with Veo 3. But it is a lot more expensive than any other tool out there: it's $250 a month, with some limitations on the quantity you can generate. That is expensive if you're just toying around and want to play with it, but it is almost free if you're an actual creator of video content, because one photo shoot for a 10-minute video will cost you a hundred times that amount. So it is an extremely powerful tool that can generate videos that look highly realistic, with very detailed prompt following, like nothing we have from any of the other companies right now.

The second project that has made a huge step forward is Project Mariner. Project Mariner is an agent designed to help consumers do things like purchase tickets to sporting events, buy groceries, and so on, and they are going to release this as a first step and then build on top of it. It is basically what I said before: the ability to engage with the internet through an agent versus doing it yourself. It will search through hundreds of websites, find what you're looking for based on your specific preferences, price points, and so on, and can even shop for you, because it's integrated with Google's shopping platform.

And then there are the updates to Project Astra. Project Astra was introduced last year at I/O '24, where they showed Google glasses in action, walking around, doing different things, and engaging with the real world. Well, Astra just got a lot of updates, and the biggest one is its ability to be proactive. The previous Astra allowed you to open your phone and/or glasses, which we'll talk about in a minute, and ask the model about different things it could see or hear at that particular point. The new Astra can look at what's happening and suggest and intervene where it thinks it can help you, whether it's a student doing their homework and getting stuck, so it can jump in there, or you trying to work on something in your garage, or installing things in the house, or anything at a factory. Literally anything it can see, if it's trained on it, it can understand when you're struggling, jump in, and suggest assistance. That's a whole different level of AI assistance, like we've never seen before. Another cool new feature in Astra is Highlight.
So if you hold your phone in front of a lot of objects and ask it to look for something, it can find it and highlight it for you on the screen. Think about packing for a trip and looking for specific items, making sure they're all there, or an assembly line looking for specific parts, or packing a shipment and needing to verify everything is there before you put it in the box. These are things these tools will be able to do out of the box, without any additional programming or tuning.

So, a quick summary of these two events: both companies are all-in on AI, each in its own unique way. Microsoft seems to be a lot more structured and strategic, while Google seems to be under really big stress, with everything going on around the trial that could potentially break up Google, as well as the risk of AI taking some of their market share in search. So it seems more like Google is scrambling to add AI into everything they're doing, not necessarily in a bad way; a lot of these things are awesome. But the Microsoft presentation was definitely a lot more cohesive as an overall strategic approach.

Before we jump to the next topic, a little experiment that I did with the new Gemini literally blew my mind, and from my perspective it signals maybe the end of CRMs; if not, it's definitely moving in that direction. I went in yesterday, after all the announcements, and said, okay, let's see how good it really is now at understanding data and getting access to my data. I literally asked it to look at all the proposals I sent this year for AI training, education, and consultancy to multiple companies, to find them in my Google Drive and in my Gmail account, and to create a summary table of each proposal: what was the amount, what was the company, what was the last conversation date, what was the last thing discussed, what are the open action items, and what does it suggest I take as actions in order to close these deals. And I got a detailed summary with everything in it. It didn't get all the proposals on the first run; I asked it to check if there were more, and then it actually found all of them, across multiple folders in multiple places in my Google Drive, and then, through my Gmail account, the last communications with each and every one of those people.

This is available right now in Gemini, and it's an incredibly powerful capability if you're a Gemini user. I assume, though I don't know this for a fact, that Microsoft Copilot can do similar things with SharePoint and Outlook; this was the promise since day one, right? The next step will be connecting it to other sources through MCP servers, and then the real magic happens: you can ask a question about the status of a project or a specific situation in your company, and it will gather information from numerous sources and bring you a helpful, actionable summary of exactly what's going on, based on your needs and your level of detail. I think we'll get there this year.
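If you want to reproduce the pattern of that experiment programmatically, here is a hedged sketch using the google-genai SDK. Note that the Drive and Gmail grounding I used is a feature of the Gemini app, not the public API, so this sketch assumes the proposal texts have already been exported to local files; the model ID is also an assumption, so check Google's documentation for the current one.

```python
# Sketch of the "proposal tracker" prompt pattern with the google-genai SDK.
# The Drive/Gmail grounding shown in the episode is a Gemini-app feature; here
# we assume the proposal texts were already exported to a local folder.
# Requires: pip install google-genai, plus an API key in the environment.
import pathlib
from google import genai

client = genai.Client()  # picks up the API key from the environment

proposals = "\n\n---\n\n".join(
    p.read_text() for p in pathlib.Path("proposals").glob("*.txt")
)

prompt = (
    "From the proposals below, build a summary table with columns: company, "
    "amount, last conversation date, last topic discussed, open action items, "
    "and the next action you suggest to close the deal.\n\n" + proposals
)

response = client.models.generate_content(
    model="gemini-2.5-pro",  # assumed model id; check the docs
    contents=prompt,
)
print(response.text)
```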
But those were just two of the four things we've got to talk about. Also this week, Anthropic introduced Claude 4, both Claude Opus 4 and Claude Sonnet 4. Both are completely new models that are absolutely incredible to work with, and some of the stats are mind-blowing. As an example, Opus 4 achieved a groundbreaking 72.5% score on SWE-bench for coding. To put things in perspective, the best tools so far were GPT-4.1 at 54.6 and Gemini 2.5 at 63.2. So the previous best, Gemini 2.5 Pro, which as I mentioned earlier was the king, is now topped by this new Claude model at 72.5 on the benchmark.

But I think benchmarks are less interesting. What really blew my mind is a statistic from a project by Rakuten, who stated that Opus 4 coded autonomously for seven hours straight. If you remember, we recently shared research showing how long AI agents can code effectively and how that is accelerating over time. Well, seven hours is completely outside of that scale, right? The scale was in minutes, then tens of minutes, then maybe an hour, and now it's seven hours of effective coding from one prompt by Opus 4. This is a whole different kind of animal.

Now, one of the cool things about this Claude 4 generation, both Opus and Sonnet, is that they are, from my perspective, the perfect mix of three different things: agentic work, traditional models, and reasoning models. I tried it on multiple tasks in the past 48 hours, and I'm amazed at how well it understands what I want, researches on its own things I only hinted at and didn't really provide all the information for, and provides very accurate and helpful results by thinking through some things and running through others. It does this extremely quickly, you can jump in and see what it's actually thinking about, and it's just magical. It feels like this is how AI should have been from the beginning. So kudos to Claude for that.

In addition, I started coding different games with the kids, and that's really, really fun, because it runs the code within Claude Artifacts. You can literally ask it to code whatever game you want, it will code the game, and then you can make changes with additional prompts, and it just runs smoothly every single time. It's just such a refreshing experience to be able to sit with your kid, think about a game, create it, and five minutes later you can already play it on your computer. Absolutely magic.

Two more capabilities they added: Claude now integrates with your Google Drive and your Gmail account for researching information over there, and in the little dropdown menu you can stop it from searching the web with toggle buttons and have it search only that information, so you can research your own stuff. I tried to test it today; the Gmail integration didn't work well, but the Google Drive integration worked amazingly well, and I'm sure they will solve the Gmail side as well. So, similar to what we've seen from Gemini: if you're not a hardcore Gemini user and you love Claude, you can use Claude to do similar things.
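If you want to try Claude 4 outside the chat interface, the API call itself is simple. Here is a minimal sketch with the official anthropic Python SDK; the model ID below is my assumption, so check Anthropic's documentation for the current one.

```python
# Minimal Claude 4 call via the Anthropic SDK.
# Requires: pip install anthropic, and ANTHROPIC_API_KEY set in the environment.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY

message = client.messages.create(
    model="claude-opus-4-20250514",  # assumed model id; check the docs
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": "Write a tiny browser game: catch falling apples in a basket.",
    }],
)
print(message.content[0].text)
```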
And then, for the last component of the deep dive and the last big announcement of the week, I want to start with a longer snippet.

And so io is merging with OpenAI, formed with the mission of figuring out how to create a family of devices that would let people use AI to create all sorts of wonderful things. The first one we've been working on has, I think, just completely captured our imagination. You know, Jony called one day and said, this is the best work our team's ever done. Yeah. I mean, Jony did the iPhone, Jony did the MacBook Pro. These are like the defining ways people use technology. It's hard to beat those things. Those are really wonderful. Jony recently gave me one of the prototypes of the device for the first time to take home, and I've been able to live with it, and I think it is the coolest piece of technology that the world will have ever seen.
The segment you just heard comes from a video released by OpenAI, sharing that OpenAI just acquired Jony Ive's team for $6.5 billion in stock. Now, who the hell is Jony Ive, why the hell are they paying so much money, and what the hell is going on? Let me break this down for you and explain the story.

First of all, who is Jony Ive? Jony Ive is the iconic product designer behind everything modern computing that we know: the original iMac that put Apple back on the map together with Steve Jobs, the iPod, the iPhone, the MacBook, the MacBook Air. You get the point. Basically every new generation of devices that changed the planet was created by this guy. He left Apple some five or six years ago and started a company called LoveFrom, which did mostly consulting and didn't really create any new products. Jony and Sam Altman met a couple of years ago, and they decided they needed to create a new kind of device, and a new company that would change the interface to the AI universe. That is how io was founded, a new company in which OpenAI invested and owned 20%. Well, now both io and the LoveFrom team are rolling into OpenAI with the goal of building a family of devices. So not one device, but a family of devices that will change the way humans engage with computers.

What they're basically saying, which makes perfect sense, is that the tools we use today to engage with computers were built for an era when computers did not understand us, did not understand language, and could not engage with the real world. Hence, we needed a keyboard and mouse. And that is not necessary anymore.

Now, the quotes you heard, both from Jony and from Sam, are extreme, and they're extreme because these two people have seen everything, right? They operate at the peak of technology. One of them, on the product design side, literally changed the world several times, and the other created the most amazing company ever from a growth perspective and an impact perspective. When they say this is the coolest piece of technology they've seen, you've got to wonder how cool that must be. It has to be insanely cool, and it has to be very, very different from anything we've seen to date.

Now, why is that so exciting, and also why should we fear all of that? But first, what's happening right now from a practical perspective is that the LoveFrom team, led by Jony Ive, is going to lead everything design at OpenAI. So not just these devices, but presumably the software and a lot of other stuff around it; the user experience is going to be managed by Jony, which personally I find exciting.

But now let's talk about the implications. Nobody knows what they're building. There are a lot of rumors and a lot of conversation on X and online and across multiple platforms, but the reality is nobody knows. What we do know is that it's going to eliminate the need to pull out a hardware device like we do today. The conversation, which I'll link in the show notes so you can go and watch it, is a 15-to-20-minute video that shows you the process and their conversation, talking in a coffee shop in San Francisco. Very well produced, by the way; a very laid-back, chill, down-to-earth way to explain what they're after and also the backstory.
In this conversation, Sam Altman says: well, if I now want to use ChatGPT, I need to stop what I'm doing with Jony right now, lean down from my chair, grab my backpack, open my computer, start it up, go to ChatGPT, and then start engaging with it, which doesn't make sense when you have tools like ChatGPT and similar AI capabilities. So the assumption is they're building some kind of wearable interface that will let you, with voice and maybe video, and probably both, especially since they're talking about a family of products, engage with them in the most intuitive, human way: talking to it and showing it different things. Is that going to be glasses? Is that going to be a bracelet? Is that going to be an earpiece with AI capabilities built into it? I don't know, but it's probably some combination of these things.

Now, as exciting as it is, it raises a huge number of very large questions in my head. The first one, which I've talked about many times on this podcast, is privacy. What we are walking into, and we're going to talk about new glasses in a second, is a universe where everybody will record everything all the time, and it's going to be processed by AI all the time. I'm sure there are going to be some rules and regulations about where you can and cannot wear these devices, but I think over time this will become the norm, and everybody will just get used to that being the case. But take it into school and education: do we allow our kids to use these devices at school, or at graduate school? Is that okay? Is that not okay?

It's also going to dramatically increase the digital divide. If today there are people who don't have access to computers or the internet, well, now you'll have people who have AI in their ear and in their eyes every single second, and people who cannot afford it. This is a completely different level of force multiplier than anything we've had before. It's the first step, if you want, into a cyborg era where humans collaborate with machines in a very seamless way in everything we do.

Now, if you want to take it in a more fufu, crazy direction, think about Elon Musk's brain chips. That technology is still not there for what we're talking about, but that's Elon's goal, and that's where the technology is going. Meaning sometime in the future, maybe five years, maybe 10, maybe 20, you'll be able to connect AI tools straight into your brain. You could think about something and get AI results, instead of having to show it things and talk to it. Now the question is, as crazy as it sounds: will you allow your kids to go through that operation and install that in their heads? I'll let you think about that for a minute. The ability to have everything AI knows in your head, with nothing more than thinking about it. That's the ultimate cyborg: knowledge straight into your head.

Now, I know what you're thinking. You're saying: you are crazy, there is no freaking way I will ever do something like this to my kids. But what if three quarters, or nine out of 10, of the kids in their class or their higher education have that? What happens then? What happens is that the social pressure will be such that you will be forced to do it, and you'll basically have to choose: do you want to stay a part of the advancing humanity of the 21st century, or do you want to become the Amish of the 21st century?
I have a feeling there is going to be a huge number of, quote-unquote, Amish people, at least in the beginning, who say: we're not doing this, this is beyond what we think is human. But I think over time, again, this will become the norm, because you will want to stay competitive in the world as it is. You will upgrade, you will do these things, and everybody, or at least a lot of people, will have that functionality, which will increase the divide even further. Sorry if I went a little fufu on you, but this is where my head goes when I see these things. I know it sounds crazy, but I think from a technological perspective, this is where it's all converging.

Now, to bring it back down to Earth: $6.5 billion is a huge amount of money for a product that doesn't exist. So why invest such a huge amount of money? Well, annual smartphone revenue in 2024, selling smartphones to people, was over $800 billion, and it's supposed to grow to almost $850 billion by the end of the decade, unless something else comes in and shifts our spending from cell phones to a different way of engaging with the world. So we're talking about a market that is getting close to a trillion dollars. And if you're in there and you control the hardware as well, meaning OpenAI stops being dependent on delivering their tools and capabilities through Apple or Google or Microsoft, they become a major player in that universe. And again, this is just the smartphone world. What if some of the work we do on computers right now can be replaced by the devices they're building? That gives you an idea of why a $6.5 billion investment that sounds absolutely insane is actually a very solid investment, if you can build the next iPhone first, before everybody else, and better than everybody else, because you have the guy who built the original iPhone and a team of the most capable people in the world at creating devices, hardware, and designs that are perfect for their need.

Since we started talking about devices, there are two more pieces of news that came out this week. In the Google I/O announcements, Google introduced a new version of Google Glass. Remember Google Glass from however many years ago, the one only geeks in Silicon Valley wore, which was not very useful and died very quickly? Well, Google has now introduced new glasses running Android XR, which was developed in partnership with Samsung, featuring cameras, microphones, speakers, and optical in-lens displays for text and other information. And very different from the geeky, weird-looking glasses of last time: they're doing it very similarly to what Meta is doing, partnering with glasses manufacturers and designers, in this case Warby Parker and Gentle Monster. So those companies will design the glasses, and Google will provide everything else. Now, the way this is going to work is that the glasses will stream back and forth to your phone, allowing them to be significantly smaller and lighter, because they need fewer components; most of the compute will be done on the phone we're carrying in our pocket anyway. I'm sure that's step one, and that step two will turn this into an independent device. This is going to be the first real competition to Meta's highly successful partnership with Ray-Ban: they've sold over 1 million units in 2024, and they doubled that amount in the first half of 2025.
So Meta is the only company in that market right now, and it's going to be a burning-hot market moving forward. The other company that made a similar announcement is Apple. Apple just announced that they will release AI-powered smart glasses by the end of 2026, with large-scale prototype production starting at the end of 2025. Very similar idea: cameras, microphones, and speakers enabling photo and video capture, real-time translation, turn-by-turn directions, music playback, et cetera, et cetera, and obviously working in collaboration with a new version of Siri that is yet to be introduced. But you see where this is going. As I mentioned earlier, everybody will wear devices that are connected to the internet, with AI powering them, that will allow us to engage with the world around us in a very different way than we do right now. And if you look at the news from Project Astra that I shared earlier, it will also proactively provide us information. Combine that with a display on the lens, and you will walk around feeling like the Terminator, where data about everything you see pops up: above people's heads, about things you want to shop for, about buildings, and everything else, straight into your eyes.

So now, after these really crazy four announcements in a single week, let's go to some rapid-fire items. The first one is actually from Satya Nadella. We talked about Microsoft earlier; Satya gave a very interesting interview to Vanity Fair and another one to Bloomberg, and he's basically saying that he's building an AI that will replace him. The exact quote is, "I'm trying to make myself obsolete," and he was talking about how he's using specific copilots and agents in his day-to-day life as the CEO of one of the largest companies in the world. He talks about how he prepares podcasts to listen to on his commute, something I do all the time with NotebookLM, and how he can, quote-unquote, talk back to the radio and ask follow-up questions. He shared that he's using at least 10 tailored large-language-model bots and agents in his work to summarize messages, prepare for meetings, and conduct different pieces of research, both internal and external. So think about the power of a CEO who is completely connected to everything happening in his company and his industry, and can query it and get information almost immediately. It's nothing like we've ever had before: previously he had to spend hours, and sometimes days, collecting data across multiple departments and people before it could be brought to him, and when he had a follow-up question, he had to wait another few weeks. Now he can do it in minutes, or maybe a day in the worst-case scenario, which allows him to make better decisions, faster, all based on actual information. This ties back very well to the topic I wanted to discuss at the beginning of today's episode, but I may just have to record a separate standalone episode about the AI company of the future.

Now, to stay on the agent-focused topic from Microsoft: Salesforce just signed an agreement to acquire Convergence AI, a London-based startup specializing in AI agent creation. They're one of the most-used platforms today for creating AI agents, and the technology and the team will roll into Agentforce, Salesforce's platform for agent creation.
Marvin Purtorab, the CEO and co-founder of Convergence, said: "Our mission at Convergence is to help organizations stop viewing automation as just another tool, and instead adopt it as the very way work gets done, unlocking new levels of innovation and efficiency." I cannot say it any better. This is where we are going, right? We're going from a point where these are co-pilots, meaning things that assist us in doing the work, to the thing that's actually doing the work. Now, Salesforce itself, even without the acquisition of Convergence, is claiming that their own AI agents now solve 97% of customer service queries, leaving only 3% for human intervention. I don't know if that's accurate, and I don't know if anybody has tested it, but even if it's 50-50, it is very extreme in its impact on both the workforce and the efficiency with which organizations can run. Combine that with the fact that 76 of the top American retailers use Salesforce's e-commerce platform, generating, by the way, $136 billion in web sales alone in 2024, and just imagine the impact of all these agents on the economy and the efficiency of these companies on one side, and what it means for the workforce, and for the people who have money to pay for these services, on the other.

Staying on the same topic, let's talk a little bit about Klarna. We've spoken about Klarna many times before on this podcast. They're the Swedish fintech giant that jumped all-in on AI early in 2023, collaborating with OpenAI to develop multiple solutions, initially mostly for customer service. They just announced that their employee count has gone down from 5,000 to 3,000 since they started this initiative. They didn't actually fire anybody; they simply went on a hiring freeze as they were developing AI capabilities, and through natural attrition they lost 40% of their workforce. Very early in the deployment of their AI agents, the numbers they shared were that their new AI capabilities replaced 700 human agents and cut response time from an average of 11 minutes to two minutes. That's $40 million annually, straight to the bottom line. Additional information that was shared: 87% of Klarna's workforce currently uses generative AI every single day, with non-technical teams leading the pack: communications at 92.6%, marketing at 87.9%, and legal at 86.4%.

On the flip side, it was very interesting to hear their CEO say that AI-only customer service has actually reduced the quality of customer service, and they're now switching back: they're looking to hire some human agents. To quote their CEO: "From a brand perspective, I just think it's so critical that you are clear to your customer that there will always be a human if you want." So, as expected, a blended solution of a human support team plus AI to handle most of the volume is probably the way forward, at least in the foreseeable future. But the reality is that there is an entire industry of call centers and contact centers, supporting huge numbers of companies in the world and employing millions of people, that will shrink from millions of people to probably tens of thousands, because only a few cases will actually require human intervention.

The outcome of the process Klarna is going through is profound, and the efficiency payoff of Klarna's all-in approach to AI is incredible: Klarna's revenue per employee surged to nearly $1 million in Q1 of 2025, up 74% from just a year ago. So every employee in Klarna, on average (total revenue divided by headcount), now generates almost $1 million.
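To make that math concrete, here's the back-of-the-envelope calculation. The headcount numbers are from the episode; the quarterly revenue figure is an illustrative assumption, not a reported number.

```python
# Back-of-the-envelope: Klarna revenue per employee.
# Headcount (5,000 -> 3,000) is from the episode; the Q1 revenue figure below
# is an illustrative assumption, not a reported number.
q1_revenue = 700_000_000   # assumed quarterly revenue in USD
employees = 3_000          # headcount after the ~40% attrition

annualized = q1_revenue * 4
per_employee = annualized / employees
print(f"~${per_employee:,.0f} revenue per employee per year")  # ~$933,333
```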
The other interesting aspect of this is that their Q1 2025 revenue rose 13% despite the significant reduction in headcount. That doesn't make them profitable yet, by the way; they're still losing money, and actually a lot of money right now, probably much of it because of capital investment in AI. Either way, their IPO was pushed back due to uncertainties in the market. I think if they figure out the AI stuff quicker, that's actually going to be a good way for them to get a higher valuation once they do go public.

A few pieces of news from OpenAI. Obviously, we cannot pass a week without something significant happening on their front. OpenAI is spearheading the development of a colossal five-gigawatt data center in Abu Dhabi named Stargate UAE. The project is very interesting, first of all, because it's insane in size: the data center itself is going to spread over 10 square miles. Just imagine that; that's the size of a large neighborhood. Of that, OpenAI will own only one gigawatt of the five-gigawatt computing cluster, but it makes a very clear statement about where OpenAI is going. The quote from Sam: "By establishing the world's first Stargate outside of the US in the UAE, we're transforming a bold vision into reality." Now, the other interesting aspect is that the UAE will be the first country in the world to enable nationwide ChatGPT access. The goal, going back to the growing digital divide, is that the government will finance ChatGPT access for all the people who live in the UAE, at least in Abu Dhabi. This is an incredible move by the government. I really hope additional governments will do the same, but I also hope that governments will find ways to actually train people, from a young age all the way through working professionals, on how to actually use these tools, and not just give them access. Now, the downside, which we talk about every time we discuss these mega-projects, is that this facility will consume power equivalent to five medium-sized nuclear reactors. So the amount of power this new facility will require is insane, and if it's not provided by clean energy, it has a very severe impact on the environment.

But that's just the UAE project. OpenAI also announced an increased investment in their Texas data center, growing it from two buildings to eight and bringing its total funding to $15 billion. It is going to be the largest OpenAI facility, with 50,000 Nvidia Blackwell chips. All of this is obviously allowing OpenAI to reduce its dependency on compute from Microsoft, as these companies drift further and further apart. Both of these projects are part of the Stargate project announced in January 2025 together with President Trump, OpenAI, Oracle, and SoftBank to provide infrastructure for OpenAI's growth. And it is very clear that OpenAI is positioning itself as the Microsoft and/or Google of the AI generation, with control over everything: they will control the models, the agents, the hardware, the compute, and so on and so forth.
So a company that more or less didn't exist two years ago is going to take a very, very significant role in world dominance from a technology perspective, which is absolutely incredible. A week back, OpenAI also unveiled Codex, an AI coding agent that integrates into ChatGPT and can handle multiple software engineering tasks simultaneously. The new platform is powered by codex-1, a fine-tuned version of OpenAI's o3. Codex writes code, fixes bugs, runs tests, et cetera, et cetera: more or less every aspect of coding. It operates in a secure, cloud-based virtual computer, integrating with GitHub to get access to users' code bases, and it denies internet access during tasks in order to keep your code yours and not give it access to other things. The tasks it can perform run between one minute and 30 minutes, still a very big spread from the seven hours of Claude Opus 4, but it allows developers to delegate repetitive work to these agents while they keep working. Combine that with OpenAI's recent, not yet finalized, acquisition of Windsurf, one of the top vibe-coding tools out there, and you understand that the AI coding market is on fire, a very bright, glowing fire at that.

OpenAI also announced that its ChatGPT Operator agent, which is still in preview, now leverages o3 as the model behind the scenes. Those of you who remember: it's an agent that can run in your browser and operate the browser like you would on your own. They launched it back in early 2025, available only to people with the $200-a-month Pro subscription, and it had lots of issues in the beginning. Presumably o3, with its thinking capabilities now tailored for this kind of work, can really help users with filling out forms, grocery shopping, et cetera, et cetera: basically everything we do in a browser. And this is obviously the next frontier of how we engage with the world. Going back to what I've said multiple times: the days in which we browse the web the way we have so far are numbered. They're not over yet, but the hourglass has been flipped and the sand is falling, and every day that passes, we will see less and less human traffic and more and more agentic traffic visiting websites, which has, as I've mentioned multiple times, profound implications, including for the way the web actually works: being financed by ads and allowing people to create content that drives traffic to them. Well, there's not going to be traffic, at least not human traffic, and everything else is going to change in the way we engage with data on the internet.

Now, Jerry Tworek, VP of research at OpenAI, said that this is just the beginning, and they're planning significant improvements to Operator in the very near future to make it even more useful. Start connecting the dots on everything OpenAI is doing, together with the latest release of Opus 4 and what it can do, and you get hints of what GPT-5 might look like: a model that can understand everything you need, that can connect to the internet, that can operate the browser, that can select which method to use at which step, and that has memory of all your historical data, which right now Claude doesn't have. That's my biggest reason to maybe still go to ChatGPT: it already knows me, and it can reference a lot of those things in conversations.
Combine that with MCP capability and the ability to connect to more and more data sources and tools, and you understand that GPT-5, Opus 4, and all the new models that come after them are going to change literally everything we know about how we work, and even how we work with AI tools.

Staying on the OpenAI topic: Sam Altman's World Network, the digital orb that scans your eyes and makes sure you're actually a human, just raised $135 million to expand what they're doing. The funds will fuel their global network growth; they're targeting 180 million Americans with orb-verified World IDs by the end of this year. I see that as a very ambitious and completely unrealistic target, but that's what they're targeting. For those of you who don't know what this is: it's a little orb that you place your eye against; it scans your iris and then saves a digital ID on the blockchain, so you can verify that you're actually a human, reduce fraud, and verify that you are actually you. Overall an interesting idea; how it will actually work will be very interesting to see.

Now on to two very interesting pieces of news from government and its relationship with AI, one highly controversial, the other highly needed. House Republicans included a clause in their "big, beautiful" tax bill, passed by the House Energy and Commerce Committee on May 14th, banning states and localities from regulating AI for the next 10 years, starting as soon as the bill takes effect. Their argument, the same argument made by industry leaders in the Senate committee hearing that we shared with you a couple of weeks ago, is that they're trying to prevent a patchwork of state regulations that would make it very, very hard for these companies to maneuver, which actually makes perfect sense. But this would immediately block over 20 California AI laws and 30 pending bills in California alone, plus a lot of similar bills in many other states, including protections against deepfakes and AI-driven healthcare denials, among many other laws. A lot of people who object to the law point out that the federal government has not put in place any rules and regulations to help Americans deal with the negative aspects of AI, and yet this law, if it passes the Senate, will be put into action. The flip-side argument from Ted Cruz and others in the Republican Party compares it to the 1998 Internet Tax Freedom Act that fueled e-commerce growth in the US; we talked about this when we covered the AI Senate hearing. Now, from the states' perspective: 40 state attorneys general, plus 140 organizations including the Center for Democracy and Technology, are arguing against the law, saying it leaves consumers unprotected. As I mentioned, there is no clear federal law to protect any of us against the downsides of AI usage.

The other piece of legislation is not only about AI, but it very much covers AI. President Trump just signed the Take It Down Act, a bipartisan law criminalizing the distribution of non-consensual intimate imagery, whether real or deepfaked. The law is effective immediately, and it was brought forward by Ted Cruz, a Republican, and Amy Klobuchar, a Democrat. The bill passed the House with a 409-to-2 vote and passed unanimously in the Senate: definitely a great step forward. This law was actually promoted and championed by Melania Trump, and I'm very glad it's now in place.
It doesn't protect against every deepfake, but at least it protects individuals from intimate fake photos of themselves, which can cause catastrophic harm to anybody, especially teenagers and younger people.

The next piece of news is a resurrected attempt by the Y Combinator-backed company Firecrawl, which has now allocated $1 million to hire three AI agents for content creation, customer support, and junior development roles, paying each $5,000 a month. They ran this publicity stunt a few months ago, in February, and we shared it with you; it didn't work very well. But this time they're saying they already got 50 applicants in a single week since posting the jobs, which tells you how quickly the AI agent world is evolving, with companies and individuals developing agents and wanting to sell their services to others, which is going to add a whole new layer to the economy. The million-dollar budget also covers hiring the people who develop these AI agents, as full-time employees or as contractors. And if you're wondering what these agents will do, to dive a little deeper: the content agent must autonomously produce SEO-friendly blogs, track engagement, and improve accordingly; the support agent handles tickets in under two minutes; and the developer agent codes in TypeScript and Go. There are obviously much more detailed instructions in the job postings themselves. Now, there's an interesting catch-22 in all of this when you think about it: companies are going to hire developers who can create agents that will eventually replace them as well, because agents will know how to write code and create new agents. That makes my head get stuck in a loop. But that's where we are going, and we're going there very, very fast.

Speaking of startups: a Silicon Valley Bank report reveals that 40% of US venture capital in 2024 went to AI-focused funds. Forty percent, doubling that share from just five years ago and leaving non-AI startups scrambling. This brought back a phrase coined by Mike Maples in 2016: "zombiecorns," startups valued at over $1 billion with stagnant revenue growth and dim prospects for future funding or exits. To put things in perspective: in 2021, 138 enterprise software unicorns emerged; in 2024, only nine; and based on this report, not a single one so far in 2025. That reflects a very significant funding drought for anything other than AI. Combine that with the fact that many of the startups that are actually doing well are getting killed by new AI capabilities and features within ChatGPT, Claude, et cetera, and you understand that the startup world is going through a very serious storm. It will come out on top, and people will figure it out, but right now it's a very serious issue for startups that want to raise money and grow something that is not AI-related.

And switching from software to hardware, or specifically robots, which we haven't covered for a couple of weeks: Tesla released a new video of their Optimus robot showcasing some very impressive capabilities, performing tasks like throwing out trash, vacuuming, stirring food, and even moving Model X parts onto a dolly. The interesting thing about all of this is not the fact that the robot can do these tasks, but that the robots learned these skills by watching first-person videos of people actually doing them.
So they weren't programmed, and they weren't trained in any way other than watching those videos. Tesla is planning to improve that capability so the robots can learn from third-person view: basically internet videos of every task you can imagine, which exist in abundance, letting the robots learn from those. That is obviously going to dramatically shorten the time it takes to train robots to do new tasks safely and effectively. If you remember, Musk thinks that humanoid robots are the biggest product ever, and he's estimating a $25 trillion market for autonomous robots in the future, doing more or less every blue-collar job, including house chores, as you've seen the robot doing. Tesla has already begun limited production of the robot at the Fremont factory, targeting 5,000 to 12,000 units in 2025 and starting to sell them externally in 2026. We've heard a lot of predictions from Elon before that, in most cases (I would say almost all cases), fell short, so these numbers might be optimistic. But the direction is very clear, especially since there's competition from many other companies building advanced humanoid robots.

That's it for this week. There's a lot of other really interesting news that we just couldn't fit into this crazy episode, and you can find all of it in the newsletter, in short snippets with quick bullet points and links to the actual articles, where you can read more if you're interested in one topic or another. You can sign up for the newsletter via the link in the show notes.

If you enjoy this podcast, please open your phone right now, unless you're driving, and click the share button and share it with a few people you know who can benefit from it as well. Having more people understand what is coming and how to use AI is critical for us as a society to figure it out and enjoy the benefits while reducing the risks, and your ability to play a part in this is literally by clicking the share button and sharing it with more people. A very easy task that takes about five seconds; I would really appreciate it if you did.

By the way, we just announced new dates for the AI Business Transformation course. The next cohort opens to the public at the beginning of August, so if you're interested, click the link in the show notes and take a look at our courses. These courses sell out, and they have been transforming businesses for over two years. I'm currently running two of them in parallel, but most of those are private cohorts, so if you are interested in a public course you can just sign up for, go and sign up right now while we still have seats.

That's it for today. On Tuesday we'll be back with another fascinating how-to episode, and until then, have an amazing rest of your weekend.