
Leveraging AI
Dive into the world of artificial intelligence with 'Leveraging AI,' a podcast tailored for forward-thinking business professionals. Each episode brings insightful discussions on how AI can ethically transform business practices, offering practical solutions to day-to-day business challenges.
Join our host Isar Meitis (4 time CEO), and expert guests as they turn AI's complexities into actionable insights, and explore its ethical implications in the business world. Whether you are an AI novice or a seasoned professional, 'Leveraging AI' equips you with the knowledge and tools to harness AI's power responsibly and effectively. Tune in weekly for inspiring conversations and real-world applications. Subscribe now and unlock the potential of AI in your business.
187 | Sam Altman’s Senate Testimony, Companies Leveraging AI for cross industry expansion, OpenAI Will NOT become a for-profit, and other important AI news for the week ending on May 9th, 2025
👉 Fill out the listener survey - https://services.multiplai.ai/lai-survey
👉 Learn more about the AI Business Transformation Course starting May 12 — spots are limited - http://multiplai.ai/ai-course/ Save $100 with promo code LEVERAGINGAI100
Are U.S. companies ready to lead the AI revolution?
Sam Altman's testimony before the U.S. Senate just set the stage for a high-stakes race in AI, where U.S. innovation is battling global superpowers for dominance. In a world where energy and infrastructure dictate AI's growth, the stakes have never been higher.
But it’s not just about big government moves—businesses are rapidly leveraging AI to break industry barriers. From Salesforce moving into customer service to Canva building out new data capabilities, the lines between industries are getting blurred, and it's all powered by artificial intelligence.
In this session, you'll discover:
- Sam Altman’s bold predictions for U.S. AI leadership and its global implications.
- How OpenAI's decision to remain a non-profit reshapes its path and the AI landscape.
- Why energy infrastructure is now seen as the backbone for AI expansion.
- The rise of cross-industry AI plays—how giants like Salesforce and ServiceNow are breaking into new markets.
- OpenAI’s new blueprint for enterprises: The seven key steps every business should follow.
- The role of government in AI: Regulatory risks, infrastructure needs, and strategic competition with China.
About Leveraging AI
- The Ultimate AI Course for Business People: https://multiplai.ai/ai-course/
- YouTube Full Episodes: https://www.youtube.com/@Multiplai_AI/
- Connect with Isar Meitis: https://www.linkedin.com/in/isarmeitis/
- Join our Live Sessions, AI Hangouts and newsletter: https://services.multiplai.ai/events
If you’ve enjoyed or benefited from some of the insights of this episode, leave us a five-star review on your favorite podcast platform, and let us know what you learned, found helpful, or liked most about this show!
Hello and welcome to a weekend news edition of the Leveraging AI Podcast, the podcast that shares practical, ethical ways to improve efficiency, grow your business, and advance your career. This is Isar Meitis, your host, and we have a packed episode, like every weekend, to be fair. But there are a few really big discussions that we're going to have in the beginning that are going to impact the future of AI and the future of businesses with AI, with some very interesting inputs, such as Sam Altman's testimony in front of the Senate, as well as shifts in the entire sets of services and value that companies are providing, blurring the lines between traditional industries, combined with a very interesting interview done by McKinsey talking about the urgency for CEOs to act now, followed by OpenAI's blueprint for AI in the enterprise with the seven things every company should focus on, per OpenAI, based on their experience working with several large enterprises. And then we'll dive into the rapid-fire items, starting with OpenAI not changing to a for-profit organization, but there's a lot of other stuff to talk about, specifically with OpenAI and with a lot of other companies. And there's even some interesting market and global news related to Nvidia at the end. So let's get started. This week there was a meeting of the Senate Committee on Commerce, Science, and Transportation, which focused on AI and the impact it's going to have on the US. It actually started with Senator Ted Cruz completely trashing the policies of the previous administration. He took us back to the growth of the 2000s during the internet revolution. He claimed that the size of the US economy and Europe's economy were similar before that revolution. He then claimed that a set of regulations in the EU slowed innovation down, while regulations led by Clinton back then in the US made it very easy for companies to innovate and grow different solutions based on this new technology. And he's claiming that's the main reason for the huge difference in the size of the economies right now, with the US economy more than 50% larger than the sum of European economies. He's claiming that the main driver for that was the growth of the tech sector in the US, which, per him, did not grow at the same pace in Europe, mostly because of differences in regulation. So that set the stage for what would be a very interesting conversation with very senior guests, people like Sam Altman, the CEO of OpenAI, Lisa Su, the chair and CEO of AMD, Michael Intrator, the co-founder and CEO of CoreWeave, Brad Smith, the president of Microsoft, and a few others. So very senior people with significant impact on and connections to the AI revolution were testifying in front of this committee. Now, the second step of the setup was talking about the importance of US leadership in the AI race, and it was very clear that the tone would be that leadership in this race is crucial to who is going to more or less decide the future of the world. Is it going to be totalitarian countries like China or democracies like the US? And it was stated that currently the US is in the lead, but significant steps are required in order to maintain that lead, mostly against China.
Now, they spoke about many different things, including potential benefits of AI as well as potential risks of AI, and it was all around how the private sector and the public sector must work together in order to guarantee that the US wins the AI race against China, and work together with its allies in order to make sure they use US-based technology versus Chinese technology. A big part of the conversation went around infrastructure, defining it as a foundational requirement for AI leadership. So it was very clear that a significant investment in the development of physical infrastructure, mostly for power supply but also in data centers, is essential to supporting the growth of AI everywhere in the world, in the US as well. And this includes, as I mentioned, computing power, data centers, and most importantly, energy. It was mentioned that electricity demand for AI projects is substantial, which we all know, but in the near future it is going to reach 12% of total US demand, which is a very significant amount because it's competing with everything else. The good news about it, from a job generation perspective, is that there is a significant need for hundreds of thousands of new electricians in different roles and skilled laborers in the electrical field to create and then maintain this infrastructure. Another big topic that was discussed is permitting, mostly on the federal level, because a lot of open space, wetlands, et cetera, is governed by permits driven by the Army Corps of Engineers, and they are currently a significant bottleneck to big infrastructure projects. It was mentioned as an example that on the state level, major projects can be pushed through permitting in six to nine months, while a federal wetland permit takes 18 to 24 months. So it could take up to two years just to get the permit to do a significant project if you need this kind of approval, and that approval is required in many cases for large-scale projects that go through these areas. To clarify how critical power is to the future vision of these companies and how the world would look, I want to share a quote from Sam Altman, who said: eventually the cost of intelligence, the cost of AI, will converge to the cost of energy. How much you can have, the abundance of it, will be limited by the abundance of energy. So in terms of a long-term strategic investment for the US to make, I can't think of anything more important than energy. Basically, what Sam is saying, and it was very clear from other people there as well, is that the race to create new energy to support new development of data centers is a critical, and maybe the most critical, component of winning the AI race. Another big factor was the availability of skilled talent. It was mentioned that the US currently has a very strong talent pool, but in order to guarantee that the US wins this race, we need a lot more of it, including software developers, hardware developers, and application developers, just to name a few, and it was urged that high-skilled immigration is crucial for bringing the best talent from all around the world to work in the US and contribute to US AI innovation. Another important topic was obviously balancing policy and regulation. The term that was used by several different people is light-touch federal regulation, meaning a framework that will balance safety and innovation at the federal level and reduce the need for a state-level patchwork.
It was mentioned by Sam and a few others that working through 50 different state regulations will dramatically slow down AI innovation, whereas having a clear guideline at the federal level might prevent the need to do that. Another part of the conversation talked about export controls and the need to balance the risks of technology getting to China with the need to have as many countries around the world as possible implement US-based technologies, in order to create an alliance around the world that will use US technology versus Chinese technology in the race for global AI dominance. It was very clearly stated that overly strong export controls will create a vacuum in many areas around the world, which will be filled by competitors of the US providing AI solutions in these countries. It is already happening in China itself, where Huawei, their largest AI chip manufacturer, is growing at an insane pace, both in terms of the number of chips they're producing and selling and in terms of the capabilities that they have, because of US export controls. And if US companies are prevented from selling to other countries, Huawei is going to fill up that vacuum as well. Risks were also discussed in this conversation, including protecting children from potential harms of AI, talking about learning from the mistakes of the internet and social media eras that had dramatic negative impacts on teenagers and younger kids as well. There was a conversation about the problem with deepfakes and protecting individuals' likeness from unauthorized replication, and the push for industry to find ways to clearly identify what is AI-generated versus what is not. The problem of these AI companies disrupting the concept of intellectual property was also a topic that was discussed, and the potential for AI to be used for both offensive and defensive cybersecurity was also raised as a concern. The bottom line was very clear: the leading companies and the current administration are calling for a partnership that will place relatively few limitations on these companies to develop the most advanced AI capabilities and keep the US ahead, despite the risks, and it called for a partnership between government and these companies in order to enable large-scale infrastructure projects, in combination with partnerships with allied countries to deploy these solutions for them as well. We discussed this shortly after the new administration was elected: that this is probably what should be expected, given that the appointed AI czar is a venture capitalist, one of the PayPal Mafia, and a big name in Silicon Valley. It was very, very clear that this is the direction, less regulation, more investment, run fast and we'll figure out the risks later. And while this particular conversation was more balanced, including discussing a lot of the risks, I think the direction is very clear. Another topic that I want to cover is that OpenAI released what they call the blueprint for AI in the enterprise. It is based on their experience with large enterprises using OpenAI's technology and what they believe drives success versus failure. Their report reveals that 92% of Fortune 500 companies are actively using AI. It didn't exactly say what that means or what ROI these companies are seeing, but it's a much larger number than I expected. Then again, this could mean that one person in the company is using ChatGPT to write emails; that would still mean that somebody in the company is using AI.
I don't think, by the way, that's the case, but I think it's too vague right now to understand exactly how they're using AI. They also claim that companies that use ChatGPT are reporting a 40% reduction in task completion time for tasks like coding, writing, and data analysis, which means it's enabling significantly more throughput and production by these companies. So the seven steps that OpenAI mentioned are as follows. The first is start with evals, meaning define systematic, scientific ways to measure the performance of AI against specific benchmarks, rather than deploying the AI and then trying to figure out what it's actually doing. So define the exact things you want AI to do, define how you measure that, and then actually measure it and correct accordingly, or you may end up with the wrong output (you'll find a minimal illustrative sketch of what such an eval harness could look like right after this segment). The second one was embed AI in your products, meaning don't just use it internally, but actually use AI to enhance the value of the deliverables of your company, whether it's products or services. The next one is start now and invest early. That's pretty self-explanatory, but they're basically saying the AI era is here, it's not the future, and if you're not investing in it right now, you might miss the train or suffer the consequences. The example they gave is that Klarna's early investment and broad adoption of AI led to them having a customer service assistant that handles two thirds of all service chats, cutting resolution time from 11 minutes to two minutes, and this by itself is projected to drive $40 million of profit for the company. The next topic is customize and fine-tune your models, where the benefits are clear, right? It allows you to improve accuracy and domain expertise, keep a consistent tone and style, get faster outcomes, and so on by using customized models. More about that from OpenAI in a few minutes. The next topic was specifically about code generation, because most large enterprises also write code as part of what they're doing, even if it's just for internal purposes. So the next one was unblock your developers. Automating the software development lifecycle can multiply the dividends of what the company is doing. The example they gave there is from MercadoLibre, which is a huge retailer in South America, which developed a platform layer called Verdi that helps their 17,000 developers unify and accelerate AI application builds. So they're now creating applications for internal usage significantly faster. And then the last point they made is set bold automation goals. I think this is a very important point, because what happens, and I talk about this a lot in the course that I'm teaching, is that as business people, we're used to looking at business processes as processes, multiple small steps that you have to go through. And in many cases the mistake that companies make is they try to solve each and every one of the steps with AI, versus looking at the bigger problem, looking at the goal. The Klarna example, and also this particular paper, where OpenAI talks about how they themselves are using AI to handle hundreds of thousands of tasks every single month, show you that, if you look at customer service as an example, AI can do customer service instead of solving small components of customer service, like a better IVR on the phone or better distribution of tasks to the relevant people. It can actually do most of the steps of the work, circumventing the small steps.
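To make the evals point a bit more concrete, here is a minimal, hypothetical sketch of what a tiny eval harness could look like. None of this comes from OpenAI's blueprint: the case format, the keyword-based grading, and the call_model placeholder are all illustrative assumptions you would replace with your own benchmark and a proper grader.

```python
# Hypothetical minimal eval harness: score a model against a small benchmark
# before and after every prompt or model change, instead of deploying first
# and guessing later. Everything here is illustrative, not OpenAI's tooling.
from dataclasses import dataclass


@dataclass
class EvalCase:
    prompt: str
    expected_keywords: list[str]  # crude keyword check; swap in a real grader


def call_model(prompt: str) -> str:
    # Placeholder: route this to your LLM provider or internal gateway.
    # A canned answer keeps the sketch runnable end to end.
    return "Refunds are available within 30 days of purchase."


def run_evals(cases: list[EvalCase]) -> float:
    passed = 0
    for case in cases:
        answer = call_model(case.prompt).lower()
        if all(kw.lower() in answer for kw in case.expected_keywords):
            passed += 1
    return passed / len(cases)  # track this score across model and prompt changes


if __name__ == "__main__":
    benchmark = [
        EvalCase("Summarize our refund policy in one sentence.", ["refund", "30 days"]),
        EvalCase("Which plan includes priority support?", ["enterprise"]),
    ]
    print(f"Pass rate: {run_evals(benchmark):.0%}")
```

The specific grading logic doesn't matter; what matters is that the benchmark and the score exist before the AI touches production, so every change can be measured against them.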
Coming back to that last point: setting bold goals for AI can drive much better results, assuming you have the resources and the knowledge on how to do that. Now, speaking about AI and how it is impacting different industries, there was a very interesting article on The Information in the last few days talking about how AI is allowing companies to go beyond their niche or their industry and grow into areas where they couldn't profitably grow before. The examples they gave are companies like Salesforce, which is traditionally just a CRM developer, now developing AI agent platforms and also providing customer service solutions, which directly competes with ServiceNow, as an example. ServiceNow, on the other hand, known for IT service management, is pushing into HR and customer service with AI-driven products, and so on and so forth. Companies like Glean and Notion are disrupting legacy companies with AI-driven search and productivity tools. And Canva now has a spreadsheet product. So you see where this is going: because companies can now develop solutions faster, because AI enables that and there's AI embedded into the products, they can provide a lot more value in a lot more fields, a lot faster, allowing them to do it in a way that is potentially profitable. And hence they're going to go into these fields. This is increasing competition across the board, because companies can now shift the focus of what they're doing, or at least go into additional markets that they couldn't address before. I talk about this a lot in the last chapter of my course, where we discuss what questions business leaders need to ask themselves in order to grow the business, versus potentially be eliminated by the fact that their customers, their ecosystem, their competitors, and they themselves have access to AI, and how to navigate that starting now in order to increase the chances of a positive outcome in the future. Staying on the same topic of what CEOs need to do, there was a very interesting interview this week on a podcast by McKinsey. They interviewed John Chambers, the legendary CEO of Cisco who led it to its greatest success, and a lot of the conversation focused on AI. I'll start with a quote. John Chambers said: in many ways the implementation of AI is like that of the internet, but it is going to move at five times the speed with three times the outcome. Basically, he's saying that these two revolutions are similar, but you've got to be able to move significantly faster, and the benefits will be three times bigger if you actually do that. And he mentioned that companies need to be able to go from zero to a hundred in no time and basically cross the chasm of AI in one year, or risk being replaced by other companies that will do the same. He specifically spoke about the same kind of topic, that companies and organizations need to change, potentially, the overall business that they're in, in order to guarantee that the company will survive. He gave the example of the French postal service, which traditionally delivers mail, using AI to shift from declining letter delivery to parcel services and new offerings, including elder care, showcasing how traditional businesses can completely reinvent themselves with AI. Or the other way around, right? If they don't do that, they may go extinct, because AI may replace the thing that they're doing right now. It was very, very clear that bridging the skill gap and knowledge gap of employees and of the C-suite is a critical component and a barrier to AI adoption. And I agree with that a hundred percent.
As somebody who has been focused on AI education and literacy in companies large and small, I can tell you with confidence that C-suite and leadership buy-in, in combination with leadership and employee training, hold the keys to incredible benefits from AI, and if you don't do both components, C-suite buy-in as well as education and training for leadership and for employees, your chances of a successful AI implementation are very, very low. Now, on the impact on the markets, which we discussed in several previous episodes: the tech industry shed 214,000 jobs in April of 2025, driving the sector's unemployment rate to 3.5% from 3.1% just in March. So that's one month. Now, obviously AI is not the only driver for this. There are economic concerns about a potential recession and tariff issues and things like that, but it is very, very clear that these companies are pushing very hard on automating and creating code with AI, which means you can generate the same outputs, and in many cases bigger or more outputs, with significantly fewer people. Microsoft CEO Satya Nadella shared last week that 30% of the code at Microsoft is generated by AI right now. That means that if you need to generate the same amount of code, you need fewer people. If you want to generate more code, then you can maintain the same number of people, or not, and I think the direction is very, very clear. Both for companies and for individuals, the wait-and-watch-what-happens approach is not going to work. You have to be proactive, you have to take action, and you have to know what you're doing. If you are an individual, you must have significant AI skills or you will lose your job. It's just a matter of time. And for companies, if you do not train your people properly and give them the knowledge on how to implement AI for a large variety of tasks, as well as consider the strategic side of what the future of your company is in the AI era, your company will suffer significant market share loss and might be eliminated altogether because of the changes that AI will drive in the economy, in businesses, in your niche, in your industry, and so on. And the best way to do that is to start with training and education of leadership and then the rest of the employees, which is a good point to remind people that the AI Business Transformation Course that I've been teaching for over two years to hundreds and maybe thousands of business people starts its spring cohort this Monday, May 12th. So if you're listening to this episode on the date it comes out, Saturday, May 10th, or on Sunday, May 11th, or even on Monday morning if you're really a procrastinator, you can still join the course and get incredible value in learning AI and how to implement it effectively from a business perspective. Three very practical sessions, ending with a strategic session in session four. So if you haven't done this yet, this is an incredible time to act, because the urgency is very, very clear from every angle. You hear it from government, from top leadership in the AI universe, as well as from industry. The AI change is here, and if you don't adapt, you will suffer the consequences. Now, we teach this course all the time. I'm just finishing a course that started in April, there are two courses in May, and so on and so forth, but most of these courses are private. You cannot sign up for them.
We open public courses only once a quarter, so the next course will most likely be in August, meaning if you don't sign up before Monday, the next time you can join a cohort and learn the skills, the knowledge, and the strategy that you can use to drive dramatic changes in your company and in your personal career will be delayed by an entire quarter, which is not a good decision, I think. So if you can, come join us on Monday. The course runs for four weeks, every Monday at noon Eastern time, and we have people from all over the world. So even if the time zone is not perfect for you, we have people from China, New Zealand, Australia, India, the Middle East, and so on. So wherever you are in the world, this course can dramatically impact your future success. So come and join us. By the way, if you are listening to this after May 12th, we will replace the sign-up with a waitlist for the next course, which will probably happen in August. So don't be discouraged; you can still come and sign up for the August course and join us later this summer. So that's it on the impact and the big topics. Now let's dive into the rapid-fire items. And as I mentioned, there's a lot to talk about; even just OpenAI could have been an episode by itself. The biggest news from OpenAI this week is that they abandoned the plan to transition into a for-profit entity, meaning the nonprofit board will stay in charge of the company. Now, we've covered this topic in multiple episodes, but a quick recap for those of you who just started listening to this podcast: OpenAI started as a nonprofit with the goal of developing AI that will benefit all of humanity in a safe way. One of the first investors in that company was Elon Musk, who was one of the co-founders, and later on, when they understood they needed a lot more money, Elon wanted to roll OpenAI into Tesla in order to be able to finance it. He wanted to become CEO of the company as well. That wasn't the path that OpenAI took. Elon Musk left, starting a serious beef with Sam Altman and OpenAI as well. OpenAI then went to Microsoft and got a crazy investment that up to this point is over $13 billion in both cash and access to compute. It was very clear that continuing to raise the amounts of funds that they're raising, including the $40 billion funding round, the largest of any private company in history, which they just raised from SoftBank and other investors, valuing the company at $300 billion, cannot happen with a nonprofit entity. So they were in the process, over the past year or so, of transitioning from a nonprofit entity to a for-profit entity so they could provide the relevant returns to their investors. But that faced very serious pushback from multiple directions, the first being Elon Musk, who sued them over exactly that, more about that in a minute, as well as previous employees and other bodies who basically said a nonprofit company cannot just decide to transition to a for-profit company, because it betrays all the money that was put into it before for the benefit of the public. So after fighting that battle, including many meetings with the attorneys general of both California and Delaware, OpenAI, I guess, decided that they're not going to win this war, even if they might win one or two of the battles.
And they just announced that they're abandoning their attempt to become a for-profit company, but they also announced some significant changes at OpenAI, both from a leadership perspective and in how the company will be structured. So OpenAI will convert its for-profit arm, which existed before and was controlled by the nonprofit board, from an LLC to a public benefit corporation, allowing equity for employees and investors, while the nonprofit will retain majority control. This basically means that the for-profit component can run as an entity that still allows profits to be shared with investors, while at the same time, hopefully, the nonprofit board governs the future decisions and the strategy of the company moving forward. That still remains questionable, because Sam Altman, if you remember, was fired and then brought back, and then drove a whole change in the board structure and the people on the board to have more control over it. So how nonprofit-focused the board really is, is not a hundred percent clear. And that's one of the reasons that Elon Musk just announced that he's moving forward with his lawsuit against OpenAI. Now I'm quoting the statement from his lawyer: "Nothing in today's announcement changes the fact that OpenAI will still be developing closed source AI for the benefit of Altman, his investors and Microsoft," end quote. Which basically means, per Musk, that the nonprofit structure is just obscuring the actual transfer of assets from the open source world, the nonprofit, and the benefit of humanity to private gains, which is what he's trying to fight. There's obviously a lot more going on. There's a lot of personal beef, and there is xAI, which is a competitor of OpenAI, and being able to slow OpenAI down will help xAI grow faster. There's also a countersuit from OpenAI against Elon, claiming he's going after them for no good reason and that the lawsuit actually serves his business agenda and not the things he's suing them over. One of the biggest implications of not being able to transition from a nonprofit to a for-profit is that part of the money they raised in the last two rounds, including the $40 billion in the recent round, will not come to them if they do not convert by the end of this year, which they're not going to. So as an example, the $30 billion that SoftBank committed is going to be cut to $20 billion. That is $10 billion, with a B, that they won't have available to use. And there's a similar agreement with the previous round that they raised in late 2024. How will that play out? Will SoftBank really pull those funds? It's unclear, but there are also very significant financial implications to the fact that they can't make that conversion. Maybe the fact that they're changing the structure of the for-profit arm from an LLC will help them maintain at least some of that amount, but I will let you know once we learn exactly how that evolves. As part of these major changes, they also made a huge announcement from a leadership perspective: OpenAI hired Instacart CEO Fidji Simo as the CEO of OpenAI Applications, to oversee product, business, and operational teams. The current overall CEO of OpenAI, Sam Altman, will focus on research, infrastructure, safety systems, and board collaboration. So basically, OpenAI is going to have two CEOs, Sam focusing more on the strategy and Fidji Simo focusing more on applications and delivering product.
This is obviously a tectonic shift, with Sam Altman, maybe the strongest person in the AI world right now, giving up half the kingdom to Simo. Simo is only 39 years old. She has led Instacart to profitability since 2021, took it public in 2023, and before that headed the Facebook app at Meta, where she spent a decade. So she brings a significant, proven tech leadership track record that will hopefully deliver the same kind of results at OpenAI as well. Now, that is not happening immediately. Simo will join OpenAI later in 2025, as she needs to transition out of her position at Instacart. This is very significant: obviously the fastest-growing, best-known company in the AI world, the one that started this crazy era, is going to have two CEOs as of later this year, focusing on two different things. I think from a leadership perspective it makes sense. I'm sure Sam was stretched very, very thin, and I think his genius will better serve the things he's going to focus on versus product development and growing the business side of things. And it's very clear that Simo has a very successful track record in doing exactly that. It will be interesting to see how much fuel it adds to OpenAI's position in the overall AI race. Now, another thing that they announced as part of this restructuring process is that they're planning to cut the revenue share of existing investors, including Microsoft, from 20% to 10% by 2030, so within five years, basically cutting their revenue share in half. As a reminder, Microsoft has as of now invested $13.75 billion in OpenAI, and it has not approved this restructuring yet, and it will be very interesting to see how that whole thing evolves. That being said, the formal statement said: we continue to work closely with Microsoft and look forward to finalizing the details of the recapitalization in the near future. I obviously don't know what's happening behind the scenes. I would love to be a fly on the wall in those conversations to see how the big boys really do it. But I think both sides need to maintain this frenemies relationship, at least for the next few years, so I assume they will figure it out. OpenAI also made a big announcement on the technical side: on May 8th, they announced that developers can now use reinforcement fine-tuning, also known as RFT, to customize the o4-mini reasoning model, tailoring it for specific company and enterprise needs. Now, it was possible to fine-tune OpenAI models before, but the only option was supervised fine-tuning, as opposed to reinforcement fine-tuning. Reinforcement fine-tuning employs a grader model, basically grading and scoring multiple responses and adjusting the model weights to align with nuanced enterprise goals and communication styles, meaning it's a more advanced way to fine-tune the model that was not available before and is available right now. They gave an example in which Accordance AI used this new fine-tuning of o4-mini for tax analysis, achieving a 39% accuracy improvement and outperforming leading models on tax reasoning benchmarks. Basically meaning that if you know how to fine-tune a model based on your data and your needs, you can achieve significantly better results than just using the top models that are not fine-tuned. So if your company needs something like this and you have the right talent in the room, you can now configure RFT via the OpenAI fine-tuning dashboard and/or API, uploading training datasets and validation splits, with detailed documentation available to show you exactly how to do that.
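For a rough sense of what that looks like in code, here is a minimal, hedged sketch using the OpenAI Python SDK. The files.create and fine_tuning.jobs.create calls are standard SDK methods; the exact shape of the method and grader payload below is my assumption based on how OpenAI describes RFT, so treat the field names as illustrative and verify them against the fine-tuning documentation before running anything.

```python
# A rough sketch of kicking off a reinforcement fine-tuning (RFT) job on
# o4-mini with the OpenAI Python SDK. The grader/method payload shape is an
# assumption based on OpenAI's RFT description, not a verified schema.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Upload training and validation splits as JSONL files (one example per line).
train = client.files.create(file=open("tax_train.jsonl", "rb"), purpose="fine-tune")
valid = client.files.create(file=open("tax_valid.jsonl", "rb"), purpose="fine-tune")

job = client.fine_tuning.jobs.create(
    model="o4-mini-2025-04-16",      # the reasoning model OpenAI opened up for RFT
    training_file=train.id,
    validation_file=valid.id,
    method={
        "type": "reinforcement",
        "reinforcement": {
            # Assumed grader config: a grader model scores candidate answers,
            # and the fine-tuned weights are nudged toward higher-scoring ones.
            "grader": {
                "type": "score_model",
                "name": "tax_answer_grader",   # hypothetical grader name
                "model": "gpt-4o-2024-08-06",  # model used to score the outputs
                "input": [
                    {
                        "role": "user",
                        "content": "Score the assistant's tax answer from 0 to 1 "
                                   "for correctness against the reference answer.",
                    }
                ],
            },
        },
    },
)
print(job.id, job.status)  # poll the job or watch the dashboard for progress
```

The hard part, as usual, is not the API call; it's curating the examples and defining a grader that actually reflects what a good answer looks like in your domain.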
Staying on OpenAI models: GPT-4.1 is now the default model for GitHub Copilot. Previously it was GPT-4o; now GPT-4.1, which outperforms GPT-4o in coding, has replaced it by default. If you are using GitHub Copilot, you can still switch back to GPT-4o by your own selection, but that is going to go away in 90 days. So GPT-4o will be completely removed from the model picker 90 days from now, and GPT-4.1, which performs better more or less across the board, is now the default model. OpenAI also announced that they're expanding their data residency to Asia. That's actually a very interesting move. There are regulations in multiple countries in Asia, such as Japan, India, Singapore, and South Korea, requiring data to be stored locally, especially in more regulated industries, and OpenAI is now making a move to guarantee that data stays in data centers in those countries for companies from those countries. They also stated, and I'm quoting: for the API platform and the ChatGPT business products, data remains confidential, secure, and entirely owned by you. It is very clear that the biggest race right now happens at the enterprise level and not the individual level. Despite OpenAI having 500 to 700 million weekly users on their platform, the biggest revenues come from enterprises and their adoption, and that's another step in that direction in the race for AI world domination. And the last piece of news from OpenAI has to do with their potential acquisition of Windsurf. We talked about this in the past couple of weeks; it was rumored that OpenAI is going to buy Windsurf, which is an AI-assisted coding platform that competes with other tools such as GitHub Copilot, which we just mentioned, and obviously the biggest one, Cursor. Based on Bloomberg, that deal is already done. It's not approved yet, and hence they haven't announced it, but apparently they agreed on all the terms. So OpenAI is most likely going to acquire Windsurf for $3 billion. The competition in AI code development is fierce, like in other aspects of AI, but maybe the strongest competition is in this particular field. The concept of vibe coding, as well as just accelerating existing code generation, has been at the frontier of the AI race in the past six or nine months, with companies like Cursor growing like crazy and obviously impacting the underlying models as well, with Claude 3.7 Sonnet seeing huge growth just from many coders using it under the hood. Now, how will OpenAI use Windsurf? It's not a hundred percent clear. There were conversations previously about OpenAI potentially developing their own code generation tool, and they even released a few components of that in the past few months, but I think it's going to be very interesting to see how they're going to use it. I don't think they're going to force people to use OpenAI models under the hood in Windsurf, because I think that would push away a lot of the 600,000 people that are currently using it. So now that we spoke about OpenAI and their ambitions in writing code with AI, Apple made a very interesting announcement, stating that they're going to collaborate with Anthropic to create a quote-unquote vibe coding platform internally, integrated into Xcode, the platform used by Apple to develop their own code. Apple has been showing up with failure after failure when it comes to AI implementation in the past few years. It's a real embarrassment to them and a real embarrassment to the Apple brand in general.
We discussed in a previous episode the reshuffling of leadership over there and how they're trying to fix that, but those are still very long timelines, talking about potentially two to three years out to deliver things that they said they were going to deliver last year or this year. So they're now at the point where, instead of building it themselves, they're going to collaborate, at least for a while, and their goal is to enhance Apple's flagship programming platform with Claude 3.7 Sonnet, which is considered by many people to be the most capable coding large language model in the world right now. The platform will now feature a chat interface where developers can request code modifications, UI testing, and other things that you do during software development, all powered by Claude Sonnet, automating both development and debugging tasks. As I mentioned, they're going to deploy this tool internally initially, with no clear decision on whether they want to make it public in the future or not. I think that finally makes sense for Apple. It will allow them to start doing stuff that other companies have been doing for a while, they will be able to start doing it right now, and they're going to test it internally first without taking the risk of another failed deployment. So all of this makes perfect sense to me. Two interesting government-related pieces of news. One is that the FDA has held multiple meetings with OpenAI to discuss potentially using AI for drug evaluations. FDA Commissioner Marty Makary revealed that the agency completed its first AI-assisted scientific review for a product, which is a step toward modernizing drug approval. As you know, drug approval currently takes about 10 years for a single drug, and if that can be cut by any meaningful amount, it will benefit humanity by being able to deliver drugs faster. The FDA has also recently established a new position, naming Jeremy Walsh as the FDA's first-ever AI officer, so they're very serious about implementing AI as part of the FDA. An interesting detail is that two associates from the Department of Government Efficiency, also known as DOGE, led by Elon Musk, have joined these discussions as part of an AI-driven reform of the government. Staying on the topic of DOGE, AI, and government jobs, Anthony Jancso, who is the co-founder of AccelerateX, revealed plans to deploy AI agents across federal agencies, aiming to automate tasks equivalent to 70,000 full-time jobs within one year. Now, according to Inc., Jancso shared with a group of 2,000 Palantir alumni that DOGE has established a project to standardize 300-plus federal roles, specific roles based on a very long list of tasks that they identified AI can automate. And the goal is to free up at least 70,000 full-time employees for higher-impact work over the next year. What does higher-impact work mean? I don't know how many of them are actually going to be let go versus moved to higher-impact work. We've talked about this many times in the past: I don't think you have enough higher-impact jobs in any organization, including the government. And so a lot of these people are going to lose their jobs and not just be re-skilled to do more impactful work. I am sure there's going to be a backlash from that, but I think it is as inevitable in the government as it is in any other organization.
AI will automate more and more tasks, allowing organizations to grow faster but also to do things a lot more efficiently, which will require fewer people to do the existing work. And now to a company I don't think we've ever mentioned on this podcast, which is Visa. Visa just launched Visa Intelligent Commerce, enabling AI agents to autonomously browse, select, and pay for products, basically a global e-commerce shopping platform driven by AI. The goal from Visa is actually brilliant: people will use the platform to shop for anything while using Visa for the checkout part of it. So the platform is a mix of AI agents that know how to help you select products from multiple sources across the web, integrated with Visa's checkout and safety capabilities, so all your data remains private and your payment information stays secure, because that's what they know how to do. I think that's a very smart and interesting move by Visa. I think it will allow them to potentially capture a share of a new, growing market. The concept of e-commerce as we know it today is going to change dramatically, because in the future, and that future may come in a year, two years, or five years, but somewhere within that timeframe, fewer and fewer people will visit e-commerce websites and more and more agents will do the shopping online for us, which means companies will need to go through dramatic changes in order to adapt and stay relevant in this new future, starting with the way they present their data, to agents instead of to people, in order to be found and to sell their goods and services. Visa obviously has a whole fraud management and trust infrastructure that is built into this new environment, and they're making it available through an API right now. So the goal is obviously for other companies to develop on top of their infrastructure in order to build secure, agent-driven e-commerce solutions. Another company that shared a big win when it comes to AI deployment is UnitedHealth Group. They have stated that they have deployed 1,000 AI applications across their insurance, health delivery, and pharmacy divisions, doubling from 500 use cases in May 2024. So in one year they've doubled the number of use cases they're using AI for. And according to the report in the Wall Street Journal, these applications streamline aspects of the business such as claims processing, transcribe clinical visits, summarize data, power chatbots, and assist their 20,000 engineers in writing software. So what you can see time and time again is that companies that figure it out, and figure it out quickly, are starting to deploy AI solutions across the board, in every aspect of the business, and not just in one initial component. This is a very big change from what we saw in 2024, when companies were just doing tests and evaluations of potentially using AI in very specific niches or specific use cases. This is company-wide AI deployment that drives innovation and efficiency across the board, which now becomes almost a necessity in order to stay competitive. One of the data points they provided is that AI agents handled 26 million consumer calls in 2024, and they're planning for AI to handle more than half of all calls in 2025. I don't know what the total number of calls is, but it's higher than 26 million, which is already very impressive.
That being said, there's a lawsuit from 2023 alleging that a UnitedHealth AI tool that was used to evaluate claims wrongfully denied Medicare claims, with roughly 90% of its denials alleged to be in error. So while AI is becoming more and more available and is being used in more and more places, it is not always accurate, and there are cases, like this one, where not being accurate is simply not acceptable. So you need to be aware of these risks when you start developing and deploying AI solutions for your business. And now to a whole segment about new or existing startups and their fundraising or interesting achievements. Startup Decagon is in talks to raise a hundred million dollars at a $1.5 billion valuation, led by some of the biggest names in the VC world, like Andreessen Horowitz. Decagon develops a customer service AI agent solution used by companies like Notion and Duolingo, and has secured over $10 million in signed contracts, driving significant success for the company and significant labor cost savings for the people who use the platform. As an example, fitness giant ClassPass is using Decagon's AI to reduce their cost per reservation by 95% across 2.5 million customer conversations. This shows you that the need for agents across the board is very strong. Customer service and customer engagement in general are some of the top use cases, and the leading companies developing and delivering these solutions are growing very fast, driving significant savings for the businesses that use them. Staying on the topic of AI agents and their crazy growth in 2025: Relevance AI, which is one of the platforms that allows you to develop AI agents without writing any code, just raised a $24 million Series B. Relevance is one of those companies seeing explosive growth, with 40,000 AI agents registered on their platform in January of 2025 alone. So many, many people and companies are looking for ways to develop AI agents quickly and effectively without writing code, and Relevance is one of the leading platforms for doing that. They just introduced Workforce, which is a no-code multi-agent system for non-technical users to build and collaborate with AI teams, and they also announced Invent, a text-based agent builder where you can just describe in English what you want and it will spin up the agent for you. They are facing fierce competition from many other platforms that allow you to do similar things, which is developing AI agents without writing code. And AI agents in general, whether with or without code, are predicted to grow dramatically: Boston Consulting Group (BCG) is predicting a 45% compound annual growth rate for AI agents over the next five years. So whatever we're seeing right now is just the tip of the iceberg compared to what's coming when it comes to AI agent development and deployment. This coming Tuesday, the episode that we're going to release will show you how to build AI agents and connect them to company tools using Relevance AI, the same company that just announced this big round. So if you want to learn how to develop your own agents without writing any code, across any aspect of the business, while connecting them to tools that you're currently using, don't miss this episode on Tuesday. Staying on the topic of agents, but in a completely different area, in this case research and open science: Future House, which is a nonprofit backed by Eric Schmidt, released what they call the Future House Platform.
The platform features different AI agent tools, including Crow, Falcon, Owl, and Phoenix, designed to accelerate scientific research. Each of them has a different task: Crow answers literature queries, Falcon conducts deep database searches, Owl identifies gaps in prior research, and Phoenix plans chemistry experiments, all leveraging a corpus of open-access papers. What is this showing you? It's showing you that AI agents that can work collaboratively as a team can be used for many other tasks beyond just the obvious things in business. This is a great example of how AI can accelerate scientific research even if it's not making scientific discoveries on its own. It can allow humans to do the work much more efficiently than we could so far, which by itself will accelerate the research and discovery process. Now to a bunch of news about image and video generation. Recraft, which is a San Francisco-based startup, just raised $30 million in a Series B. Their model, called Recraft V3, code name Red Panda, is turning out to be a very powerful AI image generator, but they have a unique approach: unlike their competitors, they prioritize brand consistency. So the goal is to allow marketers in specific companies and teams to create visuals that perfectly align with company brand guidelines, as well as specific logos and color schemes. I must admit I'm able to achieve that to an extent with some of the other tools, especially with some of the new functionality coming from Midjourney, as well as OpenAI's new image generator. I haven't tested Recraft's platform yet to tell you what the differences are, but there's definitely a need for a tool that will allow you to stay exactly on brand, and not just kind of on brand, as well as being able to place your logo accurately, exactly where you want it. So it will be interesting to see how this evolves. I do think that other companies, such as Midjourney, Flux, Google with Gemini, and OpenAI with ChatGPT, will have these capabilities, which makes the concept of a new company that does only that very questionable, but right now I think they have an interesting solution. Staying on the topic of image generation, Freepik, which is an aggregator of multiple models and image and video generation tools, unveiled F Lite, a 10-billion-parameter text-to-image AI model. The unique thing about it is that it was trained on 80 million copyright-safe, commercially licensed images. So the idea here is obviously that you can generate images whose IP will not be in question, and it's available right now on both GitHub and Hugging Face under the CreativeML Open RAIL-M license. And a very interesting announcement came on May 6th from Lightricks, a company from Israel, which unveiled LTX Video 13B, a 13-billion-parameter AI video generation model that is claiming to be more advanced than anything else out there, including Sora and Veo 2.
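Before getting into what makes it different, here is a minimal local-generation sketch using the LTX-Video pipeline in Hugging Face's diffusers library. It assumes the publicly released Lightricks/LTX-Video checkpoint and a recent diffusers version; whether the new 13B weights live under that exact repo id is an assumption on my part, so check the model card, but the shape of the code stays the same.

```python
# Minimal sketch: generate a short clip locally with diffusers' LTX-Video
# pipeline on a single consumer GPU. The repo id and parameters are taken
# from the public LTX-Video release; the 13B checkpoint may live under a
# different id, so verify against the Hugging Face model card first.
import torch
from diffusers import LTXPipeline
from diffusers.utils import export_to_video

pipe = LTXPipeline.from_pretrained("Lightricks/LTX-Video", torch_dtype=torch.bfloat16)
pipe.to("cuda")  # e.g. an RTX 4090, per the claims discussed below

video = pipe(
    prompt="A drone shot over a rocky coastline at sunset, cinematic lighting",
    negative_prompt="blurry, low quality, distorted, watermark",
    width=704,                 # LTX-Video expects dimensions divisible by 32
    height=480,
    num_frames=121,            # roughly four seconds at 30 fps
    num_inference_steps=40,
).frames[0]

export_to_video(video, "coastline.mp4", fps=30)
```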
One thing they have that is very interesting is that they're using a completely new architecture and concept they call multiscale rendering, which basically means they start with the big picture of what you're trying to do, rendering it in very low resolution first, very quickly, and then add more and more detail to the scene, including lighting and specific image details, as the production evolves. They're claiming, and actually showing, rendering that is 30 times faster than existing models, including the capability to run it on local consumer hardware like an Nvidia RTX 4090 GPU. It's also a fully open source model that you can get access to on GitHub and Hugging Face, and it's free to use for enterprises under $10 million in annual revenue. I love it when I see these kinds of news items, where a company takes a completely different approach than everybody else to get to a similar outcome faster, better, and cheaper, especially when it requires significantly less compute, because I think we're going to have a very serious issue with the resources that all of this compute requires from our planet. And so being able to generate high-resolution, 30-frames-per-second video 30 times faster, with much less compute demand, is really appealing to me, and I'm sure it will be really appealing to anybody who creates video. Hence, I hope to see this platform grow very fast. In addition to being fast, it's also, for all the obvious reasons, much cheaper, and their goal is to be able to lower rendering costs to cents per clip. Another interesting announcement made this week is that Perplexity is about to release Comet, which is an AI-powered web browser. They're building it on top of Chromium, the open source foundation of the Chrome browser, which means existing Chrome extensions can run on top of it, which to me is very attractive. Comet is described as, and now I'm quoting, a browser for agentic search. Comet uses AI agents to automate tasks like retrieving past articles, such as "find that article from last Tuesday," and it integrates with Google services, browsing history, and contextual data. I find this very appealing. I am waiting for the day that there is a real AI-driven browser. I think it will completely change the way we browse the web, and I think it's inevitable that everything will turn into that. It will be interesting to see how that's going to impact Chrome, because Chrome will have to change as well, and that may completely change the way Google monetizes it. Now, that being said, Google may be forced to sell Chrome, so that's a whole different story. But I definitely see more and more of these attempts to create AI-focused browsers and change the way we engage with the web right now. And now to two interesting pieces of news from Nvidia, one of them actually from Jensen Huang and the other from Nvidia itself. During the Hill and Valley Forum, which is a gathering of tech leaders and policymakers, Jensen Huang, the CEO of Nvidia, declared that all American companies will have to also become AI factories in order to be able to compete in the future. What he means by AI factories is integrated hubs of chips, software, and infrastructure that produce AI models and put them to use, just like you build everything else in your business. So every business, per Huang, will have an AI business running in parallel to it.
And the reason he calls them factories is that, in his words, basically electricity goes into the factory and tokens come out on the other side. Meaning that in his eyes, and in his vision of the future, which obviously serves his company very well, every single company will need its own AI capabilities in order to stay competitive in whatever market it wants to compete in. Nvidia has been pushing this concept for a while across multiple fronts. They've been developing software to help companies do these kinds of things, including creating digital twins of their operations in order to optimize processes significantly faster and in a safe way, among other solutions. I don't know if that's really going to be the case. It's very obvious why Nvidia wants to paint the future this way, but aspects of this are definitely true already. If you have your own AI capabilities in house, you'll be able to do things significantly better, faster, and cheaper than your competition. Meaning that, at least for some aspects of the company, this is 100% true, and the companies that figure that out faster will be able to gain significant market share over those who don't. Another interesting piece of news from Nvidia is that they announced they're redesigning their AI chips to comply with US export controls. We shared with you that the government's recent, broader ban on selling Nvidia chips to China, including the more basic H20 chips, is going to cause Nvidia to lose $5.5 billion in unsold inventory and lost sales. So what Nvidia is doing right now is building a new set of chips that will be below the government threshold, so they can get at least some of that market back in China. They're clearly stating that by restricting the H20 systems, US regulators are effectively pushing Nvidia's Chinese customers toward Huawei AI chips, which is not necessarily the right thing to do. And that actually brings us full circle to the very first story. I really hope that the US government will work together with US companies in order to make sure that, on one hand, the US stays ahead in the AI race, but on the other hand, we keep as many companies around the world as possible depending on US technology for their AI needs. That's going to be a very delicate balancing act. And staying on the topic of government, legal issues, and AI regulation, a US district judge in San Francisco has sharply criticized Meta's claim that using copyrighted books to train its Llama models is fair use. This is a copyright infringement lawsuit filed in 2023 by comedian Sarah Silverman and authors Richard Kadrey and Christopher Golden against Meta, saying that Meta used their books to train its models. And the judge said, and I'm quoting: you are dramatically changing, you might even say obliterating, the market for that person's work, and you're saying that you don't even have to pay a license to that person. I just don't understand how that can be fair use. Now, if training AI models on copyrighted material is not fair use, which has been the claim of these companies all along, that has profound implications for how AI can be trained in the future. What's going to be the outcome of this particular case? I don't know. What implications will it have on future cases? I don't know. But what I've said time and time again is that this will end up at the Supreme Court, and given the current setup of the Supreme Court,
I assume the outcome is not going to be as harsh as this particular judge, but now there might be a first case that actually says that training AI models on copyrighted materials is not fair use, which, as I said, has profound implications. That's it for today. Don't forget that we also have a survey. I want to know what you think about this podcast, and if you want to be able to impact what the content of this podcast is going to be in the future, please fill out the survey. There's a link for both the course and the survey in the show notes. Filling out the survey will take you less than a minute, and it will give us a lot of information that will serve you. So please go ahead and do that. As I mentioned earlier, on Tuesday we are releasing an episode on how to build AI agents with Relevance AI, which is a fascinating episode, and I'm sure many of you want to learn that capability. And until then, have an awesome weekend.