
Leveraging AI
Dive into the world of artificial intelligence with 'Leveraging AI,' a podcast tailored for forward-thinking business professionals. Each episode brings insightful discussions on how AI can ethically transform business practices, offering practical solutions to day-to-day business challenges.
Join our host Isar Meitis (4-time CEO) and expert guests as they turn AI's complexities into actionable insights, and explore its ethical implications in the business world. Whether you are an AI novice or a seasoned professional, 'Leveraging AI' equips you with the knowledge and tools to harness AI's power responsibly and effectively. Tune in weekly for inspiring conversations and real-world applications. Subscribe now and unlock the potential of AI in your business.
164 | GPT-5 will change everything... again, Paris AI Summit - a missed opportunity, how companies are using AI right now, and more AI news you need to know for the week ending on February 14, 2025
AI’s Next Leap: Are We Ready?
The Paris AI Action Summit just shook up the AI world—but why did the U.S. and U.K. refuse to sign a major declaration? Meanwhile, Sam Altman and Anthropic’s CEO predict AI will reach “genius level” within two years. Are businesses prepared?
🔹 Big shifts in AI regulation & investment
🔹 OpenAI’s roadmap: What’s next for GPT-4.5 & GPT-5
🔹 How businesses are *actually* using AI today
🔹 AI’s impact on jobs—who wins, who loses?
Isar's AI Rant on LinkedIn - https://www.linkedin.com/feed/update/urn:li:activity:7296375833424723968/
Sam Altman’s Blog Post – "Three Observations" - https://techcrunch.com/2025/02/09/openai-ceo-sam-altman-admits-that-ais-benefits-may-not-be-widely-distributed/
Google’s "50 AI Use Cases in 50 States" - https://workspace.google.com/ai/customers/
🚀 Want to future-proof your business?
Join our AI Business Transformation Course (starts Feb 17). Limited spots—sign up now!
https://multiplai.ai/ai-course/
About Leveraging AI
- The Ultimate AI Course for Business People: https://multiplai.ai/ai-course/
- YouTube Full Episodes: https://www.youtube.com/@Multiplai_AI/
- Connect with Isar Meitis: https://www.linkedin.com/in/isarmeitis/
- Free AI Consultation: https://multiplai.ai/book-a-call/
- Join our Live Sessions, AI Hangouts and newsletter: https://services.multiplai.ai/events
If you’ve enjoyed or benefited from some of the insights of this episode, leave us a five-star review on your favorite podcast platform, and let us know what you learned, found helpful, or liked most about this show!
Hello and welcome to a business news episode of the Leveraging AI podcast, a podcast that shares practical, ethical ways to leverage AI to improve efficiency, grow your business, and advance your career. This is Isar Meitis, your host, and like every week we have a jam-packed week, but it's very different than previous weeks. In the past few weeks, we focused mostly on releases and new transformation and innovation in AI. This week, we're going to talk a lot about the bigger picture. We're going to talk about three and a half big items and then a lot of small rapid-fire items. In the big items, we have the Paris AI Action Summit that took place this past week, we have a glimpse into the future from Sam Altman and OpenAI, and we have two very interesting data sources that tell us what AI is being used for right now. The half deep dive is going to be the AI in the Workplace report from McKinsey, which was just released as well. So a lot to dive into. And then, as I mentioned, lots of rapid-fire items, including some cool new releases from several different companies. So let's get started.

As we mentioned last week, the AI Action Summit took place in Paris on February 10th through 11th, and a lot of leaders from multiple segments of both industry and government showed up, mostly Europeans, but with support from all around the world, including the summit's co-chair, India. So a big international summit to talk about multiple things. The previous two summits focused on AI safety; this one was dramatically different. The key topics they discussed: the first was accelerating AI development, understanding that AI has transformative potential and asking how we can push it forward to better humanity, along with the investment and technological progress that comes with it. The second was managing the AI transition: basically, how do we ensure a smooth transition into an AI-powered future? And the third was aligning AI with human values. All great topics to talk about.

Now, multiple things came out of the summit. The first is the AI Action Summit declaration. The declaration was signed by 61 nations and organizations and focused on six key objectives. First, promoting AI accessibility, basically making the digital divide smaller, which right now seems to be going in the wrong direction. Second, ensuring responsible AI: ensuring that what we deliver is open, inclusive, transparent, ethical, safe, secure, and trustworthy AI, which again will be critical for our future, and it doesn't seem to be working that well right now. Third, fostering AI innovation: how do we enable conditions for AI development that avoid concentration in specific bodies and groups? Fourth, positive impact on labor: how do we encourage AI deployment that benefits labor markets by positively shaping the future of work rather than taking people's jobs, and fosters sustainable economic opportunities? Fifth, sustainability: making sure that we don't burn the planet in the process of trying to build AI. And sixth, international cooperation.

Now, while 61 countries and organizations signed this, two countries declined to sign it: the U.S. and the UK. The U.S. Vice President Vance, who was there, and we're going to talk in a minute about the keynote speech that he gave, basically said that it includes too much regulation that could stifle innovation and hinder America's lead in the AI sector. And the UK mostly argued that there are no clear, practical solutions in it, that it's mostly fluff and hence there's no point, and they're calling for clearer implications and actions to be taken as part of this process.
By the way, I 100 percent agree with the UK statement. The summit also released the International AI Safety Report, the first one ever released. The report was developed with input from 96 AI experts from 30 different countries, including some big organizations around the world. They also announced the launch of two big European-based investments in AI. One is called Current AI, with a 400 million investment backed by the French government. The other is the InvestAI initiative, which the European Commission set up, and which is supposed to reach 200 billion euros in AI investments in Europe.

Now, a lot of the conversations at the summit were about how to balance the right regulation, safety measures, and fair distribution of AI with allowing innovation to flourish. As I mentioned, U.S. Vice President J.D. Vance gave a keynote address that was very, very clear about the direction that he and the current administration see as the right approach to AI. His opening sentence was: I'm not here this morning to talk about AI safety, which was the title of the conference a couple of years ago; I'm here to talk about AI opportunity. So that kind of set the tone.

His speech focused on four different things. The first was maintaining the U.S.'s global leadership in AI. He basically said we'll do everything we can to stay ahead in this process. His second point was creating pro-growth AI policies. So again, nothing that has to do with safety and security; everything that has to do with "let's drive this forward as fast as we can." The third was ensuring AI remains free from ideological bias, and that's actually a good one: the U.S. government will work to ensure AI systems developed in America are free from ideological bias and are not used as tools for censorship or control. And the fourth was maintaining a pro-worker growth path for AI. Again, that's a dream that they're all talking about. I don't see how that's happening, definitely not in the longer run, but I'm glad that this is a topic that everybody's focusing on, including the U.S.

Now, he directly attacked the current level of regulation in the European Union, and he claimed that while there are logical reasons behind it, it only leads to bad results while slowing down innovation. He also said, and now I'm quoting: when a massive incumbent comes to us asking for safety regulations, we ought to ask whether that safety regulation is for the benefit of our people, or whether it's for the benefit of the incumbent. In other words, he's saying that when big companies support additional regulation, it's because they're already beyond that point, and all it will do is stop newer, faster-moving companies who can develop innovative solutions from moving us forward.

So the bottom line is very, very clear. The conversations at the AI Action Summit are important, all of them. I would love to see this become an ongoing group of people from industry, academia, and governments who meet regularly, define subcommittees to address each and every one of these issues, and actually come up with actionable things we can agree on to move forward, versus making fluffy announcements and just talking, because there's a real sense of urgency.
And we're going to talk about the feedback from some big names who addressed the output of the Paris Action Summit, but as I mentioned, from my personal perspective, it's a step in the right direction; they just need to make it ongoing versus once in a blue moon, and make it a lot more actionable, with tactical steps that governments, countries, companies, organizations, and our society can take to benefit from this amazing revolution while reducing the risks that come with it.

One of the key people who addressed the output of this summit was Anthropic CEO Dario Amodei, who basically labeled it a missed opportunity. He's calling for faster, swifter, and clearer action from both industry and government, which aligns with what I believe as well. The exact quote is: the capabilities of AI systems will be best thought of as akin to an entirely new state populated by highly intelligent people appearing on the global stage. Dario outlined specific concerns about AI security across the board, from the wrong types of governments to non-state actors, as well as the development and deployment of military uses of AI by the wrong governments on the planet.

Two additional points that he mentioned: one, that governments should deploy resources to measure AI usage and impact across everything that's happening, and two, that policy should focus on ensuring equitable distribution of AI's economic benefits. That's a huge point that a lot of people are talking about. We're going to talk about Sam Altman and how he addresses this in his blog post this week, but that's a big, big question: what is going to happen with the benefits that AI will generate? Will only a very few benefit from it while everybody else suffers, or will we find ways to change more or less everything we know about the global economy and actually change it for the benefit of everyone? By the way, Dario is not pushing to slow down the development of AI. He's just saying that we need to invest as much in the development of processes and tools that will allow us to control the AI and make the best of it. So he's saying, let's take a lot of the money that's invested today in just running forward and invest it in things like understanding how AI works, controlling it, and distributing its benefits equitably.

Now, he specifically called it a race against time, basically increasing the level of urgency. He's saying that by 2026 to 2027, he expects AI to achieve genius level, which basically means it's going to be smarter than most or all people on the planet. And that's around the corner: he's talking about next year. And 2030 is his longest time estimate for achieving superintelligence, an AI entity that will be dramatically better than all humans at everything. So the bottom line is that current governance, both governments as we know them and our current means of controlling AI resources, will become obsolete, and we need to develop more effective controls in order to benefit from this and reduce risks.

And in parallel to the summit, Sam Altman released several different interesting statements. The first one is called Three Observations. It's a post he released on his personal blog; I will share the link in the show notes. But I will quote a few sections, and then I'll tell you what I think about it. So Sam says the following.
Over time, in fits and starts, the steady march of human innovation has brought previously unimaginable levels of prosperity and improvements to almost every aspect of people's lives. I agree with that. Then he has his three observations.

The first: the intelligence of an AI model roughly equals the log of the resources used to train and run it. Basically, what he's saying is the more resources we put into this, the more intelligence we get out on the other side. If you remember, a couple of months ago we talked about whether the curve of improvement is slowing down or not, and there was a lot of conversation because nobody was releasing new models. That has obviously changed dramatically, mostly with the thinking models. But Sam is saying the following: it appears that you can spend arbitrary amounts of money and get continuous and predictable gains; the scaling laws that predict this are accurate over many orders of magnitude. So he's basically saying what he has said all along: there is no wall. We pour in more resources, and we get more intelligence out.

His second observation is that the cost to use a given level of AI falls about 10x every 12 months, and lower prices lead to much more use. He continues: you can see this in the token cost from GPT-4 in early 2023 to GPT-4o in mid-2024, where the price per token dropped about 150x in that time period. For comparison, Moore's law moved us forward about 2x every 18 months. This is obviously dramatically faster, which shows you how quickly this innovation is moving, because the cost of the same level of results is improving dramatically over time.

His third observation is that the socioeconomic value of linearly increasing intelligence is super-exponential in nature. I know that sounds really confusing, so let's read it again: the socioeconomic value of linearly increasing intelligence is super-exponential in nature. What he basically means is that if you can create a worker that, let's say, writes code or does some specific admin work, it doesn't replace just one person; you can now create thousands or millions of them. So the more you increase the intelligence, the more compounded the output: A, because you can replicate it as many times as you want, and B, because that level of intelligence, which can now help you write code faster, can generate the next level of intelligence even faster than you could before. So those are the two reasons the value is super-exponential in nature, as Sam relates to it.

And to make it very practical, I will use Sam's example, and now I'm quoting: let's imagine the case of a software engineering agent, which is an agent that we expect to be particularly important. Imagine that this agent will eventually be capable of doing most things a software engineer at a top company with a few years of experience could do, for tasks up to a couple of days long. It will not have the biggest new ideas, it will require lots of human supervision and direction, and it will be great at some things but surprisingly bad at others. Still, imagine it as a real but relatively junior virtual coworker. Now imagine a thousand of them, or one million of them. Now imagine such agents in every field of knowledge work.
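Before moving on, a quick aside on the numbers in Sam's second observation. Here is a back-of-the-envelope comparison of the two decline rates he cites; the arithmetic is mine, not from the blog post, and it assumes smooth exponential trends:

```latex
% Comparing cost-decline rates over the same 18-month window.
\begin{align*}
\text{Moore's law:}\quad & 2\times \text{ per 18 months}\\
\text{AI token cost:}\quad & 10\times \text{ per 12 months}
  \;\Rightarrow\; 10^{18/12} = 10^{1.5} \approx 31.6\times \text{ per 18 months}\\
\text{GPT-4} \to \text{GPT-4o:}\quad & \approx 150\times \text{ observed over roughly that window}
\end{align*}
```

In other words, even the conservative 10x-per-year trend implies costs falling roughly fifteen times faster than Moore's law moved chips, and the actual GPT-4 to GPT-4o data point outpaced even that.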
Going back to that software engineering agent example: basically, what he's telling us is that every junior and mid-level position will be able to be replaced by these agents sometime in the relatively near future. That obviously changes everything we know about the workforce, and going back to what I said about the Paris summit, I think that's the biggest and scariest near-term aspect of AI's risk to our society. Those of you who've been listening to this podcast know I've been saying this for a very, very long time: I do not see how we save jobs from this wave of AI. And I know people say it's going to generate new jobs. First of all, I find it very hard to understand what kind of jobs, because this is very, very different from previous revolutions. This time AI is taking over the one thing that made us adaptable and able to come up with new jobs, which is our ability to think, because it can out-think us in some cases already right now.

And then the last thing he said is that ensuring that the benefits of AGI are broadly distributed is critical. The historical impact of technological progress suggests that most of the metrics we care about (health outcomes, economic prosperity, et cetera) get better on average and over the long term, but the increase in equality does not seem technologically determined, and getting this right may require new ideas. He's trying to paint this in a positive way, but what he's basically saying is that technology has driven broader and bigger inequality in our society, and there's a very serious risk that AI will be an accelerator of that particular phenomenon. As I mentioned before about the Paris summit and the feedback from these people, I think that's a key thing we have to figure out, and we have to figure it out very, very fast.

In parallel to dropping this blog post, Sam also dropped an AI bombshell on X. Again, I will read segments of his tweet and then tell you what I think about it. So: OpenAI roadmap update for GPT-4.5 and GPT-5. We want to do a better job of sharing our intended roadmap and a much better job of simplifying our product offerings. We want AI to just work for you. We realize how complicated our model and product offerings have gotten. We hate the model picker as much as you do and want to return to magic unified intelligence. We will next ship GPT-4.5, the model we called Orion internally, as our last non-chain-of-thought model. After that, a top goal for us is to unify O-series models and GPT-series models by creating systems that can use all of our tools, know when to think for a long time or not, and generally be useful for a very wide range of tasks. In both ChatGPT and our API, we will release GPT-5 as a system that integrates a lot of our technology, including O3. We will no longer ship O3 as a standalone model. The free tier of ChatGPT will get unlimited chat access to GPT-5 at the standard intelligence setting, subject to abuse thresholds. Plus subscribers will be able to run GPT-5 at a higher level of intelligence.

Okay, let's break this down. Surprisingly, a day before Sam wrote this, I recorded a video, which I actually released this Friday, of me bitching about AI complexity. I started with OpenAI's ChatGPT drop-down menu and all the different functions, and what's available in one aspect of the product and not available in another. It's really entertaining and you should go and watch it; I will share a link to it in the show notes. If you have five minutes to see how frustrated I am (and probably you are too, so you can relate),
go and check it out: you can literally open your phone right now, click on the link, and watch that video. I'm sure you will enjoy it. But going back to Sam: what he's saying is that they understand that frustration. They understand that we have no clue which product we need to pick for the specific tasks we need to do. People like me will actually experiment and try different things for different use cases; most people just get confused and will either do nothing or pick, I don't know, a random one or the top one from the menu. That's not what this is supposed to be. This is supposed to be intelligent, supposed to understand our needs, and supposed to work according to our needs. And that's exactly the direction they're going, which is very, very exciting. From all the comments we've seen, GPT-4.5 will be released within the next few weeks. The other interesting thing is that they're planning to push more and more capabilities into the free tier with, quote-unquote, standard intelligence settings. So it's probably going to be smarter than everything we have today, and still it's going to be free. Going back to his other statement about the dropping costs, this is obviously driven somewhat by competition, mostly from open-source models like DeepSeek R1.

And now to our third big topic today, which is: what is AI being used for right now? We have two interesting sources that provide a glimpse into that. The first one is Anthropic. Anthropic just released their Economic Index study. What they basically did is anonymize Claude AI conversations, basically everything everybody's doing with Claude, then look at the data set and try to map it to specific tasks and jobs. And what they found is very interesting. The first finding is that AI usage shows dramatic patterns: 36 percent of jobs use AI for at least a quarter of their tasks, while only 4 percent of jobs use it for three quarters or more of their tasks. And even within these tasks, they found that 57 percent of users are augmenting their work with AI, versus 43 percent who automate specific parts of their work. So most employees still use it to help them do what they do rather than to replace what they're doing.

Now, this sounds great. I'm like, okay, most people are using it for a quarter of their tasks, probably the tedious stuff they don't like to do anyway, and they're augmenting their work rather than replacing it. Well, my problem with the positive spin on that statement is that we are very, very, very early in the game. I teach AI courses. I do workshops for companies. I speak on large stages. I talk to a lot of CEOs and business leaders about AI, and still most companies and most leadership teams are clueless about where AI is going and how to leverage it properly, and lack of training is the number one thing I hear again and again. We're going to get more into that in a minute. But combine that with the fact that AI itself is getting better, and AGI and ASI are coming sometime in the next few years, and this whole concept of "it's doing only a quarter of our tasks and we're just augmenting what we're doing" is going to change dramatically.

Now, to dive a little more specifically into some of the findings: computer and mathematical roles dominate at 37 percent of Anthropic's usage, with arts and media at 10 percent, education at 9.3 percent, and office and admin at 7.9 percent.
That's not surprising to me, by the way. I think Claude 3.5 Sonnet is just really good at writing code, and it's really good at creative writing, and those are the two things that show up in the Anthropic study. I'm sure if similar studies were done on other tools, including video generation, image generation, other modalities, and so on, we would have a much broader and more accurate view of what AI is actually being used for. I'm not saying anything is wrong with the study; I love the fact that Anthropic is sharing it. I'm just saying it's very, very biased. As an example, Claude 3.5 Sonnet is the number one model picked in code-generation tools such as Cursor, so a lot of people using Cursor are using Claude. And if they're using Claude inside a code-generation tool, then obviously that's the only thing they're using it for, which skews the results of this particular study.

Another interesting finding in this study is that the highest AI adoption happened in mid-to-high-wage occupations: very low usage in low-paying jobs and relatively low usage in very high-paying jobs. By the way, I'm not surprised by that, because very high-paying jobs are usually more strategic in nature, the CEOs and C-suite people of the world, and very low-paying jobs, I think, are just not exposed to what's possible with these tools; many of them are also manual labor, so they're still not as impacted. That makes perfect sense to me.

Now, in parallel, Google released "50 AI Use Cases in 50 States" as a follow-up to their ad in the Super Bowl. They basically give us 50 examples of how companies are using AI right now across multiple sectors. I love this particular piece of content from Google. Again, I will share the link in the show notes so you can go and check it out yourself. They found multiple examples, across multiple industries and across the entire U.S., of how small to mid-sized companies are actually using AI in interesting ways, just to give people ideas of what AI can be used for.

The common threads are: increased efficiency and productivity, with AI tools that automate tasks, freeing employees to focus on more strategic and creative work, which is awesome. Enhanced communication and collaboration: the fact that AI can analyze long emails, summarize meetings, or translate things is another common usage across all these different use cases. Data-driven decision making: companies that did not have a big BI or data science capability can now analyze data and make better decisions. It's something I do with my clients all the time, and it's nothing short of magic. And improved content creation, which is usually the first thing people go to, despite the fact that it probably generates the least amount of value compared to all the other stuff you can do with AI.

The examples they gave come from every aspect of business: agriculture, where farmers leverage AI to analyze data, predict crop yields, and optimize resource allocation; education, where institutions are providing personalized learning experiences, automating administrative tasks, and helping teachers prepare and students work, which is, by the way, from my perspective, maybe one of the biggest promises of AI right now, and one we should start using a lot more intensively; and food and beverage, where restaurants are using AI to manage inventory, predict demand, optimize pricing, and stuff like that.
And then there's healthcare and manufacturing and nonprofit and retail and technology and travel and hospitality. So you can go and read these examples. They will actually give you great ideas on what you can do in your business, even if you're a small business and don't have a lot of resources and don't know a lot about IT. I think this is just a great quick read for you to go through and brainstorm ideas on what you can do in your business.

And as I mentioned, we have a half item today, which is the AI in the Workplace report from McKinsey. It explores the transformative potential of AI in the workplace and what leaders should, and have to, do in order to capture its full potential. The report was built on surveys of 3,600 employees and 238 C-suite executives from the US, Australia, India, New Zealand, Singapore, and the United Kingdom, so a lot of Western-style economies. The findings are very interesting. Despite significant investment in AI in the last couple of years, only 1 percent of companies consider themselves mature in their adoption of AI. That's not surprising to me at all. Again, I talk to companies all the time; most are still in very early stages, and only 1 percent say, yes, we've figured it out. That being said, 92 percent of companies are planning to increase their AI investment in the next three years.

Another very interesting finding is that employee readiness is ahead of leadership strategy. I see that all the time: more and more employees take standalone initiatives to embrace AI, and they're trying stuff on their own without any leadership or direction from above. In addition, employees are open and excited to see their company implement AI, despite the lack of action from their leadership.

Now, the report mentions the superagency concept, coined by Reid Hoffman in his book of the same name, Superagency. The idea is that individuals empowered by AI can supercharge their creativity, productivity, positive impact, and so on: basically looking at the positive impact AI can have on people, companies, and our society. But what they're saying is that part of the approach we're seeing right now is the wrong approach. The report basically says that making small incremental steps is not the right way to really capture AI. It requires transformative, strategic thinking, addressing the core way a business runs, actually breaking existing frameworks and creating new ones, in order to really harness the power of AI: not to replace specific tasks, but to replace, change, and augment complete processes within organizations.

Now, the good news is that McKinsey estimates a $4.4 trillion productivity growth potential from corporations starting to use AI across use cases in different industries. And two very interesting findings for the near future. The first is that 87 percent of executives expect revenue growth from gen AI within the next three years. So not just efficiencies, not just "we're going to spend less money doing the things we're doing": we're going to make more money. Higher revenues, and that is obviously great news for all of us. But the second is that the skill gap, both the executives' own gap in being able to build the right strategy and the employee skill gap, is the number one barrier to AI adoption, which aligns with a similar report I shared with you last week. So what can companies do? Invest more in AI education and training.
And if you're an individual and you want to get hired, or have a higher chance of keeping a job in the next few years, invest in this kind of education yourself. Now, in support of this process, I teach AI courses. It's called the AI Business Transformation Course, and I have been teaching it since April of 2023, at least once a month. So hundreds if not thousands of business leaders and business people have taken the course and are transforming their careers, their companies, and their teams with the knowledge they gained from it. I mostly teach private companies and organizations in dedicated sessions, but about once a quarter I open a public course. The next public course starts this coming Monday, February 17th. So if you're listening to this podcast over the weekend, you still have a chance to get in. We have filled most of the open seats for this upcoming course, but there are a few seats left. So if you don't want to wait another quarter for my next course, come and join us this Monday. You can sign up through the link in the show notes.

And now let's jump into rapid-fire items. We'll start with a lot of small news from OpenAI. OpenAI is changing the way O3-mini shows its reasoning process. Initially, O3-mini did not show any of its reasoning process; it just showed you the output. But people really liked the way DeepSeek R1 actually shows you how it's thinking. If you haven't tried it, you should; it's really cool. You can see how it debates with itself, how it considers different options, and how it arrives at the output it produces. And so OpenAI is trying to mimic that. On the other hand, they're afraid this provides a competitive benefit, letting competitors see how their models think. So what they have done is add a middle step that distills the thinking process: not showing everything, but showing a summarized version of the process. It's still very cool. So again, if you haven't used O3-mini, go check it out and you can see how the model thinks and what it does. What they're trying to balance is obviously transparency with protecting their own IP.

Now, the big news from a democratization-of-AI perspective is that Deep Research, which so far was only available in their top-tier, $200-a-month Pro level, is going to be available to everybody else, including Free and Plus users. Free users will be able to use it twice a month, and paid users on the Plus plan will be able to use it 10 times per month. The idea is obviously to provide access to more people while limiting the amount of usage, so that if people find it valuable, they'll pay for the $200-a-month option. Or not, because you can do something very similar with Google's Deep Research for free. And Hugging Face just released something very similar built on open-source models; it doesn't achieve the same level of success and resolution and capability as O3, but as I mentioned, it's free. Now, Sam specifically said the following about this, and I'm quoting: it probably is worth a thousand dollars a month to some users, but I'm excited to see what everyone does with it. So go ahead, use it, make Sam happy, and test it out. It's actually a great tool, and it's going to be rolling out to everybody in the very near future.
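By the way, going back to the O3-mini reasoning display for a second: here is a minimal sketch of what that "middle step" pattern looks like, as I understand it. This is my own illustration of the pattern OpenAI described, not their actual implementation, and the `reasoner` and `summarizer` objects are hypothetical stand-ins:

```python
# Minimal sketch of a "distilled reasoning" display layer, assuming the
# pattern OpenAI described: the raw chain of thought stays hidden, and a
# separate summarization pass produces the trace shown to the user.
# `reasoner` and `summarizer` are hypothetical stand-ins, not OpenAI APIs.

def answer_with_summarized_reasoning(question: str, reasoner, summarizer):
    # 1. The reasoning model produces a full, hidden chain of thought
    #    along with its final answer.
    raw_chain, answer = reasoner.solve(question)

    # 2. A second pass condenses the chain into a user-facing summary,
    #    balancing transparency against exposing the full reasoning.
    visible_trace = summarizer.summarize(
        raw_chain,
        instructions="Summarize the key reasoning steps concisely.",
    )

    # Only the summary and the final answer ever leave the system.
    return visible_trace, answer
```

The point of the design is that users get a sense of the model's deliberation while the raw chain of thought, the part OpenAI treats as competitive IP, never leaves the server.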
Now, in addition, OpenAI this week made their models less restrictive about what they will allow you to ask and work on, which lets people ask broader questions without getting blocked by the models' protocols, including topics like mental health, erotica, and fictional brutality. All these things were 100 percent blocked before, and now they're less blocked. I'm not sure I'm happy about this. I'm on the fence about what level of scrutiny should be placed here, but I do believe we need to find the right balance, and I'm not sure each company on its own should be the one deciding it. Maybe it should be a government action that defines what is and isn't allowed, just like in other media channels.

Now, OpenAI shared with government officials their research and investigation into how DeepSeek allegedly took their data and their models in order to train the DeepSeek model. That being said, a lot of people feel this is a double standard, because what DeepSeek did is use data that is available on the internet to train their models, which is exactly what OpenAI is defending in the lawsuits filed against them by multiple groups, such as The New York Times. Many groups are saying that OpenAI used copyrighted data to train their models, while OpenAI claims that if it's open on the internet, then it's fair use for anybody to use it to train AI models, and that's not considered stealing any IP. Well, they're now arguing the opposite side of that argument when it comes to what DeepSeek did with their models. So you can decide whether that's a double standard or not, but that's the current situation.

Now, the most interesting piece of news regarding OpenAI this week actually comes, not surprisingly, from Elon Musk. Elon Musk, with a group of investors, is offering just over $97 billion in a cash transaction to buy the nonprofit arm of OpenAI. Those of you who remember: OpenAI has a nonprofit arm that controls a for-profit arm, and they're now in the process of trying to convert OpenAI into a for-profit organization, with the nonprofit arm owning a part of the shares to benefit from the revenue. That's their loophole for doing the conversion. As you probably know, there's an open lawsuit by Musk, plus some other people, trying to stop OpenAI from doing that conversion.

So why is Musk doing this? First of all, his offer was immediately rejected by Sam Altman on X, seconds after it was posted, and Sam says the board will reject the acquisition offer as well. That being said, what the offer does is increase the valuation of the nonprofit arm. In the conversion process from nonprofit to for-profit, the nonprofit arm is supposed to receive 25 percent of the shares of the broader OpenAI holdings. This new, quote-unquote, valuation (because there's an actual, real offer on the table) puts the amount of money they need to give the nonprofit arm at a much higher point, which means the overall cost of the conversion will be higher, which creates a real problem for OpenAI. In addition, if the court forces OpenAI, as part of this conversion process, to at least consider this fair or more-than-fair bid, which is actually about a 50 percent increase on the current valuation of the nonprofit arm, then
it will provide Musk, through the due diligence process, access to a lot of OpenAI knowledge, data, and information, which I'm sure he craves and I'm sure they would hate to give him. So overall, I don't think Musk is buying it, and I don't think this is going to move forward, but it's definitely a very interesting play by Elon to try to stop OpenAI from converting the nonprofit organization into a for-profit organization. Quick recap: Elon was one of the original founders and the first person to put significant money into OpenAI, with the goal of making it open source and available to everybody. He then tried to take it over and lost that contest; Sam took over, Elon left, and there's been beef between the two ever since. And Elon is trying to do everything he can right now to stop OpenAI in its path forward. In addition, he obviously started xAI, which is a direct competitor, so he will also personally benefit from that.

Shifting from OpenAI to Anthropic: Anthropic is predicting huge growth in sales. They're targeting $34.5 billion in revenue by 2027 in their best-case scenario, or $12 billion in their base-case scenario. They're projecting $3.7 billion in revenue in 2025, which is huge growth in a very short amount of time. They're also predicting that their burn rate, which was $5.6 billion in 2024, will go down to only $3 billion in 2025, because their revenues are growing dramatically. They're expecting to be cash-flow positive by 2027. One of the interesting facts in that report is that their biggest growth has actually been through the API channel, which is not too surprising because of what I said before: their models code very well, and many of the coding platforms use Claude 3.5 Sonnet in the backend through the API.

Now, connecting both OpenAI and Anthropic: John Schulman, who was one of the co-founders of OpenAI and left for Anthropic about five months ago, is now leaving Anthropic and joining Mira Murati, who was the CTO of OpenAI and left in September of 2024 to start her own stealth AI company. We still don't know exactly what they're going to do. It's pretty obvious it's going to be some level of tooling or agents built on top of existing models; they're probably not developing their own model at this point. But they just hired John Schulman, who, as I mentioned, only recently joined Anthropic from OpenAI, so that's another senior person jumping ship.

And speaking of former OpenAI employees, Ilya Sutskever's company, SSI (Safe Superintelligence), is in the process of raising another round. They previously raised $1 billion, and they're now raising an amount that would value them at $20 billion, up from $5 billion in just September of 2024. That's a 4x jump in about four months, to an insane valuation, for a company that has not clearly defined what it's going to develop and has no plans of releasing any products or generating any revenue. And yet they have Ilya over there running the show, who is maybe the top AI scientist on the planet right now, and definitely very high up on the list. So people are willing to write crazy checks right now for that to move forward. They already have some really big investors, like Sequoia and Andreessen Horowitz. So they're planning to raise a huge amount of money, and I assume they're going to get it in the very near future.
This is obviously in parallel to similar jumps in growth, insane valuations, and insane fundraising from both OpenAI and Anthropic. So this is moving forward full steam ahead, on all cylinders.

An interesting technological piece of news is that Meta has released a new framework that enables LLMs to process multimedia inputs, basically to understand what's happening in images, without retraining. If you think about the way it works right now, the way these models know what's in an image is by seeing a gazillion different images and being trained on them. Meta developed a process they call MILS, which stands for Multimodal Iterative LLM Solver, that allows a model to understand what's happening in images without being trained on those images: basically zero-shot processing for images, videos, and audio. Very cool, and it will save a lot of training data while achieving similar results. I'll share a rough sketch of how that loop works at the end of this segment.

In other Meta news, Meta is buying chips from the startup Furiosa AI. As you probably know, Meta is developing its own chips, and this will allow them to add to that effort. The specific chips that Furiosa AI makes accelerate AI model performance, so this will be a great complementary solution to the chips Meta is developing in-house, and will allow them to run their models faster and probably cheaper as well. If you remember, we talked about the fact that Meta is planning to invest $65 billion in AI infrastructure in 2025, a crazy amount of money, and this particular investment of $61 million is chump change in the bigger scheme of things.

Now, on some bad news from Meta, or bad news as far as I'm concerned: Meta is significantly reducing its privacy teams and its oversight over the release of new products. They're claiming they're not reducing the oversight, just moving from human oversight to AI oversight of what's going to be released across the multiple products they ship every single year. I'm not a hundred percent sure I'm happy about this. There are obvious benefits, because these AI systems will be able to review more things with, hopefully, less bias than humans, but I do think that having human oversight over products that reach 3 billion people is not necessarily a bad idea. So again, I'm on the fence on this particular news from Meta.

And now some news from Google. One of my favorite tools, NotebookLM, has a Plus version that has been around for a while, and NotebookLM Plus will now be available as part of the Google One AI Premium subscription. That's the $20-a-month subscription from Google that also gives you two terabytes of storage, and it's $9.99 a month for students. What the Plus capability gives you in NotebookLM is five times higher limits for audio overviews, notebook queries, and sources, plus enhanced sharing and access to the Gemini advanced models. I must admit, I use the free NotebookLM version and I'm extremely happy with it; it's an incredible tool that I use almost every single day, so I don't see a huge benefit for myself. But if you do have the One AI Premium subscription, you now have more of it.

Now, the big news from Google this week is that they are releasing cross-conversation, long-term memory. It builds on their existing memory features, but you can now get a summary of a previous conversation and use it in a new conversation, almost seamlessly. That is a really big benefit if you want to continue working on something outside the original conversation.
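And as promised, going back to Meta's MILS framework for a moment: here is a rough sketch of the generate-score-iterate loop as I understand it from Meta's description. A text-only LLM proposes candidate captions, an off-the-shelf multimodal scorer (something like CLIP) rates them against the actual image, and the scores are fed back to the LLM for another round, with no retraining of either model. All function and parameter names here are illustrative assumptions, not Meta's actual code:

```python
# Rough sketch of a MILS-style generate-score-iterate loop, based on
# Meta's description of the method. `llm` and `scorer` are hypothetical
# stand-ins: any text LLM and any pretrained image-text scoring model.

def mils_caption(image, llm, scorer, rounds=5, num_candidates=16):
    """Zero-shot image captioning: no training on captioned images."""
    best = []  # (score, caption) pairs carried between rounds
    for _ in range(rounds):
        # 1. Generator step: the LLM never sees the image, only the
        #    scored candidates from previous rounds as text feedback.
        feedback = "\n".join(f"{score:.3f}: {text}" for score, text in best)
        prompt = (
            "Propose short, diverse captions for an image.\n"
            f"Highest-scoring attempts so far:\n{feedback}"
        )
        candidates = llm.generate(prompt, n=num_candidates)

        # 2. Scorer step: a pretrained multimodal model rates how well
        #    each candidate text matches the actual image.
        scored = [(scorer.similarity(image, text), text) for text in candidates]

        # Keep only the best candidates as feedback for the next round.
        best = sorted(scored + best, key=lambda pair: pair[0], reverse=True)
        best = best[:num_candidates]

    return best[0][1]  # highest-scoring caption after the final round
```

The same loop generalizes to video and audio by swapping in a different scorer, which is what makes the approach zero-shot across modalities.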
Back to Gemini's new memory: the other interesting thing from my perspective, which gets us very close to a practically limitless context window, is that Google's models right now have the longest context window of any tool, by a very, very big spread. Their top model right now has a 2 million token context window, which is about 1.5 million words, by far the largest we know of. And yet you can now roll over a summary of a conversation and start a new one while remembering everything that happened before. This is obviously huge for really large projects, ongoing research, and stuff like that. So kudos to Google for enabling that capability. Every user will have control over what it remembers, and you can go in and delete any of these memories if you don't want Google to keep them. They specifically stated that they are not training on any of that data.

Another interesting release this week comes from Adobe. Adobe just released its video generation capabilities to public beta. It can generate 1080p, 24-frames-per-second videos of up to five seconds in about a minute and a half. You give it a text prompt, or an image plus a text prompt, and it will generate a high-quality, high-fidelity video. The biggest benefit compared to a lot of other models that can do this right now is that the training data is video content that Adobe owns, so it was trained in a way that doesn't break any copyright laws, and that's basically their promise to users. I think most people who create video don't really care how the model was trained; they just want a model that works great and generates amazing results. I'm not saying that's good or bad, I'm just saying that's the current situation. That being said, Adobe will definitely be a huge player in this space. The cool thing is that they're providing some access to this on the free Firefly tier, and then you get a lot more on Firefly Standard and Firefly Pro, for either $10 a month or $30 a month, depending on how much video you want to create.

And two interesting developments when it comes to voice usage of AI. I love using Advanced Voice in ChatGPT, and in Gemini as well; it's such a huge difference from how we engaged with computers before. Microsoft just announced that they're providing the Copilot voice assistant in 40 new languages, accessible for free on the Copilot platform. And since it's built on OpenAI's capabilities, you can steer the conversation in either direction, you can interrupt it at any point you want, and it understands emotional cues and can respond accordingly. And as I mentioned, unlike Google Gemini Live and Advanced Voice Mode on ChatGPT, this is available for free on the Copilot platform. On the flip side, Google just announced a major update to the Google voice assistant, giving it additional capabilities such as better translation and better listening and understanding. Google is pushing forward with voice capabilities that are obviously going to be integrated into everything we know in the near future: we'll probably be able to control our ovens and our microwaves with voice, but obviously everything else as well, including computers and the systems around us.

If you are enjoying this podcast, I would really appreciate it
if you share it with other people who can benefit from it. Open your phone right now, unless you're driving, click on the share button in the podcast app you're listening on, think of four or five people who could benefit from this kind of podcast, and share it with them. And while you're at it, I would really appreciate it if you could leave us a review on your favorite podcasting platform. In addition, don't forget the AI Business Transformation Course starts this coming Monday, basically 48 hours after this podcast gets released. So if you still want to jump in, or if you know somebody who should take the course, find the link in the show notes and come join us on February 17th. We'll be back on Tuesday with another detailed how-to episode. There are a few incredible episodes coming down the pipe, so stay tuned in the next few weeks and don't miss any of the upcoming episodes. Trust me, you'll want to hear what's coming. But until then, have an amazing rest of your weekend, and I will see you again on Tuesday.