Leveraging AI

261 | Davos goes AGI: 1–2 years vs 5, energy = GDP, ChatGPT adds ads, Elon v. OpenAI explodes, Anthropic’s 12× surge—plus more AI news for the week ending on January 23, 2026

Isar Meitis Season 1 Episode 261

📢 Want to thrive in 2026?
Join Isar’s flagship AI Business Transformation course and save $100 with code LEVERAGINGAI100.
🔗 Sign up here and secure your competitive advantage for 2026.  https://multiplai.ai/ai-course/

Learn more about the Advance Course (Master the Art of End-to-End AI Automation): https://multiplai.ai/advance-course/


Are we racing toward Artificial General Intelligence—or just racing to beat each other to it?

From the snowy panels of Davos to billion-dollar boardroom drama, this episode of Leveraging AI unpacks the breakneck pace of AI development and its very real geopolitical, business, and ethical consequences.

We break down the predictions, posturing, and power plays driving the future of AI — with practical takeaways for business leaders who want to ride the wave (not get crushed by it).

If you’re leading a business in 2026 and still think AI is something “for the tech team,” this is your wake-up call.
This episode is your fast pass to understanding what’s really happening—and what to do about it.

💡 In this session, you'll discover:

  • A preview of AI’s dominating presence at Davos 2026
  • AGI timelines: Why DeepMind says 5 years, but Anthropic says... 12 months?
  • Geopolitics and AI: Why China is keeping OpenAI and Anthropic up at night
  • Satya Nadella’s warning: AI growth without broad benefit = a tech bubble
  • Why electricity cost may define a nation's AI competitiveness
  • OpenAI’s projected $20B revenue and its push into ads
  • Elon Musk vs. OpenAI: The lawsuit, the diary entries, and the circus
  • AI agents at work: When your coworker doesn’t have a body
  • Anthropic hits $4.5B in revenue, but losses still loom large
  • Claude’s Constitution: Anthropic’s AI ethics now open-sourced
  • Gemini surges: 85B API calls and 8M enterprise users
  • Apple’s secret AI pin: Why not just use the AirPods?
  • Meta’s Avocado and Mango models: Can they catch up?
  • Tesla’s AI chips: A plan to power cars and data centers
  • Deepfakes and the death of truth: Runway’s chilling study
  • ElevenLabs’ AI album: How Liza Minnelli and Art Garfunkel joined the future of music

About Leveraging AI

If you’ve enjoyed or benefited from some of the insights of this episode, leave us a five-star review on your favorite podcast platform, and let us know what you learned, found helpful, or liked most about this show!

Speaker:

Hello and welcome to a Weekend News episode of the Leveraging AI Podcast, the podcast that shares practical, ethical ways to leverage AI to improve efficiency, grow your business, and advance your career. This is Isar Meitis, your host, and we have many things to cover this week. We will start by talking about the World Economic Forum event in Davos, which had a lot of AI components to it. We'll continue with OpenAI adding ads to ChatGPT, what that means, and exactly how it's gonna roll out. Our third large topic is going to be about lots of human drama, mostly focusing on Elon Musk versus OpenAI, or maybe Elon Musk versus the world, but we're gonna dive into that. There's some other human drama in other aspects as well. And then we have a lot of small rapid-fire items, where we're going to cover many interesting releases from OpenAI and Anthropic, announcements from Google, and many others. And we'll end on a really interesting positive note, like I always try to, but for that you have to wait until the end and see what we're talking about. So with that in mind, we have a lot to cover. So let's get started.

The world leaders got together this week in Davos, Switzerland, to talk about the economy and the future of our planet as it relates to it. A lot of the conversations, and a lot of the main figures that were interviewed, were the leaders from the AI world, including people like Satya Nadella and Demis Hassabis and Dario Amodei and multiple people from OpenAI. I'm not going to cover everything they said, because I could have spent the entire episode, and probably several different episodes, just on the AI topics that were covered in Davos, but I want to talk about specific key things that caught my attention.

The first one was the timelines to AGI. In an interview by Axios with Demis Hassabis, the CEO of DeepMind, together with Dario Amodei, the CEO of Anthropic, they were discussing their timelines and assessments for AGI. Demis believes it will be about five years until we get to an AI that he would call AGI, and he believes that once we get there, we'll see an incredible new world. His specific quote: I think that if we do that, which means achieve AGI, we'll accelerate science and human health. And I think we'll be in a world of radical abundance. So this is from Demis. We know he's an optimist about the capabilities of AI, and he has always been very loyal to his own truth and timelines, trying to avoid hype and things like that. So his timeline is five years. Dario Amodei, on the other hand, thinks we're gonna get there in about two years. And to be fair, if you listen to what he said, he may think it can happen faster. So I'm gonna give you the quote from Dario. Dario said: we might be six to 12 months away from when the model is doing most, maybe all, of what software engineers do end to end. I have engineers within Anthropic who say, I don't write any code anymore. I just let the model write the code. I edit it. To understand what he means by end to end, I need to take you back to the end of Q1 of 2025, when Dario said that AI will soon write 50% of the code, and that within a year it will write a hundred percent of the code. And right now, from the past few weeks, if you've been following this podcast, you know that Anthropic already writes most of their code, maybe close to a hundred percent, with Claude Code.
It is similar at their competitors as well, who are using a lot of Claude Code to write the code of their new products. So we're already at a hundred percent. So what is this new prediction about being another six to 12 months away from doing everything? The key difference is what he means by end to end. When you are creating software, other things need to happen for it to work. It's not just writing code. It is about writing requirements. It's about setting tasks. It's about assigning the tasks. It is about testing the code. It is about the entire tech stack. It is about your databases. It is about the APIs and how they talk to one another. It is about the architecture of how the software components are going to be designed and talk to one another. All of that, he believes, we're six to 12 months away from the AI being able to do. All of it, not just writing the code, which is what AI is doing right now. And basically what he's saying is that if we can do that in six to 12 months, the acceleration is inevitable, because the software will write new software faster and faster. The models are gonna get better and better and better, and hence what is called a fast takeoff: we'll get to a point of really fast recursive AI that just gets better every cycle, and the cycles become shorter because it is writing its own code and doesn't need to wait for humans to do the work.

And Dario actually said that he really hopes that Demis's timeline of five years is the right timeline, because he believes that nobody is ready for a world with AGI, with an AI that can improve itself and can do everything humans can do as well as humans, in many cases better than humans, and shortly after, everything much better than humans, because it's just gonna keep on getting better and better. He wishes we could slow down and think about this all together. So when they were both asked whether they would slow down if that was possible, they both said absolutely yes. And their biggest fear is the competition with China. The exact quote from Dario: I wish we had five to 10 years, but assuming I'm right, it can be done in one or two years. It's very hard to slow down when geopolitical adversaries are building the same technology. So basically his fear is not Google or OpenAI. His fear is China. And that's why he was very loud in Davos about stopping the release of advanced chips to China. He said it is like selling nuclear weapons to North Korea because it produces some profit for us, meaning US companies. So he's talking about the reversal by the current administration, now allowing Nvidia to sell advanced chips to China, and he's basically saying that is like selling nuclear weapons to North Korea. I do believe that the US should not provide China with the most advanced chips, and should even limit the previous generation. Yes, it will force them to develop their own chips, but they're doing that anyway, and this might slow them down just a little bit and make this madness a little more controllable. But this is not the case right now. By the way, Demis's reaction to the timelines, and why he believes it will be five years and not two, is what I would call: life.
He basically thinks that beyond code and code generation and software, there are a lot of other things that need to happen. It's a combination of creating all the right infrastructure, training people, and so on. He also believes that it's not just about the software; a lot of it is about the ideas. And now I'm quoting: coming up with the question in the first place or coming up with the theory, I think that's much, much harder than coding. You've got hardware in the loop that may limit how fast self-improving systems can work. That being said, if you remember, we spoke in the past few weeks about companies that are working on exactly that, on AI developing new hardware at a faster pace than we're developing hardware right now. So that might be solvable. But either way, Demis thinks that there are just bigger complications than writing software, and that's why he believes it will take a longer amount of time. He also said that he wishes everybody could slow down and work together and collaborate, and have the biggest minds in the world sitting together and thinking about what AI and AGI and ASI mean for the world, and how we prepare for it as a global society. But he's saying the same thing: he doesn't think that is going to happen.

To explain why everybody thinks it's not going to happen, I will give you one simple quote from Howard Lutnick, who is a key figure in the Trump administration as the Commerce Secretary and a big part of the economic advisory to the president. In his speech, which came before Trump's speech, he said: globalization has failed the West and the United States. Basically what he's saying is that globalization has taken jobs away from Americans and that needs to change in order to strengthen America. In other words, collaboration on a global scale is less of a focus, and focusing on what the US needs is more of a focus. I assume it is the same thing on the other side, and hence, sadly, I don't see us slowing down right now or coming to any kind of global collaboration to reduce the risk of where AI might take us. And to be fair, that scares the hell out of me.

Dario also mentioned something that, again, I agree with a hundred percent, and again it scares me as well. He said it is now possible for the gross domestic product (GDP) to grow by five to 10% annually while the unemployment rate reaches 10%, an unprecedented combination. And I agree with him very much. There was a lot of conversation about the growing gap that will open up with AI between people, organizations, and nations that have it versus those who don't. It definitely is going to enable GDP to grow, because you'll be able to generate things significantly faster with significantly fewer people while having growing unemployment, which is not a healthy situation to be in. But I think if you think something else is coming, you are delusional. We are going to grow the economy much faster by using AI, and it will come at the cost of having a lot of people unemployed.

This connects well to one of the speeches from Satya Nadella, again, the CEO of Microsoft. He was talking about the fact that AI will be a bubble if the output of AI is focused only on very specific companies, namely the AI vendors and the large tech companies, instead of being distributed across everybody. The specific quote is: if all we talk about is what is happening on the technology side, then that's a bubble. For this not to be a bubble,
by definition, it requires that the benefits are much more evenly spread. And what he means by that is that if AI drives growth across the board, for multiple industries, across multiple segments of the economy, then it is not a bubble, because it is providing real value to real companies, generating more value for more people, more organizations, and entire nations. And if that doesn't happen, then it means it is a bubble. Right now it is very, very clear to me, and to anybody who is involved in the AI space, that companies that learn how to implement AI effectively are gaining incredible benefits, and the last few weeks have accelerated that dramatically. The ability to now use Claude Code, and the even newer Claude Cowork, to do more or less any kind of white-collar job inside your company, in an incredibly accurate and consistent way at scale, is nothing like we've ever seen before. So from a technological perspective, we're more or less there. The difference is going to be which companies, which organizations, which entire nations learn how to harness this technology in order to provide more value and generate more revenue. And I see incredible results in companies that I work with that are doing things like that. The same goes for individuals. Individuals who learn these skills are able to dramatically shift and change the trajectory of their own personal careers, whether within the companies they're in right now or by opening doors to new opportunities in other places, whether going the entrepreneurial path or just finding better jobs in other companies, because they have the skills and the knowledge to leverage advanced AI tools to do more with significantly less time. We're going to talk more about the financial growth that both Anthropic and OpenAI have experienced, information that was shared in Davos as well, but more about that in the following segments when we dive deeper into Anthropic and OpenAI.

Another interesting point from Satya Nadella and his viewpoint on how the world is evolving: what he basically said is that the future success of entire nations depends on their ability to deliver AI in a cost-effective way, which is tied directly to the cost of electricity. The way he framed it, GDP growth in any place will be directly correlated to the cost of energy for using AI, which basically means that developing new energy sources, whether carbon-based or otherwise, is not just going to have an environmental impact, but is going to be directly correlated to a company's or a nation's ability to be successful in the AI future. Because AI will do more or less everything, and if you can do more of it for less money, because you have cheaper electricity, you will come out ahead. Or the way he phrased it: The job of every economy and every firm in the economy is to translate these tokens into economic growth. If you have a cheaper commodity, it's better. Basically, tokens, the output of AI, are going to directly drive economic success, and if you have a cheaper way to get these tokens, you are going to win. He also warned about the potential pushback from society if the tokens do not translate into actual benefits, and he said the following: we'll quickly lose even the social permission to actually take something like energy, which is a scarce resource, and use it to generate those tokens if these tokens are not improving
health outcomes, education outcomes, public sector efficiency, and private sector competitiveness. This is a very interesting view of the future: a future in which AI is a necessity for growth and a necessity for success, but in which, on the other hand, if it doesn't deliver immediate value, the people will prevent it from getting access to the electrical resources that it needs in order to operate. And that creates a very interesting conflict between the companies who need this power in order to run this technology and the access to this power, because people still need to run their homes and so on. More about that conversation later on, where we're going to talk about OpenAI's new pledge to generate their own power, but definitely something worth thinking about as the future evolves right in front of our eyes.

Now, connecting the final dots before I do a quick summary of everything that happened in Davos, I want to go back to the timelines for AGI. Google has a longer horizon for when we're gonna get to AGI, again, five years based on Demis, and yet their co-founder and Chief AGI Scientist at Google DeepMind, Shane Legg, just posted a new open position for a Chief AGI Economist. He wrote on X: AGI is now on the horizon and it'll deeply transform many things including the economy. I'm currently looking to hire a senior economist reporting directly to me to lead a small team investigating post-AGI economics. What does that mean? It means that Google, and definitely DeepMind, understand that everything we know and take for granted, all the assumptions we have about how the economy works, is going to change, and they need to start researching this right now to have a chance of actually doing the right things and taking the right measures in order to benefit from it, rather than just collapsing everything we know without necessarily a good solution on the other end.

So what is the bottom line? If you want the 30,000-foot summary of what we heard in Davos: first, we are running faster and faster towards the unknown, towards the AI future with AGI and beyond. We have Dario Amodei, who's definitely one of the leading voices and one of the more cautious ones, he usually doesn't make crazy predictions, and he actually nailed his previous predictions when it comes to timelines, who thinks we are getting to a fast takeoff within one to two years. This is literally around the corner. We are also learning that there's a very low chance, in the current global political arena, of global collaboration or any kind of slowdown, and that the risks are getting higher and higher. You're hearing from all directions that the leading people in this field wish they could slow down. They even said that if they had to collaborate just between themselves, so let's say Demis and Dario, they could figure this out, but their problem is with China, and until China stops, they cannot stop. The last component is that there is a growing gap, and it will keep on growing, between the people, organizations, and nations who have access to AI and those who do not, and that gap is gonna keep on widening. Those who have cheaper and more plentiful access will be able to be more successful, do more things, and get to a world of abundance, versus the ones who don't, who might be left behind.
That by itself leads to higher inequality and to a lot of other issues in a world that already has enough issues as it is. Overall, not a very optimistic view of where we are in the broader big picture. The other big point that was very obvious is how far behind, and really outside of the race, the European Union is right now. Without any serious horses in the race, and without any real, huge effort from Europe, they're going to be left behind in this race between China and the US.

So now we'll switch to the second topic, which is OpenAI and its push to start testing ads, which is going to start this month and run into February in the US. But before that, some interesting financial news that became public from OpenAI during the meeting in Davos. Apparently their revenue for 2025 is going to be over $20 billion. If you remember, the previous assessment was around $13 billion, and now we know that it's over $20 billion, which is incredible because it is 10x what it was just two years ago. So they ended 2023 with about $2 billion in revenue, and they're ending 2025 with over $20 billion in revenue. The other interesting thing from the blog post that they shared is that the growth in revenue is perfectly aligned with the growth in compute that they have. Compute has grown from 0.2 gigawatts in 2023 to 1.9 gigawatts in 2025, again roughly 10x, which is exactly the growth they've seen in revenue. And this is not luck; these things are tied to one another. Going back to what we discussed earlier, the ability to generate tokens is what generates revenue and generates value. The more compute you have, the more tokens you can generate.

Now, in this post on the OpenAI blog, they're talking about a positive future with AI, which is not surprising, and Sarah Friar, the CFO, is predicting that AI is going to enter and push forward fields like drug discovery and energy, and that new revenue models will emerge that do not exist today. One of the things they're going to be testing and potentially putting in place is licensing and IP-based agreements and outcome-based pricing. Basically, they're going to partner, let's say, with drug developers, help them develop better drugs faster, and enjoy the value that the new drugs will generate. So compared to current models, where the technology is just infrastructure you pay for, this is an outcome- and value-based monetization that will allow OpenAI, and potentially other model developers, to enjoy significantly higher revenue while keeping the risk relatively low for the companies who are going to use it. From a focus perspective, what Sarah Friar said is that the company's primary goal for this coming year, 2026, is practical adoption, and if you are asking yourself what that means, she said: close the gap between what AI now makes possible and how people, companies, and countries are using it day to day. And I agree 100%. The capabilities of AI right now are incredible. It is a complete game changer for most companies in the world, if they knew how to implement it successfully. What I see as far as transformation for people who take my courses, or companies and organizations who take my workshops, is nothing short of incredible, and the ability to make these amazing changes is already here right now.
From a technological perspective, the gap is adoption, and so OpenAI is going to focus on that, which probably means we will see a lot more partnerships with industry, a lot more training programs that they're going to deploy, and a lot of ways of making their existing technology, not necessarily future technology, more relevant to drive impact, economic growth, and value across multiple sectors.

Speaking of courses, by the way, our next cohort of the AI Business Transformation course is starting in just over a week from the day this episode comes out. So if you haven't taken any structured AI training, you deserve this. You owe this to yourself. Take this as a New Year's resolution to give yourself a solid baseline on how to use AI in 2026 and beyond. Come join the course. It is going to be the best thousand dollars you invest in yourself, in your future, and in the future of your business, if you are a leader in a company. And since you're a listener to this show, you get a hundred dollars off with promo code LEVERAGINGAI100. There's a link in the show notes, so click it right now and come join us. The course starts on Monday, February 2nd, so don't miss this opportunity. The next course will probably be in Q2, so you don't want to wait.

But going back to OpenAI: in addition to the huge growth, they also have a huge cash burn that comes with it. They don't generate enough money, and they need to raise significant capital all the time. More about that in a second. But one of the things they're pushing forward is ads inside the lower-tier versions of ChatGPT. One of the things we learned recently, in an article by The Information in November of 2025, is that OpenAI, with its 800 million users, has less than 5% of them, so about 35 million, as actually paying subscribers on Plus and Pro plans. This means that more than 95% of OpenAI's users are not paying OpenAI anything to use the platform. That is obviously very, very costly when you need to produce these tokens, and hence OpenAI needs to find a way to offset the investment in generating these tokens in order to keep supporting this massive audience.

So they're doing two things. One of them is launching their cheaper subscription, called ChatGPT Go, which they've already launched in other countries. They're bringing it to the US, and it is going to be $8 per month. It gives you lower limits compared to the 20-bucks-a-month plan, and definitely compared to the $200-a-month plan, but it allows people to pay a lot less and still get higher limits compared to the free tier. But whether you're paying $8 per month or using the free plan, you are going to start seeing ads in the ChatGPT feed. Now, because there are a lot of potential negative implications of how ads can impact either actual usage or users' perception of the platform, OpenAI is, maybe for the first time, taking a serious PR step to explain exactly what their focus is, how they're going to deploy it, and what is important, before actually starting the experiment. They've released a detailed blog post on exactly how ads are going to work, and Sam Altman and other leaders tweeted about it just to make sure that everybody sees it. So Sam's tweet was: We are starting to test ads in the ChatGPT Free and Go (the new $8 per month option) tiers. Here are our principles. Most importantly, we will not accept money to influence the answers ChatGPT gives you, and we keep your conversations private from advertisers.
It is clear to us that a lot of people want to use a lot of AI and don't want to pay. So we are hopeful a business model like this can work. An example of ads I like is on Instagram, where I found stuff I like that otherwise I never would have. We will try to make ads ever more useful to users. If you remember, multiple times in the past Sam Altman said in interviews that he actually doesn't like ads as a model, and that it would only be used as a last resort by OpenAI. Well, I guess we got to that last resort. They keep needing to raise crazy amounts of money, and if they want an IPO, they need to show that they're generating revenue in other ways, and this is definitely a solid way to do that.

So in the blog post that Sam references in his tweet, which is available on their website, and there's going to be a link in the show notes, they shared the following principles. The first thing is mission alignment: Our mission is to ensure AGI benefits all of humanity. Our pursuit of advertising is always in support of that mission and making AI more accessible. So that makes perfect sense. And then there are four core principles that they shared. The first one is answer independence: ads do not influence the answers ChatGPT gives you; answers are optimized based on what is most helpful to you; ads are always separate and clearly labeled. Number two, conversation privacy: we keep your conversations with ChatGPT private from advertisers, and we never sell your data to advertisers. Number three, choice and control: you control how your data is used; you can turn off personalization, and you can clear the data that is used for ads at any time; we always offer a way not to see ads in ChatGPT, including a paid tier that's ad-free. And number four, long-term value: we do not optimize for time spent in ChatGPT; we prioritize user trust and user experience over revenue. On top of that, you had Fidji Simo, OpenAI's CEO of Applications, saying: people trust ChatGPT for many important and personal tasks. So as we introduce ads, it's crucial we preserve what makes ChatGPT valuable in the first place. That means you need to trust that ChatGPT's responses are driven by what's objectively useful, never by advertising.

Now, according to one article, OpenAI projects $2 billion in revenue from new products, including ads and commerce, in 2026 alone, and they're expecting that number to be $10 billion in 2027. Compared to the roughly $20 billion they're currently generating, that increases their income by 10% just by doing this, and it offsets the cost of the free users by driving some revenue to cover the cost of inference.

But what does this mean overall, on the bigger picture? The first thing it means is that if you are using ChatGPT for free, you're going to start seeing ads that are clearly ads, separate from your timeline, and you can control whether they're going to be personalized to you or not. The second thing is that it is potentially going to drive a change in user behavior. I assume some people are not going to like it, and they have other options. You can go to Anthropic. You can go to Gemini. By the way, Google already said clearly that at this point in time they're not planning to put ads in Gemini; ads are going to stay in the regular Google Search domain. Now, where does that land with the AI-assisted results that appear on top? That's still not a hundred percent clear, but ads are not coming to Gemini,
at least in the near future. This may push people from OpenAI's free platform to Gemini's free platform in order to get a similar level of AI assistance, especially given the current growth that Google is experiencing with its AI solutions, and that is something OpenAI will definitely have to take into account. The other thing it will do is add another significant, large-scale distribution platform to buy ads on. Right now the two main ones are Google, including YouTube, and then Meta, across all its different channels. This is going to add another platform, with over 850 million active users, where advertisers can now buy ads. That will definitely shift some marketing budgets around the world towards this channel. Whether it's going to be better or worse from a return perspective, only time will tell, but I am certain that they will find ways to optimize it and provide the right value to their advertisers, which, again, is just going to shift how budgets are spread across multiple marketing channels. The main thing here is that OpenAI serves most of their users right now for free, and it costs them an arm and a leg, and they need to find a way to offset that. Ads are just going to be one of the options for how they can do that, and they're going to keep testing and optimizing it, unless they see that it drives a lot of people away from the platform. I'm sure within the next few months we will know all of that.

Now, I told you the next big topic is human drama, and the biggest human drama right now comes from the expected trial between Elon Musk and OpenAI which, as I reported to you last week, is going to a jury trial that starts in April of this year. What happened on X this past week and beyond, as a result of this trial and surrounding it, has been completely out of control. It's nothing like I've ever seen when it comes to a legal proceeding, where usually you keep your cards close and you let it play out in court. In this case, you had people all across the board sharing sensitive and relevant information on X and beyond. So obviously, several different posts from Elon Musk and Sam Altman, we're gonna talk about those in a minute, but people like Greg Brockman were in the mix, and many other people jumped in and shared lots of relevant information from the documents that were revealed as part of the trial process.

Now, Elon shared multiple segments out of information that was revealed mostly from Greg Brockman's personal files. Brockman kept something like a diary, journaling everything that happened, including lots of communication internal to OpenAI, with or without Sam Altman's knowledge. Elon shared multiple segments from there, basically proving, per him, that he has rights to a certain percentage of OpenAI. He's talking about a value of about $135 billion right now if he gets his way, basically saying that they broke the oath they were established on, and because he is the one that gave them a lot of the money to get started, he deserves to be compensated for it. As an example, one of the entries in Brockman's diary says: This is the only chance we have to get out from Elon. Is he the glorious leader that I would pick? We truly have a chance to make this happen financially. What will take me to 1 billion? Accepting Elon's terms nukes two things: our ability to choose (though maybe we could overrule him) and the economics.
And there are multiple other quotes basically showing that Greg Brockman wanted to kick Elon out. However, shortly after that, OpenAI, on their official blog, published a post called The Truth Elon Left Out. It has the exact snippets that Elon shared on X to show why he's entitled to what he believes he's entitled to, and then it adds the components that Elon left out, basically showing that he was selectively choosing specific sentences and taking them out of context. In there, there are multiple segments that are color-coded: in blue, what Elon claims was said, and then in red, what Elon left out from the original quotes. As an example, in the first segment they're showing, the statement Elon quoted said: coming weeks top priority, essentially philanthropic endeavor. While what Elon left out reads: coming weeks top priority, gotta figure out how do we transition from a nonprofit to something which is essentially philanthropic endeavor and is B Corp or C Corp or something, basically a for-profit organization. And they came out with multiple examples of the things that Elon quoted versus what the entire conversation was, including the context around it.

But with Elon being Elon and Sam being Sam, that was not the end of it, and they kept exchanging blows even beyond the items of the case. A person on X with the handle DogeDesigner, which is obviously in Trump's court, wrote: Breaking: ChatGPT has now been linked to nine deaths tied to its use, and in five cases its interactions are alleged to have led to death by suicide, including teens and adults. Elon retweeted this and wrote: don't let your loved ones use ChatGPT. To which Sam responded: Sometimes you complain about ChatGPT being too restrictive, and then in cases like this, you claim it's too relaxed. Almost a billion people use it, and some of them may be in very fragile mental states. We will continue to do our best to get this right and we feel huge responsibility to do the best we can. But these are tragic and complicated situations that deserve to be treated with respect. It is genuinely hard. We need to protect vulnerable users while also making sure our guardrails still allow all of our users to benefit from our tools. So far, a great response, keeping it in the professional realm. But then Sam continues: Apparently more than 50 people have died from crashes related to Autopilot. I only ever rode in a car using it once, some time ago, but my first thought was that it was far from a safe thing for Tesla to have released. I won't even start on some of the Grok decisions.

So where does this stand right now, and where do I think it is going? First of all, I don't think that Elon can get the 130-something billion he wants as his quote-unquote fair share of OpenAI. I don't see that happening. I do think he will keep doing everything he can to slow OpenAI down and put as many sticks in their wheels as he can to hold them back while he's growing Grok. If he can slow their progress one way or the other, he is going to do that. Why? Because he has the money to do it, because he has a personal beef with Sam and other people at OpenAI, and because he is in true business competition with them, all of which will benefit him one way or the other. Now, obviously both sides are saying that they have a real case, otherwise they would not go to court. But we need to remember that this is a jury trial, and in a jury trial, literally anything can happen.
Now, I am pretty sure that most jurors, regardless of who they pick, cannot understand the nuances of every single thing that happened in the early years of OpenAI and how that translates to the current state of the technology and the current demands being made. So this can literally go either way, because it is a jury trial. To make it even more interesting for us, a lot of really high-profile people are going to get summoned and are going to testify in this case. These people include figures like Ilya Sutskever, Mira Murati, Satya Nadella, and Dario Amodei. All of the people I just mentioned were part of the early discussions at OpenAI. All of them have one stake or another, some of them very, very significant, in the billions of dollars of stock options and/or stock that they actually hold in OpenAI. And all of them are in competing positions to OpenAI right now. So if you want to see the best soap opera that has been online, maybe ever, this is a great opportunity to sit back and see how this thing evolves. And with people like Sam Altman and definitely Elon Musk as part of the process, I'm sure it is going to be entertaining.

Now, speaking of drama and issues, and in this particular case also involving Elon: Grok has been under great scrutiny in the past few weeks for enabling the Grok image generation model to undress actual people, including minors, and now they're under investigation for that in Indonesia, Malaysia, California, the UK, and several other places. In some places it's already been banned, and in other places they're just being investigated. They've put some band-aids on top of that, but for now, there is no huge change in what X does and does not enable you to do when you generate images. In addition, the European Union has set a 120 million euro fine, which is about $140 million, against X, as a penalty for the platform's deceptive blue check mark system, its failure to maintain a transparent advertisement repository, and the way they claim it manipulates its algorithm in order to promote specific agendas. In return, on January 20th, X released the code for its For You feed recommendation engine, basically open sourcing it and showing the world how their algorithm actually works, in order to show that transparency is important to them, despite what everybody's saying.

Another interesting human-related aspect at X is that X is working very hard to develop swarms of agents that can work together, as part of xAI and beyond. That's part of their Macrohard initiative, which is a tongue-in-cheek twist on Microsoft, so it's exactly the opposite of that, where they're trying to develop a corporation that will be entirely or mostly simulated by agents. Apparently, it's creating a lot of confusion inside xAI. Many of these agents are having conversations with humans, and the humans are not always sure whether they're talking to another human or to an agent. In some weird cases, an agent will call a person and say, come to my desk, let's have a conversation about it. But when that person walks to that desk, there's obviously nobody there, and it's not really clear in the org chart who is human and who is not. And that's obviously creating a lot of issues, both from a human perspective and from an HR perspective, on how you actually run an organization where a growing percentage of your employees are not human.
While this is something that is currently being faced by very few organizations, like in this case xAI, it is something we will all have to learn to live with, developing the right mechanisms and processes to be able to do this at scale, because all companies will go down that path. Some will take longer than others, but it is definitely something that we need to start thinking about from a leadership perspective, from an HR perspective, and from a logistical perspective. How do we run organizations that have a mix of humans and agents working side by side and collaborating on tasks? Because more and more of the tasks are gonna be done by agents, and fewer and fewer by humans, and we will have to learn how to navigate that from an organizational management and control perspective, and from a human relationships perspective as well.

Now, staying on the topic of human drama, and in this particular case related to OpenAI: Mira Murati, OpenAI's former CTO and one of the leading voices inside OpenAI, who left and founded Thinking Machines Lab, just fired her co-founder Barret Zoph, who held the title of Chief Technology Officer. Now, it is unclear if he was really fired or left of his own will, but this was a combination of professional reasons as well as personal reasons, after Mira Murati found that he was having a personal relationship with a colleague at work. Either way, Zoph, again the CTO of Thinking Machines Lab, has left and gone back to OpenAI, together with three key researchers who left with him and are returning to OpenAI as well. Where does that leave Thinking Machines Lab? Well, not in a great place, especially as there's a lot of talk that additional people may leave and follow Zoph and the other researchers. Remember, Thinking Machines Lab has raised a significant amount of money at a $12 billion valuation. I'm sure they will find ways to incentivize other people to come in or to stay, but I think Mira is learning the hard way that keeping talent is not going to be easy.

And now we have lots of really interesting rapid-fire items. We're going to start with the revenue numbers for Anthropic. Apparently, Anthropic hit $4.5 billion in revenue in 2025. That is 12x what they did in 2024: in 2024, they finished the year with $381 million of revenue, and they just hit $4.5 billion. This is insane growth, especially at that scale. At the same time, they have lowered their gross profit projections to 40%, which is 10 percentage points lower than their initial expectations for the year. They attribute the lower margins directly to inference costs, which is basically the cost of generating tokens and running the AI, which came in higher than they expected, currently running mostly on Google and Amazon servers. That being said, the 40% gross margin is significantly better than the negative 94% margin that they had in 2024, so they're definitely doing something right. And I must admit, and I've said this several times in the past few weeks, it feels like Anthropic has been on a roll in the past couple of months. Their latest releases are incredible. Running Claude Code and Claude Cowork has been magical, and it allows me and other people to do things that are absolutely the future. Like, I can feel the AGI that everybody was talking about, and I can see it in the actual work. So Anthropic is definitely doing some good things, including on the financial side.
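As a quick back-of-the-envelope check on those figures, here is a minimal sketch using only the numbers reported in this episode (the 2024 and 2025 revenue and the 40% gross margin; everything else is derived arithmetic, not reported data):

```python
# Back-of-the-envelope check on Anthropic's reported figures.
# Inputs are the numbers cited in this episode; nothing else is assumed.
revenue_2024 = 381e6    # $381M reported revenue for 2024
revenue_2025 = 4.5e9    # $4.5B reported revenue for 2025
gross_margin = 0.40     # lowered projection, down from an expected ~50%

growth_multiple = revenue_2025 / revenue_2024    # ~11.8x, the quoted "12x"
gross_profit = revenue_2025 * gross_margin       # revenue left after cost of serving
cost_of_revenue = revenue_2025 - gross_profit    # implied inference/serving spend

print(f"growth multiple:  {growth_multiple:.1f}x")
print(f"gross profit:     ${gross_profit / 1e9:.1f}B")
print(f"cost of revenue:  ${cost_of_revenue / 1e9:.1f}B")
```

Note that gross profit (roughly $1.8 billion on these numbers) is a pre-R&D, pre-training-compute figure, which is how a positive 40% gross margin can coexist with the multi-billion-dollar annual loss mentioned next.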
By the way, in comparison, OpenAI is running at around 46% gross margins, so the 40% from Anthropic means they're not quite as efficient right now as OpenAI. And despite these incredible growth numbers, Anthropic is expected to lose $5.2 billion in 2025, and they're already looking to raise $10 billion at a $350 billion pre-money valuation in the near future.

Another interesting piece of news from Anthropic is that they just released Claude's constitution to the public. So what is Claude's constitution? It is basically the soul, if you want, of all their AI models, and how they have built them to be what they are. And they have open-sourced it under Creative Commons, which basically means that anybody can use it for whatever they want without asking for any permission. What this document does is establish a clear, tiered hierarchy of priorities, the core values, if you want, of the Claude models, and these core values clearly position safety over helpfulness. There are four things the model needs to be, in order of priority. The top one is being broadly safe; that's the single most critical property of the Claude models. It's followed by being broadly ethical, then by being compliant with Anthropic's guidelines, and the final one is being genuinely helpful. So being helpful is the last component, after being broadly safe, broadly ethical, and compliant with Anthropic's specific guidelines. The text also establishes three levels of trust, with three distinct principals that Claude must navigate. These levels are: Anthropic, basically the creator of the model; operators, who are developing with the API; and users, the individuals who are interacting with the model. And it explains to the model how to navigate the priorities of each and every one of these. Now, the interesting thing here is that they are not really defining rules for the model but, as I mentioned, core values to help it navigate a wide variety of use cases. Or, as they put it, they argue that clear rules often fail to anticipate every situation, and therefore they favor cultivating good values and judgment, which the model can then apply as needed in different scenarios. So kudos to Anthropic for, A, creating the models the way they are, and B, delivering this to the world so other companies can learn from it and potentially apply similar concepts to their models, which I believe is the right way forward. You've heard me talk about this many times on this podcast: I used to read Asimov's books when I was a teenager, I read most of them more than once, and in the Robot series he talks about the rules that the robots must obey. This feels like the same kind of approach, which I really like, and I think it's necessary. Will it prevent catastrophes in the future? I'm not sure, but I think it's a step in the right direction.

This past Tuesday, I released an episode about Claude Cowork, and since then I have done a lot more experimenting with it, and I find it to be an incredible tool. In a report on Medium by JP Capari, a user identified as Hutch utilized Claude Cowork to research and organize the digital legacy of his late grandmother, Dr. Sally Roesch Wagner, I hope I'm not butchering her name, and that included over 60,000 files. This is obviously an endeavor that would take a human a lifetime to do.
Using Claude Cowork, he was able to go through all the files, including sorting 40,000 Word documents, and he discovered four unpublished books that they didn't even know she wrote. A different person, Leon Lim, who runs an automation agency, reported that he, and I'm quoting, cleared nine weeks of documentation backlog overnight. Basically, he took a huge amount of work that would've taken him nine weeks to clear and, using Claude Cowork, did it in one single night, with the tool producing 23 detailed workflow breakdowns and drafting 31 client updates that perfectly matched his professional tone. The bottom line is: if you haven't tested Claude Cowork and you have a Mac, go test it out. It will blow your mind, and the sooner you learn how to leverage it, the faster you're gonna get incredible benefits from it.

Going back to the news, big news from Gemini: Gemini API calls more than doubled, from 35 billion in March to 85 billion in August of 2025. This is according to internal data that was reviewed and released by The Information. Gemini Enterprise has reached 8 million subscribers across 1,500 companies, plus over 1 million online signups. We've talked about this several times on this podcast: this has definitely been a huge year for Google, where both the deliverables as well as the sentiment have changed dramatically, and they've seen amazing growth, taking a lot of market share away from OpenAI, both on the individual level as well as on the company and enterprise level.

Staying with quick news on Google: Google DeepMind just hired the leaders behind Hume AI, including their CEO and founder, Alan Cowen. For those of you who don't know Hume AI: Hume specializes in empathic voice interfaces that can detect emotion and adapt responses to reflect emotion in voice interactions. So this is another kind of acqui-hire scenario, where the leading researchers and the CEO jump ship to work for Google, the company stays independent, and Google gets access to the company's IP, while the company can still use it for its own products and keep serving its clients. Google is obviously doing this because, like everybody else, they believe the future of communication with AI is through voice, and just as OpenAI and many other companies are focusing on that, Google is accelerating their voice AI through this integration of Hume's technology into everything DeepMind does.

Staying with quick news: Apple is secretly developing a pin, an AI pin that is going to be about the size of an AirTag, slightly bigger, and they're planning to release it, if they end up doing this, in 2027, with a projection of selling 20 million units in the first year. There's still no final decision, but it seems that initial tests are very positive and they're moving in that direction. The pin is going to be a flat, circular disc that will have two cameras and three microphones in order to capture its surroundings, and it will most likely be able to do some of the compute on the actual device itself and probably communicate with an iPhone in order to do more advanced things. This is Apple's play in the world of wearable AI devices, and they're obviously planning to compete with whatever it is that Jony Ive releases together with OpenAI, as well as with Meta's glasses, which are currently the only device in this category that is really selling at a very large scale.
To me, the pin is an interesting approach by Apple for several reasons. Reason number one: the previously known pin came from Humane, and we know how that story ended. They had huge hype, they sold about 10,000 units, and everybody thought it was crap: the battery life wasn't there, it was really slow, and it didn't really work. They ended up selling components of the technology to HP and disappeared from the world as fast as they appeared. The second reason I am a little surprised by Apple is that it is very clear that glasses are working. The Meta glasses are selling very well, some competitors are doing not bad, and it's a form factor that people are already using. Many people wear glasses anyway, and other people wear sunglasses a lot of the time, so this is something people already have. Why make them wear something else to use AI is unclear, and again, it failed in the previous attempt, not necessarily because it was a pin, but maybe because there are other, easier form factors to use. And the third reason I don't understand Apple is that they already have the Apple Watch, and they already have AirPods, which are maybe the best-selling wearable device, if you want, that people already have, that is voice-activated, and that people already want, love, and use. Coming up with a version of that, potentially with a camera so it can also see its surroundings, would probably be the most reasonable form factor: something they're already selling extremely successfully, at really high prices and in high quantities. So why not leverage something you are already successful with, that you know people want and are willing to pay for, and just integrate AI into that? I'm not exactly sure why they're going down the path of the pin, but this apparently is the direction they're taking right now.

An interesting piece of news comes from Meta Superintelligence Labs. With all the turmoil, the crazy changes in leadership, people departing, people coming back, and lots and lots of craziness and failures and drama that happened around this new group of people, they have just released internally their first set of models. As of January 2026, they have finished the training of two different models, and the first reports are that they are, I'm quoting, very good and showing a lot of promise. That being said, they also said there is a tremendous amount of post-training work to do to actually deliver the models in a way that is usable internally and by consumers. Apparently there are two models they worked on: one is called Avocado, which is going to be text-based, and the other is an image/video-focused model called Mango. Both of them are expected to be released in 2026, with Avocado, the text-based model, expected potentially in Q1 of this year. So where does that put Meta in the AI race? Unclear yet. The jury is still out, but at least this new department, which has been around for about six months, is already delivering models that are presumably good. We just have to wait and see when they actually finish working on them, how good they are compared to what the competition has, how they're going to integrate them into the entire Meta universe of products, and how they're going to monetize them, if at all. So there is still a lot more that we don't know than things we do know, but we do know that they were able to deliver their first set of models as of this month.
Speaking of companies that are developing new things, or resurrecting old things in order to make them even better: Tesla is on a quest to develop a significantly larger volume of chips. Their new chip, called AI5, is targeting not just Tesla usage but potentially deployment in data centers as well, so, running the same chips in Teslas and in the data centers, targeting Nvidia Hopper-level performance with a single die and Blackwell-level performance with two dies combined. This is a huge push by Tesla to be independent of other companies when it comes to silicon. They're also talking about iterating very, very quickly, with a nine-month product cadence from AI5 through AI9, so basically a new chip every nine months, similar to the pace NVIDIA is moving at. You've heard me share before that one of Elon's visions is a distributed AI computer: being able to tap into the compute capabilities in Teslas when they're not driving, in order to provide more compute for Tesla and xAI and so on in a distributed, global way, making it the largest distributed computer on the planet. They're already doing this, but if they are able to develop even more advanced, more capable chips that are tailored to do exactly that, it will provide Elon and his empire, across the different companies, with even more compute power than anybody else, allowing him to do things that other companies will not be able to do.

And on to two interesting pieces of news on the creative side. One of them comes from Runway. Runway just released Gen-4.5, their latest image and video generation model, and their research team ran an experiment to see if people can distinguish between AI-generated videos and real videos. The way they did the experiment is they took the first frame of a real video and prompted the model to create an AI video of a similar situation based on that first frame. What they found is that just 9.5% of people were able to consistently distinguish, with statistically significant accuracy, between the AI-generated and the real videos. The overall accuracy was 57.1%, which is basically as accurate as flipping a coin, pure luck, when it comes to identifying the videos (see the short sketch after this segment for why that counts as roughly chance level). So some people are still able to identify a little better what is AI-generated and what is not, but the vast majority of the population cannot. And that was in a test where people knew that some of the videos were AI-generated, which basically means that if you don't know, your chances of finding the cues that would help you identify that a video is AI-generated are basically zero. Their summary, and I agree with it, is that, and I'm quoting, detection is an inadequate strategy for trust and verification. Basically, what they're saying is that other tools, like C2PA metadata or any kind of pixel-level tagging marking that videos and images are AI-generated, are the only way forward to build some kind of trust in what we're seeing. Now, I've been saying this for a very, very long time. You can go all the way back to episode 13, which was called The Truth is Dead. I recorded it probably in May of 2023, and it was very clear where this was all going. We cannot tell anything apart, whether it's real or not, and that is really bad for society.
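For intuition on those Runway numbers, here is a minimal sketch, in plain Python, of a one-sided z-test against chance. The 57.1% accuracy and the 50% chance level come from the episode; the per-viewer trial counts are hypothetical, since the study's sample sizes aren't given here:

```python
import math

def z_vs_chance(accuracy: float, n_trials: int, p0: float = 0.5) -> float:
    """One-sided z-score for an observed accuracy against chance level p0,
    using the normal approximation to the binomial distribution."""
    standard_error = math.sqrt(p0 * (1 - p0) / n_trials)
    return (accuracy - p0) / standard_error

# Hypothetical trial counts -- the episode doesn't report the study's n.
for n in (20, 100, 1000):
    z = z_vs_chance(0.571, n)
    verdict = "significant" if z > 1.645 else "indistinguishable from chance"
    print(f"n={n:4d} trials: z={z:.2f} -> {verdict} at alpha=0.05")
```

The point: 57.1% only becomes distinguishable from a coin flip when you pool many trials; for any individual viewer judging a handful of clips, it is effectively chance, which is why only a small minority (the 9.5%) clears the significance bar.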
And it's even worse when we are in a midterm election year and we're going to have deepfakes basically everywhere, whether made by individuals or by organizations that are going to finance them. The big thing is that most people are unaware that this is even an option. Most people do not know that AI can generate videos that are indistinguishable from real life. So how do you trust anything you read, hear, or see if you don't know whether it's AI-generated? And again, the problem is most people will trust it, because they don't know it can be AI-generated. In general, I think AI is going to play a very big part in this coming midterm election, and I'm not happy about the direction it's taking right now. I think we need a lot more scrutiny on what AI can and cannot do. I do think it needs to be slowed down. I do think the administration needs to start thinking about the implications of AGI and beyond on society, on the economy, on education, and so on, and I don't think they're doing that right now.

And I told you we're gonna end on a positive, interesting note. ElevenLabs, the company behind the ElevenLabs voices, released a while back a new tool, Eleven Music, that allows you to produce original, studio-quality tracks across multiple genres. Well, they just released the Eleven Album, which is basically the first album released professionally using their platform. And the way they're doing this is the right way: they have licensed the voices of multiple creators and worked with multiple singers as well as producers to improve their platform. So they have the voices of people like Liza Minnelli and Art Garfunkel, and big-name producers, and they've used all of those to create a multi-artist AI music project that has the rights to the music and the voices of everything in it. It spans multiple genres as well. The cool thing is that when using Eleven Music, you can use the voices of real people, known singers, and build new music with them, while licensing them from the creators, who get compensated every time their voice is used. Liza Minnelli herself said: I've always believed that music is about connection and emotional truth. What interested me here was the idea of using my voice and new tools in service of expression, not instead of it. This project represents the artist's voice, the artist's choices, and the artist's ownership. Art Garfunkel said: Music has always evolved alongside technology, from microphones to multi-track recording. What impressed me about this experience was the respect for musicianship. The human remains at the center; my voice plus the technology simply opens another door. Now, the album includes rap, pop, R&B, EDM, cinematic, and other global sounds, and you can stream it right now on Spotify or on the ElevenLabs website.

Now, why do I think this is awesome? First of all, I play bass, I'm very connected to music, and I sang in a band for many years, so being able to create new music with AI totally speaks to me. But I really love this solution. I love it because they involved the creators, they found a way to compensate them for their voices and for their art, and it allows a co-creation of new music, new artistic expression, combining those creators with the ability for anybody to participate in creating something new. These are the kinds of solutions that give me the optimistic swing in the AI pendulum overall.
So first of all, I wish ElevenLabs all the success in the world with this launch. I'm definitely going to listen to this album. I find it really interesting, and I really hope we find these kinds of solutions for every kind of AI implementation moving forward. That's it for today. We'll be back on Tuesday with another fascinating episode about some of the new capabilities from Anthropic that I find incredible, and how you can implement them to get benefits in your business as well. If you are enjoying this podcast, please share it with other people. Click on the share button right now on your phone and share it with four or five people that you know. It will take you 10 seconds, they will be grateful to you, I will be grateful, and you'll be helping us educate more people about AI. And if you are looking for more structured learning, go and check out the AI Business Transformation course. As I mentioned, it is launching a week from the time this episode goes live, so if you're still not signed up, you still have time, but not a lot of time, to sign up and join us for this coming cohort. Keep on exploring AI, keep on testing new things, share what you learn with the world, and have an amazing rest of your weekend.