Leveraging AI

247 | AI bubble sirens: the $8T bill, OpenAI's "code red" scramble, Anthropic's Opus 4.5 dethrones Gemini 3, AWS rolls out Frontier Agents — and more AI news for the week ending on December 5, 2025

Isar Meitis Episode 247

Is AI building the future — or inflating the next great tech bubble?

This week’s episode of Leveraging AI peels back the layers of billion-dollar optimism and uncovers the quiet panic sweeping the AI giants. From OpenAI’s internal “Code Red” to Anthropic’s cautious warnings, host Isar Meitis walks you through the pivotal shifts shaking the foundation of the AI industry.

While tech giants throw trillions into compute power and bold promises, business adoption is… shockingly stagnant. Are we seeing signs of a bubble about to burst — or is the market simply catching its breath before the next leap?

If you’re a business leader navigating the hype, headlines, and high stakes, this is your briefing.

In this session, you'll discover:

  • Why OpenAI froze development and went into Code Red mode
  • How $8 trillion in projected AI infrastructure is colliding with disappointing ROI
  • The growing disconnect between enterprise AI adoption and corporate AI spend
  • What top CEOs like IBM’s Arvind Krishna and Anthropic’s Dario Amodei really think about the AI investment surge
  • A jaw-dropping look at deep AI infrastructure costs and the five-year obsolescence cycle
  • Why Claude Opus 4.5 is being called the best coding model on the market
  • The bold new moves from AWS, including agentic AI factories
  • Political pressures: PAC wars, employee protests, and the race to regulate
  • And what your company should (and shouldn’t) do right now to stay ahead


About Leveraging AI

If you’ve enjoyed or benefited from some of the insights of this episode, leave us a five-star review on your favorite podcast platform, and let us know what you learned, found helpful, or liked most about this show!

Speaker 2:

Hello and welcome to a Weekend News episode of the Leveraging AI Podcast, the podcast that shares practical, ethical ways to leverage AI to improve efficiency, grow your business, and advance your career. This is Isar Meitis, your host, and we have another week with a lot of really important and big announcements and new models that came out: lots of discussion about AI bubble or not AI bubble, lots of activity in M&A and partnerships, lots of politics as it comes to AI, and so on. I actually took a look back: on the 24th of May of this year, I released episode 191, which was called "The Craziest Week in AI News History: Microsoft, Google, Anthropic, and OpenAI Major Announcements." And it seems to me that in the past few weeks this is becoming the norm. All the big labs come up with new announcements or new models, or both, all at the same time. And this week you add to the mix AWS, which had a big event, and so on. We have so much to talk about that I could have probably recorded two episodes, but I'm gonna try to give you as much information as possible in just one, and the rest is going to be in our newsletter. So everything that does not make it into this episode, and I can tell you it's gonna be a lot, including things that you probably want to hear about, will be in our newsletter, and you can sign up by clicking on the link in the show notes. But let's get started and talk about what actually happened this week. I was able to mostly avoid the AI bubble conversation in the past few weeks, even though it's been brewing and growing every single week, but I cannot ignore it anymore because it's literally coming from every single direction, including some of the leaders in the industry itself. So let's do a quick review of some of the things that happened and some of the statements we got this week. The first one is IBM CEO
Arvind Krishna, who went on The Verge's Decoder podcast and basically sounded the alarm on the current cost of infrastructure. He broke this down to simple math. He said that filling a single one-gigawatt AI facility with the necessary compute hardware comes to a price tag of approximately $80 billion. That's one gigawatt. Public and private sector announcements currently suggest plans to deploy roughly 100 gigawatts of capacity dedicated to AGI-class preparation, combining all the different announcements from all the different companies. When you do the simple math and multiply the cost per gigawatt by the number of gigawatts, you get to $8 trillion of potential investment that these companies are talking about as the needed infrastructure for what they foresee in the not-too-far future. So what Krishna is saying is that just to finance the capital, just the cost of capital to pay back the debt, the loans, and the money raised to cover the $8 trillion, they need to generate $800 billion in annual profit. Not revenue, but profit, just to pay for the investment and sustain the payments on the loans they're taking or the money they're raising. To make it even worse, he's saying that most of this infrastructure has about a five-year refresh cycle. If you want his exact quote, he said, "You've got to use it all in five years, because at that point you've got to throw it all away and refill it." And he's not saying that because it stops working. The hardware still works. It is because the newer versions of hardware just make the old ones look like a joke, both in terms of cost of operation and in terms of capabilities. And so the lifespan is about five years. We're gonna talk more about that in a few minutes. Now, Krishna does acknowledge the benefits of using AI tools and how they're gonna drive significant enterprise productivity and growth.
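Krishna's back-of-the-envelope math can be sketched in a few lines. Note the ~10% annual cost of capital is my assumption, implied by his $800 billion figure rather than stated by him explicitly:

```python
# Sketch of Krishna's numbers (the 10% rate is an assumed cost of capital).
cost_per_gw = 80e9   # ~$80B to fill a one-gigawatt facility, per Krishna
planned_gw = 100     # ~100 GW of announced AGI-class capacity

total_capex = cost_per_gw * planned_gw
print(f"Total build-out: ${total_capex / 1e12:.0f} trillion")  # $8 trillion

annual_rate = 0.10   # assumption: blended annual cost of capital
annual_profit_needed = total_capex * annual_rate
print(f"Annual profit needed just to service it: ${annual_profit_needed / 1e9:.0f} billion")

# The five-year refresh cycle makes it worse: straight-line depreciation
# alone on the full build-out would run $1.6T per year.
refresh_years = 5
annual_depreciation = total_capex / refresh_years
print(f"Annual depreciation at a 5-year refresh: ${annual_depreciation / 1e12:.1f} trillion")
```

The point of the sketch is that the $800 billion figure is purely the financing cost; the refresh-cycle depreciation sits on top of it.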
However, he does not think the ROI is there, and he also doesn't think that the current path we are on, large language models, can lead to AGI or superintelligence. He specifically said that the chance that the current LLM-based AI will reach AGI is between zero and one percent. So he's a strong believer that what we have right now is helpful, but it is not the full path to AGI, and there are many others who agree with him. He also thinks that the current investment is outrageous compared to the potential returns. To put things in perspective, Google announced this week the incredible number that they're going to grow their capacity to support AI a thousandfold. This staggering target comes directly from Amin Vahdat, who is the head of Google's AI infrastructure, the guy who is going to actually make this happen, so this is from a highly reliable source. He revealed a plan in one of the recent all-hands meetings to double their serving capacity every six months until they get to a thousand x the capacity they have right now. Now, if that sounds insane to you, here is his specific quote, and we've heard similar things from other people in the past. He said, and I'm quoting, "The risk of underinvesting is pretty high. The cloud numbers would have been much better if we had more compute." So basically what he's saying is that currently their supply is significantly below the demand they're seeing right now, not to mention the demand they're expecting in the future. And hence, they are planning to double the amount of compute every six months, which is absolutely insane. Now, it's not just about getting more compute, it's also about getting better compute. They're planning to deploy their seventh-generation tensor processing units, their TPUs, which are the homegrown hardware they've been training and deploying all their AI on, and which connects very well to the previous point we talked about.
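As a quick sanity check on that thousand-x target: doubling every six months means you need about ten doublings to reach 1000x, which works out to roughly five years of sustained doubling. A minimal sketch:

```python
import math

target_multiple = 1000        # "a thousand x" current capacity
doubling_period_months = 6    # double serving capacity every six months

doublings = math.log2(target_multiple)            # ~9.97 doublings needed
months = math.ceil(doublings) * doubling_period_months
print(f"{doublings:.2f} doublings -> ~{months // 12} years")  # ~5 years
```

So the plan implicitly commits Google to exponential build-out through roughly the end of the decade, which is exactly why the capital expenditure numbers in the next paragraph are so large.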
They're going to deploy new hardware, which will probably also replace at least some of the old hardware they have right now, because it's faster, better, and cheaper. Now, according to this article, the aggregated investment in capital expenditures by Google, Microsoft, Amazon, and Meta this year will exceed $380 billion. Add some of the smaller players to that and you're getting to more than half a trillion dollars in just a single year. Another person who made similar statements, that the rate of investment right now does not make any sense, is none other than Dario Amodei, the CEO of Anthropic. He was on stage at the DealBook Summit of the New York Times, and he made some interesting statements. One of them is that he believes that some of their competitors, which he did not name, but we can assume who that is, are, and I'm quoting, "pulling the risk dial too far," and he also used the term "YOLOing" billions of dollars. For those of you who don't know the term YOLO, it's "you only live once," basically: what the heck, let's try this out. You only live once, but they're betting hundreds of billions of dollars on that. Again, he did not name OpenAI specifically, but I think it was pretty obvious who he was referring to. He also made some very interesting comments about Anthropic's own growth, which has been staggering, right? They made a hundred million dollars in 2023, and they're on a trajectory to making between eight and ten billion this year, just two years later. This is insane growth, but he said, and I'm quoting, "It would be really dumb to assume this trajectory is guaranteed for the future," and he's emphasizing that he finds it very, very hard to project what their future looks like. He's saying that next year they could make anything between 20 billion and 50 billion, which is a pretty broad range if you're trying to plan how much income you're going to have and align your expenditures accordingly.
He also mentioned the same concept of chips becoming obsolete after a relatively short amount of time. He said that while the chips are still working, they're not gonna fail, they are becoming economically obsolete because newer, faster, cheaper versions arrive, and then it doesn't make any sense to use the old ones anymore, which means the billions you're investing depreciate very, very quickly. And he's saying that Anthropic and all the other hyperscalers have the same real dilemma: if you underinvest, you might miss this huge wave, and if you overinvest, you face a complete catastrophe of destroying both your company and all the other people who invested money in you along the way. Now, he's saying that from the way they are planning their future, the amount of money they're raising, and the way they're spending, they can survive, and I'm quoting, "almost all worlds," basically whatever economic future is around the corner. But he also added, and I'm quoting, "I can't speak for other companies," again implying that other big players may not be in the same safety zone that Anthropic is in right now. We're going to continue to talk about the bubble in a second, but since we're already talking about this interview with Dario Amodei, he also reiterated things he has said in the past about his fears for the economy because of the loss of jobs in the very near future. He said that he believes the free market alone cannot absorb the, and I'm quoting, "massive labor market disruption" that AI is going to unleash. He repeated something he has said before, that he believes AI could wipe out 50% of entry-level white collar jobs within just a few years, and he's suggesting that the government will have to step in to stabilize the economy in a situation like that. This is a very clear warning from one of the people who is most knowledgeable about what's coming. We're gonna talk about it in a minute.
They just released Claude Opus 4.5, which is an incredible model already, but that is not what they're testing, right? That's what they're releasing, meaning he knows what's coming in six months and twelve months, and when he's saying it will be able to replace 50% of entry-level jobs, he knows what he's talking about. He's not assuming any of that. Going back to the AI bubble discussion, an article on the Futurism website was talking about the fact that corporate spending and the overall crazy growth of AI usage we've seen in the corporate world in the past few years is slowing down, and in some cases even reversing. The most critical stat reported in this article comes from none other than the US Census Bureau, so this is not a private entity that has anything to gain from this. These are actual statistics that they collected, which found that the percentage of American businesses using AI to produce goods and services, so not just general use, but AI used to generate work, fell from 12% to 11% between the two most recent surveys. Now, this is not a big decline, but it is a decline compared to the very significant growth in the previous rounds of the same survey. Maybe the most staggering stat in that report said that 81.4% of employees in mid-size companies (100 to 250 employees) reported they did not use AI in the last two weeks. 81.4% of mid-size company employees in the US reported not using AI in the past two weeks. That is an increase from the 74.1% reported in March. Even major corporations with more than 250 employees had an increase in the number of employees who reported that they haven't used AI compared to the previous survey. The numbers here are lower, but they're still very high: 68.6% reported not using AI in two weeks, compared to 62% who reported that earlier this year.
Another data point in that article came from an economist at Stanford who is tracking generative AI usage, and he found the same results: usage fell from 46% to 37% in just a few months, basically in a single quarter. Now, as somebody who is meeting and working with companies large and small every single week, I can tell you that I believe these numbers. I don't necessarily believe the decline, but I definitely see the numbers as realistic. I meet with solopreneurs and small business owners when I teach my courses, plus most of the work that I do is private workshops for large companies, most of them in the hundreds of millions or in the billions, and I can tell you that the level of usage and understanding of how to implement AI effectively as a company initiative, not specific individuals doing cool stuff, is very low across the board at all these different companies. Which, going back to the bubble discussion, has two sides to the story. One aspect is that there's a lot lower usage than these companies are investing for. On the other hand, very few people are currently using AI, which means there's huge room for growth as far as AI adoption and implementation. Before I tell you what I think personally, I will add one more thing. Michael Burry, the guy from the movie The Big Short, the guy who predicted the crash of the subprime loans and the whole housing market collapse, has now made a very strong statement. We talked a few weeks ago about the fact that he is making that bet and putting his money where his mouth is, saying this is gonna be a big crash, but he has now explicitly stated that the stock market could face a crash worse than the 2000 dot-com bubble. And he is tying it to two separate things. One, he's citing that he believes valuations are currently dangerously inflated, but he's also pointing to a structural investment dynamic that is now very, very different.
He's saying that currently over half of US equity assets come from ETFs and are not held and managed by individual investors, and because of that concentration, he believes that small changes in the direction of the AI market could trigger a very significant shock, since individuals are not reacting to immediate changes in the market. So that's one aspect of what he's saying. The other thing he's saying is that NVIDIA's price-to-book ratio now surpasses any stock from the dot-com bubble period, and that the Nasdaq 100's price-to-sales multiples are climbing back towards their 1999 to 2000 peaks, just before the market collapsed. Now add to that his specific warning, and I'm quoting: "Many tech firms are stretching depreciation on AI hardware. This reduces expenses on paper and inflates reported profits." And he's warning that when the real costs hit, and as everybody else we mentioned has stated, the actual depreciation cycle is relatively short, the profits will not be sustainable anymore and earnings could plummet, because these costs are coming way faster than they're showing up in the books right now. So overall, he's making some very strong points. Now, connect the dots to everything we talked about and what I personally think about this. As I mentioned, I work with large and small companies, and I see the lack of adoption right now across the board. Very few people and very few organizations actually know how to implement AI effectively, and the level of usage right now is ridiculously low compared to what it could be if people just knew how to use today's technology. Forget about what's coming in the next few months or years: even if we stopped development right now, a huge amount of scaling would happen just because companies will learn, whether from people like me, on their own, or in other ways, how to implement AI effectively.
They will use AI not twice as much as they do right now, but a hundred x what they're using right now, which means the demand is not even close to what it is going to be, and AI is going to be embedded into everything, whether we like it or not. So from a demand perspective, it is definitely there. The thing that is not clear, and here I obviously cannot argue with Michael Burry, because he knows about this way, way more than I ever will, is: does the current level of investment, expenditure, and commitment align with that growth in demand? And to be fair, I don't know. The one thing that is clear to me is that not all these investments can be successful. The amount of investment right now is insane, it is across the board, and not all these companies will survive. Meaning there are going to be companies that right now are darlings of the industry that will either take a serious hit or will not exist a few years from now, because there will be a correction. Is that a doom scenario for everything AI? A hundred percent not. Again, I do not see how we can put this back in the bottle, or in the box, or whatever you wanna call it. AI is moving forward. It is going to be embedded into everything we do, from our personal lives, to businesses, to relationships, to everything you can imagine, and there's no going back. So the demand will keep on growing, but there will be corrections and there will be significant hits for specific companies. This is my personal opinion. Now switching gears to our next topic, which is a big one. Sam Altman basically hit the panic button inside of OpenAI and announced a code red as a result of the release of Gemini 3, the amount of great press it has been getting, and the impressive growth it has been driving for Google. This is based on an internal memo quoted in The Information, which is a publication that has been very good at getting internal information and sharing it accurately.
Altman issued a company-wide code red, freezing new products and slowing down or putting on pause other products to focus on ChatGPT. In his statement, Altman said, and I'm quoting, "We must concentrate all capabilities to improve the overall everyday experience of ChatGPT," and then he explained that these improvements need to be in the areas of speed, personalization, and the range of questions it will answer versus refuse to answer. To put things in perspective, OpenAI is in the 800 million weekly active users range, and they've been at that range for the past few months, while Gemini is reportedly just hitting 650 million monthly active users as of October 2025, up from 450 million just a couple of months ago. So huge growth for Gemini while ChatGPT's growth has more or less plateaued. So which projects are gonna be put on hold? Pulse, a feature they announced a few months ago that gives people a push notification every morning on topics they might be interested in; an ad platform that they were planning to roll out in order to support the revenue they need; and any initiative around agentic AI, specifically in healthcare and shopping. Now, as part of this focus, they are already planning to release a new reasoning model as early as this coming week or the weeks after. This new model was internally labeled Garlic, and once it gets released, it's probably gonna be GPT-5.2 or 5.5 or whatever number they're planning to call it. They're actually planning to release two different variations. One is an immediate one, which is not the actual Garlic model; it is just an improved thinking model that is presumably better than what they have right now and potentially aligned with Gemini 3 and Opus 4.5. But they're planning to release the full Garlic model in the first quarter of 2026.
Now, the interesting thing about this model named Garlic was shared by Mark Chen, their chief research officer, who said that it is a milestone in the way they pre-train models. If you remember, when they were about to launch GPT-5, they delayed it several times, and there were many rumors that many of their pre-training runs failed. Every one of these training runs costs hundreds of millions of dollars and can take months, so when one fails, it is a very big deal. And if they have some kind of a breakthrough, similar to what Google said they achieved when training Gemini 3, it will allow them not only to deliver the immediate model, but potentially to deliver future models faster, better, and cheaper. Or as Mark Chen said, it will allow them to, and I'm quoting, "infuse a smaller model with the same amount of knowledge," something that previously required significantly more infrastructure and architecture and can now be done in much faster, cheaper, and better ways. Now, if you remember, last week I told you there were rumors about an internal model named Shallotpeat. Apparently Shallotpeat is this new way to do pre-training, and the Garlic model uses that concept to deliver a full new model that, as I mentioned, will be released sometime in Q1 of 2026. But we are presumably getting a brand new thinking model sometime in the next few days or couple of weeks. And obviously, as with all these labs, the Garlic model we're getting in a month or two is not the final thing. Chen stated that OpenAI, and I'm quoting, "has already moved on to developing an even bigger and better model" thanks to the lessons it learned with Garlic. So, quick summary: we should expect a new thinking model within a few days or weeks, and we should expect Garlic, whatever it ends up being labeled, to be released in Q1 of 2026.
And there's already a bigger, better model in the making, and everybody at OpenAI is focusing on that right now, as far as getting back on top with ChatGPT. The interesting thing for me in all of this is that it was exactly the other way around when ChatGPT came out. If you remember, Google announced a code red after the release of ChatGPT, and everybody was all hands on deck trying to get their models up to par. If you remember, Google in the beginning failed miserably and delivered models that were unacceptable. And I said back then that there is no way Google is not going to win this challenge, because they have everything they need in order to win it. They have potentially the brightest minds, one of the most successful AI labs with the longest history, the most data because of Google indexing the internet and because of YouTube, the biggest distribution, really deep pockets, their own hardware, literally everything they need in order to win this race. And not surprisingly, they are now ahead of everybody else when it comes to both the underlying model and its integration into everything Google. I'm really excited about some of the latest updates they made across the entire Google platform. I will probably record a Tuesday episode about this in the next couple of weeks, talking about all the little tweaks they added, which are absolutely amazing as far as providing value on the day to day. Now, speaking about OpenAI and the value it provides on the day to day: a week ago OpenAI updated voice mode in a way that at face value looks completely unimportant but is actually a huge improvement. Voice mode inside of ChatGPT, which I use more or less every single day, is now embedded into the regular chat user interface. Previously, every time you clicked on voice mode, it would stop the chat and bring up this bubble that would be blinking on the screen, and you would be talking to the bubble.
Then, when you were done and you stopped it, it would take a few seconds and then write out the conversation it had with you. And that wouldn't always work. It actually happened to me on the app yesterday. I had a strategic conversation with it about my software company that does agentic-based invoice reconciliation at scale, basically automating the entire process of vouching invoices at whatever level you want by using different agents in different steps of the process. So I'm doing a lot of strategy with ChatGPT about how to grow the business, how to deliver more value, and things like that. And the last step of that conversation was not saved, which means I will have to do it again. This new update, when you're running it on the computer, and I'm sure it will come to the app very shortly, transcribes what you're saying and what the AI is saying more or less in real time, and it can do more than that. In one of the examples they showed, the user was asking about the weather, and it showed graphics of the weather forecast for the next few days on screen while the user was having the conversation with ChatGPT. The benefit and the value of using voice mode just increased dramatically, because you can see the results of what it's doing as it is doing it, versus having to wait until the end of the conversation and then hoping it actually captured all of it. A huge change, which does not surprise me, because if you think about where they're going, specifically with their hardware initiative with Jony Ive, this is the direction, right? They're going into, I assume, an interface with no keyboard, potentially with no screen, that will still allow you to engage with AI and the world around you. And this requires exactly this kind of integration, where you can speak with it and it speaks back, without stopping any other process that is happening.
And so, personally, as a heavy user of voice mode, I am very happy about this change. And again, it makes sense because of the direction they are going. So, maybe a bubble, maybe not a bubble: OpenAI is in code red, and there is also similar news from their partner, Microsoft. Microsoft, which at the beginning of 2025 declared it the year of AI agents, is hitting a serious speed bump when it comes to actually getting clients to adopt the technology. Based on another report in The Information, the tech giant is quietly lowering its sales and growth targets more or less across the board for all of its products that have to do with AI implementation. Microsoft's fiscal year ends in June, and 80% of their sales teams did not hit their targets by the end of the year, and hence they have lowered their targets for this fiscal year. These projections were not cut by a little: the initial sales target of 50% growth year over year has been cut to roughly 25% over last year. Now, that is still a very significant increase. 25% year-over-year growth at Microsoft's scale is significant, but the projection was 50%, and that projection is what drives their investments and expenses. So you cannot simply say, oh, 25% is great. It is not great when you're already spending money a year in advance to build the infrastructure for 50% growth. The article in The Information even gave a few examples. One of them is the private equity firm Carlyle, which dramatically reduced its spending on Microsoft Copilot this fall because they were struggling to, and I'm quoting, "reliably tap data from other applications." So basically, connecting Copilot to things that are not in the Microsoft environment did not work well, and hence they were cutting back on their spending.
Based on this article, OpenAI has gone through a similar process: they have reportedly lowered their five-year revenue projections for AI agents specifically, reduced by $26 billion. Now, there are obviously several causes for this slowdown, but one of the main ones this article mentions is that AI is still a statistical model that does not always do exactly what you expect, and there are areas such as finance and cybersecurity in which, and I'm quoting, "small mistakes can be costly." Basically, you cannot afford for it to be wrong 5% of the time or 2% of the time. It's just not acceptable. It has to be accurate every single time. Since we're already talking about Microsoft, another interesting point shared in another article in The Information is that Microsoft is planning to replace the human-seat licenses it may lose to agent takeover with paid licenses for the agents themselves. Rajesh Jha, who is a Microsoft executive vice president, basically thinks that the reduction in human headcount for Office 365 licenses that will come from automation replacing those humans will be more than offset by licenses sold and used for agents doing the same work. The way he's looking at it, these agents, and I'm quoting, "are going to have an identity. They're going to be in an address book. They're going to have a mailbox. They're going to need a computer to do its computation in a secure way. To me, all of those embodied agents are seat opportunities." When it comes to addressing the fears of revenue loss from human job losses, Jha stated at a UBS conference, and I'm quoting, "I'm not seeing AI as driving down seats. If anything, I think AI is going to be an opportunity for us to drive seat growth." My personal opinion on this is that it's total BS, and the reason I'm saying that is that I'm seeing what I am implementing with many of my clients.
Many of them are in the Microsoft world, and a lot of the stuff that AI can do completely reduces the number of seats, and it does not require, at least right now, buying any additional licenses, because one human user with the right AI tools can do the work of three different users. So while his statement makes sense conceptually, let's start charging for agent licenses, I don't exactly see how that is going to happen, especially since Microsoft doesn't live in a vacuum and there are other companies providing similar solutions that may not require buying additional Microsoft licenses. If Microsoft were the only option, then maybe they could twist their clients' arms to actually do that, but they're not the only game in town. And so I do not agree with this projection, from a personal perspective. Now, speaking of competition offering similar technology: AWS just had their big event, re:Invent 2025, and it was all about agents and new models, as you would expect from a company the size of AWS when it's doing a big event. They introduced a new class of AI agents they call Frontier Agents, which are designed to act as autonomous extensions of a team. Unlike previous bots, these agents can handle complex multi-step tasks for hours and sometimes even days. In this initial release there are three agents included, but they're planning to develop more. The existing agents are Kiro, a virtual developer that can navigate code repositories, fix bugs, and create code at a very large scale, and an AWS security agent that acts as an always-on consultant, reviewing new code as it is created and deployed and looking for potential security risks and loopholes in the code to fix. There was very big applause at the event when this was announced, because as of right now, humans can create a lot more code because of the usage of agents.
So if you're creating a lot more code, you're potentially creating a lot more risks, and having an agent that continuously monitors these risks is obviously very important, definitely at the enterprise level. And then the third one is a DevOps agent that serves as an on-call operational team member to respond to outages in your DevOps infrastructure. Again, a very important solution, especially given the recent outages that happened across the board on several different platforms; this is a very important tool that will be available to anybody who decides to use it. They also announced their third generation of Trainium chips, a completely new generation that can run in what they call UltraServers at very high capacity. They offer almost four and a half times more compute performance and four times greater energy efficiency compared to Trainium 2, the previous version. They provide 4x faster response times and 3x higher throughput on a single chip compared to the previous model. What does that mean? It means you can do everything you want to do at a lower price and at a quarter of the time. They also introduced a new family of the Nova models, which are their homegrown AI models. These models are good, but they are not competing, as of right now, with the top frontier models. But they came up with a very interesting concept that they call Nova Forge. Nova Forge is a service that allows companies to blend their own proprietary data with Amazon-curated data sets to train custom models based on the Nova models. That basically means you can embed your company's know-how, data, and so on into the weights of the actual model itself during a training run. And this, I assume, will be something that many, many companies will be highly interested in using. And they also made another really interesting announcement, which they call AI Factories.
AI Factories allow enterprises and governments, basically any organization that's interested, to deploy dedicated AWS AI infrastructure running Nvidia GPUs and/or Trainium chips directly in their own data centers. So this is a major shift from their previous strategy. Previously, if you wanted to use AWS infrastructure, you had to quote-unquote rent it from AWS, right? It would run on AWS premises, but now they're allowing you to take the technology they have developed, which can run either their own chips or their competitor's chips from Nvidia, and run it in your own company's data center while leveraging all the technology, infrastructure, and capabilities they have developed. Why do I think this is happening? I believe this is happening because they understand that with the current level of demand, with the amount of flexibility that companies will require, and with big fears around data and security when running AI, they may not be able to twist everybody's arm to run on the AWS ecosystem, and hence they are seeing this as an opportunity to still be competitive by providing what they have developed to other companies to run in their own facilities as well. I believe this is something we'll see from all the major players, and we will see a lot of frenemy relationships where competitors on paper will collaborate to deliver more flexible solutions, because this is what the market demands. Now, as part of all these updates, they also added some new capabilities to Bedrock AgentCore. This includes policy controls for setting boundaries for the different agents and everything that's happening on Bedrock, as well as episodic memory, so agents can learn from past experience and get better over time. And to sum it all up, I will quote Matt Garman, the CEO of AWS, who said the next 80 to 90% of enterprise AI value will come from agents.
This shift is going to have as much impact on our business as the internet or the cloud itself. That is a very big statement when it comes from the company that more or less invented cloud computing in the version we know today, and that is still the leader in that space. Now, before we continue with the big players and their announcements, since we're already talking about Amazon: in an interesting uprising, and I can't use a different word for it, over a thousand Amazon employees, together with over 3,600 external supporters from companies such as Microsoft, Google, Meta, and SpaceX, have signed a letter to the CEO warning that the company's frantic race to dominate AI is causing significant damage to the workforce and the planet. To sum up the letter, and we'll put a link to it in the show notes, it is basically saying that the at-all-costs, warp-speed approach to AI development will do incredible damage to democracy, jobs, and the planet. One of the examples they mention is that while Amazon has committed to getting to net zero by 2040, its annual emissions have grown roughly 35% since 2019, a lot of it due to the energy demand that comes with building new data centers. They also note in the letter that while Amazon has cut 14,000 managerial roles to quote-unquote get lean, it is at the same time planning to spend $150 billion on AI data centers, which underscores the job risk that comes with this. And they're also talking about an internal culture that is basically a sink-or-swim culture, where you are forced to either work with AI or lose your job, and this has become the norm inside of Amazon. And they're making three different demands in this letter. One is no AI with dirty energy, basically halting the use of fossil fuels to power data centers. The second one is no AI without employee voices, so creating ethical working groups with non-managerial staff to oversee AI development and deployment inside the company. And the third is no AI for violence.
Basically, banning the usage of Amazon AI for surveillance or mass deportation efforts. Now, while this letter is really interesting, there have been many similar letters in the last two years, and none of them have done anything to slow this process down. Unless there's going to be a massive strike across multiple of these companies with employees pushing back, or the government slows this down with regulation, which is definitely not happening, I don't see any of these letters as meaningful. It is important that people are adding their voices; we are in a democracy, and I, by the way, agree with everything they're saying. I just don't think it is going to make any difference to the actual speed at which these companies are going, despite the fact that many of them are raising the flag saying, we're running too fast, please slow us down. And to sum up this whole section of all the new announcements, all the new tools, and the emergencies and things that are happening at the biggest companies in the industry, I will share with you some of the latest findings from Similarweb's preliminary data for November of 2025. Google's Gemini platform has seen website visits rising 14% in just one month, to 1.35 billion visits, which is a huge jump, especially at that scale. ChatGPT, on the other hand, has seen traffic of 5.84 billion visits, so more than four times the traffic of Gemini. However, that is a decline from October, which had 6 billion visits, so ChatGPT's visits have shrunk by about 3%, while Google Gemini has seen 14.3% growth. Another big growth has been experienced by Grok, on much smaller numbers, but it has also grown by 14.7%, to 234 million visits. And so the trend is clear: as of right now, more people are going to Gemini and using it more and more.
While ChatGPT's growth has plateaued, or even reversed somewhat, at least in the recent few weeks, hence the code red inside of OpenAI and the focus on delivering a better ChatGPT. Now, overall, the summary of this makes a lot of sense: the sector as a whole is still growing like crazy, with gen AI app downloads surging 319%, so more than 3x year over year, and that's obviously in addition to the overall growth in web traffic. And since we just finished speaking about the very large announcements from the very large players, let's switch to talking about new releases this week, and there have been some significant releases. The first one is DeepSeek, which just released DeepSeek V3.2. If you remember, at the beginning of the year we had the DeepSeek moment, where a company that nobody had heard of out of China came and released an open-source model that was as good as the top models in the US, and now they have done it again. So they released two different models: a regular V3.2, and then V3.2 Speciale, or, I don't know exactly how to pronounce it, which has reportedly achieved gold-medal performance in both the 2025 International Mathematical Olympiad and the International Olympiad in Informatics, placing it on par with Google's Gemini 3 Pro and ahead of GPT-5. And they have done it at a cost of $0.03 per million input tokens, which is about 10x cheaper than the Western-based models. Now, the Speciale model is a reasoning-first model that natively integrates thinking capabilities, including into tool usage, which basically means this model was built specifically to be really good as the underlying infrastructure for agents, which is aligned with where everybody is going.
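To put that pricing claim in perspective, here's a quick back-of-the-envelope sketch. Only the $0.03-per-million-input-tokens figure comes from the report above; the roughly 10x frontier price and the 50-million-token daily agent workload are hypothetical numbers I'm using purely for illustration.

```python
# Hypothetical cost comparison for an agent workload, based on the
# cited DeepSeek price of ~$0.03 per million input tokens and an
# assumed ~10x Western frontier price of ~$0.30 per million.

def input_cost_usd(tokens: int, price_per_million_usd: float) -> float:
    """Cost of processing `tokens` input tokens at a given per-million price."""
    return tokens / 1_000_000 * price_per_million_usd

# An illustrative agent that processes 50 million input tokens per day:
daily_tokens = 50_000_000

deepseek_cost = input_cost_usd(daily_tokens, 0.03)  # ~ $1.50/day
frontier_cost = input_cost_usd(daily_tokens, 0.30)  # ~ $15.00/day

print(f"DeepSeek: ${deepseek_cost:.2f}/day, frontier: ${frontier_cost:.2f}/day, "
      f"ratio: {frontier_cost / deepseek_cost:.0f}x")
```

At agent scale, where a single long-running task can burn through tens of millions of input tokens, that 10x gap compounds quickly, which is exactly why pricing matters so much in the agentic era.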
Now, to make it even more interesting, the model is built on a new concept that DeepSeek is calling DeepSeek Sparse Attention, with the acronym DSA, a mechanism which, they're claiming, is an architecture, and I'm quoting, that substantially reduces computational complexity while preserving model performance, effectively halving the cost of processing long-context tasks of up to 128,000 tokens compared to traditional models. So what does all of this mean? It means that DeepSeek is positioned for exponential growth in the agentic era: A, because it is an open-source model, and B, because they achieved the holy grail. It is a better model at lower cost that is built specifically for tool usage, which is what everybody is looking for right now. From a global-competition perspective, that places it as a direct threat to US models, first and foremost to Meta. If you remember, Meta was trying to position themselves in 2024 and into 2025 as having the world's top open-source model, and really they have not delivered anything meaningful for a very, very long time. There was huge disappointment with the release of Llama 4, and then the establishment of the superintelligence team sent the whole AI team into complete turmoil, and they haven't delivered anything for a while now other than negative news as far as HR and changes in the organization. And so this is just another nail, I don't know if in the coffin, for Meta, because we can't ignore the fact that they're a huge company with huge resources and huge data and distribution, but right now they're not in good shape. Another really interesting model release that happened this week is Runway 4.5, which is their latest image and video generation tool, and the demos are absolutely mind-blowing. They have integrated several capabilities that used to be distributed into one model that can now create and switch characters, keep consistency, and extend videos.
Basically, all the wet dreams of everybody who creates videos with AI are available now in this one model. Just a quick reminder: Runway has been delivering AI video capabilities since before Sora, before Veo. They were basically the first real model out there that could generate something worth looking at, and 4.5 puts them back ahead of the game. Go watch the examples; they're absolutely incredible. And if you are a video creator, you just got another really amazing goodie that can do some very sophisticated stuff almost seamlessly. As if we did not have enough really good models on our plate to choose from, a new company that is a spinoff out of MIT research, called OpenAGI, just came out of stealth with a brand new model called Lux. And Lux is built as an agentic platform rather than a large language model, and they built it this way from the ground up. It has achieved a staggering 83.6% success rate on the Online-Mind2Web benchmark. Now, what is Online-Mind2Web? It forces agents to interact with 136 live, changing websites to perform over 300 diverse tasks, from booking flights to cross-referencing e-commerce data, and it basically evaluates real-world resilience on actual websites rather than textbook problem solving. On this benchmark it achieves an 83.6% success rate, compared to Gemini's CUA, a computer-use agent, at 69%, OpenAI Operator at 61%, and Anthropic's Claude at 56%. So it has blown the leading models in the world out of the water in agentic usage of actual tools. It is also extremely efficient at doing this, and they're claiming it is going to cost one-tenth of what it costs to run these major competitors.
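One hedged way to combine those benchmark numbers with the cost claim is "cost per successfully completed task." This is my own illustrative framing, not something from the Lux announcement: the $1.00-per-attempt baseline is made up, and only the success rates (83.6%, 69%, 61%) and the one-tenth cost claim come from the reporting.

```python
# Illustrative "cost per successful task" comparison. It assumes failed
# attempts are simply retried, so the expected spend is the cost per
# attempt divided by the success rate. The $1.00 baseline is hypothetical.

def cost_per_success(cost_per_attempt_usd: float, success_rate: float) -> float:
    """Expected spend per successfully completed task."""
    return cost_per_attempt_usd / success_rate

baseline = 1.00                                      # hypothetical $ per attempt
lux = cost_per_success(baseline * 0.10, 0.836)       # claimed 1/10th cost, 83.6%
gemini_cua = cost_per_success(baseline, 0.69)        # Gemini CUA, 69%
operator = cost_per_success(baseline, 0.61)          # OpenAI Operator, 61%

print(f"Lux ~${lux:.2f}, Gemini CUA ~${gemini_cua:.2f}, Operator ~${operator:.2f}")
```

The retry model is a simplification (it treats attempts as independent), but it shows why a cheaper agent with a higher success rate wins twice: it costs less per attempt and needs fewer attempts.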
Now, to make it even more interesting, while most of these competitors, like ChatGPT's Atlas, run in a browser and can only operate in the browser, the OpenAGI model can also run applications on your desktop, meaning it can operate anything on your computer and engage with your entire tech stack. It has three different modes: Actor mode, which is optimized for speed; Thinker mode, which is designed to handle vague multi-step goals; and Tasker mode, which offers maximum control, accepting Python lists of steps and iterating until it successfully completes tasks. Now, they're claiming that the reason they are so good at this is that they're not training it the way large language models are trained, but are using a process they call agentic active pre-training. Per the explanation that OpenAGI provided, and I'm quoting: most LLMs are trained to passively absorb knowledge, like learning to drive by memorizing thousands of manuals without ever touching the steering wheel. In contrast, our agentic active pre-training allows the model to learn by doing. Now, if you remember, earlier in this episode and in previous episodes we shared that some of the leading brains in this crazy race have said multiple times that LLMs alone will not get us to AGI and that some new breakthroughs will be required in order to get there. This might be one of those breakthroughs: training agents in a more flexible, agentic environment rather than just on static data, which allows them to perform, as we've seen in this benchmark, significantly better than large language models on such tasks. Another huge release this week was Anthropic's Opus 4.5, which is receiving rave reviews on both Reddit and X, especially from developers, who are saying that the model just writes code at a completely different level than any of its competitors.
It has taken the top ranking for web development on LMArena from Gemini 3 by a big spread, 1511 points versus 1476, with a huge number of votes, putting it at a clear number one when it comes to writing code and pushing Claude Sonnet 4.5, which was the darling of the code writers, all the way down to number five. So right now the ranking is Claude Opus 4.5 thinking at number one, Gemini 3 Pro at number two, Claude Opus 4.5 non-thinking at number three, GPT-5 Medium at number four, and then Claude Sonnet at number five. But again, the reviews are not just about the numbers. People are saying this is just the best code they've ever seen, and it's the first time it really feels like a partner that can code with you, and sometimes for you, in an effective way. So kudos to Anthropic for the release of this new model. From my own personal experience, I find it really good at writing in general. I always liked Claude, but this model goes even beyond that. It's just fantastic at understanding what I want and writing exactly the kind of content I'm expecting. I find it to be a much better strategy companion than Sonnet 4.5, potentially even better than GPT-5 at this point. However, I'm still favoring GPT-5 when it comes to brainstorming and strategic thinking, just because of the voice mode inside of ChatGPT, which I believe gives me, from a personal perspective, a much better way to develop ideas, think about new strategy, and evaluate things that I'm considering, because I find it a lot more intuitive to just speak to it versus type, especially with the new capabilities of the voice that integrate it into the chat. I am going to stick to doing this in 5.1, but when I tried it with Opus 4.5, I was seriously impressed.
And while this model was being released, a researcher was able to find the quote-unquote soul document that is embedded into the model's training data and gives it its, well, soul. Anthropic researcher Amanda Askell has confirmed the authenticity of this text. The text includes several very interesting concepts. The first one is sharing with the model that Anthropic occupies a particular position in the landscape: a company that genuinely believes it might be building one of the most transformative and potentially dangerous technologies in human history, yet presses forward anyway. This isn't cognitive dissonance, but rather a calculated bet: if powerful AI is coming regardless, Anthropic believes it's better to have safety-focused labs at the frontier than to cede that ground to developers less focused on safety. It also defines the personality of Claude as a brilliant friend who treats the user like an adult, offering frank, high-level advice like a doctor or a lawyer would, rather than watering down answers out of fear of liability. The guidelines in this document distinguish between hard-coded safety rules, such as never help somebody build weapons of mass destruction, and soft-coded traits, such as tone and helpfulness in specific scenarios. The soft-coded traits should adapt to the user's context, whereas the hard-coded ones should never be deviated from. So, a very interesting approach by Anthropic. I don't know if other models do the same thing; if they do, maybe it just wasn't found. The thing I thought about from a safety perspective: if one researcher can find the core instructions of a large language model, what does that mean for the safety and security of everything else that runs in this world, and the way the data is used, and so on? I'm not sure I have the answer to that, and it really scares me to think what the security level of these systems is if a single user can find the core instructions of one of these models.
A single user can find components of the core instructions of the model that defines itself as the safest model out there, from a company that prides itself on being the most safety-focused in the industry. We're gonna switch to some quicker topics. A lot of them are really interesting and I would have loved to have the time to dive into them, but at least I think you need to know about them. There have been a lot of partnerships and/or M&A happening this week as well. So OpenAI announced an investment in Thrive Holdings, which is a new investment vehicle launched by one of OpenAI's largest backers, Thrive Capital. Now, if that sounds like a circular investment to you, it sounds the same to me and to everybody else. So how does this work? OpenAI is going to take equity in Thrive Holdings, which is a private equity company owned by Thrive Capital, which invested a lot of money in OpenAI. The deal does not include any cash transactions. Instead, OpenAI is going to provide technology, staff, and services to Thrive Holdings' portfolio companies. So it is being presented by OpenAI and Thrive as a way for OpenAI to embed its technology in real-life environments and benefit the companies in the Thrive Capital portfolio, which means that if they are successful, OpenAI will make more money. But again, going back to fears of a bubble, this is a great example of a vicious circle of investment, where the money that OpenAI gets is invested into companies that Thrive Capital owns, which OpenAI can benefit from if they are successful. I do, by the way, agree that giving OpenAI the opportunity to directly implement its technology in companies, to prove it and help them be successful as a result, makes sense to me. Maybe they just shouldn't have picked a company that has invested in them in order to do this. In another deal, OpenAI has acquired a startup called Neptune for a valuation that is just under $400 million.
They will get all the technology and the employees and will embed them into OpenAI's capabilities. Neptune specializes in building tools for tracking and debugging machine learning experiments. OpenAI has already been using them in the past and is very happy with the tool, and now they're going to own the technology, and they're going to wind down the external services that have been provided to other companies such as Samsung, Roche, HP, and some other large companies. In a similar move, Anthropic has acquired Bun, a high-performance JavaScript runtime that can build, test, and package JavaScript, apparently much better than Node.js. In a very similar fashion, Anthropic has been using Bun for a while; they've been using it to dramatically improve the results of Claude Code, and they are now going to own it and embed these capabilities into Claude Code to make it even better. This comes, by the way, together with an amazing milestone for Claude Code of reaching $1 billion in annualized run rate in less than six months since its launch, and it's already being used by major companies; Netflix, Spotify, Salesforce, and KPMG are all seeing amazing results with Claude Code, and this is just gonna make it even better. Now, speaking of money and Anthropic: Anthropic is currently looking at a new round that is going to value it north of $300 billion, raising most likely over the $15 billion they already have committed, so this may grow beyond that and may grow beyond the $300 billion valuation. But they are also apparently looking into an IPO opportunity, potentially as early as 2026. They have hired one of the top law firms in the world for these kinds of things, Wilson Sonsini Goodrich & Rosati, the firm that took public small companies such as Google, LinkedIn, and Lyft, and they are in conversation with them on how to structure their IPO. They have downplayed the move.
Dario basically said that at their current scale and size, this is something they have to do, but it doesn't mean that they're actually going to do it, or do it in the near future. But the rumors are talking about a relatively near-future IPO, which makes perfect sense. Now, speaking of Anthropic and partnerships: in a really interesting strategic partnership, Anthropic has partnered with Snowflake, and they've announced a multi-year, $200 million partnership designed to bring agentic AI based on Anthropic's capabilities to over 12,000 global enterprises that use Snowflake as the database backend of their operations. So the idea is embedding Anthropic's Claude models directly into Snowflake's secure platform to solve one of the biggest problems in corporate AI right now, which is allowing models to access and reason over the proprietary data of the entire corporation without compromising security. By bringing the models into the already-secured environment, they're going to achieve that. I think that makes perfect sense for both companies, and I think we're going to be seeing similar approaches from other partnerships in the industry. A great example of that we talked about earlier in this episode, when it comes to Amazon AWS allowing you to run agents inside the AWS environment in order to gain similar benefits to what this partnership delivers. Another company that has dropped a nice chunk of change this week to get agent capabilities into its environment is ServiceNow. ServiceNow just spent just over a billion dollars to acquire Veza, a company that has developed technology for identity governance, and ServiceNow is planning to integrate those capabilities into its enterprise platform, creating an AI control tower that ensures the next generation of autonomous agents can be deployed securely and managed by human employees. So, very significant M&A activity this week. The numbers are absolutely staggering.
The fact that every one of these deals is in either the hundreds of millions or the billions of dollars, and that they combine some of the most advanced technologies with other really advanced technologies, shows you how active this market currently is, and I expect this to continue happening. Our next quick rapid-fire topic is going to be around politics and how it is now getting tangled with AI, or not just now, but how that entanglement is increasing dramatically. So a new report from the New York Times reveals that a network of super PACs is mobilizing to back candidates who favor stricter AI guardrails, pushing head-on to counterbalance the deregulation super PAC that we reported on a few weeks ago, which is pushed forward by Andreessen Horowitz, people from OpenAI, and some others who are trying to reduce regulation. This new super PAC is planning to initially raise $50 million; the other super PAC has a hundred million, but they're planning to grow from there, and their goal is to push safety first and to increase regulation in order to make AI development safer. To make this even more interesting, this super PAC is actually working across the aisle, so you have both Democrats and Republicans as part of this new super PAC, pushing it forward. It is obvious that AI will become a key topic in the 2026 midterm elections, and I think some of the politicians see it as an important point. Some of them just see it as a point they have to cover, whether they agree with it and know anything about it or not, but they have to learn the talking points in order to gain more votes, and we'll see more and more of that in the next few months as these politicians test different approaches to gain more votes and see which way they want to lean.
Right now it is very clear that both Republicans and Democrats have not decided which side they actually want to support, or need to support to get more votes, and I think over the next few months we'll start seeing more clarity and alignment from each of the parties on which way they're leaning, which right now is not totally clear. Still in politics, the battle between central federal laws and a state-specific patchwork is intensifying on both sides of the equation, and it's pulling the top players in the industry into the debate. Sundar Pichai, the CEO of Google, just spoke to Fox News on Sunday and issued a warning to US policymakers, saying that the rapidly expanding maze of state-level AI regulations is becoming a competitive liability in the tech race against China, which has been the standard statement from the people pushing for deregulation and for federal control. But as I mentioned, there are many other people who believe it should be the other way around. One of the things Sundar shared is that more than a thousand AI-related bills are currently moving through state legislatures across the US, which will obviously make it very, very hard for companies to comply with and deal with each and every one of them separately. And staying on government and politics, the White House has launched the Genesis Mission, or what they call the Manhattan Project of the AI age. The Genesis Mission is an executive order with the goal of being a national initiative that aims to integrate the world's largest collection of federal scientists and data sets with cutting-edge AI infrastructure to accelerate discoveries in critical fields like clean energy, biotechnology, and advanced manufacturing.
And they're planning to do this by mobilizing the Department of Energy's national laboratories, private-sector partners, and top universities, all in combination, to create an American science and security platform that will invest in developing all these fields using AI. This executive order has a very aggressive timeline, requiring the secretary to, and I'm quoting, demonstrate an initial operating capability of the platform within 270 days, basically three quarters of a year, which is extremely fast when you're talking about a national-level infrastructure project. What do I think about this? I think it's a great idea. I think the government pushing forward, not just AI for the sake of AI, but AI for the improvement of clean energy, biotechnology, and advancements in things that actually matter day to day, is a great initiative. It will be very interesting to see how they combine government agencies with the private sector's knowledge base, and I am hoping we will see very positive results that will have real impact on our day-to-day future lives. So what didn't we talk about that is available only in the newsletter? Well, Microsoft is announcing price hikes and major updates to their AI platform in 2026. There are several shakeups in AI leadership at Apple. There is a new concept from Harvard Business Review that they call the CDAIO, basically Chief Data, Analytics, and AI Officer, a new position they think every large enterprise should have. There is a safety report claiming that more or less all the labs fall short when it comes to their safety index. There are warnings by Demis Hassabis of AGI arriving within the next five to ten years, with a 50% chance it's coming by 2030, and many other really important and interesting articles. So if you wanna learn more, go and check out our newsletter; you can just browse through it quickly or dive deeper if you are interested.
We will be back on Tuesday with a fascinating episode that is going to show you and teach you how to use Claude Projects to write amazing content for any purpose, whether for internal professional needs like sales content or emails, or for creating marketing content across the board, all with an amazing framework. That's coming this Tuesday. And final notes: if you have not yet done so, I would appreciate it if you click subscribe to this podcast so you do not miss any episode that we drop. I am doing everything I can to get you the best information possible in the most effective way, and if you subscribe, you'll be able to get all of that. And while you're at it, and you are inside your podcast player, please click the share button and share this podcast with a few other people. And if you are on Apple or Spotify, I would really appreciate it if you write a short review of this podcast and give us a five-star rating, or whatever you think I deserve. It helps us get to more people, and it helps you help other people be more aware of what's going on in the AI world. That's it for today. Keep on exploring AI, keep sharing what you learn with the world, and have an amazing rest of your weekend.