Leveraging AI

241 | Who Rules the (World) Models? 🤔 LeCun’s new lab, Fei-Fei’s Marble, GPT-5.1 vs bargain Chinese giants, Gemini 3 murmurs and all the vital AI news for the week ending on November 14, 2025

• Isar Meitis • Season 1 • Episode 241

Are current AI models smart enough to rule the world — or just house cats with fancy vocabulary?

This week, a tectonic shift is happening in AI: Meta's chief scientist Yann LeCun quits to chase world models, Fei-Fei Li launches Marble, a spatial intelligence engine, and DeepMind drops SIMA 2, a self-taught gamer bot that might be a blueprint for AGI.

Meanwhile, OpenAI releases GPT-5.1 — and China’s Kimi K2 and Ernie 5.0 roll out shockingly powerful, ultra-low-cost models. The AI race isn’t just about intelligence anymore — it’s about who can afford to scale.

If you lead a business, this episode explains why spatial intelligence, not language, may soon be your competitive edge.


The next wave of AI isn’t just about better answers, it’s about deeper understanding, real-world interaction, and models that scale affordably. If you’re not watching spatial intelligence, you’re already behind.

About Leveraging AI

If you’ve enjoyed or benefited from some of the insights of this episode, leave us a five-star review on your favorite podcast platform, and let us know what you learned, found helpful, or liked most about this show!

Isar Meitis:

Hello and welcome to a weekend news episode of the Leveraging AI podcast, the podcast that shares practical, ethical ways to leverage AI to improve efficiency, grow your business, and advance your career. This is Isar Meitis, your host, and today is going to be a short news episode because I just came back from a trip to Chicago, where I delivered a keynote and a workshop, and I'm leaving today for Europe to do two different AI workshops. So I have a limited amount of time, and I actually considered not doing an episode today, but there are two really big things to talk about. So we will do a shorter episode in which we're going to focus on these two things. One is going to be world models, and the other is going to be models who are ruling the world, and you'll understand in a minute as we get started. There is a lot more news from this week, a lot of really interesting news, and you can find all of it in our newsletter, but we will focus on these two topics. So let's get started. World models are something we have mentioned several times previously on the show, but they were never a big deal and definitely never the main topic of an episode. They are today, because three different things happened in world models this week. The first one we're going to talk about is the departure of Yann LeCun from Meta. We have talked about Yann LeCun many times before on the show. He is the chief scientist at Meta, a Turing Award winner, an NYU professor, and one of the pioneers of the current AI era. He has been the chief scientist at Meta, back then Facebook, since 2013. So he started AI research way before the whole ChatGPT craze, and he built the Facebook AI Research lab, also known as FAIR. Before that, he pioneered convolutional neural networks with LeNet, an early network for visual recognition that could read handwritten digits. That line of work culminated in AlexNet, developed by students of his fellow Turing laureate Geoffrey Hinton, which beat the previous state of the art on the legendary ImageNet benchmark by roughly 10 percentage points, a margin no model had achieved before.
So he was researching and developing the visual aspects of AI way before the current era. He won the Turing Award together with Geoffrey Hinton and Yoshua Bengio, the three of them also known as the godfathers of AI. So he has been around the AI space longer than most scientists in it today. And he just announced that he is leaving Meta to build his own company that will focus on world models. A little bit of background: LeCun has been very loud since the launch of ChatGPT, stating that large language models will not lead to AGI. He has always suggested that the only way to get to AGI is to develop models that understand the world as it is and learn like babies do, from the environment, and not just through language, which is a very narrow way to view the world. He is known for a tweet that says: it seems to me that before urgently figuring out how to control AI systems much smarter than us, we need to have the beginning of a hint of a design for a system smarter than a house cat. Basically comparing large language models to a house cat in their level of intelligence. Now, there are multiple reasons why Yann would leave Meta. The biggest one is obviously the turmoil happening at Meta right now. He was the head of the AI show, and after the not-so-successful release of Llama 4 he was, and I'll be very gentle, sidelined by Mark Zuckerberg, who spent $14.3 billion on Scale AI in order to poach Alexandr Wang, their CEO, to become the new head of Meta's Superintelligence Lab, which has also poached multiple scientists from the top competitors, spending hundreds of millions and sometimes billions to get this kind of talent. Since then, there has been complete chaos over there, and it was very obvious that Yann LeCun was no longer at the center of things and that other people would be.
So from a political perspective, it was a good time for him to leave, but I am pretty sure it is also going to make him extremely wealthy, because I'm pretty certain he'll be able to raise a few billion dollars, at valuations of even more than that, to build his world-models vision, because there are more and more voices saying that this is the real way to move to an AGI future. Again, he has definitely been one of the key people driving this forward. Now, is that tied to his roots in the visual aspects of AI? Potentially, because there are others on a similar path who think like him. Since there's not much to report right now on exactly what Yann LeCun will be doing, I'll switch to another person with a similar background who is making similar waves, and that is Fei-Fei Li. So who is Fei-Fei Li? She is another one of the early scientists of the modern era of AI and one of its leading voices. She has been a Stanford professor since 2009, and she's the one who, at Stanford, created ImageNet, which we mentioned earlier: the massive labeled image dataset that became the defining benchmark of early computer vision, the very benchmark on which AlexNet scored its breakthrough. So this is how all these people are tied together, working on similar things for many, many years. She built ImageNet back in 2009, but she also co-founded AI4ALL, a nonprofit to boost AI education, and she did that in 2017, when most of us couldn't spell AI. She has won multiple awards and is considered, if you want, the godmother of AI in the modern era. And in 2024 she founded World Labs, a company that is building AI-driven worlds. She just published a Substack essay called From Words to Worlds: Spatial Intelligence Is AI's Next Frontier.
Here's a quick summary of what is in that letter that will help you understand why this is so important. The idea is that AI, in order to make the next leap, needs to move beyond language, and maybe beyond images, to understanding and interacting with 3D physical and/or virtual environments. She says that current AIs like ChatGPT are "wordsmiths in the dark": eloquent, but not grounded in physical reality or in a real, deep understanding of how our world operates. So she asks the question: what is spatial intelligence? She says it's the ability to perceive, reason about, and interact with the physical world in 3D space. And she says this is the foundation of human cognition: how we navigate, how we create, how we imagine, how we solve problems. It's all built on what we learn as babies, before we even have language. And she gives examples. We know how to park a car. We know how to catch keys when somebody throws them at us. Firefighters know how to navigate buildings when they can't see anything. All of that is because we have a general understanding of the environment around us at a very deep level. And to show why this is a problem, she points out that current AI is highly limited in understanding the actual environment. LLMs struggle with very basic spatial tasks: estimating distances, rotating objects in space, placing them correctly, navigating mazes, predicting physics. Current large-language-model-based AI is not very good at any of these. Even AI-generated videos that seem to understand the world's physics, which by itself is amazing, are limited to a number of seconds, and now maybe mere minutes, but that's it. You can't run a persistent video with consistent physics and continuity for 30 minutes, not to mention 30 hours. And this is why completely new sets of models are being built for robotics, which needs to operate in the 3D world.
So the solution she is suggesting is what she calls world models, and world models have three essential capabilities. One, they're generative: they can create geometrically and physically consistent virtual and/or real worlds, where "real world" obviously means a representation of a real world versus a completely made-up one. Two, they're multimodal: they process diverse inputs, understanding images, video, text, gestures, and physics, to produce complete world states. And three, they're interactive: they can predict next states based on actions and goals inside the world they have created or operate in. From a timeline and application perspective, she sets out three horizons. The near term, basically right now, is creative tools for filmmakers, game designers, and architects that let you build 3D environments without the need and overhead of traditional software development: just give it a prompt and it will generate everything you want with accuracy and consistency, which is the key. In the mid to long term, three to five years, the idea is to develop whole environments for robotics and what's called embodied AI, which will be able to drive a wide variety of robots operating effectively and safely in a wide variety of environments. And in the long term, she's talking about scientific discovery, healthcare diagnostics, and immersive education: basically being able to build worlds that teach people anything you want to learn from inside that environment, most likely with some kind of virtual reality headset. Now, she's not just talking or writing blog posts, she's also making it happen. But before I tell you what she's making happen, I want to quote one line out of this manifesto, because I found it very, very powerful. She said, and I'm quoting: extreme narratives of techno-utopia and apocalypse are abundant these days, but I continue to hold a more pragmatic view. AI is developed by people, used by people, and governed by people. It must always respect the agency and dignity of people. Its magic lies in extending our capabilities, making us more creative, connected, productive, and fulfilled. Spatial intelligence represents this vision. I think this is beautiful and wonderful, and I really, really wish all the people driving AI today felt the same way and acted on these principles. But now to the big news from World Labs, again, Fei-Fei Li's company. After raising $230 million, World Labs is finally launching its first commercial product, just a year after emerging from stealth, and it even has a freemium tier. You can use it for free with limited generations, and what it knows how to do is generate either realistic or completely made-up worlds from prompts. As I mentioned, you can use it for free for up to four generations, and the Pro tier, at $35 a month, gives you 25 such generations. On this launch she was quoted saying: the new generation of world models will enable machines to achieve spatial intelligence on an entirely new level. The model is called Marble, and beyond the ability to generate worlds from a prompt, which by itself is really cool, though other companies do this too, Google has a similar tool as well, Marble comes with a few really cool tools. One is called Chisel, which allows you to decouple the structures, such as walls and 3D blocks, from the visual styles, so you can control the building blocks of the world and the style of the world separately. It allows you to move objects around the world and place them in different spots for better control over how the world is built, or as their co-founder Justin Johnson said: I can just go there and grab the 3D block that represents the couch and move it somewhere else.
But then you can also define how the couch is going to look. So this gives creatives more control to make exactly the worlds they want. You can even expand the worlds by prompting at their growing edges, and you can use what they call a composer to merge scenes and create vastly bigger spaces, whether photorealistic or game-like. The outputs can be exported in several formats: Gaussian splats, which are these environments you can look through, or meshes, or videos, which makes them ideal for gaming or for navigating with VR headsets. In addition, they released a tool that allows Marble to create assets for existing graphics engines such as Unity and Unreal. So instead of just creating entire new worlds, you can create accurate 3D objects for existing engines, which will dramatically increase the pipeline of 3D objects into the existing gaming universe. To summarize this segment about Li's ambitions, she said: our dreams of truly intelligent machines will not be complete without spatial intelligence. And this aligns perfectly with what Yann LeCun is saying and is now going to be doing as well. As we mentioned, they come from very similar corners of the AI universe, so it makes sense that they think the same way. But this is not the last piece of news about understanding and operating in 3D environments. DeepMind just released SIMA 2, an AI that can reason in 3D worlds and play extremely well in every computer game, including really sophisticated ones. How does it work? Well, they basically let it play any game, including really advanced, sophisticated multiplayer games in complex environments, and it learns on its own how to play.
Now, I'm not a gamer, but in the release notes they shared several really sophisticated games, like MineDojo and ASKA, that it was very successful at playing, learning over 600 skills and doing contextual reasoning in these games on its own. So how does it work? It's powered by Gemini models behind the scenes. It understands the environment of the game and how the game works, and it can even communicate that, saying things like "I'm going to the village center" or "I'm finding a campfire," because it understands how the game actually operates. It understands how to do sophisticated things like mining different resources in order to achieve other goals in the game. And it does all this with zero human assistance: it plays the game on its own, using Gemini's reasoning to figure out what the rules and goals of the game are, and it learns how to operate in these worlds by itself. In the release notes for SIMA 2, DeepMind highlights that gaming is, quoting, an incubator for general intelligence. It is currently available only as a gated preview, but they are planning to release at least reports on everything it does and can do later on. Now, why are we talking about an AI that can play games, and why does DeepMind think it's an incubator for general intelligence? Well, here's a little story for you. I've been traveling literally every single week for the past, I don't know, eight to twelve weeks, it seems like forever, delivering either keynotes or AI workshops to companies and organizations. I just came back from Chicago, and I had the opportunity to finally watch The Thinking Game. The Thinking Game is a movie about Demis Hassabis and his quest for AI for humanity. For those of you who don't know Demis, he's the founder and still the chief of Google DeepMind. Well, when he founded it, it was just DeepMind, before Google bought them. I highly, highly recommend watching this movie. It is truly inspiring.
But it connects well to what we just talked about, because it shows how AI that learns from games can in the end deliver very significant results. Before I tell you what that is, two words about the movie and about Demis. First of all, the movie made me feel completely worthless. I mean, I really liked Demis's approach before the movie. He always felt a lot more genuine, realistic, and really grounded in a true drive to make the world a better place with AI, compared with some of the other leaders of the other labs, and this movie made me feel that even more strongly. He's absolutely incredible. He's really driven by making the world a better place through AI, and watching his journey, how he thinks and how he drives other people around him, made me feel completely worthless compared to what he is doing. It is really inspiring. But to connect the dots to this segment of the podcast: as you probably know, Demis won the Nobel Prize for AlphaFold. AlphaFold is a model that can accurately predict protein folding. Now why the hell does that matter? Because proteins are, if you want, the machinery of life. Everything we know as living is built on proteins that fold together, and understanding how they fold, basically how they create the 3D structure that then becomes life, is a crucial component of (a) understanding life, and (b) being able to understand disease and design new drugs to tackle every illness on the planet. Before AlphaFold, it was extremely difficult and very inaccurate to determine the structure of a protein; it could take years in a lab to figure out one protein. AlphaFold is a prediction model that can predict protein structures in minutes, and get them highly accurate. They've used it to predict the structures of over 200 million proteins and then open-sourced the results so anybody can use them for research. So how does that connect to gaming?
Well, on the path to AlphaFold, the previous iterations that gave DeepMind the capability to even go down that road were games. They developed, as many of you probably know, AlphaGo, which beat Lee Sedol and afterwards the Chinese world champion at the game of Go. These attempts to have a model learn through a gaming process, with clear goals, figuring out on its own how to play, were the building blocks that later allowed them to develop AlphaFold. And most of the big breakthroughs DeepMind has created, and there's a long list of them, came from teaching AI to play games. So going from being able to play a single, really sophisticated game like Go to being able to play any computer game on its own is a very big step on the path to AGI, and that is what DeepMind just released. So that ends this topic. Now what you need to do is go watch the movie, and we'll switch to the next topic, which is the arrival of GPT-5.1, the other models around it, and maybe the imminent release of Gemini 3. So OpenAI just released GPT-5.1, which doesn't sound like a big jump from 5, it just adds a 0.1 at the end, but it's actually a meaningfully different model with a lot of big, significant improvements. At a high level, they released two different models: GPT-5.1 Instant, which is faster, more conversational, and has a warmer tone for everyday tasks and conversing, and GPT-5.1 Thinking, which is slower but much better at complex reasoning, math, logic, multi-step planning, and all the kinds of things thinking models are good at. So what are the biggest differences? Well, first of all, it is a good combination of being (a) smarter and (b) more natural and human-like. It is better at instruction following and accuracy, it is more conversational with a less robotic tone, which was one of the big pushbacks against GPT-5, and it has fewer hallucinations than GPT-5. All of these are awesome.
It also has adaptive thinking: they dramatically improved how it allocates thinking time to specific tasks. For simple things it will answer faster than GPT-5, and for more complex things it will think longer than GPT-5, in both cases giving you better answers, aligned with the amount of time actually worth investing in getting there. It also comes with several personalities you can select from: professional, candid, quirky, friendly, and efficient. The base model is fine-tuned to be warmer and more conversational, and it even proactively offers to adjust its tone during conversations depending on what is actually happening. It is supposed to be better at creative writing, better at coding with more tools it can use, and it has enhanced planning and multi-step task execution. So on paper, a significantly better model than GPT-5. Now, I didn't get a chance to play with it enough. Again, I just came back from one workshop and I'm shortly leaving to the airport for another, but I can say a few things. At a high level, better prompt coherence is always a good thing, especially as we move from basic, simple requests to more advanced and complex tasks we want AI to achieve, and to building agents, multi-step agents, and multi-level agents. So following instructions accurately is a very big deal, and so is being able to optimize the amount of time the model thinks. I've been using Claude a lot more recently, more than ChatGPT and a lot more than I used Claude previously. So right now Claude is my number one go-to tool; I feel it's better than ChatGPT at basically everything. I have only two annoying things with Claude, and one of them is that it thinks way too long on things that are sometimes really basic. When I ask really complex things, I understand: think as long as you want and get back to me.
It actually dings you when it's done, which is really cool; you don't have to sit there and wait. So what I've been doing recently is a lot of AI multitasking. I have multiple tabs open, some with ChatGPT, some with Claude, some with Grok, some with vibe-coding tools, and they're all running in parallel. I jump between them, and as they're thinking and doing their thing, I check the status of another task, give it my input, and move forward. I find this to be extremely effective for being productive and generating a lot more in a given amount of time. And yet Claude, in some cases, thinks way too long on stuff that is very basic. If GPT-5.1 can fix that, and maybe write as well as Claude, that would be fantastic, and it would win points back for ChatGPT. By the way, the best tool right now, from my perspective, at balancing speed and efficiency is Grok. Grok will spit out answers immediately, like the old models did, when it has the answer, and will think longer for more complex tasks. From a balance perspective, I think they got it right. So if GPT-5.1 moves in that direction, that would be great. On the personality front, I actually mentioned something I've been thinking about at our community's Friday AI Hangouts. Our AI Friday Hangouts is a community of people who care about AI and learning about AI, and we meet every Friday at 1:00 PM Eastern; more on that in a couple of minutes. Yesterday, in this meeting, I shared that I think AI personality should adapt to the task rather than to the person. For brainstorming, I need one kind of personality. When I'm assessing strategic decisions in my different businesses, I want a different personality. And when I'm asking for tactical, mundane tasks, I don't need any personality: just do the thing I'm asking you to do.
So if the model learns to adapt to what I am doing and understand which personality I'm looking for at that moment, that would be the most impactful for me. When I shared that, one of the community members sent me, in real time during the meeting, a link to Ethan Mollick's post on LinkedIn. For those of you who don't know Ethan Mollick, he is a professor who has been sharing amazing insights on LinkedIn, on X, and on his blog about AI and its actual real-world, day-to-day impact and use cases; definitely a person you want to follow. He basically said similar things. He said that OpenAI has an interesting task, and that 5.1 is really trying to balance between, and I'm quoting, people who want to chat with an AI seeking a quirky old buddy, against pros craving every ounce of smarts for the stuff they need to do for their business. And he shared something very similar to what I thought: that AI should take different roles in specific instances rather than having a personality that is fixed per individual. He argued, and I'm quoting, who wants to talk to a cynic all the time? But then he noted that you actually would want a cynic when you're trying to get real, hard feedback on something. The bottom line: it adds only a 0.1 to GPT-5, and yet it seems significantly better across multiple dimensions. And while the technology may not be a big jump forward, the access to the technology becomes significantly more effective, which, as I've mentioned multiple times in recent months, means the way we interact with these models and the tooling around them will play a much bigger role than the raw technological advancements, because it just makes them more helpful, which is what you actually need. So even if the underlying model is largely the same, if you can get more out of it right now, that is a big step forward.
Now, the release of GPT-5.1 without any lead time, and not too long after GPT-5, hints, together with a very active rumor mill on X, that Gemini 3 is imminent and might be released in the next few days. However, other things happened this week that put a question mark on that. Several extremely powerful Chinese models were released in the last ten days. Moonshot just released Kimi K2 Thinking, which is not just another model: it plays a very significant role in the crazy AI race between the US and China. Kimi K2 Thinking scores 44.9 on Humanity's Last Exam, which we've talked about several times on the show: 2,500 of the hardest questions the people behind the benchmark could harvest from experts across multiple disciplines. 44.9 puts it at the number one spot in the world on this benchmark, outpacing GPT-5 and Anthropic's Claude Sonnet 4.5, which are the most advanced models in the Western hemisphere. This new reasoning-focused variant of Kimi K2 also excels at coding, agentic tasks, and logical problem solving, and it shows very, very clearly that China is not behind in the AI race. Deedy Das, a partner at Menlo Ventures, said, and I'm quoting: today is a turning point in AI, a Chinese open-source model is number one. So here we are, very close to the end of the year, and this is another DeepSeek moment, which is how we started the year. If you remember, at the beginning of 2025 the world was shocked by DeepSeek's first big release: a Chinese open-source model that was on par, or very close to it, with the leading models from the West, at a significantly lower cost of development and a significantly lower cost to use. Well, this release of Kimi K2 is that on steroids.
It's actually better than the Western models on several benchmarks. To put things in perspective, on the LMArena overall chart, the one that combines all the different categories, including hard prompts, coding, math, creative writing, instruction following, and all of that, it currently shares the number eight spot, while it is number one in math, number two in creative writing, and number three in coding. Again, number eight overall. But it is not the only model just released from China. Two different models from Baidu came out this week. One is a thinking version of Ernie 4.5, which uses only 3 billion active parameters out of 28 billion total parameters through a mixture-of-experts architecture, which many of the new models are doing. The fact that it runs with only 3 billion active parameters lets it run on a single 80-gigabyte GPU, which is something you can have in a computer in your house. One of the things it excels at is visual reasoning: it can, as they say, zoom in and out on specific parts of an image to find specific details. It is very good at tracking events across videos and even solving STEM photo-based problems, outpacing GPT-5 High and Gemini 2.5 Pro on document and chart benchmarks despite using a fraction of the resources they need. But Baidu also released Ernie 5.0, the next version, with some additional tools that come with it, and it's acing several benchmarks like OCRBench, DocVQA, and ChartQA, all in visual understanding, recognition, and comprehension of visual cues, and data analysis from both visual and numerical information, which enables it to do stuff that is really, really important for any business process. The ability to analyze data, visual and numerical, is key to running a successful business, and it can do that better than the top models of the West.
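To make the "3 billion active out of 28 billion total" idea concrete, here is a minimal toy sketch of mixture-of-experts routing. All numbers and names are illustrative, not Baidu's actual architecture: the point is simply that a router consults only a few experts per token, so most of the model's weights sit idle on any single forward pass.

```python
# Toy illustration of mixture-of-experts (MoE) parameter accounting.
# A router sends each token to TOP_K of NUM_EXPERTS expert networks,
# so "active" parameters per token are far fewer than total parameters.
# Numbers are made up to echo the 3B-active / 28B-total ratio discussed.

import random

NUM_EXPERTS = 28                    # total experts in the toy layer
TOP_K = 3                           # experts consulted per token
PARAMS_PER_EXPERT = 1_000_000_000   # 28 experts * 1B = 28B total (toy)

def route(token: str) -> list[int]:
    """Stand-in router: pick TOP_K experts, seeded by the token so the
    choice is reproducible (a real router is a learned gating network)."""
    rng = random.Random(token)
    return rng.sample(range(NUM_EXPERTS), TOP_K)

def active_params(token: str) -> int:
    """Parameters actually exercised to process one token."""
    return len(route(token)) * PARAMS_PER_EXPERT

total = NUM_EXPERTS * PARAMS_PER_EXPERT
print(f"total params:  {total / 1e9:.0f}B")
print(f"active per token: {active_params('hello') / 1e9:.0f}B")
```

This is why such a model can fit a useful inference footprint on a single large GPU: memory still has to hold all experts, but compute per token scales with the active subset only.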
Robin Li, the CEO of Baidu, said: when you internalize AI, it becomes a native capability and transforms intelligence from a cost into a source of productivity. And I agree with him a hundred percent, if you understand how to use AI effectively. Now that they're developing these extremely efficient models that are practically free, you can completely transform how businesses run today, with the ability to analyze huge amounts of data and make better decisions about how to run, grow, and operate your business. Now, when I say they're significantly cheaper, here is the cost of this model compared to the models from the West. If you don't know, when you run these models through the API, pricing is measured in cost per million tokens. GPT-5.1 is priced at $1.25 per million input tokens, basically what you type into the model, the prompts you give it, and $10 per million output tokens, the answers you get back. So $1.25 and $10. Ernie 5.0 costs $0.85 per million input tokens and $3.40 per million output tokens. That is about a third cheaper on input tokens and almost three times cheaper on output tokens. But that's not the last extremely powerful model out of China that we're going to talk about. Zhipu is a company that launched a model in September called GLM 4.6, which is also claiming really high scores on the leaderboards. As an example, on the LMArena web development leaderboard, GLM 4.6 is now number five. Ahead of it are just Claude Opus 4.1, Claude Sonnet 4.5, GPT-5 Medium, and another version of Claude Sonnet 4.5. Behind it are Qwen 3, MiniMax, which is another Chinese model, Gemini 2.5 Pro, Grok Code, and other tools. So at numbers five, six, and seven on web development we now have AI tools that are significantly cheaper than their equivalents from the West.
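The per-million-token math above is worth running yourself before committing to a model. Here is a small back-of-the-envelope sketch: the GPT-5.1 rates are the ones just discussed, while the "budget-model" rates are illustrative stand-ins for a cheap Chinese model, not an official price sheet.

```python
# Rough API cost comparison per workload. Prices are dollars per
# million tokens: GPT-5.1 rates as discussed in the episode; the
# "budget-model" rates are hypothetical examples of low-cost pricing.

PRICES = {
    "gpt-5.1": (1.25, 10.00),       # (input, output) $/M tokens
    "budget-model": (0.85, 3.40),   # illustrative cheap-model rates
}

def job_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one workload at the given model's rates."""
    p_in, p_out = PRICES[model]
    return (input_tokens * p_in + output_tokens * p_out) / 1_000_000

# Example workload: a pipeline pushing 50M tokens in, 20M tokens out
# per month (code generation is typically output-heavy like this).
for model in PRICES:
    print(f"{model}: ${job_cost(model, 50_000_000, 20_000_000):,.2f}/month")
```

At this scale the cheaper model cuts the monthly bill by more than half, and because output tokens dominate generation-heavy workloads, the gap widens the more the model writes.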
As you remember, I told you last week that Jensen Huang said that China is winning the AI race. Now, while he has a clear agenda when he's saying that, it is currently unclear who is actually winning the race, and definitely not who is going to win the race eventually. But there's one thing that is as clear as daylight, and that is when it comes to building highly capable and highly efficient models, China is currently kicking ass. It is developing models that are significantly more efficient than the Western models, both in the cost of creating them and definitely in the cost of running them. And why does that matter? It matters because while people are right now willing to pay a premium for the most advanced model in the world, this will stop sometime very, very soon, because when the models become good enough to do a task, you start looking at cost, right? So maybe Claude Sonnet 4.5 is the best coding tool out there right now, that is obvious across multiple benchmarks and on the LM Arena, but it is 15x and in some cases 100x more expensive. Is it worth it when you're going to generate huge amounts of code with it? The answer is, it depends on the use case, but it's definitely not the correct answer every single time, which means more and more companies are going to move to Chinese models and use them as the backbone of what they are going to develop next, just because of cost. Which leads me to the big critical question that is the outcome of all that: is Gemini 3 really far ahead, with a big enough gap from these Chinese models? Will it be able to compete with them on quality and on cost? Because if not, Google may delay the release of Gemini 3. If you are OpenAI, going from GPT-5 to GPT-5.1 makes a lot of sense. It's just a 0.1, it's not a big deal, and yet we got a better model out there. But if you are releasing a major model that is supposed to be the backbone of your AI deliverables for the next six months, that's a different story.
It cannot be behind open-source, nearly free models from China on more or less any important benchmark. So I'm certain that a lot of people inside of Google are now testing their models and trying to optimize them, and if Gemini 3 is not dramatically better, or at least close in price to the Chinese models, I don't know if they're going to release it. They may release a Gemini 2.7 or something like that in between. I obviously don't know the level of readiness of Gemini 3, and I obviously don't know how good it is. All I know is the rumors that I'm seeing on X, which are just rumors, but it will be very interesting to see how this evolves, and I will obviously report as soon as I hear what's happening. Now, there are dozens more important pieces of news this week, but sadly, I need to go to the airport. You can still learn about all this news by signing up for our newsletter; there's a link in the show notes. You can click on the link, sign up, and get access to all the news every single week. By the way, it's not just this week; every single week there's news that does not make it into the recorded version, and it all exists in the newsletter. But I promised you more information about our AI Friday Hangouts. There is an incredible group of AI enthusiasts that has been meeting every single week, Friday, 1:00 PM Eastern, for over a year now. We talk about AI big picture, where the world is going, how it's gonna impact our society, but we also talk very tactically, reviewing tools and use cases that everybody from the community is sharing: showing other people new tools that they found and new use cases that they implemented, and discussing difficulties so other people can try to help solve them. It's an incredible community, and I really want you to learn more about it, because I would love other people to join us as well.
And so I decided to give you a glimpse into what's happening in the AI Friday Hangouts, and that glimpse is going to come this Tuesday as a mix of several different discussions that we had in the Friday Hangouts. I think it is going to be magical for some of you, because you will see a great mix of different kinds of discussions. Some of them are highly tactical, explaining very specific things and how they work, and even sharing specific use cases with prompts and everything else, and some are very big picture. In both cases, today and on Tuesday, there's a link in the show notes for you to come and join those Friday Hangouts. It doesn't cost anything, it's not mandatory, you just join the community, and you can join us whenever you can on Fridays. That's it for this week. I hope you all keep on experimenting with AI and sharing what you learn. If you are enjoying this podcast, please subscribe on your podcast platform, and if you're on either Apple Podcasts or Spotify, please give us a review; that helps us get to more people. And share it with a few people that you know can benefit from it: just click the share button on your podcast platform and send it to them. I'm sure you know some of these people. That's it for this week. We'll be back with, again, a unique episode on Tuesday, and until then, have an amazing rest of your weekend.