Leveraging AI

257 | "We are in the singularity" (Elon), Claude Code writes 100% of Claude Code, AI in everything (CES), 2 mega acquisitions, and more important AI news for January 8, 2026

• Isar Meitis • Season 1 • Episode 257

📢 Want to thrive in 2026?
Join the next AI Business Transformation cohort kicking off January 20th, 2026.
🎯 Practical, not theoretical. Tailored for business professionals. - https://multiplai.ai/ai-course/

Learn more about the Advanced Course (Master the Art of End-to-End AI Automation): https://multiplai.ai/advance-course/


Is AI about to replace your developers... or your imagination?

We’ve just crossed a line — and business will never be the same. From Claude 4.5 writing 100% of its own code to Elon Musk casually dropping the “singularity” word, the last few weeks in AI have been pure acceleration.

In this solo Weekend News episode, host Isar Meitis dives into a whirlwind of groundbreaking announcements, insider scoops from X (formerly Twitter), and strategic shifts from the world’s top AI players — all pointing to one thing: the age of AI consumption is here, and the inflection point is now.

If you lead a business, manage a team, or want to stay ahead of the curve, this is your must-listen roadmap to what’s coming in 2026 — and what to do about it.

In this news, you'll discover:

  • How Claude 4.5 is outperforming OpenAI's best — and what that means for coders and non-coders alike
  • Why a top Google engineer admitted Anthropic's tool beat her team’s year-long work in an hour
  • Elon Musk’s take on the singularity, AI-generated entertainment, and building space-based data centers (no, really)
  • The hidden dangers of “vibe revenue” in AI startups — and why stickiness beats novelty
  • How Nvidia's massive acquisition of Groq signals a new era of AI infrastructure
  • The silent rise of AI-generated misinformation — and what it threatens in society
  • Why OpenAI is hiring a “Head of Preparedness” to prevent catastrophic misuse of its models
  • What Microsoft, Meta, and others are betting on as we enter the utility era of AI

About Leveraging AI

If you’ve enjoyed or benefited from some of the insights of this episode, leave us a five-star review on your favorite podcast platform, and let us know what you learned, found helpful, or liked most about this show!

Speaker:

Hello and welcome to a Weekend News episode of the Leveraging AI podcast, a podcast that shares practical, ethical ways to leverage AI to improve efficiency, grow your business, and advance your career. This is Isar Meitis, your host, and first of all, happy new year to all of you. This is the first news episode of 2026. I was on vacation taking some time off with my family over the holidays, and so we did not have a news episode last week. So a lot of what I'm going to talk about has accumulated from mid-December through now, roughly, and we have some very interesting topics to cover. The first topic will be the inflection point we are at right now, based on growing chatter on X that has exploded over the last couple of weeks, so we are gonna start with that. We're going to talk about some interesting acquisitions and what they mean for the future, and some additional interesting developments overall. To be fair, I am recording this episode on Thursday, January 8th, instead of on Saturday when it usually goes out, because in a few hours I'm going to be on a plane going snowboarding in Europe. I initially thought I'm not going to release an episode at all, but too many things have happened and I don't want to delay three weeks of news and try to cram it into one episode. It'll be hard enough to cover the two weeks that are behind us right now. I will try to focus on only two topics today and then do a lot of quick rapid-fire items just to keep you updated on other things that happened. And then I'll be off to the airport, and I will be back for a regular news episode next week. There is an episode coming out on Tuesday that's actually really, really interesting, and it's based on my own experience in vibe coding over the past two months or so, in which I'm sharing exactly what my process is and how you can start vibe coding and creating your own applications. So check that out on Tuesday.
But now to the news. This episode is brought to you by the AI Business Transformation Course. Since we are at the beginning of 2026, I want to start with a New Year's resolution for each and every one of you. If you have not yet taken critical steps to make sure that you and/or your company or team are ready to use AI effectively, this is the right time to do it. Consider this your New Year's resolution, either a personal resolution or a company-wide resolution. We are a week and a half away from the launch of the coming cohort of the AI Business Transformation Course. It is starting on January 20th, and we still have some open seats. The next course will probably be a quarter after that, so you do not want to miss this opportunity to jumpstart 2026 and to create a very solid baseline for your AI knowledge in a business context. I have been teaching this course since April of 2023, so it's gonna be three years in just a few months, and I have obviously updated it numerous times. It is most likely the best course out there when it comes to providing a solid baseline for business people on how to effectively implement AI, how to change your mindsets, what things you need to change in your company, in your business, in the way you approach data analysis, content creation, and so on, providing a solid baseline for growth in the future. So if you haven't taken any such steps, if you didn't take serious courses, or even if you have but they were more general, high-level courses and you wanna learn practical knowledge with AI, come join us. You can use promo code LEVERAGINGAI100, all uppercase, in order to get $100 off the price of the course, just for being listeners of this podcast. I would love to see you within a week and a half from now in the course. The course is four weeks, two hours per week. All the details and the way to sign up are in the link in the show notes.
So you can open your phone right now, click on the link, and go sign up and join us for this amazing opportunity. In addition, if you are running a company or a department and you're looking for custom training for your team, just reach out to me on LinkedIn or by email (again, it is available in the show notes), and I can share with you what I have been doing with multiple companies, large and small, from the largest enterprises in the world to small startups and everything in between, to get them ready and accelerate their AI adoption. That's it for that. Let's go to the news and the crazy storm of acceleration, or of realizing the acceleration we're in, that happened on X in the past few weeks. So I reviewed many of these posts, and I follow many different people and organizations on X in order to stay up to date with what's actually happening, and I tried to put these back into a timeline to make some sense of what actually happened. The first post I want to talk about is from METR. We talked about METR several times in the past. They're an organization that has created a new benchmark for measuring the success of AI. The way their benchmark works is they look at tasks that AI can complete at a 50% success rate. So they don't look for tasks that it can do every time, just tasks it can do successfully half of the time. And you're like, why do I care about stuff that works half of the time? It's not really helpful for me. Well, what they're looking at is actually not the success rate, but the length of the tasks that AI can complete successfully 50% of the time. So the success rate is not what matters; the trend in how long these tasks get over time is what matters. And what they found is that the length of tasks that AI can do successfully 50% of the time doubles about every seven months, which by itself is very, very impressive. But they shared a post on December 19th that says the following: we estimate that on our tasks,
Claude Opus 4.5 has a 50% time horizon of around four hours and 49 minutes. And then they're adding afterwards: while we are still working through evaluations for other recent models, this is our highest published time horizon to date. And they have a graph showing that it is way above the trend line. Now, in their graph on X, it doesn't look really impressive, but if you go on their website, you can switch to a logarithmic view of the same graph, and then you see a huge spike from the previous model. So to put things in perspective on the logarithmic graph, which climbs almost in a vertical line in the past few months: GPT-5, which was released in August of 2025, achieves two hours and 18 minutes. GPT-5.1-Codex-Max achieves two hours and 53 minutes; that was released in November of 2025. Claude Opus 4.5, which was also released in November of 2025, so the same month, achieves four hours and 49 minutes. So more than one and a half times, almost two times, what GPT-5.1-Codex-Max can do, and more than double what GPT-5 was able to achieve, and that was released just in August of 2025. This is a huge jump forward. So this post, again from METR, was on December 19th. On December 26th, Jackson Kernion, who is a researcher at Anthropic, posted the following: I'm trying to figure out what to care about next. I joined Anthropic four-plus years ago, motivated by the dream of building AGI. I was convinced from studying philosophy of mind that we were approaching a sufficient scale, and that anything that can be learned, can be learned in an RL environment. But then he continues: I feel like Opus 4.5 is as much AGI as I ever hoped for, and I'm not sure I know what I want to spend my waking hours focused on. Some ideas... And then he lists different things that he may consider doing, and he's looking for feedback. That post has over 470,000 views and over 150 responses, which is not huge, but it's definitely very interesting.
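Before moving on, the METR doubling math above is worth a quick sanity check in code. Here is a small Python sketch using the time-horizon figures quoted in the episode; the seven-month doubling period is METR's reported trend, but treating it as a clean exponential between just these two models is my own simplifying assumption:

```python
# 50% time horizons quoted in the episode, in minutes
gpt5_minutes = 2 * 60 + 18      # GPT-5, released August 2025
opus45_minutes = 4 * 60 + 49    # Claude Opus 4.5, released November 2025

DOUBLING_MONTHS = 7             # METR's reported doubling period

def trend_horizon(base_minutes: float, months_elapsed: float) -> float:
    """Horizon implied by a clean exponential with a 7-month doubling time."""
    return base_minutes * 2 ** (months_elapsed / DOUBLING_MONTHS)

# If the historical trend had simply continued over the ~3 months between
# GPT-5 and Opus 4.5, we would expect roughly this horizon:
expected = trend_horizon(gpt5_minutes, months_elapsed=3)
print(f"trend predicts ~{expected:.0f} min, Opus 4.5 measured {opus45_minutes} min")
# -> trend predicts ~186 min, Opus 4.5 measured 289 min
```

Opus 4.5 lands far above the extrapolated line, which is exactly the spike the host describes seeing on METR's logarithmic chart.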
So this is a person that wanted to dedicate his life to achieving AGI, and now he's like, well, I'm more or less there, and I'm thinking about what we need to do next. Now, he got a lot of responses to that, and so the next day he posted another post that said the following: some reactions to my AGI framing are a refreshing reminder of people's current experience with chatbots. To use Claude Code is to see Claude write arbitrary software, run into errors, reliably fix them, make helpful suggestions, and perfectly follow any given instructions. Now, as I mentioned when I talked about the episode that's coming out on Tuesday, I've been doing a lot of vibe coding in the past couple of months, the vast majority of it with Claude Code, and it is absolutely mind-blowing to see Claude Code work. Now, those of you who don't code can still use Claude Code for other things, so you can actually use Claude Code for non-coding tasks and see exactly how it works, and you will be amazed. I must admit that using Claude 4.5, all the versions of Claude 4.5, in the regular Claude license is also incredibly impressive across everything that it can do, including the ability to create highly well-designed documents to my brand guidelines while integrating graphics and charts and graphs into the product without me having to copy and paste everything, which is not doable in any of the other platforms. So definitely Claude 4.5 is beyond what other tools can do today. But we continue. So on the day that Jackson posted his first post, there was an even more interesting post by Boris Cherny. Boris Cherny is the guy who created Claude Code as a side project just over a year ago, and he posted: when I created Claude Code as a side project back in September of 2024, I had no idea it would grow to be what it is today. It is humbling to see how Claude Code has become a core dev tool for so many engineers, how enthusiastic the community is, and how people are using it for all sorts of things.
From coding to DevOps, to research, to non-technical use cases. This technology is alien and magical, and it makes it so much easier for people to build and create. Increasingly, code is no longer the bottleneck. A year ago, Claude struggled to generate bash commands without escaping issues. It worked for seconds or minutes at a time. We saw early signs that it may become broadly useful for coding one day. Fast forward to today: in the last 30 days, I landed 259 PRs, 497 commits, 40,000 lines added, 38,000 lines removed. Every single line was written by Claude Code plus Opus 4.5. Claude consistently runs for minutes, hours, and days at a time using stop hooks. Software engineering is changing, we are entering a new period in coding history, and we are still just getting started. So first of all, before I dive into what the hell this means, what the hell are PRs and commits? PRs, also known in the development world as pull requests, are basically when a developer requests to merge one code branch into another. So this is a critical step in development. And a commit is when you check a new piece of code into an environment. If you think about it, it's like saving a document, just in code terms. But what this means is that the guy that is developing Claude Code, the lead developer, the guy that invented Claude Code, is now creating the new version with Claude Code, with 100% of the code generated by Claude Code. And I know that sounds a little meta, and it sounds like the world is collapsing, but that's where we are right now. This tool is so good that the lead developer is creating all his code and all the code changes with the tool itself. Now, if that's not impressive enough, that the guy that invented Claude Code is using Claude Code to write all his code, on January 2nd, Jaana Dogan, who is a principal engineer at Google, so, the competition, posted the following: I'm not joking and this isn't funny.
We have been trying to build distributed agent orchestrators at Google since last year. There are various options. Not everyone is aligned. I gave Claude Code a description of the problem. It generated what we built last year in an hour. Then, in the conversation, people asked her what exactly the prompt was and how much information she gave it. And she basically said, and I'm making it short, she said: I gave it very little information, three paragraphs, without exposing any real proprietary information, but it did the work. And so what that tells you is, first of all, that the competition is using Claude Code to write their software, or at least experimenting with it, which makes sense; at least they wanna learn. By the way, somebody asked her when Gemini will be as good, and she said they're working very hard to do that. But she admitted that Anthropic has done incredible work, both in the underlying model of Opus 4.5 as well as in the orchestration around it, that makes Claude Code as magical as it is. So this is somebody who is a principal engineer at a competitor, in this case Google, who is loudly admitting that she's blown away by what Claude Code can do, that it's better than what her entire team was able to do working together for many months. But this wasn't the end of it. On January 4th, David Holz, who is the founder of Midjourney, again, another person that knows a lot about AI and coding and startups and tech, posted the following: I've done more personal coding projects over Christmas break than I've done in the last 10 years. It's crazy. I can sense the limitations, but I know nothing is going to be the same anymore. To that, Elon Musk responded: we have entered the singularity. So what the hell is the singularity? We talked about this many times before, but the singularity is a point in time where technology can accelerate itself.
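For listeners who want to see the commits-and-PRs vocabulary from a minute ago in action, here is a minimal, self-contained Python sketch that drives git in a throwaway temp directory. The file and branch names are made up for illustration, it assumes the `git` binary is installed, and the final local merge stands in for what a pull request does on a hosting platform:

```python
import os
import subprocess
import tempfile

def git(*args):
    """Run a git command in the current directory and return its stdout."""
    return subprocess.run(["git", *args], check=True,
                          capture_output=True, text=True).stdout

os.chdir(tempfile.mkdtemp())
git("init", "-q", "demo")
os.chdir("demo")
git("config", "user.email", "demo@example.com")
git("config", "user.name", "Demo")

# A commit is a saved checkpoint of your changes -- "saving the document"
open("app.py", "w").write('print("hello")\n')
git("add", "app.py")
git("commit", "-q", "-m", "Add app.py")

# Work happens on a side branch...
git("checkout", "-q", "-b", "fix-greeting")
open("app.py", "w").write('print("hello, world")\n')
git("commit", "-q", "-am", "Fix greeting")

# ...and a pull request (PR) asks to merge that branch back into the main
# one. Locally, the merge itself looks like this:
git("checkout", "-q", "-")          # back to the original branch
git("merge", "-q", "fix-greeting")
print(git("log", "--oneline"))      # two commits now on the main branch
```

So Boris Cherny's "259 PRs, 497 commits" means 497 of those saved checkpoints and 259 merged branches in a single month.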
But if you want Elon Musk's explanation of what the singularity is, he was actually interviewed back on December 1st, and in this interview he shares a very interesting view about how he sees technology moving forward, and he talks about the singularity there. He is saying that, based on the current progress in AI, computing, and robotics, we will shortly get to the point where we will not need to work, anyone will not need to work, because AI and robotics will be able to supply everything we need. And he's basically saying it just means you don't know what happens. Like, I'm confident that if AI and robotics continue to advance, which they are advancing very rapidly, like I said, working will be optional, and people will have any goods and services that they want. And the exact quote that Elon used to describe the future is that working will be optional and people will have any goods and services that they want.

Speaker 2:

If you can think of it, you can have it, type thing.

Speaker:

And the most profound quote is: if you can think of it, you can have it. So if you want, that's a layman's explanation of what the singularity actually means in Elon Musk's head. It means that technology will be able to deliver literally anything we can imagine, when we imagine it. Now, are we there yet? No. But are we, in everybody's minds, including Elon's, moving rapidly in that direction? The answer is yes. We're gonna talk about another interview with Elon that was just released this week; I'm gonna get to it in a minute. But first, a quick summary of all that craziness on X in the past few weeks. First of all, it is very clear that something changed when Opus 4.5 was released. It is a different kind of model. It is significantly better across a very large variety of capabilities. As I mentioned, I've been using it for other things, and I'm blown away by how good the results are. I must admit that I also really enjoy using the latest Grok, and I also really enjoy using the latest Gemini, and I also really enjoy using GPT-5.2. Another very important aspect of this, and I've mentioned it multiple times before, is that the tooling, if you want, the architecture, the infrastructure around the raw AI models, makes a huge difference. And this is one of the aspects that the researcher from Google was talking about: it's not just the model itself, it's the way that Anthropic was able to make this model work incredibly effectively for computer programmers. Now you're saying, well, I'm not a computer programmer, how does that help me? Well, the reality is they're investing a hell of a lot of effort in making it work well in programming, because that was their primary use case, because that accelerates their own work. But once that is solved, together with some other things that I mentioned in my end-of-year summary, which is context window length and other limitations, then they can apply the same concepts to anything else.
Meaning, we are at a serious shift in how good AI is. We're at a serious inflection point when it comes to how long it can work effectively and how well it can actually understand our needs and deliver on them at scale. Now, yes, doing it in code is easier than doing it in other aspects of life, because code is very easy to test: it either runs and does what it needs to do, or it doesn't. In real life, there are a lot more gray areas than there are in running code. But the fact that there are now mechanisms, processes, technology, infrastructure, and research that are proven to be able to achieve it in code means it's just a matter of time, and probably not too long a time, because now they can write code faster than ever before, until this is possible across the board. What does that mean? Is Elon correct? I don't know. It sounds like it. Like, if AI can really do anything that a human can do, and robots can definitely do more than we can from a physical labor perspective, then I don't see why or how he can be wrong. The only question is when. Now, if we take that back to March of last year, specifically March 10th of last year, Dario Amodei, the CEO of Anthropic, was hosted by the Council on Foreign Relations, and he said the following, and this is an exact quote: what we are finding is that we are three to six months from a world where AI is writing 90% of the code, and then in 12 months we may be in a world where AI is writing essentially all of the code. Now, at the time when he said that, everybody thought he was crazy. And yet here we are, between nine and 12 months later, and the guy that is writing the next version of Claude Code is writing a hundred percent of his code with Claude Code. Does that mean the entire world does? Absolutely not, but it is definitely showing that it is possible. Now, how did Dario project this?
Well, you need to remember, and we're gonna talk about this several times in this episode, that these labs have in-house models that we don't get access to. When people like Dario make predictions, they make them based on things that they're seeing in their own research teams, in their own labs, just not released to us yet. And so when they're making these projections, they're not guessing, rolling dice, or consulting a crystal ball; they actually have the next version in initial testing, and they can see the direction that it is going. And that gives us a way to see what's coming around the corner. So this was, again, March of last year, and here we are, nine to 12 months later, exactly as he predicted, and we are in a situation where we are very, very close, at least from a possibility perspective, to what he predicted. How long will it take organizations to figure it out, and teams and processes to align with that capability? That's a whole other story. But as all these developers said, it's a new generation, a new era of computer programming that we need to adapt to. And for the bigger picture, is this AGI or not? As I mentioned many times on the show, I think it doesn't matter, because it's just a definition, and the only thing that matters is the rate of progress we're in right now and what its impacts on society are going to be. And the impacts are already profound, and they're gonna be a lot more profound even if we stop all AI development today and just learn how to use the tools that are available to us right now. Now, since we already mentioned Elon talking about us being in the singularity and what it means to him, he was just on an interesting interview on the Moonshots podcast with Peter Diamandis, and he shared several of his views. It's a very long interview, almost three hours. In this interview, Elon shares some very interesting insights that I will try to summarize right now.
One is that he's saying that the primary bottleneck for AI right now is no longer chips, but energy and infrastructure, meaning the problem is not necessarily building data centers; the problem is powering and then cooling these data centers. These have become the two biggest problems of building new compute for AI. And he knows exactly what he's talking about, because his Colossus data center is the largest single-location data center in the world. They're now in the process of building the Colossus 2 supercluster, which is currently being constructed and is supposed to go live later this year. It is supposed to be a massive 1.5-gigawatt to two-gigawatt training cluster combined with the first one, and it's expected to be fully operational in April of 2026. Their problem right now is that the local grid cannot supply the electricity that they require. So what they're doing right now is that Tesla is deploying their Megapack batteries as a buffer to the grid, meaning they are charging the batteries overnight when demand is low, and then they're discharging these batteries to supply the demand of the data center during the day. This is just one solution, but it shows how Elon thinks: because he owns multiple different companies with access to really advanced capabilities, he can do that, and I believe he's the only one that can do this right now. So he's saying: we are vertically integrating infrastructure, we can't wait for the legacy grid to catch up; if you need two gigawatts, you have to build the buffer yourself. But then there's another company that Elon owns, which is SpaceX, and he is one of those people, and there's been a growing conversation about this in the past few months, saying that the best place to put the next generation of data centers is in orbit. And the reason he's saying that is it solves both problems. One is there's no nighttime in space.
The satellites can see the sun all the time, which means they can get electricity all the time from solar energy without having to produce any other energy. The other benefit is that space is really, really cold, so you don't need to worry about cooling the data center computers. When I first heard this concept, I thought it was complete nonsense, for several different reasons. One is scale: if you just look at the size of these new data centers they're building, they're insane, they're massive, they're gigantic, and the number of satellites you would have to put in space to come up with a similar level of compute is just ridiculous. The other problem that I can think of is maintenance. How can you provide maintenance to data centers in space? Anyone who has controlled or managed a computer environment knows that things break, whether it's the actual computers themselves or different components or networking infrastructure, and when it's in a data center, you walk in there and you replace whatever's broken. When it's in space, it's a whole different problem. The third is the speed of communication between the different components. Again, if you think about the size of the clusters they're building right now, what gives them their power is the cluster itself. That's why Elon is building these mega-giant clusters, and that cannot be done in space, because the communication speed between the different components of the cluster will be slower than when they're co-located in a single site. So it's not straightforward. However, there is one company on the planet that already has a highly distributed, highly efficient, modular computer environment in space, and that is SpaceX, with their Starlink satellite system. So Elon is planning, not predicting, planning, that SpaceX will be deploying data centers in space this year, and he is talking around Q4 of 2026.
He's planning to start deploying data center capabilities in space. Now, if there's one person I'm not going to argue with when it comes to deploying stuff in space, it's Elon. So I assume everything that I'm saying is a problem, or that I'm assuming is a problem, is solvable one way or another. One of the things that Starlink did very, very well is that they basically do not care about maintenance. They're basically saying: we're building them cheap enough that if they crash after 12 months, we're just gonna launch a new one. And it will be very interesting to see if he, or they, take the same approach when it comes to data centers in space. Again, I'm still skeptical, but I'm not going to bet against Elon on this particular topic. Another very interesting aspect of the interview is that Elon is expecting AI-powered entertainment to take up the majority of AI usage. And that makes perfect sense to me, because there are others who are making similar predictions, and the initial technology of world models is showing us that it will be possible, that it will be generated in real time, meaning you won't have to create a full-fledged movie; the movie will actually happen as you are watching it. And the same thing with computer games and so on. And this will obviously be very attractive. Now, I do not know if that's really gonna be the primary usage of AI technology as Elon predicts, but it is definitely going to be a significant part of how this technology is used. Just think about the ability to experience, in high fidelity, whatever you can imagine, and to freely interact with the universe that you are watching, and look around in 360 degrees, and really be a part of the story as it unfolds, whether it's a game or a movie or something that's gonna be a cross between a game and a movie, because you can freely engage and impact what happens.
This is a whole new kind of way to provide entertainment that we don't have access to right now. And it will be very addicting, which is really, really sad, and very attractive, because you can be in any universe you want, whether realistic or non-realistic. And I fear that this feels very much like episodes of Black Mirror, where people will prefer to be in these virtual universes versus the real universe. But I have no doubt that it is actually coming. The last interesting thing that Elon was talking about is distributed edge AI. So the concept is that you don't just use AI in your data center, but you use AI across a widely distributed network of devices that talk to one another. In his particular case, which is not surprising, Tesla cars and Optimus robots will be nodes of the broader compute, which does two things. One, it means that they can communicate with one another and collaborate on broader, bigger tasks. But the other is that when your Tesla is parked, its computing power, its AI capabilities, can be used by the network to provide more inference for the rest of the world, or the rest of the network. This is, again, not science fiction. This is what Tesla has been building and is continuing to build as they're developing their fleet of cars and as they're going to start deploying their fleet of Optimus robots. And what he's saying is that Tesla's fleet is already the largest distributed inference computer in the world. So this is a whole new concept of how to provide compute, not just for the cars themselves, but how you can leverage that compute when the cars are not driving, when the robots are not doing different tasks, or maybe while they are, if there's extra bandwidth in these computers. Bottom line on this particular aspect: things are changing really, really fast, including the way the actual infrastructure works, runs, and is deployed in order to support the future needs of AI.
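As a footnote on the infrastructure theme, the grid-buffering setup described earlier lends itself to some quick back-of-the-envelope arithmetic. Everything here except the 1.5-to-2-gigawatt cluster size from the episode is my own illustrative assumption (including the grid shortfall, the hours to cover, and the per-unit Megapack capacity), not xAI's or Tesla's actual numbers:

```python
# Rough sizing of a battery buffer for a ~2 GW training cluster.
cluster_draw_gw = 2.0    # upper end of the Colossus 2 figure from the episode
grid_supply_gw = 1.2     # assumed: what the local grid can actually deliver
gap_hours = 12           # assumed: daytime hours the batteries must cover

shortfall_gw = cluster_draw_gw - grid_supply_gw
energy_needed_gwh = shortfall_gw * gap_hours      # energy to bank overnight

megapack_mwh = 3.9       # rough capacity of one utility-scale Megapack unit
packs_needed = energy_needed_gwh * 1000 / megapack_mwh

print(f"~{energy_needed_gwh:.1f} GWh per day -> about {packs_needed:,.0f} Megapacks")
```

Even under these loose assumptions, bridging the gap takes thousands of utility-scale battery units per day of operation, which is why the conversation frames energy, not chips, as the bottleneck.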
Now, speaking of delivering compute, two very interesting announcements happened in the past few days and weeks. First of all, Nvidia, at CES 2026, unveiled Rubin, their new generation of GPUs. The Rubin platform allows for training AI models using just one quarter of the processors previously required, and doing it at one-tenth of the cost per token. These new GPUs will start being deployed in the second half of 2026. This means that the next versions of large-scale models, maybe not the immediate ones, because those are being trained as we speak, but the ones after that, will be able to be even bigger and trained even faster than ever before because of these new GPUs. Staying for a minute on another announcement by Jensen Huang at CES, he said the following: the ChatGPT moment of robotics is already here. And if you followed what happened at CES, there has been AI in literally everything you can imagine, from wearable devices, despite the complete failure of these kinds of deployments so far, other than maybe the Meta glasses, to robots of every kind you can imagine with AI built into them. And Nvidia specifically introduced an open-source model ecosystem to develop, build, and deploy generalist robots that are capable of learning diverse tasks without the need for extremely expensive pre-training. So this is where they're pushing from an infrastructure perspective: to allow more or less any company on the planet to develop new robots faster, better, and cheaper. A lot of companies at CES also shared new AI models that are designed specifically to provide Level 4 autonomous driving. Nvidia is one of them, but there were several other companies deploying the same thing. We all need to start preparing for a world in which there is a growing share of self-driving cars on the roads, and, in addition, robots integrated into more and more aspects of our lives.
The other big and interesting piece of news from Nvidia and delivering intelligence that happened in the past week and a half is that Nvidia just did a deal with Groq. Groq with a Q, not to be confused with Grok with a K. We talked about Groq several times in the past. They're a company that has built dedicated chips for inference. So they're not competing with GPUs on the ability to train models; they focused on the other side, actually using AI models. There are two kinds of steps. There is the training of the models, which is still the dominant compute use in the world right now from a scale perspective. And then there is the time when we actually get to use the AI, provide inputs and prompts, and get an output, and that's called inference. Inference is when tokens are being generated, which is the output we actually see. Groq has maybe the most advanced inference technology in the world right now. They call it the LPU, the Language Processing Unit, which runs significantly faster than GPUs while also being cheaper to run. So two things here: one, Groq has more advanced technology for inference, and we're going to talk about why that's important in a minute; and two, somebody else might have snatched them and grabbed that technology to compete against Nvidia in the future. So to prevent that from happening, and to add this capability in-house, Nvidia just paid $20 billion, not to really buy Groq, because then they would have faced antitrust and anti-competition scrutiny, but for another style of acquihire that has become very common in the AI universe in the past year and a half or two. They're getting a non-exclusive license to Groq's IP, and they're going to hire the key executives from the company. Their plan is to integrate Groq's capabilities into their existing infrastructure, as well as have Groq's senior staff help develop the next version of Nvidia's solutions. As Jensen Huang wrote in an internal email to employees: we plan to integrate Groq's low-latency processors into the Nvidia AI factory architecture, expanding the platform to serve an even broader range of AI inference and real-time workloads. Now, the most senior executives of Groq are going to move and work inside of Nvidia, but the company will continue running as an independent company while still delivering its existing solutions through GroqCloud and other channels. So, two points about this. First of all, I have a serious personal problem with this whole new acquihire approach.
It is really bad for competition and for free markets. Antitrust rules were put in place as safety mechanisms to guarantee free competition in the market, and these acquihires basically circumvent the safety mechanisms that were put in place to protect us, the consumers, and to deliver as much competition and as good pricing as possible. So I really hope the government will take some steps to close this loophole, which is now being exploited more or less every week. Though under the current administration, I doubt that will happen, and I'm not taking any sides here. It is just very clear that in the current environment, big tech has serious influence on the current administration, and the race against China is playing a very dominant role in the decision making. So I doubt this will happen, but I would still love to see it. Now, specifically talking about this acquisition, or whatever you want to call it, it is very clear that we are at a transition point from an era of large model training to an era of AI consumption. Again, just looking at CES, at everything we talked about earlier in this show, and at what we're talking about every single week: inference will become more and more of a dominant factor over the capabilities of the model itself, and I've said that multiple times in the past six months. All of the models we have today are incredibly powerful. They can transform most tasks that humans do today, if the applications using them are integrated and set up correctly. Meaning, it is becoming more and more about how we integrate AI into the things that need to get done versus how good the AI is. Because in many, many cases, and I don't know if it's most, but I will dare to say in most cases, it is good enough right now. The biggest difference is going to be the tools, the speed, and the cost at which we can use the intelligence to perform the tasks that we need.
Now, I've been saying that for a very long time, but this acquisition is a formal stamp of approval from Nvidia that this is where we are, and they're focusing very clearly on inference. And again, just look at what was released and shared at CES this week, and you'll see that there's AI in literally anything you can imagine, which means a lot more AI output will be required, which means more inference, which means it makes perfect sense that Nvidia wants to control this part of the market. Now, since we talked about one acquisition, let's move to another interesting one: Meta acquired Manus, an AI platform we've talked about multiple times on the show. Manus was the first company that created a generalist agent tool you can use to do more or less anything you want, whether it's doing deep research, creating presentations, writing code, developing applications, or literally anything else. They made a big name for themselves by being first. The next company after them that captured some market share doing similar things is Genspark, a Silicon Valley-based startup that developed a very similar product. There are several interesting things about the Manus acquisition. The first is that they were a Chinese company, and very early on they understood that being a Chinese company would prevent them from scaling into the Western hemisphere. So they moved their people and their servers to Singapore and disconnected themselves from China, and that has proven very successful. The other interesting thing is that it's a small niche product that is not gigantic and did not have a huge impact on the AI world overall. The person who has summarized it best is Greg Eisenberg on X. I love following Greg.
He has amazing insights, and he wrote a post that said: so Manus AI sold for $1 to $2 billion to Meta; a few of my takes for people in Silicon Valley. I'm not going to read all of them to you, but I'm going to read some, because they're very important to the era we're in. Number one: Manus treated distribution as a first-class expense, spending heavily on creators to win attention early. Two: that spend worked because creators showed the product in use, not because they explained it. Three: the product was simple enough that a demo did the selling without narration. Four: Manus was the first platform to really own the category of "super agent". Gets you thinking: what AI word can you own? Number seven: Manus proved that owning the user relationship matters more than owning the underlying model. Number ten: the team spent more time thinking about what people will screenshot than what benchmarks they will win. Number thirteen: a lot of people in Silicon Valley never used Manus, and it didn't matter at all. And then, continuing in his comments, he wrote suggestions on what you should think about if you're developing the next Manus. I'm not going to read all of those either; I will put a link to the post in the show notes. But he wrote: ship something that creates a clear "wait, it can do that" moment in under 10 seconds. So, a quick summary of this acquisition. How Meta will use this technology is unclear to me. It is not fully aligned with anything they've done so far. It is a consumer tool that doesn't connect, in my mind, to how Meta works right now. So I'm really curious to see how they're going to use the capabilities, the people, and the IP they acquired in order to improve what has been a complete catastrophe when it comes to Meta's AI in the past year. So, big question mark on that angle. On the other angle, this is a new kind of company, right?
It is a company that focused on practical AI use cases and on showcasing them above anything else: letting people on X, on Instagram, and on LinkedIn show cool things they're doing with AI, more than focusing on benchmarks or features. The other thing is the wow factor. When I do training for companies, the first lecture I do, every single time, is just a complete storm of multiple practical business use cases that are doable today, right now, with the existing tools. And it blows people's minds, because people just don't know what's possible. If you can build a solution, whether it's a new company or just solutions inside your company with existing tools, that makes people think exactly that, "wait, it can do that", as Greg Eisenberg said, you will win a lot of followers. Interestingly, Greg Eisenberg had another post this week that kind of contradicts this whole concept, or maybe doesn't contradict it, but points out how dangerous it might be. He wrote a post about what he calls vibe revenue, and I'm quoting one segment of it: a lot of AI products get tried because they're the cool new thing. People sign up, poke around, feel the wow moment, tell a friend, maybe even pay for a month or two. Then real life kicks in, and the subscription quietly gets canceled three to six months later. I call this vibe revenue: money that comes from curiosity, novelty, or FOMO, rather than from the product becoming essential to someone's workflow. I must admit, I'm guilty of literally all the things Greg mentioned in this post. I commonly test and sign up for new tools, and sometimes I forget to cancel the subscription, and then my amazing assistant grabs it and tells me: hey, what about this thing? You haven't used it in three months?
And then we cancel the subscription. I'm sure I'm not the only person who does this, which means the revenue projections of many of these companies are highly inflated, because there's no stickiness and people do not really use them for long periods of time. That's obviously not true for all the new companies, but there are many companies in that boat, and that is going to take a very serious toll, especially on VCs and other investment channels, because many of these companies will go under; it is not sustainable. And now I'm going to switch to a real rapid fire. We're going to say a few words about every topic I think you need to know about, and I'm going to run through them very quickly, without any comments on what I think or feel about them, just to keep you up to date. And then I'm going to take an Uber and go to the airport. The first one is that OpenAI is actively recruiting a head of preparedness to lead the defense against potential catastrophic harms of using next-generation AI platforms. The new head of preparedness will officially own OpenAI's "preparedness strategy end to end"; that's an actual quote. According to the job listing, the role requires overseeing mitigation design across major risk domains, specifically citing cybersecurity and biosecurity as two key areas of focus. And the position holds significant authority over product deployment, with responsibilities including guiding the evaluation results and even stopping deployments if they feel it is too dangerous. The role requires, and I'm quoting, "clear, high-stakes technical judgment under uncertainty". No shit. And Sam Altman actually tweeted something about this job posting. He wrote, and I'm not reading all of it, but I'm quoting: the potential impact of models on mental health was something that we saw a preview of in 2025.
We are just now seeing models get so good at computer security that they are beginning to find critical vulnerabilities. We are entering a world where we need a more nuanced understanding and measurement of how those capabilities could be abused, and how we can limit those downsides, both in our products and in the world. What does that connect to? It connects to something I've said multiple times on the show, even today: the labs have way more advanced models than we have access to. If OpenAI is willing to pay over half a million dollars for a position, if you're interested, the salary is $555,000 plus equity in the company, it is because they're seeing stuff in their lab that makes them worried, seriously worried. They already have preparedness teams; this elevates that to a different level. And so I think we all need to be worried. I've said this multiple times before: I really hope this will grow into a much broader collaboration with academia, governments, and other players in the industry, and not just companies internally trying to prevent catastrophic things from happening. Another interesting piece of news from OpenAI this week is that they are shifting a lot of resources to an audio overhaul of their platform. This is seemingly in preparation for the release of their physical product, the device they are developing together with Jony Ive's team, expected either at the end of this year or, most likely, at the beginning of 2027. But they're working on building completely new infrastructure and voice capabilities that are supposed to be released maybe as soon as Q1 of 2026: more accurate, providing more in-depth answers, having more of a personality, and aligned with the most advanced text capabilities, which voice is lagging behind right now. And from OpenAI to Microsoft: Satya Nadella's latest manifesto declares 2026 a make-or-break year for artificial intelligence. He called it the big AI reset.
He is basically asserting that the industry must transition from the initial phase of discovery to a mature phase of widespread diffusion, where utility trumps novelty. And I see that with my own eyes, working with a huge variety of companies across different industries. There's a very serious push of "let's stop building cool stuff and build things that actually move the needle", things that connect to our day-to-day work and make it better, faster, or cheaper, or improve what we do and provide a growth path for companies. This is definitely the point, and going back to what I said at the beginning about the training that I provide: this is the time to do it. I cannot stress enough the sense of urgency, and all I can say is, if Satya is saying it, you need to pay attention as well. Satya, by the way, was channeling Steve Jobs, if you will, and stated a new concept that evolves "bicycle for the mind": that we should always think of AI as scaffolding for human potential rather than a substitute. So while he's saying that the idea is to build AI that provides scaffolding for human growth and extends human capabilities, there are serious rumors that Microsoft is about to lay off between 11,000 and 22,000 roles this year, potentially in Q1; that's five to ten percent of its global workforce. That is in addition to the roughly 15,000 people they laid off in 2025. Now, the communications lead at Microsoft said these rumors are completely wrong, but let's wait a few weeks or a couple of months and see whether they are or aren't. Right now these are only rumors, but they would align with the behavior we've seen from Microsoft in the past 12 months. On a different angle, and we mentioned world models for a quick second in this episode: Yann LeCun, who recently left Meta, was interviewed this week and vented a little bit, or maybe more than a little bit, about his previous employer. He said, and I'm quoting: you don't tell a researcher what to do.
You certainly don't tell a researcher like me what to do. He also shared that Meta's flagship Llama 4 model benchmarks were, and I'm quoting, "fudged a little bit" by the team, by using different models for different tests instead of the main model doing everything. That made Mark Zuckerberg furious, and it led to the whole change in strategy: bringing in the new team, firing a lot of people, and losing a lot of others, like Yann LeCun himself. But the most important aspect is that he is moving full speed ahead with his new company, called AMI Labs. He raised a lot of money at a really high valuation, which is not surprising; it was very obvious that if he left, that's what would happen. And he continues to claim, and again I'm quoting, that LLMs are "basically a dead end" when it comes to superintelligence. He has been a strong and loud believer in this for several years now, and he's saying that the only way to get to AGI and superintelligence is through world models, which is what his new company is going to focus on. Two interesting pieces of news when it comes to creating fake materials and sharing them for different purposes. One: the disinformation watchdog NewsGuard has identified just seven fabricated or misrepresented images and videos related to the Venezuela operation. These seven posts alone gathered more than 14 million views within 48 hours of being posted, and they were created or manipulated with AI. The other one, which is a really weird scenario: a DoorDash employee who was supposed to deliver a meal to somebody's house and document it by uploading a picture of the package next to the house, like they always do, fabricated the picture with AI. Meaning, instead of actually delivering the package and then taking a picture, he used AI to generate an image of the package next to the person's door without actually delivering the food.
The reason I'm combining these two stories is that this is something that has troubled me deeply since I started working more and more in AI, and it is becoming more and more critical. It's becoming more critical because it's becoming easier to do. The level of video and images you can create today is indistinguishable from reality. And we are in an election year, and perception is everything. Even if somebody catches on afterwards that something was fake, maybe not all the people who saw the original post will see the retraction or the correction. And perception is everything. It is very easy right now, and highly tempting, to change people's perceptions. And it is really scary to me, because it unravels the concept of trust within society. If you want to hear what I really think about it, go back all the way to episode 13, which was released, I believe, in May of 2023, two and a half years ago. That episode is called The Truth Is Dead: How AI Is Putting at Risk the Trust That Is the Fabric of Our Society. This was true back then when I recorded it, and it is a lot more true right now, and I really hope we will find a way to stop that, or at least slow it, or at least dramatically reduce the risk, because it is a very serious risk to all the mechanisms our society is built upon. And with that really positive note, I will let you know that there's a lot more news in the newsletter, so you can go and sign up for that. I will remind you one more time that the next cohort of the AI Business Transformation course is starting less than a week and a half from the time this podcast gets released. So if you haven't taken any proper training, go sign up and come join us on Tuesday the 20th. By the way, some people told me, well, it's a lot of money, or it's a lot of time. It's eight hours. It's a thousand dollars. And my answer to that is: you need to ask yourself something very simple.
Is the future of your career or the future of your business worth a thousand dollars and eight hours? If the answer is no, then that's fine; continue doing what you're doing right now. But if the answer is yes, because the future of your career and the future of your business is worth way more than a thousand dollars and eight hours, then you must take this course. Your future self will thank you. That is it for today. I'm going to run to the Uber and go to the airport. Keep on exploring AI, keep sharing what you find with me and with other people, and until next time, have an amazing rest of your week.