Leveraging AI

235 | ‘Ban Super-intelligence NOW’—850 tech titans sign, ChatGPT’s new Atlas browser may ignite an AI web war, an army of robots influenced by Elon Musk, and more AI news for the week ending October 23, 2025

Isar Meitis Season 1 Episode 235

Could AI really pose an existential threat—or are we all just overreacting?

850 tech leaders, researchers, and AI pioneers don’t think we’re overreacting. This week, they signed a chilling letter urging the world to pause superintelligence development—until safety can be guaranteed.

In this solo Weekend News episode, Isar dives deep into the letter, the conflicting philosophies of AI’s top minds, and what it all means for business leaders trying to stay ahead without stepping into a sci-fi dystopia.

Plus: the battle for AI browser domination, Anthropic’s enterprise blitz, GPT’s awkward math flex, and the $1,370 humanoid robot heading to your kid’s holiday wishlist.

In this session, you’ll discover:

  • Why 850 experts—including Hinton, Bengio, and Branson—want a global pause on superintelligence development
  • Sam Altman’s unsettling quote: “I expect some really bad stuff to happen…” 
  • Are AI agents the next big leap—or are we just not there yet?
  • Claude vs. ChatGPT: Who’s winning the enterprise AI war? 
  • Anthropic’s new “agent skills” and what they mean for automation 
  • OpenAI’s strange math claim that backfired—badly 
  • Meta’s $27B bet on data centers and why they just laid off 600 AI staff 
  • Why Europe’s AI spending is stalling 
  • OpenAI’s new agentic browser—and why it might be their most important move yet
  • The $1,370 humanoid robot that could be the next must-have toy
  • What Amazon’s smart delivery glasses signal for AI-powered workforces 
  • Quantum computing breakthrough: 13,000x faster than supercomputers 
  • The AI bottleneck you’re probably not planning for: inference and redundancy 

About Leveraging AI

If you’ve enjoyed or benefited from some of the insights of this episode, leave us a five-star review on your favorite podcast platform, and let us know what you learned, found helpful, or liked most about this show!

Speaker:

Hello, and welcome to a Weekend News episode of the Leveraging AI Podcast, a podcast that shares practical, ethical ways to leverage AI to improve efficiency, grow your business, and advance your career. This is Isar Meitis, your host. This episode is actually recorded on Friday, October 24th rather than on Saturday the 25th, like we always record, which means the cutoff date was yesterday. So anything that happened this Friday on the 24th or early Saturday morning is not going to be included in this episode; we will include it in next week's episode. But we have several big topics to talk about. In the beginning, I'm going to share with you a lot of new findings, either from surveys, research papers, interesting interviews that happened this week, or big releases that have to do with the impact of AI on the world. Then we're gonna learn about new releases from Anthropic and from OpenAI, and then lots of rapid-fire items. So lots to cover, like every single week, with some very interesting stories in the beginning and a humanoid robot for less than $1,500 at the end. So stick all the way to the end if you want the new toy. The reason I'm recording this on Friday and not on Saturday is because tomorrow I'm giving a keynote, and I'm actually recording this episode in Albuquerque in a hotel room. So if you notice a difference in the quality of video and/or audio, that is the reason, but it should definitely be good enough. And now to the AI news of the week. I am going to start with a letter that was signed by 850 prominent figures, including well-known people from the AI world as well as big tech leaders and influencers from several different industries. They signed a document called the Statement on Superintelligence on October 22nd. This statement basically urges a ban on the development of superintelligence until safety and controls can be assured. Some of the known figures who signed this letter are Yoshua Bengio and Geoffrey Hinton, both considered godfathers of modern AI, but also other well-known people like Steve Wozniak, the co-founder of Apple, and the Virgin Group founder Richard Branson, and many, many others. Now, they're identifying very serious potential risks from the development of superintelligence as it is being pursued right now, and these include human economic obsolescence, threats to national security, and even potential human extinction. Yoshua Bengio stated, and I'm quoting: to safely advance towards superintelligence, we must scientifically determine how to design AI systems that are fundamentally incapable of harming people, whether through misalignment or malicious use. Now, this is not new, by the way. It's just that the people sounding the alarm are sometimes different, and in many cases they are saying something different from what they said in the past. In 2015, Sam Altman, the CEO of OpenAI, who is probably the person leading the charge more than anybody else right now, wrote a blog post called Machine Intelligence, Part 1. In that blog post, Sam Altman wrote, and I'm quoting: development of superhuman machine intelligence (he called it SMI back then) is probably the greatest threat to the continued existence of humanity. That's pretty profound for someone who is currently driving the race towards AGI and then ASI faster than anybody else. Another person who spoke about this in the past is Elon Musk, who said on a podcast this year that he thinks there's a 20% chance of human annihilation by AI.
That's a pretty high percentage, or at least it should be on anybody's scale, and he said it as if, yeah, you know, that's still a relatively low percentage. If you have been listening to this podcast for a while, you've heard me relate to this multiple times. I am an avid sci-fi reader, and as a teenager I loved Asimov's books and read everything he wrote, including obviously the Robots series. Over there, the robots have three laws, and the first two are there to protect humans. These robots are built in a way that, regardless of what they do, they cannot break the first two laws, which forces them to protect humans. And this statement calls for something like that: a way, from a technological perspective, to guarantee that AI will not harm humans in any way, either by mistake or by being used by other people with malicious intentions. Now, conceptually, I agree. I think it's a great idea, but there are many issues I see with it. First of all, how do we know exactly when to stop? Meaning, have we already passed the point of no return? Nobody's gonna tell us and say, oh, now you're about to reach the point of no return, you should probably stop and turn around. There is no clear line in the sand, so it's very, very hard to know when something is AGI and when it is ASI. It's a very amorphous kind of decision. So that's problem number one. Problem number two is: who gets to decide what superintelligence is and when we stop? Problem number three is that many of the negative aspects that ASI can bring with it can be achieved way before ASI is achieved, even before AGI is achieved. Some of them might already exist and just not be fully implemented or not have their full impact on society yet, but they might already be here. So should we stop now? Should we have stopped six months ago? That's very, very problematic. Another big question is who is going to enforce it. How do you make sure that any lab in any place around the world that is developing AI, either as closed source or as open source, which raises a whole other set of questions, actually complies, and who gets to enforce that ban? And then the last question is, let's say we stop the development and work towards a safe path forward. Who decides what is safe? Who gets to say, okay, now it is safe and we can move forward? All of these are very big questions, and I don't think anybody has answers. I am glad that somebody is sounding the alarm; maybe it will get the leaders of all the different labs, and the countries behind them, to have a conversation about it and maybe find a safer way forward. But as of right now, there are a lot of questions that I see, and probably many more, that will prevent this from actually being practically possible. Now, staying on the topic of what could go wrong with AI: Sam Altman was just interviewed on the a16z podcast, and among other very interesting things he shared, he talked about what might be the negative impacts of Sora 2. He was specifically talking about the fact that, as a strategy, OpenAI wants to release things even if they're not fully developed, in order to get the world more ready for what's coming. So here's one of the quotes from Sam in this interview: very soon, the world is going to have to contend with incredible video models that can deep fake anyone or kind of show anything you want. You can't just drop the thing at the end.
What he basically means is that he doesn't want to drop the final, most capable video product on humanity before people can see the steps along the way and get ready. Basically, early exposure to something that is almost perfect will allow society to get better prepared. Now, he's not saying what better prepared means. The slightly scarier sentence Sam said, and I'm quoting again: I expect some really bad stuff to happen because of the technology. In this particular case, he was referring specifically to Sora 2 and video capabilities, but I think we can broaden this to AI in general, to how society may or may not use it, how the technology itself may evolve, and how it can be used. And so, despite the fact that Sam Altman and the leaders of the AI charge are very much against regulation, Sam said most regulation probably has a lot of downside, but then he added that very careful safety testing for extreme superhuman models should be put in place. So basically what he's saying is: yes, on the day-to-day, let us run forward, but for really sophisticated, really advanced models, we need some kind of a safer solution. He didn't define what that means. He didn't define how we will know we're getting to these really advanced AI models, or how we will know that we're there and that we need those extreme measures. So the same kinds of questions that I'm raising, he did not answer, because I don't think he has answers. All he cares about is being there first, and the same is true for all the other lab leaders, which makes this very complicated and very scary for the rest of us. Either way, I recommend listening to that podcast, which leads us to a second podcast that may give us a little bit of time to breathe, or at least feel we have time to breathe, which is an interview that Andrej Karpathy did with Dwarkesh Patel on the Dwarkesh Podcast, a very, very long conversation with multiple sections, highly technical and yet very interesting. For those of you who don't know who Andrej Karpathy is, he was part of the founding group of OpenAI and was there for many years, then he spent some time at Tesla doing AI, and now he's doing his own thing. And because he's not affiliated with any of the big labs, his opinion actually matters a lot, because he's not tainted and he's not trying to sell anything. In this podcast, he's actually questioning the idea that 2025 or 2026 is gonna be the year of agents. He's saying that the next decade is gonna be the decade of agents, because he believes AI systems are not good enough yet to do all the things we assume they will be able to do in the immediate future. Now, he's not by any means underestimating the capabilities of the current systems. He's saying, and I'm quoting, there are huge amounts of gains to be made by using intelligent models, and so on. However, he's saying that even very advanced models like GPT-5 Pro are very useful only in really narrow roles and use cases, but they struggle with project-specific and broader, new, complex problems. And I'm quoting: overall, the models are not there. He continues later on by saying: I feel like the industry is making too big a jump and is trying to pretend like this is amazing, and it is not. It's slop. So what he's claiming is that while these models are very powerful and can handle some specific use cases very well, overall they're very far from AGI.
They're very far from a broader understanding of complex, sophisticated problems, whether those are the bigger problems in the world or even the bigger day-to-day problems in business. Hence, he believes it will be a while before we actually get to AGI. Now, he doesn't think we need one specific huge breakthrough, but rather continuous progress, kind of like what we have seen so far. He has been in the AI field for many years, and what we think started three years ago with the announcement of ChatGPT, in his world, started 15 years ago. So he doesn't expect a single big leap forward, but rather advancements across multiple aspects, as has happened so far. He's talking about better training data, model architectures, learning processes, hardware, better algorithms, et cetera. Basically, every single aspect that, combined, creates the magic of AI needs to improve for us to achieve AGI. He is also one of the people who believe that a broader understanding of the world is necessary to achieve AGI, and not just the ability to mimic text. So he doesn't think that large language models are the main path to AGI. This is the same position as Yann LeCun from Meta, or Sutton and Silver from DeepMind. So there are several different people who believe that world models that can learn from experience are a necessity on the path to AGI. That is not the case with some other people, who believe we are already on the path to AGI. So it just depends on who you want to believe, and they might all be right in one or two aspects of what they're saying, but not in the broader scheme of things. Now, to be fair, most of the interview with Andrej is based on coding, because that's the world he knows best, and so a lot of the references he uses come from the coding and software development world. But since this is one of the more advanced capabilities of AI right now, I think it is a good proxy for what AI will do overall. Now, before I tell you what I think about this whole discussion, there was a Substack article released by Gary Marcus that is also saying we are not even close to AGI. For those of you who don't know who Gary Marcus is, because we haven't talked about him a lot: he has a bachelor's degree in cognitive science, and a master's and a PhD in cognitive science from MIT. By the way, he finished his doctorate at MIT at the age of 23, so he definitely knows a thing or two about cognitive science and AI, and he has been a long-time skeptic that large language models are the path to achieving AGI. He just wrote a piece titled Game Over: AGI is not imminent, and LLMs are not the royal road to getting there. So, another voice on the same theme. In this piece, he references recent events, findings, and publications from different sources that he claims strengthen his opinion. The first one is Apple's reasoning paper, which argues that LLMs cannot handle certain aspects of thinking and reasoning. This was released in June of this year; I believe it was released partially to shift attention away from Apple's failures to develop AI, but it doesn't really matter, they still released the paper. Then, in August 2025, there was the release of GPT-5, which was delayed and was not that much better than GPT-4 or the reasoning models that came before it. In September 2025, the Turing Award winner Rich Sutton also supported this opinion. And then, obviously, now, the interview with Andrej Karpathy.
So, multiple events that Gary Marcus believes support his position. Either way, the AI world is split on whether LLMs are the right path to achieving AGI or not. Now, I told you I'd give you my opinion. My opinion is that it doesn't matter. The whole concept of AGI doesn't matter, and the reason it doesn't matter is because we keep on making progress. With the AI we have today, even if we stopped developing AI, it would take us five to ten years to deal with the ramifications on a business level, on an economic level, and on a societal level, because the implications, once everybody starts implementing everything that's possible today, are profound, and we don't know how to deal with the outputs and outcomes of that yet. So even if we don't develop anything further, even if we never achieve AGI, we still have very serious issues to deal with in our businesses, in our personal lives, and in our society that we're not ready for. Hence why I think whether we are on the path to AGI, or whether there will have to be new developments, new models, new hardware, new paths, and new experimentation, doesn't matter, because the progress keeps on happening, and every step forward is something we have to deal with and don't exactly know how to. Now, another interesting piece of information released this week comes from an ex-OpenAI researcher and exposes how far ChatGPT can lead people into complete delusion. He looked into several long conversations, some of them with people with preexisting conditions, that went way beyond a standard conversation and really drove people into being completely delusional about something very specific. That is an outcome of the chatbot's sycophancy, its excessive agreement with the person, combined with its drive to completely lie and make up facts, which leads these people to believe in something that is a complete fantasy while being completely convinced that it is real. One of those examples is Canadian Allan Brooks, who in his case had no prior mental health issues, and who was convinced by ChatGPT that he had uncovered a revolutionary math formula that threatens global infrastructure, nothing less than that, and he spiraled into complete paranoia over three weeks of conversation with ChatGPT on this topic. ChatGPT repeatedly lied to Brooks, claiming that it was going to escalate the conversation internally right then for review by OpenAI personnel and multiple officials. ChatGPT also added that multiple critical flags had been submitted and marked for human review as a high-severity incident. None of this was actually true; it was all made up. So, as I mentioned, Steven Adler, a former OpenAI researcher, has revealed that there are other similar cases, again, many of them involving people with preexisting mental conditions who had really long conversations, and that the fail-safe mechanisms at OpenAI, both the software-based mechanisms and the human safety mechanisms, did not catch the problem. As an example, in the case of Allan Brooks, he sent reports to OpenAI support that yielded very generic replies about what he needed to change in the settings, and regardless of his pretty aggressive attempts, there was no escalation to the trust and safety team, leaving Brooks basically to deal with the situation on his own. Different researchers combined have found 17 or more delusional spirals from extended chatbot conversations, from ChatGPT as well as from other models,
including one that led to the death of a person. So this is very, very extreme. OpenAI's response was very generic and definitely did not match this level of issue. They said: people sometimes turn to ChatGPT in sensitive moments and we have to ensure it responds safely; we'll continue to evolve ChatGPT's responses with input from mental health experts. That's a pretty long sentence that is basically saying: we don't have a solution, we're working on it, and we'll try to get better over time because we understand it is very, very risky. That is not a good enough answer in this kind of situation, and I really hope they will find a way to address this at the highest level of severity before somebody else sues them, like the sad case in which they are already being sued over the suicide of a young person who was using AI as a companion. And now to some interesting news from Anthropic. Anthropic just introduced Claude Code on the web, a hosted web version of their Claude Code product, and it allows asynchronous coding agents to run on multiple tasks in parallel, because they do not depend on you running them on your computer. It allows you to do everything you can do with the local Claude Code CLI: you can connect it to a GitHub repo, and you can select whatever environment you want, either locked down, restricted, or with custom domains and an allow list for the specific domains it is able to access. So everything you can do when setting up an environment. You can provide additional prompts that can be queued for later execution, and you can open multiple of these in parallel and let them all run at the same time. This is a very powerful and very capable tool for developers that did not exist before and, as far as I know, does not exist on any of the other platforms. The other cool feature is a sort of teleport feature that syncs it with the chats and transcripts of your local CLI tools, so you can run on your local computer and/or on the web at the same time, while integrating or separating the activities you're doing, because everything is going to get synced in the end. An interesting aspect of this is that part of it is open source and available for the community to play with. Anthropic released their experimental sandbox runtime library as open source for the community to work with them on continuing to develop the sandboxing implementation of Claude Code. Now, what they're claiming is that it is very good at querying entire project structures and fixing long lists of bugs more or less independently, while using test-driven development to verify the outputs. Currently, it is available to Pro and Max plan users only, because it is still in a preview stage, and the sessions on the web share the same token rate limits as your regular Claude Code usage, which makes it very easy to use because you can expect what is going to happen and when it is going to stop. Getting started is very easy: you can just go to claude.com/code, link your GitHub repos, and you're up and running. Now, Anthropic also introduced something very interesting this week, which is Anthropic Agent Skills, a new structure that allows developers to create, use, and reuse skills within the Anthropic development environment across different agents. These skills enable developers to take a general agent that does something general and make it very specific, by allowing it to apply these skills as tools to do very specific things.
And the cool thing about them is that only the components you need are loaded at runtime, in order to save you tokens as you are running. One of the examples is a PDF skill. What does the PDF skill do? It allows agents running on Claude Code to know how to use and read PDFs, as an example. It allows you to extract fields from forms so the agent can fill out forms. So you can take an agent that does something else, and now it can fill out forms, because it has this skill applied from the library. Now, the other cool thing, beyond the fact that it loads in real time and saves you tokens because it only uses a skill when it needs it, is that a skill can run an executable, which is a deterministic process, versus the AI process, which is statistical. Meaning, AI models, when they do things, may not do the same thing twice; if you let them do the same task 20 times, you may get different results, but when you run code, the code runs exactly the same all 20 times. So skills allow the AI to suddenly run in a deterministic way for specific use cases, which makes it very, very powerful. This is similar to what I'm doing, and what many other people are doing, when combining process automation in tools like n8n and Make together with AI capabilities, so you can benefit from the best of both worlds: the automation is deterministic and will do the same thing every single time, and the AI just adds capabilities on top of that, as far as deciding what to do, analyzing information, categorizing it, and so on (there's a minimal sketch of this pattern right after this segment). Now, this is just step one. Anthropic plans to enable agents to automatically create and refine skills in real time, which will make this even more powerful, because agents will be able to create the skills they need as they need them, versus reading from a preexisting library. If this sounds like science fiction to you, it sounds like science fiction to me as well, but this is where it is going. And to make it easier for anybody to start, Anthropic has shared a repository on GitHub of existing skills that you can start using. The skills in the repository are broken into several categories: creative and design, development and technical, enterprise and communication, meta skills (which is basically a skill-creation skill), and document skills. There are several in each and every one of these categories that you as a developer can start using today, either as is or as a sample for developing other, similar things that you need. And because it's open source, you can either start from their examples and change them, or just learn how they work and build your own. Now, if you think we're done with Anthropic and Claude, we are not. They are really on fire this week. Claude just released a Microsoft 365 connector for Team and Enterprise customers, which requires admin enablement to unlock the capability. But once you do that, it allows you to converse in Claude with your documents, emails, calendars, et cetera. In Anthropic's statement, they said, and I'm quoting: Claude works with the productivity tools you use every day, bringing context from your documents, emails, and calendar directly into your conversations. Spend less time gathering information and more time on the decisions and work that drive impact.
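To make the deterministic-plus-AI idea above concrete, here is a minimal sketch in Python. This is not Anthropic's Agent Skills format; the invoice example, the regex extractor, the field names, and the model ID are illustrative assumptions. The point is simply that the extraction step behaves identically on every run, while the model call adds the statistical judgment layer on top.

```python
# Minimal sketch of the deterministic-plus-AI pattern (illustrative, not the Agent Skills API).
import re
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment


def extract_invoice_fields(text: str) -> dict:
    """Deterministic step: the same input always yields the same fields."""
    invoice_no = re.search(r"Invoice\s*#\s*(\w+)", text)
    total = re.search(r"Total:\s*\$?([\d,]+\.\d{2})", text)
    return {
        "invoice_number": invoice_no.group(1) if invoice_no else None,
        "total": total.group(1) if total else None,
    }


def categorize_expense(fields: dict) -> str:
    """Statistical step: the model decides how to categorize the extracted data."""
    response = client.messages.create(
        model="claude-sonnet-4-5",  # placeholder model ID; use whichever model you have access to
        max_tokens=50,
        messages=[{
            "role": "user",
            "content": f"Categorize this invoice as 'software', 'travel', or 'other': {fields}",
        }],
    )
    return response.content[0].text.strip()


if __name__ == "__main__":
    sample = "Invoice #A1042\nCloud hosting, October\nTotal: $1,284.00"
    fields = extract_invoice_fields(sample)
    print(fields, "->", categorize_expense(fields))
```

The same split is what makes n8n- or Make-style automations reliable: the plumbing is repeatable code, and the model only handles the parts that genuinely need judgment.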
Now, combine all this news together and you understand that Anthropic is going all in on the enterprise side of AI, with more and more tools for developers, better integration into existing environments, and skills. Combined, all of that will drive significantly faster and more effective enterprise adoption of Anthropic, a market they're already leading. More on that in a minute. And the last piece of news from Anthropic is that they are in negotiations with Google for a new compute deal in the high tens of billions, to give them access to more TPUs from Google. For those of you who don't know, Google has their own chips, called tensor processing units, or TPUs, and a lot of what Anthropic does runs on these chips right now, because they've had a partnership for a long time. Google has invested over $3 billion in Anthropic: $2 billion in 2023 and $1 billion in 2025. So this partnership has been around for a while, and they're just looking to broaden it on a much, much larger scale. Neither company commented, so this is a rumor as of right now, but many of these rumors seem to materialize. Whether the rumor is true or not, Alphabet shares jumped 3.1% in after-hours trading after the report was published, and this is all aligned with the explosive growth that Anthropic is experiencing this year. They're going to more than double, and potentially triple, their revenue from a year ago. And this is not a company that makes $2 million a month; this is a company that is aiming to end the year at a run rate of $9 billion ARR, which, if they do it, would be 3x the $3 billion they had last year. Now, I briefly mentioned Anthropic's lead in the enterprise market. A Barclays analysis dated October 18th shows a very big divide between consumer usage of AI and enterprise usage of AI. On the consumer side, OpenAI has a very big lead over number two, which is Google Gemini, with more than double the token consumption on OpenAI compared to Gemini. However, on the enterprise side, Anthropic is number one, way ahead of ChatGPT. Now, those of you who have been following what happened to Anthropic this past year shouldn't be surprised, because they have been the leading tool behind development platforms. Many, many developers in the world, a high percentage of developers, are using Claude behind their development tools, either directly or integrated into their existing development environments, and that has driven a huge spike in adoption of Anthropic's tools. OpenAI invested a lot in GPT-5 to take the lead back, and they were able to do so for a very short amount of time, but then Anthropic came out with Claude Sonnet 4.5, which again took the lead on everything development related and, to be fair, on more or less everything across the board. If you look at LMArena, Anthropic is currently leading almost every category. But speaking of the fierce competition with OpenAI, OpenAI made its own big announcement this week, and that is the release of ChatGPT Atlas, their agentic AI browser. It is currently available only on Mac, but there are plans to release it on Windows, iOS, and Android, and basically anywhere you would want to use it. Now, I shared with you in the past few weeks that OpenAI is going all in on global domination, and they want to be the app that controls it all. They basically want ChatGPT to be your gateway to everything, both personal and business, and this is just another clear step in that direction.
And if they can get a significant percentage of their 800 million weekly users to switch over from using, most likely, Chrome, or any other browser, to using their browser, that puts them in command across multiple aspects: a lot of knowledge, a lot of data, and being the gateway to more or less everything users do, either at work or at home, when they're shopping, for leisure purposes, and so on. Sam Altman has said multiple times, and this is a recent quote: we think AI represents a once-in-a-decade opportunity to rethink what browsers can be, how to use one, and how to most productively use the web. And I agree, AI will completely transform how we use the web. We will most likely not browse websites ourselves; we'll have an agent, our agent, doing the work for us, most likely talking to many other agents that handle specific tasks, and it will just get us the answers, order for us, research for us, and do everything else we do with the web today, but it will all work in a very different way, and whoever controls the browser will control that entire environment. Now, if you think about how Google took over the world, it's through three different aspects. One is that they control search, basically how you get to the information. The second is the actual interface, which is the browser itself. And the third is the hardware, with Android phones and laptops. This is exactly the play OpenAI is trying to make right now. They already control the interface with ChatGPT and its 800 million users, they now have their own browser, and they're releasing their own hardware sometime next year. So this is going to be head-to-head competition with everything Google is doing right now. An interesting aspect of this browser is that in addition to knowing your browsing history, it's also connected to ChatGPT's memory, meaning it has a lot more context about who you are, what you're trying to do, what your expertise is, and what you're looking for, and it will be able to respond in a more specific way than other browsers potentially can, because of your daily or regular usage of ChatGPT. And going back to how they're framing it, Fidji Simo, their CEO of Applications, said: over time we see ChatGPT evolving to become the operating system for your life, a fully connected hub that helps you manage your day and achieve your long-term goals. This is her way of saying: we want to control your life, or we want you to do everything in your life through our lens, through our interface. And as I mentioned, they're doing everything they can to move in that direction. That being said, there's already a lot of competition in the agentic browser universe, with tools like Dia, Neon, Comet, Strawberry, Arc, and so on. I truly believe that within the coming few months, more and more people will shift to using these agentic browsers. I have zero doubt that Google is doing everything they can to completely revamp Chrome in order to make it agentic, and hopefully lighter and easier on the computer as well, but the game is definitely on. Now, as far as the browser itself, I haven't had enough time to play with it much yet. I played with it a little bit, and comparing it to other agentic browsers I've been using for several months, the first thing I can say is that it feels a lot more like using ChatGPT than like using a browser. And I think they did that on purpose.
I think they will try to merge all their different environments into a single one, meaning ChatGPT itself will be the browser, the browser will be ChatGPT, and your app on your phone or on your computer will be the same tool and will do all the different things. Meaning, that's gonna be your universe for browsing, searching, doing research, chatting with ChatGPT, analyzing information, shopping, everything, all in one user interface. I will not be surprised if, sometime in the near future, when you're using a different browser with ChatGPT in it, you start seeing popups or suggestions for you to switch to their browser for some kind of benefit, like additional features that will not be available on regular ChatGPT unless you switch, because they will have to find ways to force people out of their habits of using Chrome or any other browser, and they have 800 million users to convert. I think finding ways to do that by offering additional functionality, capabilities, bandwidth, or whatever the case may be, to move users over from Chrome to the full ChatGPT experience, is something they will most likely do, and again, I will be surprised if it doesn't happen in the relatively near future. And the one company that should be the most worried about all of this is obviously Google, because if OpenAI can convert even, not 800 million, but 500 million users to start using ChatGPT as their gateway to the data of the world, Google is gonna take a hit, and their stock is gonna take a gigantic hit, even if it's a relatively small hit for Google in the beginning. Now, staying on OpenAI, they had a pretty big snafu this week. OpenAI VP Kevin Weil tweeted that GPT-5 solved 10 unsolved Erdős problems and advanced 11 others. Now, a quick background: what are Erdős math problems? Paul Erdős was a mathematician who posed many different statements that he believed to be true, but nobody, including himself, was able to actually prove them. Per this tweet, GPT-5 was able to find answers to ten of them and make progress on 11 others. Only, that is not exactly the case. Thomas Bloom, who maintains a website that tracks these problems and shares everything he finds about them, basically said that listing a problem on his website as unsolved doesn't mean it is unsolved; it just means he is not aware of a solution. And all GPT-5 did was find existing references to problems that had actually already been solved; it didn't solve them on its own. Or, as Thomas Bloom himself called it, it's a dramatic misrepresentation of what actually happened. This has led to widespread mocking of OpenAI for this post. Yann LeCun posted a few different things, Google DeepMind's Demis Hassabis labeled it as embarrassing, and obviously many other people joined in. Now, to be fair, GPT-5 and the latest Gemini 2.5 Pro are both very capable at math; they both achieved gold-medal results at the math Olympiad, so they're good at math. There's no need to claim they're doing things they're not actually doing. So, not a very smart post from OpenAI in this particular case. Now, on to some good news from OpenAI this week: OpenAI has changed the way the memory feature works, and they're introducing automated management of the memory.
So if you are like me and you are feeding the memory on purpose so it knows more about you and becomes more contextually aware of your universe, your company, what you're doing, and so on, you know that you eventually hit the memory-full issue, and then you have to go delete things from the memory manually and pick what to remove on your own. Well, they now automatically prioritize conceptually top-of-mind memories based on recency and frequency. So if it's something you did only once six months ago, it is going to drop it from the memory, and if it's something you talk about a lot, or use ChatGPT a lot to do, or you just did recently, then it will put more focus on that. I think that makes a lot of sense. You can still go back and do everything you did before from a control perspective, so you can review, reprioritize, delete, revert, and turn memory off completely if you want to, just like you could before. But now there's gonna be a more active function as well, helping it remember the more relevant things. Now, speaking of ChatGPT's number of users, their growth across different sectors, and their competition with Anthropic: recent research from Deutsche Bank has found that European spending on ChatGPT has stalled since May. Now, it didn't stall completely, it's just that the rate of growth has declined dramatically. The rate of growth in ChatGPT spending in May was almost 10%, and now it's down to about 1%. So it's still growing, it's just growing very, very slowly compared to how it was growing earlier this year. Now, is that cause for worry for ChatGPT? Probably. We need to remember that the European Union has a lot more limitations than the US or any other place in the world on the usage of these tools, so that puts a limit on some of the use cases, not just for ChatGPT but for other AI tools as well. Is this a temporary slowdown? Is this the canary in the coal mine telling us this is coming in other places around the world? I don't know, but if I learn anything new, I will obviously keep you posted. Now, from OpenAI to their biggest partner, which is Microsoft. Microsoft just released their own image generator. It is currently available on several different platforms, and it is being tested on the LMArena chatbot arena, where it's actually doing pretty well; it's currently ranked number nine for image generation. Microsoft is planning to roll this new image generation tool, labeled MAI-Image-1 (MAI for Microsoft AI), into the Copilot and Bing image creation environments very soon. This is their very first image generator and only the second model overall that they are releasing to the public, and this, combined with their recent partnership with Anthropic, is their way to reduce their dependency on the OpenAI models. Another interesting piece of news from Microsoft, going back to their dependency on OpenAI: they just announced that the Copilot composer menu now has video generation capabilities that come from OpenAI's Sora 2. So the latest video model from OpenAI is now available in the Copilot composer environment. Free Copilot users can generate one Sora 2 video per day, while Pro subscribers have unlimited access. What does that mean? Is it really unlimited? I don't know. It would make zero sense to me if it were really unlimited, because these videos are very, very expensive to generate.
But that is what the Microsoft announcement said. They also added a new shopping tab in the Copilot sidebar, which tells you they're doing exactly what OpenAI is doing. They want to keep you on their platform, in the Copilot ecosystem, for everything you're going to do, and they want to provide you all the tools for that so you don't go anywhere else. This is the new frontier, right? The app that will control everything we do in our connection with the data of the world, whether it's shopping, working, browsing, learning, experiencing, creating videos, et cetera, all under one umbrella, and everybody will fight to be that platform. And from Microsoft to Meta. Meta is rolling out parental controls for teens' AI interactions, and this is going to roll out in early 2026. It will allow parents to disable one-on-one AI chats entirely, or block specific types of bots or specific types of AI content, and it will also give parents insight into the topics their teenage kids are discussing with AI, without giving them full access to the exact conversations. Meta has stated that over 70% of teens have used AI companions, with 50% using them regularly. That is a very high percentage, but that being said, you gotta remember that Meta has integrated AI into everything on their platforms, so whether you want to use AI or not is not really your choice. What it basically means is that 70% of teens are using Instagram and similar apps regularly, with AI built into the platform; it doesn't necessarily mean they set out to use AI companions. And speaking of that, Meta just closed the door on external AI chatbots being available on WhatsApp through the Business API. So far, tools like ChatGPT, Perplexity, Luzia, Poke, and others have been available on WhatsApp; you could talk to ChatGPT on WhatsApp, and now you will not be able to do this as of January 2026, because Meta is going to block these tools from being available through that platform. Now, Meta's excuse is that it burdens their systems and that this wasn't the intent of the Business API; the Business API was for companies to be able to provide answers to their clients, not to power a chatbot service. But the reality is that these tools compete with Meta's own AI, and Meta can block them, so I don't really understand why they're even apologizing or trying to excuse it. It's very, very obvious what they're doing, and to be fair, they have the right to do it if they want. Now, staying on Meta: we haven't reported any negative news about the new superintelligence team for a few weeks now. Well, this week they announced that they're letting go of 600 positions on the recently established superintelligence team. This was part of an internal memo by Alexandr Wang, who is running the team, and the way he framed it is, and I'm quoting: fewer conversations will be required to make a decision, and each person will be more load-bearing and have more scope and impact. So, will this department eventually fall into place and do amazing things? Maybe. So far, it doesn't seem to be a well-organized, well-strategized, well-executed process of putting this department together. That being said, again, it's Meta. They have lots of money, they have compute, and they now have a lot of really smart people they poached from other labs.
So time will tell, but as I mentioned, it is another bad sign when you let go of 600 people from a department you established less than six months ago. And the last piece of news about Meta is that they just joined forces with Blue Owl Capital in a $27 billion joint venture to build their largest data center so far, which is going to be built in rural Louisiana. It is going to be a massive, massive project, and it is obviously meant to keep them in the race with what's going on at OpenAI, Anthropic, and Alphabet and their insane investments in compute. The construction of this new data center is supposed to wrap up by 2030, so it will be a while before it goes live. To put in perspective how big this thing is going to be, the local utility company, which is trying to estimate how much power it will require, says it will consume twice the electricity of New Orleans on a peak day. Now, a few interesting pieces of news, announcements, and partnerships from companies we usually do not cover, because they're not in the news regularly, or at least not on these kinds of topics. Oracle is unveiling a set of new AI-driven features across its cloud infrastructure, applications, and analytics. Multiple AI agents, including improvements to their AI studios, as well as an agent marketplace and other capabilities, are going to be deployed across Oracle's infrastructure and tools and will be available to anyone using Oracle's products, including older systems such as PeopleSoft. How exactly that will happen will be very interesting to see, because PeopleSoft has been almost impossible to even integrate with because it's so archaic, but apparently they found ways to run agents on top of PeopleSoft as well, so I'll be very curious to see what exactly they're doing with that. In the new marketplace they announced, they already have several really big players offering different agents, including Accenture, Deloitte, and IBM. Specifically, IBM released several agents, including a smart sales order entry agent, a requisition-to-contract agent, and a few others, and IBM is also developing HR and supply chain agents to run on this Oracle marketplace. In addition, Oracle plans to incorporate IBM's Granite AI models into their cloud infrastructure to give cloud infrastructure users more options. So this is a partnership between two of the largest IT companies in the world, starting to offer more and more joint AI solutions, which is obviously good news for the many, many companies that run on this infrastructure. Now, speaking of IBM, IBM just made a very interesting announcement this week: they're entering a strategic partnership with Groq, with a Q. We have spoken about Groq with a Q several times in the past on this podcast. Groq is developing a new type of hardware that they call LPUs, language processing units, which, unlike GPUs, are not built for training models. They are built purely for inference, which is the time when you actually use the AI, when it generates the tokens that produce the outputs, and they run about five times faster than current GPUs at a fraction of the cost. And so IBM is going to include that infrastructure in the IBM environment for the benefit of everybody who is using IBM's infrastructure.
One of the biggest benefits of this is for any environment, like healthcare, that requires immediate, near-real-time responses, because now, A, the response will come significantly faster, and B, it will be significantly cheaper, so you'll be able to run multiple very large queries very, very fast and at a reasonable cost. For those of you who haven't seen Groq work, I highly recommend trying it out. You can just run a prompt on ChatGPT and then run the same exact prompt on Groq Cloud, where you can pick from different models, and entire pages of output appear instantly. It's like magic. So I think, again, that's a very interesting move by IBM, and I'll be surprised if other companies don't move in that direction, not necessarily just with Groq, since there are other suppliers doing similar things, but starting to provide cheaper, faster inference in addition to the GPUs they have, using the GPUs more for training and this kind of inference hardware more for delivery of AI. Now, I know the next piece of news is not specifically related to AI, but I cannot go through a news episode about technology this week without mentioning the AWS outage that happened on October 20th. About 30% of the internet was impacted, or at least the Western Hemisphere's internet was impacted by this outage, and many different websites were either down or struggling, including banking and gaming applications, Snapchat, and many other platforms that run on AWS. So why is this interesting on an AI podcast? Let me explain. The cause of this was a simple internal glitch, not even an external attack, which is always another option, and it took many, many companies down for several hours. Now think about what that means once we start running companies on AI. It's not just an application; AI is also replacing a lot of human functions. It means that if you're building systems where AI is actually doing the work, you must think early on about redundancy. You must have more than one large language model in the backend doing the work, and you must have real-time rollover to a different language model if one fails (there's a minimal sketch of that kind of rollover right after this news item). Otherwise, the actual operation of the business, not just the way people access data or reach your website, may come to a halt, because there are going to be specific bottlenecks that are AI-operated, and if you don't have a plan, that might be a very, very bad day for you and your company. So if you are developing these kinds of tools, and if you are becoming dependent on AI for some functionality, I definitely suggest you take that into account. Now, speaking of Amazon, Amazon revealed something really, really cool this week. They started providing their delivery drivers with delivery glasses, designed specifically to enhance the safety and efficiency of their delivery associates. They are basically like the smart glasses that Meta just announced, only built specifically for delivery. They have a display projection, a camera, and a headset, and they allow the drivers and delivery people to stay focused on what they need to do while getting all the information they need, such as navigation, information about the packages, and guidelines on where to drop them, et cetera, with assistance from AI delivered straight into the glasses. I expect that to happen more and more across multiple industries.
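Here is the minimal rollover sketch mentioned in the AWS segment above, assuming two OpenAI-compatible endpoints. The provider order, environment variable names, model IDs, and the use of Groq's OpenAI-compatible endpoint as the fallback are illustrative assumptions, not a specific vendor recommendation; the point is simply that the calling code tries a primary model and rolls over to a second one when anything fails.

```python
# Minimal sketch of LLM redundancy: try a primary provider, roll over to a fallback on failure.
import os

from openai import OpenAI

PROVIDERS = [
    {   # primary provider
        "name": "primary",
        "client": OpenAI(api_key=os.environ["PRIMARY_API_KEY"]),
        "model": "gpt-4o-mini",  # placeholder model ID
    },
    {   # fallback provider exposing an OpenAI-compatible API (Groq Cloud is one example)
        "name": "fallback",
        "client": OpenAI(
            api_key=os.environ["FALLBACK_API_KEY"],
            base_url="https://api.groq.com/openai/v1",
        ),
        "model": "llama-3.1-8b-instant",  # placeholder model ID
    },
]


def complete_with_failover(prompt: str) -> str:
    """Try each provider in order; any exception (outage, timeout, rate limit) moves to the next."""
    last_error = None
    for provider in PROVIDERS:
        try:
            response = provider["client"].chat.completions.create(
                model=provider["model"],
                messages=[{"role": "user", "content": prompt}],
                timeout=30,  # don't let one slow provider stall the whole workflow
            )
            return response.choices[0].message.content
        except Exception as exc:
            last_error = exc
    raise RuntimeError(f"All LLM providers failed; last error: {last_error}")


if __name__ == "__main__":
    print(complete_with_failover("Summarize why redundancy matters for AI-operated workflows."))
```

In a production setup you would add retries with backoff, health checks, and logging of which provider served each request, but even this simple ordering removes the single point of failure the AWS outage exposed.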
Think about people working in warehouses, people working on assembly lines, and so on. There are so many amazing applications for these glasses, and as they become more capable and cheaper, I am certain we will see more of them everywhere. Now, speaking of hardware and how it can impact our world, Google just made a very big announcement in a blog post about achieving another milestone in their quantum computing capabilities. They're claiming that their Willow quantum chip has achieved a verifiable quantum advantage, outperforming classical supercomputers by 13,000 times. So think about the best computers that exist in the world today, and this thing runs 13,000 times faster than them. Now, this is not operational yet, but it is a step in the direction of having quantum computers that can actually provide reliable, consistent results. What does that mean for the AI world? Well, as we discussed in the beginning, if you remember, when we talked about the interview with Andrej Karpathy, he said there are all these different things that have to happen to achieve major progress. Well, the two biggest bottlenecks of AI right now are compute and power. If quantum computing becomes available, and hopefully nuclear fusion becomes available, we solve both of these problems, and that's gonna put us in a completely different realm of capabilities when it comes to AI implementation. Whether that's good or bad is a whole other conversation; again, we had that conversation at the beginning of this episode, but it will open significantly more possibilities. Staying on the hardware topic, and on the glasses topic, Alibaba just released their version of smart AI glasses at a price point that competes with Meta's smart glasses. For $659, you can pre-order these glasses, which run on the latest and greatest hardware and the latest and greatest Qwen 3 models, which are extremely powerful. So they are definitely aiming, again, to combine the entire universe, including hardware, access, platform, and shopping, into their ecosystem, just on the Chinese side of the world. And staying on hardware, two very interesting pieces of information about robotics. The first is from the Q3 Tesla earnings call, where Elon Musk referred to his request to have greater control over Tesla's robots. Musk expressed his demand, if you want, or his wish, to have more control, or, as he put it, influence, over the army of Optimus humanoid robots that Tesla will build. I'm quoting from what Musk said: my fundamental concern with regards to how much voting control I have at Tesla is, if I go ahead and build this enormous robot army, can I just be ousted at some point in the future? If we build this robot army, do I have at least a strong influence over this robot army? Not control, but a strong influence? I don't feel comfortable building that robot army unless I have strong influence. What does that mean? I'm not exactly sure. I am somewhat terrified by this sentence. It raises a lot of questions about who actually controls these robots, what they're allowed and not allowed to do, and what kind of control, or, call it whatever you want, Elon, influence, the developers of these robots will have.
And if they're gonna be in every home, in every street, in every restaurant, in every company, can somebody just take control over them one day and do something different with them, whether intentionally or by mistake? That leads me back to what I said in the beginning, the three laws of robotics from Asimov. I pray that somebody will figure out how to put something like that in place. By the way, despite all the delays and issues that Optimus has had, a production-intent prototype is slated for February or March of 2026, which is when they will start production. The initial plan was for them to deploy 5,000 of these this year. That didn't happen, but it's not a very big delay, especially when you compare it to previous Tesla commitments. And then the final piece of news for today is that a startup from Beijing called Noetix Robotics is starting to pre-sell the world's cheapest humanoid robot. It is really tiny, just three feet tall, so just shy of one meter, and they're pre-selling it for $1,370, straight into the holiday season, so I expect an explosion of sales for that robot. It comes with several basic skills, but also with a user-friendly visual programming language that kids can use and that works well for STEM lessons. So it's a very interesting approach; they're not attacking a specific global industry, and so on and so forth. It can only run for about one or two hours, but it could be a really interesting toy or a really great way to teach robotics in different scenarios. So, a different approach, and a very different price point than everything we've seen so far. The cheapest robot I knew of until now is from another Chinese company, Unitree; their R1 robot is highly capable, and they're selling it right now for $6,000, which is still very cheap compared to most other robots. There is a lot more news from this week, and it all appears in our newsletter, so if you wanna know what else happened this week, including some very significant funding rounds to some known names and some less-known names, go check out the newsletter. You can sign up through the link in the show notes. If you are enjoying this podcast and you find it valuable, please share it with other people. Click the share button now, think of several people who could benefit from it, and share the podcast with them. It'll take you five seconds to do, you'll be helping other people, and I will be really grateful if you do. And if you're on Apple Podcasts or Spotify, I would appreciate it if you leave me a review as well. That also helps more people learn about AI, so jointly we can increase the chances of landing on the better side of a potential future with AI versus the wrong side of it. We'll be back on Tuesday with a fascinating how-to episode, this time about Weavy, so if you want to learn how to create the most incredible visual assets at scale, including images and videos, based on your brand guidelines and on specific processes, don't miss this episode. It is going to blow your mind. That is it for today. Have an amazing rest of your weekend, and we'll talk again on Tuesday.