Leveraging AI

239 | Powell’s AI job freeze, OpenAI’s $1.4 trillion bailout talk, “China will win the AI race”, Apple’s Gemini-powered Siri comeback, and more crucial AI news for the week ending on November 7, 2025

Isar Meitis Season 1 Episode 239

Learn more about Advance Course (Master the Art of End-to-End AI Automation): https://multiplai.ai/advance-course/

Learn more about AI Business Transformation Course: https://multiplai.ai/ai-course/

Is your company’s next hire… an AI agent?

This week, Federal Reserve Chair Jerome Powell quietly sounded the alarm: job creation is stalling—and AI is likely behind it. Meanwhile, OpenAI is writing trillion-dollar checks its revenue can’t yet cash, and Apple’s Siri might finally be getting a real brain… courtesy of Google.

If you’re a business leader navigating 2026 and beyond, this is the episode you can’t afford to miss.

From massive layoffs masked as "rebalancing" to the quiet data wars fueling generative models, this episode maps the uncomfortable truths—and powerful opportunities—every executive should be tracking.

🔑 Key Takeaways for Business Leaders

  • The Fed just quietly admitted it: AI is flattening job growth—and it’s not temporary.
  • Satya Nadella says the next year is all about “unlearning.” New org charts, new workflows, and fewer people.
  • The irony of AI job creation? Most new roles involve training the tech that will replace you.
  • OpenAI’s $1.4T spending plan doesn’t add up. Not even with Sam Altman’s most optimistic projections.
  • The real tech bottleneck? Power. Microsoft is sitting on idle GPUs because the grid can’t handle the load.
  • Agentic browsers are coming for your interface. Today’s apps are tomorrow’s legacy software.
  • Amazon wants to block AI agents. Shopify is welcoming them with open arms—and gaining traction.
  • Coca-Cola just made its holiday ad with AI. The message is clear: even legacy branding is being reimagined.
  • China’s catching up—fast. Alibaba’s models are now outperforming GPT-4 on elite math benchmarks.
  • Michael Burry is shorting Nvidia. Should you worry? Depends how exposed you are to the AI infrastructure play.
  • Copyright law is broken. Getty’s lawsuit failed—your content might already be training someone’s next model.
  • Hardware is going wearable. Glasses, rings, humanoid robots—each collecting more real-world training data.
  • We’re not in a bubble. Yet. But Gartner says product supply already outpaces enterprise demand.

About Leveraging AI

If you’ve enjoyed or benefited from some of the insights of this episode, leave us a five-star review on your favorite podcast platform, and let us know what you learned, found helpful, or liked most about this show!

Speaker 2:

Hello and welcome to a Weekend News episode of the Leveraging AI Podcast, the podcast that shares practical, ethical ways to leverage AI to improve efficiency, grow your business, and advance your career. This is Isar Meitis, your host, and like every week, we have a lot to cover. We're going to start with the impact of AI on jobs, including the Fed's first time actually addressing it, or maybe not addressing it, but admitting that there is a situation brewing. We are going to talk about the very interesting week that OpenAI had and the aftermath of their statements. We're going to talk about many different rapid-fire items about OpenAI. We have interesting news from some of the other big players, including maybe we're actually gonna get a real Siri in 2026. There were some very interesting hardware releases this week, and we're gonna end with holiday cheers. So we have a full and interesting episode. So let's get started. Federal Reserve Chair Jerome Powell stated that after adjusting for statistical overcounting in payroll data, and I'm quoting, job creation is "pretty close to zero" now. He's linking the hiring slowdown to AI, noting, and again I'm quoting, that "a significant number of companies are announcing layoffs or hiring pauses with many explicitly citing AI," saying "much of the time they're talking about AI and what it can do." Now, what he's warning about is that large employers are signaling they have no need to add headcount, not just now, but for years to come. And he said that the Fed is watching it very carefully. He also stated that AI puts them in a very strange dilemma: on one hand, they're seeing huge upside risk in inflation, mostly driven by huge investments in the tech industry, but on the other side, they're seeing downside risk on employment. He was saying that makes it tough for the Fed and other central banks because, and now I'm quoting again, "one of those calls for rates to be lower, one calls for the rates to be higher."
So they are in a dilemma, and yet they did cut rates from 4% to 3.75%. But that is the current situation right now: on one hand there are huge investments, which could lead to inflation, but on the other hand there is no growth in the actual labor force, and it actually might be shrinking. He also addressed the recent debate over whether there is or isn't an AI bubble, and he's on the side that believes it is not a bubble. He said, and I'm quoting, "these companies actually have earnings." And I agree with that to an extent. First of all, not all of these companies have giant earnings. Yes, the big players are making crazy amounts of money; they're the fastest-growing companies ever, making billions just a few years after launching a product. However, many other companies are racking up hundreds of millions of dollars in investments and are very, very far from generating any significant earnings. And the other big problem is the growing gap between the valuations, the amount of money they're raising, and the CapEx they're committing to, versus the earnings they're actually generating. More about that when we get to OpenAI. But overall, this year has seen a very significant uptick in layoffs. Whether they're related to AI or not, time will tell. Many companies say they are letting people go because of AI, because it's an easy excuse, but in many other cases that is not the real underlying reason. As we talked about last week, Amazon laid off 14,000 employees, which is about 4% of their white-collar staff, but a lot of that is potentially just based on over-hiring after the pandemic, so it's just scaling back down to reasonable sizes in different departments. Challenger, Gray & Christmas, which is a research firm, has cited nearly a million layoffs this year: 946,000, the highest since 2020, when we had the pandemic.
Out of those, 17,000 are tied to AI and 20,000 are tied to other types of automation. So yes, the numbers are growing, but they're still relatively small compared to the overall layoffs we've seen this year, meaning there are other forces at play. But the bigger fear, again going back to what Powell said, is that there's a very low rate of generation of new jobs. Another big name who spoke about job creation, or elimination, as it relates to AI was Satya Nadella, the CEO of Microsoft. He said on the BG2 podcast that Microsoft will grow headcount again after being flat this year at around 228,000 employees worldwide, despite 12% year-over-year revenue growth overall and 40% growth in their Azure cloud revenue. So they didn't grow headcount despite very significant growth of the company. But he's saying that they will grow headcount again, with a lot more leverage than the headcount they had pre-AI. And he said that there is going to be a transitional period, and I'm quoting: "it is the unlearning and learning process that I think will take the next year or so. Then the headcount growth will come with max leverage." He added, "right now, any planning, any execution starts with AI. You research with AI, you think with AI, you share with your colleagues." So what does that tell us about the mindset of Satya and probably many other leaders in his position? I really like the unlearning and learning aspect of what he said. We are going to be working very differently than we are working right now. That is true for most organizations, whether for-profit or nonprofit, and it is true across almost every industry, which means we have to unlearn the way we work right now, whether from a technological perspective, a headcount perspective, an org-chart perspective, or more or less every other perspective you can think of.
Because AI will start filling more and more aspects of the way we work, which means we have to forget about a lot of what we know so far, not everything, but a lot of it, and come up with new habits, new processes, new tech stacks, new procedures, new training protocols, and so on in order to really benefit from new AI capabilities. And every company that is not investing in that process right now is gonna be in serious trouble in the very near future, because their competitors will do it, which means they will have a completely new base cost structure, which means they'll be able to be more competitive and do more things with less money. And if you don't make these changes, if you don't commit to training, you will find it very, very hard to stay competitive in that future. And speaking of company training, we have just opened registration for two new courses. One of them is the course we've been running for over two and a half years, the AI Business Transformation Course, the foundational course that will take you from your current level to knowing everything you need to know across multiple aspects and tools of AI in order to be prepared for 2026 and be able to deploy AI in your business effectively. This course starts on January 20th and then continues on three consecutive Mondays. The other course, which is new for us, is the advanced automation course, which will teach you how to combine AI with traditional workflow automation tools such as Make and n8n, for maybe the most powerful capability there is right now: combining the rigidity and consistent delivery of traditional workflow automation tools with really advanced AI thinking and data-analysis capabilities. This is the thing that, right now, when I work with clients, makes me feel the most like Batman, because I can do more or less anything with this combination.
So if you have the basics, you know how to prompt properly, and you've built a few custom GPTs, and you want to take it to a completely different level, come and join that cohort. But if you need the basics, come and join our AI Business Transformation Course in January. The advanced automation course starts on Monday, December 1st, with another session on Monday, December 8th, and in these four hours you will increase your capability to build effective automations in your company by a very, very big margin. But now back to the news, and connecting the dots to what's happening to some people from a job perspective. The current situation is actually creating a very big irony in some of the new jobs being created. Everybody's talking about how AI will create more jobs. Well, right now, the main jobs it's really creating are jobs that are going to take more jobs away. Let me explain. The biggest player in this new industry is Mercor, which we talked about many times. What they are doing is paying doctors, lawyers, and other experts $200 to $300 per hour to train AI models in order to basically replace them. They currently have tens of thousands of contractors that are earning a lot of money; they claim about $1.5 million per day is being paid to these contractors to train AI models on how to do their jobs. Mercor was just valued at $10 billion, and they're supplying their data to companies like OpenAI, Anthropic, Meta, Amazon, Google, Microsoft, Tesla, and Nvidia. Basically, all the big players are consuming data on how to do the day-to-day jobs of professionals across multiple industries so they can be replaced. So these people are basically in an "if you can't beat them, join them" kind of scenario, making the most of it while it lasts. Similar things are happening in other industries as well.
OpenAI recently announced that they're working with Juilliard music students to teach their models composition, and with former Wall Street investment bankers to train them on entry-level investing support capabilities. We also reported a few weeks ago on Uber's new digital tasks initiative, which lets drivers, and technically anybody, you don't have to be a driver, perform simple AI labeling and training gigs for a little extra money when they're not driving. So on one hand they're helping drivers make a little more money when they're not driving; on the other hand, again, they're creating more training data to replace other jobs. We reported in the last couple of weeks on Amazon's new augmented reality glasses for delivery drivers, which on the surface are supposed to boost driver productivity and safety. But on the other hand, they're collecting data on exactly what the drivers are doing: what their routes are, how many times they get off the truck, how much they walk, and so on, which can be used to train autonomous robots that can later replace those drivers. Now, to tell you how much the demand for this kind of training has grown, the most recent analysis of this entire industry says it has grown from about $3.7 billion in 2024 to $17 billion in 2025, and it is just accelerating. Now, if you want to take this to the next level and understand where this is going: we are all going to be using agentic browsers. There is no way around it. This is gonna be the way we engage with the internet in 2026. After that, it's probably gonna be no browser at all, just agents. But in the beginning, we're gonna work in agentic browsers. I do a lot of my work right now in Comet, but we now have other very solid options from other developers.
I have no doubt that all browsers will become agentic browsers, which means you're letting them do some of the work, which means everything you do in the browser will become training data for the companies behind these agentic browsers. They will learn basically everything that we do on the internet, and they will be able to replace it shortly after, because it's all going to become training data. So beyond Mercor or Uber, which are paying people, we are all going to become participants, for free, in training AI tools that will be able to do the work that we do, or even the leisure stuff that we do, to support that as well. Now, to be fair, mimicking what we do in the browser, the way we do it in the browser, is very far from the most effective way to build autonomy around what we do. Backend-to-backend API calls will be faster and a lot more accurate than doing things the way we do them in the browser today. But it's definitely a good first step in teaching the machines what the things are that need to be done. And later on, I have very little doubt that this will become completely agentic, and the user interfaces that were built for humans will slowly disappear and be replaced with backend agent-to-agent, agent-to-computer, and agent-to-server communication that will be significantly more effective. This will just take a little longer. What does this mean? It means that more or less everything we do with computers right now, agents will be able to do in the very near future, because we will provide them the training data as we do the things that we do. And staying on the topic of impact on jobs, IBM just confirmed that it's cutting what it calls a low-single-digit percentage of its 270,000-person global workforce. However, that's about 8,000 jobs.
So while they're trying to play it down, saying it's only a low-single-digit percentage, it's still 8,000 jobs that are gonna get cut. And as expected, they're not calling them job cuts. CEO Arvind Krishna calls it rebalancing, or the way he defined it: "We routinely review our workforce and at times rebalance accordingly." Now, while this is perfectly fine, and it's the role of the CEO and the leadership of the company to do these kinds of things, the bigger worry is that most of the rebalancing in the last 12 months has been down rather than up. And to tell you more about where the wind is blowing, Krishna told Bloomberg he envisions AI replacing about 30% of the 26,000 back-office workers they have right now. So that's roughly another 8,000 people that are gonna be let go later on, just because AI will be able to do their work. So, a quick summary of where we are right now in terms of the economy and the impact on jobs. AI still cannot replace entire jobs yet, but the focus is on the word "yet." As I mentioned, many, many companies are investing a lot of money in training models to be able to do more or less everything. In the meantime, AI can do more and more tasks effectively, and tasks are what together make a job. So if a person currently does a hundred percent of something and AI can replace 20% of that, the question is what you do with the freed-up 20% of capacity. And there are only two options. Option number one is you grow faster than that. If you can grow faster than the efficiencies you're gaining from AI, either by using AI to do more things that you didn't do before, that you couldn't do effectively or profitably, or just because you have more capacity with these people to do more stuff, then this is awesome. You will retain the workforce, maybe even grow the workforce, and grow the business. However, not all companies in every industry can grow. The size of the pie is given.
Even if it grows a little bit, it may not grow at the same pace as the efficiencies of the companies. And in this case, you have really two options. If you are VC-backed or private-equity-backed, you don't really have an option: you will cut people. But if you're not VC-backed, if you're just running your own business and you're saying, I love my employees, they've been with me for years, I wanna retain all of them, you would still have a serious problem, because you are betting the future of all your employees. Because if your competitors restructure around AI, unlearn and relearn like we said earlier, and develop new systems, new processes, new tech stacks, maybe potentially addressing new markets in different ways, they'll be able to be more competitive than you. And then you may lose your entire company, hurting all of your employees, not just the people you would otherwise have had to let go. This is not a good situation to be in, but I fear that this is the clear reality in front of us. Now, again, there are people who say, well, every previous revolution has created more jobs than it has destroyed. And that is correct, but there are several questions. Question number one is what kind of new jobs will it create? Because previously, every time there was a revolution, we created more white-collar jobs to replace blue-collar jobs. So instead of being in the fields, we started managing processes, and then later on, with computers, we started sitting in front of machines that could do the work and automate things in factories and so on. We always went toward brain work instead of manual labor, and that is exactly what AI is replacing right now. So what are we going to grow into? I don't know. That's question number one. Question number two is how many jobs will it create? Will it be enough to offset the jobs that it will take away?
And the third question is, how quickly will it happen? And I have a feeling, and again, it's my personal feeling, it is not necessarily the truth, that A, it's not gonna create enough jobs to offset the ones it's gonna take away, and B, and of that I'm almost certain, it's not gonna create them fast enough. Which means in the near future, in the next few years, we're going into a seriously turbulent job market with a very serious impact on the economy, both in the US and globally. And we may come out stronger on the other side, with new roles and new jobs and potentially even replacing current capitalism with something better, but until that happens, I think we're looking at some very turbulent years. Our next topic, as I mentioned, is gonna be OpenAI. Before we dive into the interesting week that they had, they had a very big positive announcement: they have just rocketed past 1 million business customers in the shortest amount of time of any company ever. There hasn't been a single company in history that has gotten to 1 million business customers in such a short amount of time. To put things in a bigger perspective, they have 7 million ChatGPT business seats, meaning actual people in businesses that are using the platform, and that grew 40% in just the last two months. ChatGPT Enterprise seats grew 9x from last year, and it is slowly connecting to everything in companies. Right now, GPT-5 can reason across Slack, SharePoint, GitHub, and other tech-stack solutions that companies have, allowing employees to make sense of multiple data sources significantly faster than we could before. And Codex, which is their coding platform, has grown 10x since August.
In the blog post where OpenAI shared this, they gave multiple examples of well-known companies, such as Indeed, Lowe's, Intercom, Databricks, and different agentic companies, and how much time they are saving by using different OpenAI solutions. Which again connects to the same question: if a team is now 25%, 30%, or 50% more efficient, what are they doing with the extra time the employees have, and do they have anything impactful to use this time for? If not, the outcome is obvious. And now, before we dive into the, as I mentioned, interesting statements OpenAI made in two different incidents this week, one big piece of news that will lead us into that: OpenAI just announced another big data center deal, in this case $38 billion with AWS over the next seven years, with some of it rolling out as early as the end of 2026 and growing into 2027. This brings OpenAI's total compute commitments for the next few years to $1.4 trillion. So the first situation this week was an interview with Brad Gerstner, who's also an investor in OpenAI, and he was actually pushing Sam, trying to understand exactly that. He basically asked Sam how a company that makes roughly $13 billion in annual revenue can commit to $1.4 trillion in spending on AI infrastructure. And Sam Altman basically lost it. He was obviously really upset about the question, and his response had nothing to do with the question. He basically said: if you want to sell your shares, I'll find you a buyer. Enough. He then said that their revenue is actually a lot higher than $13 billion and that there are many, many people standing in line to buy their shares, and if Brad wants to sell his shares, that can be very easily arranged. Brad said that that's not his plan and that he would gladly buy more shares, but that obviously did not answer the question. Now, I don't know why Sam exploded.
I think it's a very legitimate question to ask, right? If you are making 13 billion, or 20 billion, or 25 billion, or 50 billion, how can you commit to $1.4 trillion in CapEx spending over the next five to seven years? Now, remember my comment from the beginning of this episode, when I said the problem is not whether these companies have revenue, but the gap between the revenue they're generating and the amount of CapEx they're committing to. This is exactly what I mean. When Microsoft commits to 30 billion or 300 billion of spending on compute, it makes sense. They make that kind of money, and their cash flow will support it, maybe with a little bit of financing, but they have the money to pay for the financing. In this particular case, the gap is so big that it just doesn't make any sense, and so we did not get an answer from Sam, at least until the second event happened. The second event happened when Sarah Friar, the CFO of OpenAI, was speaking at a Wall Street Journal event. She was talking about the fact that the entire US needs to support this process, including industry and government, and she said that the government needs to guarantee, and I'm quoting, "drop the cost of financing, but also increase the loan to value." She's talking specifically about supporting the financing of chips and data centers, which basically means putting taxpayer dollars at risk in order to support the growth of the AI industry, and within it, OpenAI. But then, in a really poor choice of words that made it even worse, she suggested that the government should provide a backstop for the company's massive AI infrastructure debt. In other words, it should be in the interest of the US for OpenAI to be successful and to make this thing work, and hence the government should guarantee, or potentially be ready to bail them out, if it doesn't.
This obviously backfired, and critics came from every corner of the spectrum you can imagine, slamming them for the insanity of this, basically saying that on one hand they want preferential borrowing rates from the government, and on the other hand they want the government to guarantee that they don't go out of business, all of that on taxpayers' money, when they just became a for-profit company after a very long struggle to get there. Now, Friar herself tried to clarify shortly after, in a statement on LinkedIn, that this is not exactly what she meant. She just meant that the government should play its part alongside the private sector in the overall AI growth of the US, and she wrote specifically, and I'm quoting, "we are not seeking a government backstop for our infrastructure commitments." But still, these were two interesting comments, one by Sam Altman, the CEO, and one by Sarah Friar, the CFO, both within a few days, both about a level of financial commitment that does not align with their revenue. So Sam went to Twitter and wrote a long post. If you're following Sam on Twitter, or X, or whatever you wanna call it, you know that he usually writes really short tweets, and every time he writes a very long tweet, you know something went terribly wrong. Well, this was maybe the longest tweet I've ever seen Sam write. The previous time I remember him writing really long tweets was the not-very-successful release of GPT-5. So I wanna read you a few short segments from the tweet, and again, you can very easily find the rest, which I suggest you do, because it sheds some light on how they should have communicated this to begin with, versus how they actually communicated it and are now trying to backpedal. So now I'm quoting from the post. First, the obvious one: We do not have or want government guarantees for OpenAI data centers.
We believe that governments should not pick winners or losers, and that taxpayers should not bail out companies that make bad business decisions or otherwise lose in the market. If one company fails, other companies will do good work. I agree with that, and I'm sure you agree with that a hundred percent. I'm continuing from the post: What we do think might make sense is governments building and owning their own AI infrastructure, but then the upside of that should flow to the government as well. We can imagine a world where governments decide to offtake a lot of computing power and get to decide how to use it, and it may make sense to provide lower cost of capital to do so. Building a strategic national reserve of computing power makes a lot of sense. But this should be for the government's benefit, not the benefit of private companies. I agree with that as well, and again, I wish they would have said that as a statement to begin with. Then he talks about the three big questions in the air right now, and he gives a detailed answer for each one. I'm not gonna go through all the answers, but I'm gonna give you a quick summary of the questions and the short version of what he said. Quoting: There are at least three questions behind the question here that are understandably causing concern. First, how is OpenAI going to pay for all the infrastructure it is signing up for? So let's talk about this for a minute. Sam is saying that they're not gonna make 13 billion this year; they're on track to making 20 billion. He's also mentioning that they're going to grow the company to 300 billion by 2030, which is very, very impressive. But it doesn't come even close to covering $1.4 trillion. And that is with their current commitments, that is if they do not commit to additional compute any time after this day, which is very unlikely.
So even if they do grow on the crazy trajectory they are projecting, which they may or may not, but let's give them the benefit of the doubt and say they are making 300 billion a year in that timeframe, they have committed to pay more than four times that amount. This doesn't make any sense, going back to my statement from the beginning of this episode about the gap between revenue that is very, very impressive and the financial commitments they're actually making. Second question, and I'm quoting again: Is OpenAI trying to become too big to fail? And should the government pick winners or losers? Our answer on this is an unequivocal no. And yet, with the number of moves they're making and the ties they're forming to all the significant infrastructure companies in the US, both on the electrical side and on the compute infrastructure side, if they fail, they're putting the entire economy at risk, and whether they are planning for that or not doesn't make any difference. The third question he refers to is, why do you need to spend so much instead of growing more slowly? His short answer, and again, there's a much longer answer in the actual tweet: we are trying to build infrastructure for a future economy powered by AI. Now, that does make sense, but it still doesn't mean you can overspend your most optimistic revenue by four x within the next few years, with the risk of actually having to spend even more on unexpected things or just on buying more compute. So the bottom line is, first of all, OpenAI is no longer a small startup, and its CEO or CFO or any other spokesperson cannot respond the way they responded in these two interviews. It just doesn't make any sense. They need to have their thoughts together. They need to have their talking points together. They cannot just pull words out of thin air and then have to backpedal from the situation.
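To make the ratio discussed above concrete, here is a minimal back-of-the-envelope sketch. The figures are the round numbers cited in this episode (roughly $20 billion in current revenue, a $300 billion revenue target for 2030, and about $1.4 trillion in announced compute commitments), not audited financials, and the variable names are mine:

```python
# Back-of-the-envelope check on the revenue-vs-commitment gap.
# All figures are the episode's round numbers, not audited data.
current_revenue = 20e9              # ~$20B: revenue run rate cited for this year
projected_revenue_2030 = 300e9      # ~$300B: the stated 2030 revenue target
total_compute_commitments = 1.4e12  # ~$1.4T: announced compute commitments

# Compare the total commitments against one year of revenue,
# both today's and the most optimistic projection.
ratio_vs_projection = total_compute_commitments / projected_revenue_2030
ratio_vs_today = total_compute_commitments / current_revenue

print(f"Commitments vs. 2030 projection: {ratio_vs_projection:.1f}x")  # ~4.7x
print(f"Commitments vs. current revenue: {ratio_vs_today:.0f}x")       # 70x
```

Even measured against the most optimistic projection, the commitments come to more than four times a single year's revenue, which is exactly the gap the episode keeps pointing at.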
The second thing is, I still don't get it. From a very personal perspective, I've been running businesses for 20 years. I've raised money for startups. I have never seen anything similar to this. Nobody has seen anything similar to this, from a scale perspective but also from a ratio perspective. If you are projecting exponential growth, really incredible growth, like going within a few years from starting a company to generating $300 billion, that's incredible. Take a $500 billion commitment, a $600 billion commitment, something you can pay the financing for, and spread it over another 20 years. You cannot commit to $1.4 trillion, which is four x your most optimistic projections. This is really, really scary. And again, it is scary not just because of OpenAI. I don't think many people outside of OpenAI care whether OpenAI fails or succeeds; there are a lot of other companies that have failed in the past. I think the problem is that it would take down with it a lot of really, really large companies that are now spending a lot of money to build infrastructure that OpenAI is supposed to be paying for and may not be able to. But going back to the impact of this on the global scale: there is definitely a race between the US and China, the race is very, very close, and the winner of this race may have a very significant impact on the future of our planet. To shed some more light on the situation over there, Nvidia's CEO Jensen Huang, one of the richest and most successful people in the world right now and the leader of the largest company in the world right now, just delivered an interesting warning at a Financial Times Future of AI Summit, saying that China will win the AI race. Now, he walked that back a little bit afterwards in a statement after the conference, where he said that, as he has long said, China is nanoseconds behind America in AI.
It is vital that America wins by racing ahead and winning developers worldwide." Two points about that. One is that I agree. I think there is a global race, and I think China is a very, very close second, if second at all. From a software perspective they're doing some incredible things; from a hardware perspective they're still behind NVIDIA. But we need to remember that Jensen Huang has a real, serious interest in making these kinds of claims and in putting pressure on the US government, because President Trump just said last week in an interview that he will not allow NVIDIA to sell their top-level Blackwell chips other than inside the US or to allied governments, and that he definitely won't allow China to get them. He did say that he might allow NVIDIA to sell lower-fidelity chips to China, but as of right now, China is not even interested in that, which makes it obviously less relevant. So I think the US government is in a similar situation to businesses, as I mentioned before: you can decide to slow down, but then you may lose everything. It is a very uncomfortable situation to be in, and we have to balance our decisions between what's good for humanity and what's good for our own people, whether that's the company or the government. I have a feeling that in many cases these two objectives do not align very well, which calls for tough decisions. I don't know what the government will do, and I obviously don't know what every company will do, but these times call for very strong leadership and for trying to put things together in a way that will reduce the negative impact and increase the opportunity that is lying ahead. Now let's dive into some rapid-fire items. We'll start with OpenAI, which is apparently planning the release of a GPT-5.1 Thinking model. It has surfaced in the code of their web app just recently. The rumors talk about a family of GPT-5.1 models, including potentially a mini fast lightweight model as well as a full-scale model.
And presumably they are aiming to release it ahead of the release of Gemini 3, which again is rumored to be released on November 18th. So stay tuned; we may get two very powerful models in the next two weeks, GPT-5.1 and then Gemini 3. None of this has been officially announced by anyone, but this is what the rumors are saying. As we reported last week, OpenAI is now a for-profit company, but the lawsuit from Elon Musk against OpenAI that may prevent them from going public is still going on. As part of that, OpenAI co-founder Ilya Sutskever, who has since left OpenAI and started Safe Superintelligence, has shared a deposition that is shedding some light and some interesting facts on several different aspects of the history of OpenAI. Most of it relates to the ousting of Sam Altman, bringing him back, and what happened in those crazy few days in between. One interesting aspect is that OpenAI was actually talking to Anthropic about a merger that would make Dario Amodei, the CEO of Anthropic, the CEO of the unified company. Apparently there were serious conversations about this, but it was turned down by the investors of Anthropic, who feared that it would dilute their billions in potential gains, and that's why it did not move forward. In addition, Sutskever's 52-page memo to OpenAI board members from back then details a consistent pattern of lying, undermining his executives, and pitting his executives against one another as behaviors that Sam engaged in regularly. Now, if you remember, what happened afterwards is that many OpenAI employees, about 700 of them, said that they would leave OpenAI if Sam was not brought back, Microsoft offered him a job that would basically bring all these employees into Microsoft, and then he was put back in the position that he holds to this day.
But this sheds some light on what was happening in those few crazy days as Sam was kicked out and then brought back. Another big piece of news from OpenAI this week is that they just rolled out the Sora app for Android users. After breaking the record of getting to 1 million iOS downloads in just five days in September, they finally released the Android version as well. It is now live and available in the US, Canada, Japan, Korea, Taiwan, Thailand, and Vietnam. I already downloaded and played with it. It's actually a really cool app, and as an Android user I finally got a chance to install it, and it's, as you've seen on the iOS side, very, very impressive in the capabilities it has. On the other hand, it's another thing that's going to suck you in and waste a lot of your time. Every time I watch these videos, I'm blown away, not by the fact that they will make me stay on the app longer, just like any other social media platform, but because I keep thinking all the time that these are AI-generated, and it just blows my mind. And it is really, really scary, because a lot of these videos are finding their way onto TikTok and Instagram and so on, where most people do not know that they are AI-generated. Another interesting nuanced change that happened in ChatGPT this week, one that is actually really powerful, especially as we're moving forward, is the new ability to add context and comments to an operation as ChatGPT is running it in the background. So far, you gave it a prompt and then it did its thing, sometimes for a long time if you gave it a very long task, and then you had to wait for it to finish, or stop it in order to write an additional prompt. Right now, you can actually add comments, add context, and interact with ChatGPT as it is doing its work.
I think this is a very critical capability, especially as you develop more and more sophisticated processes, and especially combined with the capability of these AI models to now work for longer amounts of time while thinking through processes as we move toward the agentic era. As you know, you can click on the thinking button while ChatGPT is thinking and see what it's actually doing, which means you can actually see the direction it's taking. That allows you to interject and steer the conversation it's having in its "brain" in the right direction, or give it additional information that it needs without it asking you, and so on. I think this is extremely powerful, and I'll be really surprised if the other models don't come up with something like this in the immediate future. We also got some new hints about the new device that OpenAI is developing with Jony Ive's team. In the same conversation at the Wall Street Journal Tech Live conference, Sarah Friar said that she cannot say a lot about the company's upcoming device, but she did say, "It's a multimodal world for AI. What is beautiful about these models is that they are as good through text as they are at being able to talk, to listen, and to visually see." She also said that we got used to cell phones getting people to look down into their screens and talk with their thumbs, and then added, "I'm looking forward to being able to bring something into the world that I think starts to shift that." So basically, not being heads-down in our phones and not talking with our thumbs. What does that mean? I'm not a hundred percent sure, but it hints at the fact that, A, it is a multimodal device, meaning we'll be able to see and listen and talk and text in and out.
B, it will alleviate the need to look at our screens all the time, which means it probably does not have a screen, because otherwise there's no point in that statement. It will be very interesting to see what they come up with. Now, will this eliminate cell phones altogether? Here's what Sam said about it, and I'm quoting: "In the same way that the smartphone didn't make the laptop go away, I don't think our first thing is going to make the smartphone go away. It is a totally new kind of thing." Well, since I don't have a clue what they're developing, it is very hard for me to comment on that, but I must disagree with Sam's statement about the laptop not being eliminated by the smartphone, because the smartphone did completely eliminate the Palm Pilots, for those of you who remember that thing. It also made old phones obsolete, it dramatically reduced the number of small digital cameras being sold around the world, and it completely eliminated the GPS and navigation devices that we had, the MP3 players that we had, and many other devices that were very, very common before the introduction of the smartphone. So yes, it did not eliminate the laptop, but it did eliminate a lot of other things, which means there's still a very big chance that whatever they come up with might replace cell phones, maybe not in step one, but potentially within a few years. Now, before I connect the dots to how this impacts global domination, there's one last piece of news: ChatGPT just added two additional apps to the apps that can run inside of ChatGPT, Peloton, which allows you to craft personalized workouts inside of ChatGPT, and TripAdvisor, which allows you to plan vacations. These join a growing list of apps that are already there, and OpenAI has already shared that a few more are coming, such as Uber and DoorDash, which brings me to the following thought.
We are witnessing the entire ecosystem of how we engage with the digital world through different devices changing in front of our eyes, whether we're paying attention or not. Right now we cannot imagine a world without smartphones and apps, because we got used to doing things this way. But 20 years ago we had no smartphones and no apps, and just like we got used to that, I think we need to start getting used to the new way of interacting with the digital world and, through it, with the real world. So let's start with apps and then talk about the bigger system. Right now we use most of our apps through either Android or Apple phones. In that ecosystem, either Google or Apple takes a huge cut off the top of every transaction that happens through the app store, every upsell, every in-app purchase. Apple takes about 30%; Google takes a little less than that, but still a very, very big chunk from the apps that you're using on your phone. That means the companies behind these apps have a very strong incentive to move away from the existing ecosystem, which explains why they are rushing to develop capabilities on ChatGPT. ChatGPT right now is free for them to run their apps on, but even if OpenAI starts taking 5% or 10% from them, it is still significantly cheaper than what they're paying Apple or Google right now. In addition to being a new distribution channel with 800 million weekly users, a number which is growing all the time, it is also a great way for them to break the shackles that Apple and Google have on them right now. That means that over time we may use more and more of these apps inside ChatGPT or other AI platforms versus natively on our phones.
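A quick sketch of the incentive being described: Apple's roughly 30% commission is public, but the 5-10% figure for a hypothetical ChatGPT app store is pure speculation from the discussion above, so treat the second row as an assumption.

```python
# Hypothetical take-rate comparison. The ~30% Apple cut is public knowledge;
# the 10% ChatGPT-store cut is a speculative figure from the discussion.
def developer_take_home(gross_sales: float, platform_cut: float) -> float:
    """Return what the app developer keeps after the platform's commission."""
    return gross_sales * (1 - platform_cut)

for platform, cut in [("Apple App Store (~30%)", 0.30),
                      ("hypothetical ChatGPT store (10%)", 0.10)]:
    print(f"{platform}: developer keeps ${developer_take_home(100.0, cut):.2f} of every $100")
```

On these assumptions, a developer keeps $70 of every $100 under a 30% cut versus $90 under a 10% cut, which is the margin incentive driving apps toward new distribution channels.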
But in the long run, I don't think this will survive either, because I think what's going to happen is that we're going to engage with the digital world using agents, and these agents will spin up the functionality or the display or the connectivity they need in real time, based on the specific task. Or they will use something like Claude's skills, meaning small pieces of functionality that allow them to connect or do different things, but not entire applications. This will shorten the time it takes an agent to go from trying to spin something up to actually being able to do something. These functionalities will exist as either skills or small mini-apps that allow the agents to act in an agent-to-agent or agent-to-server world. Now let's connect the dots to some bigger things as well. Think about how Google became so successful and impactful. They started with search, which is how we find and interact with information online. But then they went after the ways we engage with that information, the actual access points, and there were two access points. Access point number one was the browser: how do we engage with the internet, not just the data, but how do we engage with it? They built a browser, actually bought a browser, and were able to make Chrome the most used browser on the planet. But then cell phones showed up, and this is where they took Android in order to control the devices we use to engage with the internet, which gave them even more data, more access, and more data points on how we use the digital world and where we use it from, basically letting them know a lot more about us so they can make even more money. Then they added the app store on top of that, so any application, whether they developed it or not, they have access to and they get a cut of, and they built a set of tools on top of that for everyday users and enterprises with the Google Suite and beyond. So this is how Google became the dominant force they are today.
They're controlling the entire ecosystem, from the data and how you find it all the way to how you engage with it: the tools, the processes, everything. This is exactly what OpenAI is doing right now. They're already controlling the data layer and replacing old-style search with AI, they are now building devices, they just launched a browser, and they're launching enterprise and day-to-day tools, literally following step by step the blueprint that Google executed over the last 20 years. Will they be successful in pushing Google aside? I don't know. Will they become a very strong competitor to Google? A hundred percent. But they're not the only company that has been successful in the past couple of years, or in this past year. The Information, which is one of my favorite digital magazines right now, just released its breakout hits, a list of the top 50 companies that have grown the most in the past year, and many of them are AI companies that we talk about a lot. A big one is Anysphere, the company behind Cursor, which skyrocketed to a $27 billion valuation with over $500 million in recurring revenue as of June, up from a $400 million valuation and $20 million in ARR just one year earlier. Suno, the music generation app that I really, really like, has hit $150 million in annual recurring revenue from app subscriptions and grew to over a $2 billion valuation. Clay, the AI sales tool, which I also really like, got to a $3.1 billion valuation in August, sixfold up from its $500 million valuation earlier in the year. Perplexity is at an $18 billion valuation after huge growth. ElevenLabs, the voice generation tool, raised a hundred million dollars at a $3.3 billion valuation in October, with shareholders already doing a private sale at north of a $6 billion valuation. So what is this showing you? It is showing you that, A, these companies' valuations are growing exponentially, like nothing we've ever seen before.
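One way to read those numbers is through the implied revenue multiple (valuation divided by ARR). This sketch uses only the two companies above for which both figures were quoted; the inputs are reported round numbers from the episode, not audited financials.

```python
# Implied revenue multiples (valuation / ARR) from the figures quoted above.
# Both inputs are reported round numbers, not audited financials.
companies = {
    "Anysphere (Cursor)": (27e9, 500e6),   # ~$27B valuation, ~$500M ARR
    "Suno":               (2e9, 150e6),    # ~$2B valuation, ~$150M ARR
}
for name, (valuation, arr) in companies.items():
    print(f"{name}: ~{valuation / arr:.0f}x ARR")
```

Multiples in the tens of x ARR are what fuels the bubble debate: real, fast-growing revenue, but priced far ahead of it.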
But the reality, going back to whether there is or isn't a bubble, is that the revenues are growing at an insane pace that arguably justifies these valuations. So everything seems to be growing at a very high pace except for one thing. In the BG2 podcast interview with OpenAI's Sam Altman and Microsoft's Satya Nadella, the two exposed what they fear is slowing everything down: electricity. US data center electricity demand has surged over the past five years, outpacing the utilities' ability to support it by a very wide margin. Nadella even shared that many of Microsoft's GPUs are sitting idle because some of their data centers cannot supply enough power to run them all. His specific quote was, "It's not a supply issue of chips. It is the fact that I don't have warm shells to plug them into. In fact, that is my problem today." Warm shells meaning the infrastructure, from an electricity perspective and a data center perspective, to plug them into. On the flip side, Sam Altman mentioned in the same interview the risk being taken by existing investments in, let's call it, old-school electrical infrastructure. He said, and I'm quoting, "If a very cheap form of energy comes online at a mass scale, then a lot of people are going to be extremely burned with existing contracts that they've signed." Now, as you may or may not know, Sam Altman is invested at very large scale in several different startups in the electricity production field, from solar to nuclear fusion and so on. So when he's saying these kinds of things, he knows what companies around the world are working on from an electricity generation perspective. And when you are building a new nuclear reactor, or a new carbon-based electrical generation facility, you are not building it for five years, you're building it for 50, and the investment is spread over that.
And when he's saying that this might be a big risk, he knows what he's talking about. But since we mentioned Satya, let's switch gears and talk a little bit about Microsoft, followed by many other companies. Microsoft just launched Copilot Pages, which allows Microsoft 365 Copilot license holders to create new dynamic web pages without writing any code. Behind the scenes it is running GPT-5, and you can turn it on inside the Copilot chat tab for instant preview, editing, and iteration of ideas, creating web pages that you can immediately deploy and share with people. This has been possible on all the other large language models for a while. My favorite tool for this is actually Anthropic's Claude; I believe it creates the most effective pages right now. The other option is obviously to go to a vibe coding platform to generate something more significant, or to one of the agentic tools such as Genspark or Manus. But now you can do this within the Copilot universe, at least to an extent. Now, I promised you news about Siri, so from Microsoft let's switch to Apple. Apple has finalized a deal with Google to put Gemini behind Siri as its engine, at least in the short term. In this deal, Apple will pay Google $1 billion every single year for Gemini to be the engine behind Siri, as early as 2026. If you've been following this podcast, or just following the saga on your own, Apple has promised Apple Intelligence with a new Siri for a very long time now, and there's been a lot of drama, a lot of internal reorgs, and new people running this department, or departments, because of all the different reorgs, without any real success in delivering a level of quality that would justify what they want to release as the next Siri.
And now, after having conversations with all the different potential partners, including Anthropic and OpenAI, they have selected Gemini to run the model. They are saying that they're still developing their own internal capabilities: some capabilities will still run on Apple Intelligence, but most of it is going to be based on Gemini, and they're still developing a tool that will eventually replace Gemini. The other thing they added is that these Gemini models will run on Apple's infrastructure, to guarantee the security and privacy of the data you're going to be sharing with Siri. This presumably puts them back on track to release the next version of Siri in the spring of 2026, and there was a very big question mark on that because they kept pushing it back for the last year and a half. So maybe now it's finally happening. Staying on Google: Google just added Gemini into Google Maps, which is actually really, really cool. You can now talk to your navigation platform as you're navigating, and it can do a lot of other interesting things. You can cross-reference images with Google Street View and basically ask for landmark-based navigation: "I see this kind of building, can you tell me, should I turn left or right?" Drivers can also converse with Gemini in the middle of the route. The examples they gave: "Is there a budget-friendly vegan restaurant within a couple of miles?" or "Where is the parking next to there?" and things like that. The navigation cues are also becoming more helpful: instead of telling you to turn in 500 feet, it's going to tell you to turn next to the Wawa gas station or a particular restaurant, and so on.
And there's a new integration between Gemini and Google Lens, where you'll be able to point your camera at something that you're seeing and then have a conversation with Gemini about it, such as: What is this place? Why is it popular? What hours is it open? Or ask about a flower that you're seeing, and so on. This gives you a glimpse of where the world is going, because as you know, many companies are developing glasses and other wearable interfaces. Combine the Google Lens capabilities I just described with Gemini-powered navigation, and combine that with translation from any language, either written or spoken, and you understand that the world we know and how we engage with it is going to change dramatically in the next two years. The technology is already here; I just think it will take a while, probably a year or two, for it to become mainstream. But I do believe that within the next few years we'll start seeing more and more people use AI wearables across more or less everything we do, and that comes with a whole interesting baggage of what it means from a privacy perspective and so on. Staying on Google: Google just added a vibe coding capability inside of Google AI Studio. Some of that functionality was available before, but now it's a full vibe coding platform where you can tell it what you want the application to do and it will build it very, very quickly. The cool thing that makes it a little different from other vibe coding tools is that it builds upon all the infrastructure, tools, capabilities, and APIs that Google already has. It knows how to use all the different functionalities, such as Maps and vision and so on, to build apps much quicker than the other platforms, with a lot of flexibility. It is not built for developers; it is built for people like me and probably most of you, so it's definitely worth testing out. They're getting into a highly crowded space, but they have the benefit of being in the Google universe, so I will report as I learn more about what it can and cannot do compared to some of the other tools. Switching from Google to Anthropic: Anthropic just shared that it is expecting exponential growth from its enterprise adoption, and they're talking about hitting revenue of $70 billion by 2028, which is less than three years from now. In addition to the $70 billion in revenue, they're expecting $17 billion in cash flow, which is a very different scenario than OpenAI's; OpenAI is expecting to keep losing money at least until 2030. So Anthropic is expecting to continue the exponential growth they're seeing from adoption, mostly on coding and API usage, and they're seeing that take their gross margin from very negative to very positive: last year they had a negative 94% gross margin, and this is apparently going to change in the next few years. They're on pace to hit $9 billion ARR by the end of 2025, with their 2026 targets at $20 to $26 billion, which is roughly 3x this year. As I mentioned, a very big part of that comes from API sales, and a very big part of the API sales comes from Claude Code alone. Claude Code by itself has $1 billion in revenue, up from $400 million in July, so 2.5x in just a few months from Claude Code alone. I love Claude Code. I use it for a lot more than just coding; you can use it for many different things. We're actually going to do an episode about how to use Claude Code to help you develop agents and tools for business use cases without understanding anything about coding (I don't understand anything about coding), so expect that episode in the next few weeks.
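To put that Claude Code growth in perspective, here is the compound monthly rate implied by the two quoted figures. The "roughly four months" gap (July to early November) is my assumption from the episode's timeline, so treat the result as a rough estimate.

```python
# Implied compound monthly growth rate for Claude Code revenue, assuming
# roughly four months between the two figures quoted above ($400M in July,
# ~$1B around early November). The time gap is an assumption, not a quoted fact.
start_revenue, end_revenue, months = 400e6, 1e9, 4

monthly_growth = (end_revenue / start_revenue) ** (1 / months) - 1
print(f"~{monthly_growth:.0%} compound revenue growth per month")
# prints: ~26% compound revenue growth per month
```

Roughly a quarter of additional revenue per month, compounding, is the kind of curve behind Anthropic's aggressive 2026-2028 targets.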
Now, to put things in perspective of how good Claude Code is: Brex, a company that creates corporate financing solutions, has been using Anthropic-powered agentic tools as part of its process, and they're claiming that 80% of their new code base is generated by AI, mostly through Claude Code. That is very, very significant. Now, I must admit that I talk to a lot of developers and managers in large tech companies, and most of them are actually pretty frustrated with how people are using AI right now to develop code. But I guess if you develop the right processes around it, going back to unlearning and learning how to work properly, there are huge benefits, and here is a very significant, successful company saying that 80% of its code is AI-generated. It tells you where the wind is blowing, even if not everybody is getting similar results right now. And from Anthropic to Perplexity. Perplexity is fighting back in its battle with Amazon. Amazon has sent a cease-and-desist letter to Perplexity, asking it to stop its Comet browser from shopping on the Amazon platform on behalf of users. Well, in a November 4th blog post from Perplexity titled "Bullying is Not Innovation," the startup calls Amazon's move a bully tactic to scare disruptive companies like Perplexity out of making life better for people. They're basically saying that people have the right to use agents to do things on their behalf, including shopping on their favorite platforms, and obviously Amazon wants to protect its turf. CEO Andy Jassy said on their Q3 earnings call that third-party agents are lacking, and I'm quoting, "personalization" and "shopping history," with "delivery estimates frequently wrong" and prices and descriptions often wrong as well. They're obviously trying to push Rufus, their own platform, which is actually seeing huge success.
Rufus has racked up 250 million users and a 60% higher purchase completion rate compared to traditional ways of shopping, and they're projecting $10 billion in annualized incremental sales coming from the usage of their own AI platform, which tells you part of why they don't want Perplexity to be part of the equation. I must admit I don't get it, and the reason I don't get it is that if Perplexity allows people's agents to buy on Amazon, that's just another distribution channel for Amazon. Yes, Amazon has its own platform that lets it sell ads and other things, but this is a distribution channel they didn't have before. So I'm a little confused by Amazon's approach, and it will be interesting to see how this evolves. I don't think they'll be able to block AI agents for very long, because otherwise they will be left with no external traffic, and so they will have to figure out how to unlearn and relearn how to work in this new agentic world. Staying on AI and shopping: Shopify has shared that it has seen a 7x increase in traffic from AI tools to Shopify stores since January of 2025, and an 11x surge in purchases powered by AI search. So, a very different approach than Amazon's. Shopify is actually embracing AI, or as its president, Harley Finkelstein, said, "AI is not just a feature at Shopify. It is the center of our engine that powers everything we build." As you remember, they recently signed partnerships with OpenAI's ChatGPT, with Perplexity, and with Microsoft Copilot, all to be able to show Shopify products in these channels. Again, exactly the opposite approach from Amazon's, and I don't think Amazon will be able to keep doing what they're doing right now, because then companies like Shopify will start chipping away at their market share as more and more people shop through these agents versus directly on the platforms themselves.
Speaking of interesting partnerships and Perplexity: Perplexity is going to pay $400 million in cash and equity to Snap in order to become the conversational chatbot behind some of the AI features inside Snapchat. Why would Perplexity spend $400 million on something like this? Well, Snapchat has 477 million daily active users, and the deal will give Perplexity access to a very large, young audience, which can help secure its future growth. And from the US to China: we mentioned earlier that the gap between the US and China is narrowing, if it exists at all. Alibaba's Qwen3-Max just nailed several elite math contests in the US. It got 100% accuracy on the American Invitational Mathematics Examination, also known as AIME, and on the Harvard-MIT Mathematics Tournament (HMMT). It is the first Chinese AI model to hit a perfect score on these tests, across algebra, number theory, and the other areas of mathematics they cover. GPT-5 Pro also hit 100% on the HMMT with tools, but 93 without tools, and it hit 94.6 on the AIME without tools. So right now Alibaba actually scores higher than OpenAI, with the perfect score on both of these tests. Now, there has been a lot of chatter, on this podcast and obviously from multiple other directions: are we in a bubble or not, are the valuations too high, and so on. Well, Michael Burry, the oracle from The Big Short, is now doubling down on his AI skepticism, which he has been very loud about for a while, and he is putting over $1 billion of his Scion Asset Management fund, about 80% of the entire portfolio, into bets against NVIDIA and Palantir specifically. The specific quote from him on X was, "Sometimes we see bubbles. Sometimes the only winning move is not to play." Basically, he's betting against the market, and specifically against Palantir and NVIDIA. The response from the CEO of Palantir is actually really interesting.
Alex Karp said the two companies he's shorting are the ones making all the money, which is super weird, and I agree. And yet their valuations are currently absolutely insane despite the fact that they're making all the money. So, going back to my statement from the beginning of this episode: there is a growing gap between being very, very successful and having valuations, and spending commitments, that are way beyond how fast you're growing. Are these companies growing? Yes. Are they making crazy amounts of money? Yes. Does that justify their current spending or current valuations? Not sure. And again, Michael Burry definitely knows more than I do, and he thinks not. By the way, just his statement sent both these stocks down for a couple of days. Will they recover? Probably, in the short term. What's going to happen in the long term? I don't know. Don't take any investing advice from me; that is not my role on this podcast. But staying on the topic of whether we are in a bubble or not, Gartner just shared a very interesting report, claiming that the problem is not even whether these companies are making money or whether there's demand. They're saying that right now the mass rollout of agentic and other AI models, platforms, products, and solutions dwarfs the current demand and the ability of companies to actually use these products. What they're saying is that we're going into a correction period, not necessarily because it's a bubble, but because companies cannot absorb these tools at the rate they're coming out, which will squeeze the margins of different companies, which will allow the bigger players to scoop up more talent and more technology and become even more powerful on the other side.
And if you want the exact quote, it is: "While we see signs of market correction or consolidation, product leaders should recognize this is a part of a product lifecycle, not a sign of an inevitable economic crisis." A few quick items on the hardware side of things. A company called Zipper from Seattle is now releasing voice-activated glasses for people in the trades, like roofers, HVAC techs, electricians, and so on. These glasses are connected to the platform the company has been developing for a while to manage these kinds of employees, and they provide real-time input, feedback, and assistance to employees in the field based on what they're seeing and hearing. The employees can also talk and communicate back to their companies through the glasses. I think these kinds of solutions that are very specific to particular fields are going to be extremely valuable. I've actually used the live view mode inside of ChatGPT and Gemini to help me fix different things in the house. It is almost magical, and doing that without having to actually hold the phone will be even more impactful, especially when you're working in narrow spaces or on roofs, where it's unsafe to pull out your phone and look at it. And from the company's perspective, there are two aspects to this story. Companies can track the employees: know exactly what they're doing, how long they've spent on every single kind of job, when they've been in and out of the job site, and so on. On one hand, that allows you to build more efficiency, better training, and better operations; on the other hand, it has a really serious Big-Brother-is-watching-you feel. So I guess good and bad, but I think it's inevitable that we'll see more and more of these solutions out in the field. A company called Sandbar just released their AI-activated ring. It is connected to an app on your iPhone, and later on Android and the web as well.
It has a little button and a scroll surface on the ring itself, so you can talk to it, dictate to it, get its feedback via voice, and save, analyze, and retrieve information, all through that little ring. Whether this will be the solution for our AI communication, I don't know. Again, I think glasses make more sense for many different reasons, but a ring is definitely a lot less invasive: you can use it at night, you can use it in public places, and you can use it anywhere you don't want to wear glasses, so it might be a part of the solution. You can now get the silver ring for $249 or the gold ring for $299, and it comes with three months of the Pro tier of the Stream functionality, which costs $10 a month afterwards to use the ring. Staying on product development, maybe the most impactful display of robotics I've seen in a while came from XPeng's IRON, which is their eighth generation of robots but the third generation of a humanoid robot. And they made these robots extremely human-like. What I mean by extremely human-like is, first of all, they come in different variations: there's a female and a male version, you can pick whether you want it to be chubby or curvy, and so on and so forth. But they also have artificial skin, and it looks extremely human. How human does it look? When they did their demo, many people in the audience, and skeptics all over the internet, basically said that they think it's a human disguised as a robot. So what they did is they actually brought the robot back on stage, cut off its pants to show the artificial skin, and then cut open the artificial skin to show the metal pieces moving underneath, and then had it walk on stage again, now with half of its leg exposed, showing that it's an actual robot. The direction they're taking it, by the way, is not toward factories or home usage, but actually toward service usage, such as in restaurants, coffee shops, and so on. That's why they built it to look very human, so it can interact with humans in the most natural way. This is a very interesting direction and, from my perspective, a very scary development. This is the closest I've seen to a future where we have robots that are indistinguishable from actual people. That being said, they're claiming these robots follow Asimov's three laws, which I have talked about many times on the show, but they've added a fourth law of privacy: data never leaves the robot, which is obviously important. Speaking of robots and training them to do human work: apparently Tesla has a lab where humans train the robots. How does that work? Well, multiple people do mundane, casual, day-to-day tasks for hours every single day, with a bunch of cameras mounted on them and a bunch of cameras installed in the room, and all of that information is being recorded in order to train the next version of the Optimus robot that Tesla has been developing. This connects very well to the topic we talked about at the beginning: people today are doing jobs that will eliminate their own jobs later in the future. This is just a hardware version of the same thing. An interesting topic on the legal side: Getty Images just won, or maybe not quite won, its lawsuit against Stability AI over using its data to train Stable Diffusion. They won part of the lawsuit, basically their infringement claim, but they were not able to prove breach of copyright, and they dropped that part of the lawsuit. The interesting part is that this was mostly decided on a technicality: they couldn't produce evidence of where Stability AI trained its models, which allowed Stability AI to get out of that claim.
But that now sets a precedent, which basically shows how much our laws, at least UK law right now, though I think it's the same almost everywhere around the world, are not ready for AI, and how they will have to change to adapt. That includes copyright laws, ownership laws, data usage laws, and what you do or don't own about yourself. You remember the whole case last year with AI-generated rap songs imitating known rappers, where they couldn't sue because there's nothing in the law that protects your voice as yours. So going back to our unlearning and relearning theme: I think there are a lot of things that will need to change in legal systems around the world to deal with a new AI future. By the way, this result sent Getty Images' stock down 8.7%, and it makes perfect sense. I must admit, I don't know anybody who should still be using these image databases, because you can generate your own images on the fly for less money, in a lot less time, and get something unique that doesn't show up in 167 other blog posts. If you have been following this podcast for a while, you know that I really like mentioning LMArena, which is the platform that allows you to compare different AI tools and rank them, and which publishes its leaderboard based on those comparisons. They just announced a new leaderboard that they call LMArena Expert, which basically takes 5.5% of the overall prompts used on the platform and defines them as expert prompts, and then compares the results on these, the most difficult, most advanced prompts. They've actually broken this into 23 different occupational fields, so you can see the success or failure of different models across all these fields. I find this to be a lot more interesting than the existing benchmarks, which are a lot more static than actual real-life use cases.
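Arena-style leaderboards like this are typically built from head-to-head human votes rather than fixed answer keys, and those pairwise votes get turned into a ranking with a rating system. Below is a minimal Elo-style sketch in Python; the model names, the K-factor, and the vote stream are made up for illustration, and this is not LMArena's actual scoring code (their published methodology uses a related Bradley-Terry-style fit).

```python
# Toy sketch: turning pairwise "which answer was better?" votes into a ranking.
# K, the model names, and the vote stream are illustrative assumptions.

K = 32  # step size per vote (assumed value)

def expected(r_a: float, r_b: float) -> float:
    """Probability that A beats B under the Elo model."""
    return 1 / (1 + 10 ** ((r_b - r_a) / 400))

def update(ratings: dict, winner: str, loser: str) -> None:
    """Shift both ratings after one head-to-head vote."""
    gain = K * (1 - expected(ratings[winner], ratings[loser]))
    ratings[winner] += gain
    ratings[loser] -= gain

# Hypothetical models starting at the same rating.
ratings = {"model_a": 1000.0, "model_b": 1000.0}
for _ in range(10):
    update(ratings, "model_a", "model_b")  # model_a wins 10 straight votes

leaderboard = sorted(ratings, key=ratings.get, reverse=True)
print(leaderboard)  # model_a now ranks first
```

The per-field leaderboards the episode describes would just repeat this over the subset of votes belonging to each occupational field, which is why a model can rank high in one field and low in another.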
So if you want to know which tool works best for very specific advanced capabilities, go and check out the new LMArena Expert leaderboard. And then, I promised you holiday cheer at the end. Well, Coca-Cola just released another "Holidays Are Coming" campaign and a new video, and just like last year, this is an AI-generated holiday ad. Last year there was a very big backlash against them doing this after years and years of making these ads with actual humans and real shots. Well, they continued down the same path, and they continued it with the same company that developed last year's ads, but instead of showing people, they switched to animals, which may give them a way out of the "you took away people's work" criticism: well, these are not people, these are animals showing up in the ad. It's actually pretty cute. I've watched it twice just to see if I'm missing anything interesting. Some people are saying that it looks unrealistic and so on, and I think it doesn't matter. I think it definitely delivers the message, it has a great vibe, and it brings the enjoyment and excitement of the holidays. And it's not like somebody did it in their garage: generating this video took a hundred people and over 70,000 AI video clips that were put together to produce the final outcome. So there's still a lot of work to be done, even in the current AI era, if you want to get to the professional level Coca-Cola demands of itself. I'm now quoting their chief marketing officer: "Before, when we were doing the shooting and all the standard process for the project, we would start a year in advance. Now we can get it done in around a month." So go check out the new ad and let me know what you think. And if you've been enjoying this podcast, please subscribe so you don't miss any episodes.
There are some really, really interesting episodes coming up among the Tuesday how-to episodes, with some amazing guests I have already interviewed, and these episodes are coming out in the next few weeks. Also, while you're out there, you can go and check out the two new courses; there will be links to them in the show notes. And until next time, have an amazing rest of your weekend.