
Leveraging AI
Dive into the world of artificial intelligence with 'Leveraging AI,' a podcast tailored for forward-thinking business professionals. Each episode brings insightful discussions on how AI can ethically transform business practices, offering practical solutions to day-to-day business challenges.
Join our host Isar Meitis (4 time CEO), and expert guests as they turn AI's complexities into actionable insights, and explore its ethical implications in the business world. Whether you are an AI novice or a seasoned professional, 'Leveraging AI' equips you with the knowledge and tools to harness AI's power responsibly and effectively. Tune in weekly for inspiring conversations and real-world applications. Subscribe now and unlock the potential of AI in your business.
229 | Insane week of AI releases 🤯 ChatGPT shopping, Sora 2, Claude Sonnet 4.5, and agentic browsers. The AI disruption might be slower than we thought. A new star AI actress is born, and more important AI news for the week ending on October 3rd
Is the AI disruption overhyped or just getting started?
Yale says the labor market isn’t budging. Walmart is betting $1B that employee training is the missing piece. Meanwhile, Gen Z is pivoting to trades in an AI-fearing talent shift no one saw coming.
This week’s AI headlines tell a much deeper story than flashy product drops.
From ChatGPT turning into a shopping mall to Claude going full autonomous coder, and the rise of “work slop” at the office—every release points to a strategic fork in the road: consumerization vs. enterprise agents.
Your job as a business leader? Know which wave to ride—and when.
This episode delivers the insights to help you navigate the noise, avoid the hype, and see what’s really happening under the surface.
In this session, you'll discover:
- Why Yale’s new research says there’s no labor disruption yet—and what that doesn’t mean
- How Walmart’s $1B upskilling initiative reflects a bigger workforce gap than most execs are ready to admit
- The quiet revolution: Claude 4.5 coding autonomously for 30 hours straight
- OpenAI’s wild move into consumer land with Sora 2 + an invite-only social video feed
- Why Instant Checkout turns ChatGPT into an e-commerce front-end (and how it could threaten Amazon)
- The rise of “work slop”—and the reputational risk it brings to your team
- Agentic browsers are here: Comet, Opera Neon, and more are changing how we interact with the web
- AI in Hollywood: The synthetic actress already replacing human stars
- And a shocking stat: 58% of employees are using AI tools with no training—and leaking sensitive data
Yale Budget Lab: Early Evidence of AI’s Labor Market Impacts -
https://budgetlab.yale.edu/research/evaluating-impact-ai-labor-market-current-state-affairs
About Leveraging AI
- The Ultimate AI Course for Business People: https://multiplai.ai/ai-course/
- YouTube Full Episodes: https://www.youtube.com/@Multiplai_AI/
- Connect with Isar Meitis: https://www.linkedin.com/in/isarmeitis/
- Join our Live Sessions, AI Hangouts and newsletter: https://services.multiplai.ai/events
If you’ve enjoyed or benefited from some of the insights of this episode, leave us a five-star review on your favorite podcast platform, and let us know what you learned, found helpful, or liked most about this show!
Hello and welcome to a Weekend News episode of the Leveraging AI Podcast, a podcast that shares practical, ethical ways to leverage AI to improve efficiency, grow your business, and advance your career. This is Isar Meitis, your host, and we have an explosive week of AI news. I probably could have recorded three or four episodes just this weekend on things that happened. There were a lot of rumors about new releases coming in the next few weeks and in Q4 in general, and we got many of those this week. I was seriously considering starting with those, but then I decided there's some other stuff to talk about besides new releases. We got a lot of interesting surveys and research showing that AI may not be as disruptive as we thought, at least not at the speed that we thought, and I felt it was very important to share that information. So we're going to start with the impacts of AI on the workforce in general; there's a lot of interesting things to cover. Then we are going to dive into all the new releases, and we have a lot to cover there from OpenAI, Anthropic, Google, et cetera. And then we're gonna do a very quick rapid fire across multiple other news items. Like every week, I cannot include all the news from this week in a single episode, so if you want to know the rest, and there is a lot to know, make sure you sign up for our newsletter; there's a link in the show notes to do that. But as I mentioned, lots to cover, so let's get started. As I said, there are several sources showing that AI is not as disruptive, at least not at the speed that we thought. One of these sources was a conference hosted by The Information, a publication that covers a lot of AI and technology topics. At that conference, Sarah Guo, the founder of Conviction, predicted that it will take at least about five years for businesses to fully integrate AI.
She was citing the 20-year adoption curve it took cloud computing to really become a part of every business as we know it today. She believes that larger organizations cannot move at the speed that AI is moving, and that it is going to be a while, again, five years, which will give us more time to adapt and get ready for how it is actually going to work. She is talking first and foremost not about the tech itself, but about the extensive worker training that is lacking right now and that is a necessity for successful AI implementation and successful usage of AI tools. I've been saying that every single week, if you've been listening to this podcast. Now, at the same conference, the chief AI officer of SAP noted that the things actually happening fastest are the boring, mundane tasks. He said that 34,000 SAP employees are already using AI for simple routine tasks like document processing and receipt auditing. So the small, daily, simple stuff is actually relatively easy to implement, whereas the more complex, sophisticated, and advanced stuff takes longer. But that is the stuff that has more impact on the economy, on results, on performance, and on efficiency. The same article also mentioned a lot of experimentation and failure in the first stages of AI implementation, just like with any new technology or any new process. Which is basically saying that not every deployment is going to work, which means it's gonna take time to get it right and then adjust from there. That being said, it is clear to everyone, and you hear this from multiple angles, that new or small startups that can move fast can gain a huge benefit, because they can adopt these technologies significantly faster, with less overhead, because they're starting without the baggage of a large workforce, legacy training needs, or an older tech stack they'd have to take into account.
And so it is very clear to everyone that there's gonna be this new era of AI-focused, AI-centered companies that are gonna be able to run significantly faster, while bigger enterprise or traditional businesses will take much longer to adapt than maybe we thought in the beginning. But the main focus, as I mentioned, is around employee training and the gap that exists right now. Many sources are showing that companies are providing employees licenses without providing them adequate training, across multiple aspects of it. I'm gonna discuss more of that in a minute. Now, another interesting angle, and I must admit I was very surprised and really dove into this information, is a research paper by a company called Jobber (J-O-B-B-E-R), which is an all-in-one solution for home services pros. So they have an angle here, and I want to put that very clearly out there. But based on their annual survey on blue collar attitudes, young workers overwhelmingly favor trades like carpentry, plumbing, electrical work, et cetera, as potential future jobs that can resist automation. What they've done in this paper is a combination of two things. They gathered information from third-party sources that were released recently, respectable surveys and research from different companies, and they performed their own research on Gen Zers, with over a thousand high school and college age individuals between the ages of 18 and 20. And what they found is actually very interesting. As I mentioned, they first of all quoted several external sources, some of which we've heard before, such as college grad unemployment spiking to 4.6%. They also mentioned information from a different source that the cost of a bachelor's degree now tops half a million dollars if you factor in tuition, lost earnings (you're not working during that time), and loan interest. And again, they have an agenda here.
They wanna show that going down the trades path is the right path, but still, this is a huge amount of money, and they showed a lot of reasons why you should consider the trades. Their survey data is actually very interesting. They asked Gen Zers what their biggest concerns are about going to college. The number one thing was student loan debt; more than 57% of youngsters said so. Immediately after that, at numbers two and three, were uncertain job prospects afterwards and a lack of real-world experience. When you come to compete with AI, that's obviously an issue, because AI already knows a lot of the stuff you don't know, and it can be trained significantly faster. So 39%, almost four in ten of the people ages 18 to 20 who were surveyed, said that uncertain job prospects afterwards is one of their biggest fears about going to college. They also asked Gen Zers' parents: have recent trends or developments caused you to reconsider the type of career path you encourage your kid to pursue? 33%, so the largest group, said it did not change anything, but 25%, one in every four parents, said yes, due to the rising cost of university education. And 15% said yes because of concerns about AI replacing white collar jobs. Now, 77% of Gen Zers themselves said it is important to them that their job would be hard to automate and replace with AI. And from a career standpoint, the number one factor for Gen Zers in career selection was job security, over passion and salary considerations. So job security was the number one factor, with AI obviously being one of the bigger risks. Parents reflected very similar concerns: more than 51% of parents said the risk of AI-driven job loss influences the advice they're going to give their kids. So what does this mean? It means that even the perception of AI's impact on the workforce matters.
And if you've been listening to this podcast, you know I think it's gonna have a very significant impact on the workforce. But as I mentioned, we're gonna share a few other examples of why this may not be as dramatic as I, at least, thought so far. Still, it is very clear that it's already impacting the decisions of young individuals and their parents on what to study, which by itself will impact the workforce. Now, staying on the topic of training the workforce, we mentioned this before, but now it's rolling out: Walmart has just announced a plan to train all of its employees on AI usage in 2026. They're planning to invest $1 billion in skills through 2026, not just in AI, but in other aspects of re-skilling their workforce in preparation for the future. They hosted a Skills First Workforce Initiative summit on September 25th at their Bentonville, Arkansas headquarters, and the summit included 300-plus experts from different firms to shape the types of skills training that will be delivered to Walmart's employees over the next year. The AI training is going to be done in partnership with OpenAI, as we shared in previous weeks, which shows you that the gap in skill capabilities, and specifically AI skills, is a big issue even in the largest organizations in the world. Some organizations, Walmart is just one example, are starting to take this very seriously. But they're only starting to take it seriously now, and we're talking about Q4 of 2025. More about that shortly. Now, the research that really caught me off guard was done by the Yale Budget Lab, and that study found that there has been no significant labor disruption from AI 33 months after the ChatGPT debut in late 2022. So again, on this podcast, all I'm talking about is job disruption, job disruption, job disruption. And so I really invested in going deeper into this research and understanding exactly what they've done, to see whether there's an agenda behind it and so on.
And they actually took a very interesting approach; the research seems completely legit, and the findings are very hard to deny. What they've done is test the occupational mix across different sectors and different aspects of the economy. The four key takeaways are, and I'm quoting: number one, while the occupational mix is changing more quickly than it has in the past, it is not a large difference from the period that predated the widespread introduction of AI into the workforce. Number two, current measures of exposure, automation, and augmentation show no sign of being related to changes in employment or unemployment. Number three, better data is needed to fully understand the impact of AI on the labor market, which is basically saying, okay, we're not sure yet, but this is what we're seeing right now. And number four, they plan on updating this analysis regularly moving forward. Now, the way they've done the analysis, as I mentioned, is really interesting. They've used a methodology they're calling a dissimilarity index, which measures the change in the occupational mix over time using monthly data from the CPS, the Current Population Survey conducted by the US Census Bureau. What this basically checks is, over time, month over month, what kind of occupations people have and how that changes. They compared several different periods in history and several different introductions of new technologies to see if the current trends differ from historical trends, which would tell us that AI is having an outsized impact. And in reality, it's not. I will share the link to the actual study and the results, but if you look at the graphs, they're absolutely astonishing. For each and every one of the graphs, they also have a baseline, and that baseline takes a period in time in which there was no significant introduction of new technology, in order to compare against it.
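To make the methodology a bit more concrete, here is a minimal sketch of what a dissimilarity-index calculation looks like: half the sum of absolute changes in occupational employment shares between two points in time. The formula is the standard one; the occupation groups and share values below are made up for illustration and are not the Yale Budget Lab's actual data or exact implementation.

```python
# Hedged sketch of a dissimilarity index over occupational employment shares.
# The real Yale Budget Lab analysis uses monthly CPS microdata; these numbers
# are fabricated purely to show the mechanics.

def dissimilarity_index(shares_then: dict, shares_now: dict) -> float:
    """Half the sum of absolute changes in occupational shares.

    0.0 means the occupational mix is unchanged; 1.0 would mean a
    complete reshuffle of the workforce across occupations.
    """
    occupations = set(shares_then) | set(shares_now)
    return 0.5 * sum(
        abs(shares_now.get(o, 0.0) - shares_then.get(o, 0.0))
        for o in occupations
    )

# Illustrative (fabricated) shares of total employment by occupation group.
mix_2022 = {"office/admin": 0.12, "software": 0.04, "trades": 0.10, "other": 0.74}
mix_2025 = {"office/admin": 0.11, "software": 0.05, "trades": 0.10, "other": 0.74}

print(round(dissimilarity_index(mix_2022, mix_2025), 3))  # → 0.01
```

The study's point is that when you compute a series like this month over month, the post-2022 values don't look much different from the baseline periods without a major new technology.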
And the graphs almost align across all the different periods of history, different sectors and industries, and so on, including the ones most exposed to AI changes. There are still big differences between sectors: the information sector sees the biggest difference, followed by the financial sector, followed by professional and business services. But overall, if you compare the trends over time to previous years, to previous technology shifts, and even to the historical average, there's not that much of a difference, which basically tells us that AI is changing the way we work, and maybe changing the types of professions, but overall, from an employment perspective, it is not making a significant change. They even tested the younger generation, trying to show that even in this market there is no big significant change. They compared the timeframe of 2015 to 2025 with 2022 to 2025 to show that there's only a slight increase in the change of the occupational mix for this specific group, the younger generation coming into the workforce. Now, why did they choose to combine ages 20 to 24 together with ages 25 to 34? Is it to hide something in the data of just one of them? I don't know. I don't have access to every analysis decision, but they have shared the data, so you can go and download the actual raw data they used and even the calculations they've done. So this looks like a very legitimate and serious piece of research that addresses the question from an angle I haven't seen before: looking at the occupational mix, basically what employed people are doing and what the unemployment rate is across these different levels, and showing that there's very little difference right now compared to previous trends we've seen in the past. Now, how does that align with previous research that we've seen? I think they're looking at slightly different things.
I think this research is looking more into what kinds of occupations we have and the potential creation of new jobs that might replace old ones, versus some of the other studies we've seen, like the ones on younger generation unemployment, which is definitely looking like a fact. And as they mentioned, they plan to update the analysis regularly, so now that I've found this research, I'm gonna look for every update they issue, and I'll keep you posted on their continued findings. Another interesting study released this week with a very significant impact on the workforce was conducted by the National Cybersecurity Alliance, also known as NCA. They found that 43% of workers have shared sensitive data with AI tools, including company financials and client information. 65% of respondents to the survey use AI daily, which is a 21% increase year over year, and they surveyed over 6,500 people across seven different countries. Now, here's the kicker: 58% of workers report no employer-provided training on AI-related data security and privacy risks. That is nearly three out of every five employees surveyed saying they did not get proper AI training, and yet they're using AI every single day. Previous surveys have shown that most people bring their own AI to work, meaning they use at work the same AI tools they use at home. About two thirds of people bring their own AI to work, and about 90% of those use the free AI tools, the free ChatGPT, the free Claude, and so on, which by default train on your data and will use your data in the AI labs' future models. This obviously generates a very big risk for companies. 96% of IT professionals in a SailPoint survey flagged AI agents and AI tools as a security risk, and yet 84% of them said that their companies are already using these tools.
So you see that in addition to the lack of training that would actually deliver higher efficiencies and real benefits from using AI, there is also a huge problem from a data security perspective, again because of the lack of training. Now, if you think that security is the only problem, and if you think that AI is driving great benefits and it's worth the risk, another study that came out this week, from Stanford, reveals really concerning results on the negative efficiency impact AI has on the workforce. The Stanford Social Media Lab and BetterUp Labs surveyed full-time US workers and found that 40% of them receive AI-generated content that, and I'm quoting, is "masqueraded as good work, but lacks the substance to meaningfully advance a given task," or what's known in common language today as slop. There are a few other interesting points here. This phenomenon is thriving horizontally: peer to peer, people at the same level of the company report that about 40% of what they're getting is work slop, while low-quality AI content sent up to direct managers is reported at only 18%. What that tells us is that people can actually identify low-quality AI outputs, and they're still sharing them amongst their peers while only sending a smaller portion, hopefully the better portion, to their managers. This is obviously not a good sign, because instead of going to the person who sent the information and saying, hey, why the hell are you sending me this slop? This is just wasting my time, go and actually do your job, they are continuing to forward this information despite the fact that they know it is bad. And they clearly know it is bad; otherwise, they would be sending just as much of it to their managers. Now, not surprisingly, the most AI-exposed industries, like technology and professional services, report the highest work slop volumes compared to other industries.
Now, if you're one of those people who generates low-quality AI outputs and shares them as your work output, be warned: half of the surveyed workers (50%) view work slop submitters, the people sending them this kind of content, as less creative, less capable, and less reliable; 42% deem them less trustworthy, and 37% see them as less intelligent. What that tells us is that it is relatively easy to identify AI-generated content at this point, or at least low-quality AI-generated content, and that the person sending it is going to be seen as less trustworthy, less reliable, less capable, and less intelligent, which is obviously not something you want for yourself or for other employees in your company. There was other similar research that I shared with you in the past few months, including the Chicago and Copenhagen study that found only minor gains from AI, a few hours a month, fragmented into bits and pieces too small to actually benefit from. But it is very, very obvious that despite the lack of clear productivity gains in many companies, again, going back to training and proper implementation, people are still doing it, and doing more and more of it, despite the fact that it makes them look bad, just because it's easy and people are lazy. And if there's a lazy path, some people, I'm not saying everybody, will take that path, which doesn't bode well for the future. Now, the bigger problem, though, is how this whole AI wave is impacting the next generation of employees. There was an interesting article in Time Magazine this week in which a Chicago high school graduate argues that, in her words, her generation is afraid of thinking without AI.
In her essay for Time Magazine, she wrote that her peers, from straight-A students to struggling learners, lean heavily on ChatGPT for essays, math, and quizzes, completely dropping critical thinking as a key part of their learning and intellectual process. The author claims that many students, herself included, are using AI to churn out high-scoring essays and other homework, but are struggling in the actual class discussions, which is not surprising, because they didn't do the actual work. So what do all of these surveys tell us? They tell us that change is coming, and it's coming fast. It may not be coming as fast as we thought, and it may be having only initial impacts on the workforce so far, but it is amplifying, and it's amplifying very fast. And the really scary thing is the gap in training: the gap in training in schools, the gap in training in higher education, and the gap in training in the actual workforce. It is very, very clear that teaching people the pros and cons of AI and how to use it effectively and efficiently will be the biggest differentiator between more and less successful people, and between more and less successful companies. And I've been saying this for the last two and a half years, almost three years now, of working with companies on how to implement AI successfully. Going back to the specific workforce aspect of this, it is very, very clear that many companies are leaving this to employees to figure out on their own, which is a huge mistake, both in terms of the efficiency gains that are not being gained (giving employees tools without proper training is basically an expense) and as a huge risk from a data security perspective. Now, I must say that something is shifting right now. As I mentioned, I've been doing this, training companies through workshops, courses, and consulting, for the past two and a half years.
Day in, day out. And something has shifted dramatically between Q2 and Q3 of 2025. The demand I'm getting for my services has skyrocketed since the beginning of Q3 of this year, continuing into Q4. The number of requests I'm getting for evaluations of current capabilities and for defining exact gaps and training roadmaps for employees is just through the roof. I'm literally traveling every single week for the next six weeks, doing workshops for different companies all around the country, and actually in Europe as well. Which is a good sign, and it connects me back to the point I mentioned earlier about Walmart finally doing company-wide AI training for all its employees, or at least the relevant employees. Every company needs to do this right now, but you need different kinds of training. I've done these kinds of presentations, training, and workshops for board members of NASDAQ-traded companies; they need to understand this to make the right decisions. I've done basic, company-wide introductions to generative AI. I've done company-specific and department-specific training, both online and in person. Each and every one of these has pros and cons, and they're all available, not just from me. The important thing is that, as a manager in your company, you need to analyze what the gaps in skills and knowledge are when it comes to AI, and you have to find a solution for that. And you need to do this very, very quickly, because as we will learn in the next segment of this podcast, it is going to be a lot more significant going into 2026. I'll mention two more things about the kind of training we provide. In addition to company-specific workshops and training, we also have training open to the public.
So if you're an employee at a company that is not providing this kind of training right now, and you do not wanna fall behind and you want to secure your employment and your future, we have two kinds of courses. We have a self-paced course that we fully updated in September of 2025, so the course you take will be aligned with the instructor-led course I finished teaching just a couple of weeks ago. There's a link in the show notes so you can go and take that course at your own pace. We're also going to open a new cohort of our highly successful AI Business Transformation course. The final dates have not been decided yet, but it will probably start at the end of November, and registration will open in the next couple of days, so stay tuned; I will keep you updated in the next episode. And now, on to the new releases from this week, starting with the one that caught the most attention: OpenAI just released Sora 2. Sora, for those of you who don't remember, caught us all by surprise as an amazing video model that promised a lot and took a very long time to deliver. And then when it delivered, it was, ah, not that great, and not as amazing as they initially promised. Very quickly, other models caught up, and shortly after, when Veo 3 was released by Google, it left Sora very far behind. Sora 2, though, is a whole different level. It is at the level of Veo 3 in video generation: it can generate background sound, effects, voice, and conversation, just like Veo 3. It has incredible physics, and some of the videos they shared are absolutely mind-blowing when it comes to showing real-world physics, with cars drifting and ice skaters and dragons soaring through the air and so on and so forth.
Not that I'm sure about the physics of dragons soaring, but I assume it's similar to birds and airplanes. Everything looks absolutely incredible, including a digital version of Sam Altman sharing all the benefits of this new model. In addition, they created a new app. It is a social media app that, again, was rumored for a very long time, and it allows you to create, share, and remix AI-generated videos made with Sora. It was downloaded 164,000 times in the first 48 hours of its existence. It's currently an iOS-only app, and by the way, it's invitation-only right now, so even if you can download the app, you may not be able to use it. In two days, it got to number three in the download charts in the US and Canada, and on day three, it became the number one downloaded app on iOS. The app enables you to generate ten-second clips from text prompts and/or photos, and you can add dialogue and sound effects, as I mentioned before, or, as Sam Altman called it, the GPT-3.5 moment of video. Now, a few interesting things about the app and Sora 2 in general. First of all, you can insert your verified likeness into videos. Basically, you can confirm that you are you and prevent other people from remixing or deepfaking you. I'm not sure how well that will actually hold up over time, but it's definitely a move in the right direction. Users can also share these videos publicly, in specific group chats, or one-to-one, and remix other creators' work while tweaking the prompts or previous videos to create slightly different videos from the originals. The app itself generates a TikTok-style feed that is personalized based on your activity, your location, and your ChatGPT chat history (there's an opt-out option for that). So like everything else in social media, you are the product, and the way you pay is with your privacy.
Here, in addition to your feed history, it's also using your ChatGPT data, which is not something you have in other social media platforms, and you can decide whether you like that or not. And again, at least there's an opt-out option that you can choose. Currently, as I mentioned, it is by invitation only, and OpenAI is offering it to heavy users of the original Sora and to Pro subscribers. So if you're a Pro subscriber, you get immediate access, but like everything else at OpenAI, they're gonna release this to Plus, Teams, and free users over time. In addition, the Android app is coming soon. I'm an Android user, so I signed up for the waitlist; basically, I'm waiting for the app to be available. It's not available yet, and I will let you know once it is. An API for Sora 2 is coming as well. Now, one thing they've done that is interesting, and that is gonna get them in trouble (it's already starting), is that from a copyright perspective, it is opt-out, meaning right now, as an example, you can create fictional universes that look very much like Star Wars or specific video games, unless the companies that created the originals choose to opt out of being usable in Sora. On the flip side, it blocks unverified real people, so unless you have approval from somebody to use their face, it is supposed to block you from generating them, which, again, I think is a great idea. Also, from a security and safety perspective, all videos get visible watermarks and invisible digital credentials that identify them as AI-generated. It also has teen safeguards, which means there's no adult content that you can generate, and it's very limited in the amount of nudity and things like that it will allow. And it is combined with OpenAI's newly released ChatGPT parental controls, which let parents monitor what their kids are doing on that app as well, not just on ChatGPT.
From a look-and-feel perspective, or from a concept perspective, it is very similar to what Meta released recently: not a new app, but the Vibes mode inside the Meta universe, which similarly lets you create, browse, and remix a feed of AI-generated videos and images. What OpenAI is claiming is that the remix capability is supposed to foster collaboration and creativity, versus just passively scrolling through content. Now, when I saw this, I got really excited and really scared at the same time, and reading Sam Altman's post on it amplified both of those feelings. So let me read you a short segment of the blog post that Sam shared; I'm specifically quoting Sam's fears about what this may lead to. Here we go: "Social media has had some good effects on the world, but it also had some bad ones. We are aware of how addictive a service like this could become, and we can imagine many ways it could be used for bullying. It is easy to imagine the degenerate case of AI video generation that ends up with us all being sucked into an RL-optimized slop feed. The team has put great care and thought into trying to figure out how to make a delightful product that doesn't fall into that trap and has come up with a number of promising ideas. We will experiment in the early days of the product with different approaches." So that was from Sam, and now back to my personal thoughts on this. As always, you need to follow the money. If OpenAI does not monetize its social feed, it has a chance of putting the right limits in place, and hopefully also gaining society's approval for doing the right thing by trying to minimize that impact. However, if their goal is to maximize time in the feed, and it might be, just like any other social media platform, then this could go terribly wrong. So just think about the current social media concept.
It is an algorithm that looks for whatever it can find in order to make you stay on the platform for as long as possible. Well, right now, this algorithm can generate the actual content. That is not the way it's implemented today, but it is definitely possible. The algorithm can see what you're responding to, can understand the cues, and at a later stage may even look at your selfie camera in real time to see your actual real-time reactions and create content on the fly to keep you as engaged and as hooked as possible. This feels very much like a Black Mirror episode, and sadly, I feel this future is very close. Even if it doesn't come from OpenAI, it's going to come from Meta or other social media platforms. Now, in addition to the risk of being more addictive than current social media feeds and sucking people deeper and deeper into a virtual-reality universe, it also will reduce social connections, because your feed will be different than anybody else's feed. It will be created in real time for you, which means sharing it will not be relevant, because it's never going to be as exciting as the feed that was created specifically for me. So there are a lot of ways this could go terribly wrong, and all the ingredients for this really bad meal are already available, right? The social feed is there, the ability to generate content on the fly is there, the ability to see our responses and their impact is already there. And so this might be a very natural next step for Meta and OpenAI, and anybody else who wants to go down that path, which really scares me, especially having three teenage kids. But the other big thing that OpenAI launched this week will probably have an even bigger impact on the world: they have released their e-commerce integration straight into ChatGPT, called Instant Checkout, which is basically a way for you to buy goods straight in the ChatGPT interface.
This feature, which was developed together with Stripe, also comes with a new open-source standard called the Agentic Commerce Protocol that will allow other companies to use the same exact concept. The initial launch is in partnership with Etsy, so you can now buy Etsy products straight in ChatGPT, and shortly after, they're going to include a million Shopify merchants on the platform as well. It is already rolling out to Plus, Pro, and free users, which will enable all these users to check out straight in the ChatGPT app. As a reminder, they have over 700 million people using ChatGPT every single week, and two weeks ago we learned from OpenAI themselves that more than 50% of usage is for personal use cases. So this is a perfect alignment, because some of what we do in our personal day-to-day is buying stuff. Now, the new open-source protocol is also very interesting, because they're taking a page from Anthropic's MCP playbook. Anthropic released MCP as an open-source protocol just about a year ago, and it took the world by storm, and I think OpenAI is expecting their Agentic Commerce Protocol to do the same thing. I don't see any reason why it wouldn't. Now, at least as of right now, product recommendations in ChatGPT remain organic and unsponsored, meaning if you ask a question, you get recommendations, but nobody's paying to show you these as ads. Again, this might change in the future. When a recommendation appears, you get an Instant Checkout buy option, and if you click on it, it takes you through checkout steps like the ones you're used to. The merchants retain full control as the merchant of record for fulfillment. So if you're buying from an Etsy supplier, the Etsy supplier will be the merchant of record, not ChatGPT or OpenAI or a third party, basically very similar to any other checkout you know from other platforms.
And these merchants will be in charge of returns, payments, support, et cetera, again, just like on other third-party aggregators or platforms like Etsy and Shopify. This new protocol ensures that you have complete control, with explicit confirmation, so the system won't just buy stuff on your behalf. And from a data security perspective, it is, as I mentioned, using Stripe on the backend, so everything we know from Stripe, which is used everywhere across the web today, is obviously still there, and it is safe as well. Etsy shares jumped 16% following this announcement, which is not surprising, because it's very obvious that this can drive significant change to the way people shop today. Now, this is not a surprise. Fidji Simo, OpenAI's newly appointed CEO of Applications, shared that part of the revenue they plan to generate between now and 2030 is going to come from commissions on the checkout process. So let's do some quick math on what this can look like. ChatGPT right now has 700 million weekly users, and they will probably cross a billion next year. Let's say a person makes a purchase on the platform once a quarter, and I think over time that will increase, but let's say it's once a quarter. That's a billion purchases per quarter. Let's say OpenAI makes half a dollar on average as a commission on each and every one of these purchases. They'd make half a billion dollars every quarter. That's $2 billion a year. That is very significant money. And again, I think the trend of shopping together with AI agents is just going to increase, which means this can be tens of billions of dollars in the 2030s, not just for OpenAI, but for any agentic approach. But because ChatGPT right now has a very big lead when it comes to personal usage of AI, I think they will be the ones that gain the most.
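To make that back-of-the-envelope math concrete, here is a tiny sketch of the calculation. Every input here is the episode's assumption (a billion users, one purchase per quarter, a fifty-cent average commission), not an official OpenAI figure.

```python
# Back-of-the-envelope commerce revenue estimate.
# All inputs are the episode's assumptions, not official OpenAI numbers.
users = 1_000_000_000              # assumed user base next year
purchases_per_user_per_quarter = 1  # assumed purchase frequency
commission_per_purchase = 0.50      # assumed average commission, in dollars

quarterly_revenue = users * purchases_per_user_per_quarter * commission_per_purchase
annual_revenue = quarterly_revenue * 4

print(f"${quarterly_revenue / 1e9:.1f}B per quarter")  # → $0.5B per quarter
print(f"${annual_revenue / 1e9:.1f}B per year")        # → $2.0B per year
```

Doubling the commission or the purchase frequency doubles the result, which is how you get to the "tens of billions" scenario for the 2030s.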
And these numbers will start competing with Google Shopping and with Amazon very, very quickly. But the biggest release of this week wasn't from OpenAI. The biggest release actually comes from Anthropic, which just released Claude Sonnet 4.5, an incredible superpower when it comes to writing code, creating autonomous agents, and doing computer and browser use at levels we've never seen before. As part of the release, they shared that Claude Sonnet 4.5 autonomously coded a Slack/Teams-style chat app over 30 straight hours of running on its own, generating over 11,000 lines of code. That quadruples their prior Opus 4 model, which topped out at around seven hours of autonomous coding. It obviously broke all the existing benchmarks on coding: it scores 77.2% on SWE-bench Verified (82% with parallel compute) and 61.4% on OSWorld computer-use tasks, up from 42% for Sonnet 4, a roughly 50% jump from one model to the next. Anthropic calls it, probably for good reason, the best model in the world for real-world agents, coding, and computer use, with 3x better browser navigation than last October's technology. One of the beta testers was Canva, which praised it as an incredible tool for, and I'm quoting, "complex long-context tasks like engineering in-product features." But they delivered a whole suite of tools and not just the model itself. We're going to dive into some of these, but a very short list: it includes virtual machines, memory and context management, multi-agent support, Claude Code checkpoints for rollbacks, and more. Now, Scott White, the product lead for Claude.ai, said this is a continued evolution of Claude, going from assistant to more of a collaborator to a full autonomous agent capable of working over extended time horizons. But it is built not just for programming.
Dianne Penn, head of product management at Anthropic, said it operates at a chief-of-staff level for tasks like calendar syncing, dashboard insights, and status updates, and she gave an example: "It's been actually really helpful generating spreadsheets of LinkedIn profiles so I can email them from a hiring perspective." Now, they're also saying that Sonnet 4.5 is their most aligned model yet, which basically means reduced risk of deception, prompt injection, and the other hazards that come with using these models. So in addition to being the most capable, they're saying it is by far the safest. They kept the same pricing as the previous model, so you pay $3 per million input tokens and $15 per million output tokens, and it's available right now on Claude.ai on the web, iOS, and Android, as well as through the API, Amazon Bedrock, and Google Vertex AI. Now, they included a lot of really powerful features, mostly for developers. The new API features include a beta memory tool for basically unlimited context via file-based storage, plus context editing. It also knows how to shorten the context from one step to the next on its own, in order to save tokens and optimize how it runs through longer sessions. And it is also very good, as I mentioned, at creating presentations, spreadsheets, animations, and dashboards, which goes way beyond just writing and executing code. They also released a new browser navigation extension for Chrome that is currently available only to Max users, the people paying $200 a month, and the examples that I've seen online are absolutely mind-blowing in terms of what people are doing with this right now.
Several good examples, both on X and on YouTube, show this performing significantly better than other agentic browsers right now. They also released a new concept called Imagine with Claude, available as a research preview, that basically creates software on the fly. You create the first screen by describing what you want, it builds it, and then when you click on something, it creates that something without you having to tell it what to do. It essentially assumes what the next step in the navigation or the development needs to be, and the software basically writes itself based on the user's interaction with it. This is a whole new way of looking at software and applications. It will be very, very interesting to see whether it actually does anything productive, and whether it's going to completely change and transform the world of applications and the way they are created. If you think about it, if an application can create itself on the fly to complete whatever you needed to complete, why would you ever install an application that somebody else has created? Now, they also released an integration into Slack, built together with Salesforce. You can use it in three different ways. You can add Claude in private messages to get it to work as an assistant. You can use an AI assistant panel to run channel-based queries and learn about all the information in a channel. And it can do thread participation, where Claude drafts responses for your review. So when somebody mentions you in a channel, you'll see a potential response ready to go, and all you have to do is edit it and approve it, and it goes out right there and then, so you don't have to write everything on your own. It has access to the private and public channels that you have access to, so it's completely aligned with your permissions.
But within what you can see, it can see everything, and it can gather information, research, and context from everything in there, including attached documents, linked pages, and so on. Rob Seaman, the chief product officer of Slack, said: "Every company is on its way to becoming an agentic enterprise. By bridging the gap between Claude and Slack users, we're creating a seamless AI experience." It is available as of right now on all paid Slack plans, and as an MCP connector in Claude if you have a Teams or Enterprise plan, so you can get it both on the Claude side and on the Slack side, both with paid plans. Now, it is very clear that in this release Anthropic is going all in on the enterprise world, and the biggest piece of that is coding. Or, as their tech lead Sholto Douglas said, the most economically important and immediately trackable area for AI is coding. If you remember, we talked about GPT-5's big focus on overtaking Anthropic as the leader in the AI coding world, and they were able to do that both on the benchmarks and in day-to-day use; more and more people reported that GPT-5 was actually outperforming Claude Opus 4.1. Well, now Claude Sonnet 4.5 is completely breaking away, both in terms of the capabilities of the model itself and the amount of time it can work independently, as well as the tooling that comes with it, especially on the developer backend side, with multiple capabilities for very granular control on the API side. That is how Anthropic makes most of its money, because most people are using Anthropic for coding not through the Claude app itself, but through the API with IDEs such as Cursor, et cetera. So, as an example, they completely revamped their SDK, rebranding it from the Claude Code SDK to the Claude Agent SDK, which reflects the new focus of where this is all going.
And the SDK comes with a huge variety of controls and capabilities already built in, including, as I mentioned, the ability to create artifacts beyond code, such as CSV files, spreadsheets, presentations, dashboards, visualizations, and so on. The examples they gave show a versatile agent-application universe, with finance agents, personal assistants, customer support, and deep research capabilities, as well as an agentic feedback loop where Claude operates on its own: gathering context, taking action, and then verifying and testing its own work, learning from the process and correcting itself along the way. That, again, is what allows it to work for 30 hours nonstop writing code, and it will just be a matter of time before it does the same in other areas as well. It is also very good at context gathering across multiple channels, combining agentic search with semantic search, with subagents that can work in parallel to find context and information in different places and combine it all together, and with a compaction tool that auto-summarizes messages to reduce the number of tokens used in the process. It also comes with a huge range of actions it can take. The first is obviously using tools, which we already know; that has been around before. But it also adds bash and scripting, which allow flexible tasks like extracting information from documents and PDF attachments; code generation, which we talked about; MCP support, so everything inside that environment can connect to anything in the outside world; and several different verification methods, including rule-based checks, visual feedback (such as seeing the screen and the output being generated), and using an LLM to evaluate what's actually being generated.
Whether it makes sense in the bigger context, overall this is a very strong, very powerful play by Anthropic to position itself as the dominant power when it comes to agents. And again, combine that with a very capable browser agent that can run in the browser and connect and report back to the API. So you can now run a development task and ask Claude to go and do the research on its own before it continues to write the code. It will go to the browser, find the right website, see how a specific thing operates, find the API documentation, come back, and then implement it all on its own, in a fully autonomous, agentic way. It is absolutely mind-blowing and scary. And again, the browser usage can do a lot more, and I've seen really amazing examples. I think this will become the norm in 2026: we will slowly stop using browsers as we do today, and we'll move more and more into these agentic browsers. I must admit, I started using Comet about two months ago, and there are more and more things that I do only in Comet, like all my technical work and every automation I build in n8n, because it just helps me and amplifies what I know how to do: it can research things on its own and then do them for me. And this is just the very early stage of that, which is a great segue to our next segment, AI browsers, because we got a few new ones this week as well.
So let's talk about agentic browsers. The first one with news this week is actually the one that is maybe the most commonly used right now, which is Perplexity's Comet browser. Comet started out limited to only their Pro-level subscribers, and as of this week, Perplexity's Comet AI-powered browser is completely open and free for the public to use. The goal is obviously to stay ahead on adoption and market share, now that Google released Gemini in Chrome in September, Anthropic debuted its browser capabilities in August and, as I mentioned, just introduced a whole new version of them, and OpenAI launched Operator earlier this year. There is still a paid Plus tier for Perplexity Comet that provides information from partners such as CNN, Condé Nast, The Washington Post, the Los Angeles Times, Fortune, Le Monde, and Le Figaro. All of these publishers are built in, with live data that streams only into the paid Plus version, but you can use Comet for everything else, like I'm using it for free right now. As part of the announcement, Perplexity shared that future enhancements coming soon will include a mobile version of the Comet browser and a background assistant that can run asynchronously, multitasking by browsing one thing while doing research on another, and so on. In addition, by the way, their phone assistant is probably the best assistant on iPhone right now, definitely much better than Siri. Sadly, it doesn't work as well on Android, and I'm an Android user, but if you are on iPhone and you're looking for a really solid phone assistant, the Perplexity assistant is actually a very good choice. And as I mentioned earlier, I do a lot of my technical work, and definitely all my automation building, in Comet, because it helps me a lot in the process.
So now that it's free, it's worth checking out. Another contender that has been in the browser universe for a while but had not yet joined the agentic browser race, and made a big announcement this week, is Opera. Opera just announced Neon, which they have been teasing since May of this year, and it's now available in several different modes. They released it initially as a closed beta on September 30th through their Neon Founders program, which is $20 per month for the standard option, or a discounted $60 for six months, which makes it $10 a month if you're committing for six months. A broader rollout is expected sometime in the near future. As I mentioned, it has several different functions. The main, most interesting one is Neon Do, which, as the name suggests, executes tasks autonomously, pulling from multiple browsing windows and tabs as well as, and this is very interesting, from your browsing history. So when it's executing tasks, it can go not just to the open tabs, but also to your browsing history to find relevant information, like summarizing things you previously searched for, and if you were looking at work-related websites, it can pull that information as well. The other interesting thing is that it runs locally, so the AI runs on board the computer rather than sending information to the cloud, which is obviously a benefit from a privacy perspective. And it shows you what the agent is doing at any given point, basically showing you its thought process and chain of thought, so you can pause it and intervene whenever you want. The other very interesting feature is what's called Cards. Think of it like IFTTT, for those of you who have tried it, the if-this-then-that automation setup that has been around for a while now: you define different use cases, basically specifying what the scenario needs to be, and then you can rerun these scenarios every time you want.
Think about any repetitive workflow that happens in a browser, whether work-related, personal, or a combination of the two. You can save it as a Card and then run the Card again whenever you want to repeat the same task, which I think is a very smart idea. In addition, they have an aggregation of that for more sophisticated, more complex jobs: instead of being tabs, these are like mini browsers of their own, holding multiple tabs, chats, docs, and other contextual information to let the overall process run. There's a similar approach in Arc, another AI-based browser, with what they call Spaces, which lets you group different tabs and so on, but Opera's take is a little more sophisticated. So, very interesting functionality from Neon, and I cannot wait for it to be available to the public to test out. In addition, they have a mode called Neon Make, which generates editable mini apps, websites, reports, games, and so on, basically anything that requires writing code and running it in a browser, similar to what we know from tools such as Claude Artifacts or Gemini and ChatGPT Canvas, which can generate code and run it within the browser itself. Another interesting variation on this concept comes from a company based in New York and Berlin called Deta, which launched its Surf product as a beta release on October 1st. It is a free, local-first application that fuses AI-powered browsing with something like a NotebookLM-style notebook. The idea is a research-focused tool that can still take actions in your browser and search autonomously across multiple sources of your information. So you give it the topic you want to research, the areas of information you want to cover, and where it should find that information, or let it run on its own, and then it will generate things.
Think of it like a Notion-style summary that is editable, which you can then edit and share in a library with other people. So, a slightly different approach. They also have the ability to create mini apps, interactive graphs, charts, displays, applets, and small pieces of code, very similar to the other tools. So: slightly different approach, slightly different focus, same kind of concept. Another contender, and the last one we are going to talk about, is slightly different because it's not aimed at the general public, but it is very, very powerful. Cursor, which is now maybe the most hyped and most used IDE when it comes to AI-based coding, has introduced the Cursor browser agent. What they have done is integrate their existing agent capability with Anthropic's web browsing and control capability to execute sophisticated tasks such as scraping data, analyzing that data, and then cataloging it in whatever way you want, as a starting point for, or in support of, the code that the IDE then actually generates. This lets the agent gather additional context autonomously from the browser. But in addition to just researching information, it can actually operate websites, because it is using the agentic capability inside Anthropic's new web agent, so it can click on links, fill out forms, extract data from web pages, or do anything it needs to do to help the coding side get access to the information or the processes it needs. Now, it also works the other way around: if you need to extract information, the coding side can write Python code to scrape a website so the agent has access to that data in the following steps, and this wheel goes on and on. Why is this helpful?
Well, in a really wide range of use cases. The first is obviously when you need information in order to continue your development, such as getting API documentation from a website to complete the process you're working on, or when you want to write scripts to operate systems and need to understand what those systems are and how they work. All of this is now possible autonomously, using the really powerful combination of the Cursor IDE and Anthropic's web agent capabilities. But as I mentioned, the Cursor version is specifically tailored for developers working within Cursor, as a plugin or an extension of that universe, so it's not directly competing with those other tools. Now to some other releases that are not the biggest ones of this week, but are still interesting and exciting. The first is Google's Gemini 2.5 Flash Image, also known as Nano Banana, which transitioned from preview to full production readiness with an announcement on October 2nd. That basically means all the functionality is now available across everything Gemini, including the API and Vertex AI. For those who somehow missed the craze, Nano Banana is an incredible image generator that is now baked into the Gemini universe, and you can run it independently or as part of the Gemini chat. The one interesting thing they fixed as part of this broader release is maybe the most annoying thing that was blocking me from using Gemini in some of these cases: the lack of control over aspect ratio. Until now it only generated square images, and it now supports 10 different formats: landscape at 21:9, 16:9, 4:3, and 3:2; square, obviously; portrait at 9:16, 3:4, and 2:3; and what they call flexible, which is 5:4 and 4:5.
So basically, almost any format you want is now built into Nano Banana, without having to switch to a different tool and do an outpaint to get the aspect ratio you want. That was driving me crazy in Nano Banana before, so I'm really excited to see it. But Google also released something really interesting without actually announcing it. I literally stumbled upon it while going to Google AI Studio to check out the new Nano Banana functionality: a new release, which I learned they just shipped in October, called AI Studio Build Mode. AI Studio Build Mode is basically a free vibe-coding tool baked right into Google AI Studio. Now, I haven't tested it yet; I literally just found it yesterday while searching for something else, but it seems to be something similar, at least at the simpler end, to the code-creation tools out there. It's probably not competing with the Replits and Lovables of the world yet, but the direction is very clear. When Google starts testing something, they usually deploy it later in one way or another. So if you want to start playing with vibe coding and you don't want to pay for the main tools out there today, starting with Google's new tool might be a very interesting option. I will test it out and share what I find, but I've already seen a few people developing basic games and the like very easily, just by simple prompting, and getting fully functional applications. So what does that mean? It means that Google, like everybody else, is looking at how to get more people to write code with AI in simple ways, and this is one way for them to test the functionality and capabilities they have there. Again, do I think they're going to try to compete with Cursor on the professional IDE side, or with Replit and Lovable and other vibe-coding tools?
I don't think so, but I will keep you posted if I see them changing direction. It is very clear, though, that they want to include some kind of vibe-coding offering in the overall Gemini package. Now, speaking of coding tools, platforms, and APIs, a very interesting piece of information caught my attention this week on X: somebody posted screenshots from OpenRouter showing the most-used models across several different categories. For those of you who don't know OpenRouter, I've been using it for probably a year and a half now. It's an incredible service that lets you integrate with their API once, and behind the scenes they're connected to all the large language models and other AI-related tools. So within your API call, you can call any model: you do one integration, and then if you want to compare different models, you can switch back and forth without really changing anything other than calling the new model by name. And they have a leaderboard that shows how many tokens are being consumed through every model API you can imagine. Right now, numbers one and two on the general leaderboard are Grok 4 Fast (free) and Grok Code Fast 1, ahead of Gemini and ahead of Claude Sonnet 4.5. Even when aggregated by provider rather than by specific model, xAI is number one, Google is number two, and Anthropic is number three, and xAI leads by a very, very big spread, with 1.24 trillion tokens versus 576 billion for Google. So more than double the tokens consumed on Google are being consumed on xAI, and more than triple the amount on Anthropic, which tells you that while we don't talk about xAI a lot, they have a very capable AI offering out there right now.
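The "integrate once, switch models by name" idea is easy to picture in code. Below is a minimal sketch against OpenRouter's OpenAI-compatible chat-completions endpoint; the model IDs are illustrative (check OpenRouter's model catalog for current names), and nothing is actually sent over the network without a real API key.

```python
import json
import urllib.request

# OpenRouter exposes a single OpenAI-compatible endpoint for every model it routes.
OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_request(model: str, prompt: str, api_key: str) -> urllib.request.Request:
    """Build one chat-completion request; only the model name changes per provider."""
    payload = {"model": model, "messages": [{"role": "user", "content": prompt}]}
    return urllib.request.Request(
        OPENROUTER_URL,
        data=json.dumps(payload).encode(),
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
    )

# Same integration, different providers -- just swap the model name:
for model in ["x-ai/grok-4-fast", "anthropic/claude-sonnet-4.5"]:  # illustrative IDs
    req = build_request(model, "Summarize MCP in one sentence.", api_key="sk-...")
    # response = urllib.request.urlopen(req)  # uncomment with a real key
```

That one-line model swap is exactly why the OpenRouter leaderboard is a useful demand signal: developers can vote with their tokens at essentially zero switching cost.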
And as we mentioned last week, they have by far the most cost-effective AI right now, which may hint at why they are leading the race on the API side by such a big spread: you can get very solid results for significantly less money than from the other models. When it comes to cost effectiveness, they're far ahead right now, and this is great proof of that. Staying on the topic of making models more cost-effective, DeepSeek just launched a new variation of their model called DeepSeek V3.2-Exp, and it's a very similar approach to what xAI did with Grok 4 Fast: taking a model, making it significantly smaller and faster, almost on par with the big models' capabilities, but at a fraction of the cost. It is now 2.8 cents per million input tokens, compared to 7 cents before, a 60% discount for very similar results. And this is a trend we are going to keep seeing: new models come out, and then smaller variations of the same models are built that deliver similar results but are significantly cheaper to consume, which is great for all of us, because it lets us get better intelligence for significantly less, and faster. The cool thing about this one, like all the other products from DeepSeek, is that it is open source, so you can grab it from Hugging Face or GitHub and use it for whatever you need in your AI implementation. It is currently the cheapest of these faster, smaller models other than GPT-5 Nano, which still leads from a cost perspective at half a cent per million tokens. Since we talked about DeepSeek, let's stay in China. Alibaba just announced something very interesting.
Their Tongyi Lab has introduced what they're calling agentic continual pre-training, or Agentic CPT, a new open-source framework that trains large language models much more efficiently than the existing process. Their main model using this new approach is significantly smaller than today's top models, and yet it rivals many of them on research capabilities. It is the first open-source model to exceed 30 points on Humanity's Last Exam, which shows very strong research and data-analysis capabilities for a significantly smaller model. And as I mentioned, it's open source, so you can use it for your own needs as well. Now, from China back to the US: Microsoft made some interesting announcements this week as well. They have integrated Copilot Chat into Word, Excel, PowerPoint, Outlook, and OneNote for all Microsoft 365 users, without requiring an additional license. So basically, if you have these tools, you now have Microsoft Copilot baked into them. They're also making Microsoft 365 Copilot more interactive with Agent Mode, which enables users to guide the tool through complex multi-step tasks in tools such as Excel and Word. And guess where this is coming from? If you are listening and paying attention, you probably guessed correctly: it comes from their new integration with Anthropic. They took Anthropic's really incredible capability to create CSV files, Word documents, and presentations, and they're baking it into the Microsoft tools, following the announcement they made two weeks ago that they're going to add Anthropic capabilities alongside the OpenAI capabilities inside the Microsoft Copilot universe. And staying on the topic of new announcements, or rather an old announcement that is finally materializing: Amazon is finally releasing Alexa Plus, which they announced at the beginning of this year.
This is basically an LLM-based, much more interactive version of Alexa that understands context, knows what you are searching for, knows the products you are buying on Amazon, has access to the internet, can do research for you, and can actually take action, like booking reservations, ordering food, and so on, all through the Alexa you already have at home, in your car, or wherever you use Alexa. This is a big jump in Alexa's contextual understanding, while keeping it connected to all the things that make Alexa helpful today: access to your music library (whether on Amazon Music, Spotify, or whatever you listen to), your viewing history on Amazon's video services, your orders on Amazon itself, searching the web, and so on. All of that is baked into the existing user interface, whether in the mobile app, on an Alexa device, or in cars that run Alexa. This is supposed to be rolling out right now. I have multiple Alexas at my house, so it will be interesting to see whether the older devices upgrade automatically, whether anything needs to be done, and whether there are any limitations, and I'll keep you posted as I learn more. We're going to end with three interesting announcements from OpenAI, and then an AI-based actress that is taking the world by storm. The first of OpenAI's three big announcements is that they just released their H1 results. Their revenue has rocketed: the first half of 2025 saw 16% growth over the entire twelve months of 2024. But at the same time, they burned through $2.5 billion in cash, because their expenses were significantly higher than that huge jump in revenue, which led to a $7.8 billion operating loss and a $13.5 billion net loss in just the first half of the year. That being said.
They have just raised another $40 billion at a $300 billion valuation, so they ended the first half with $17.5 billion in cash and securities, which can keep them running at this crazy burn rate for at least another six months. They're also pushing much more aggressively on market share: they've invested $2 billion in sales and marketing, which is more than double their entire investment in that area for all of 2024. They're projecting continued crazy growth, with $9.4 billion in revenue in 2026, but also $115 billion in cumulative burn through 2029, which is an insane amount of money and puts a very big question mark over their profitability, at least between now and then. The other interesting information released this week is how much power OpenAI is going to consume, compared to benchmarks we know today. As part of their partnership with Nvidia, they're planning 10 gigawatts of data centers, and that is in addition to the seven gigawatts of data centers they're planning as part of Stargate. To put that in perspective, the 10-gigawatt project alone matches the amount of electricity used by the entire city of New York, and the seven-gigawatt figure is what San Diego consumes at peak during a really large heat wave. That gives you an idea of how much power this company is going to consume, on a regular basis, within the next few years. One of the top researchers on this topic, Andrew Chien from the University of Chicago, warns that AI could consume 10 to 12% of global power. This obviously creates a very serious environmental impact, and while everybody is talking about trying to make this green, the reality right now is that it's very far from green: it's using energy from traditional carbon-based fuels, which is not a good thing for any of us. I really hope that sometime in the near future AI will help us solve for green energy, whether through better solar panel technology or the holy grail of fusion.
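The gigawatt figures above are statements of power (instantaneous demand), and it can help to convert them into energy over a year. Here is a back-of-the-envelope sketch of that conversion; the 80% utilization figure is an assumption for illustration, and real data centers do not run at a perfectly constant load.

```python
# Back-of-the-envelope: converting a constant power draw (GW) into
# annual energy (TWh). The utilization value is an illustrative assumption.

HOURS_PER_YEAR = 24 * 365  # 8,760 hours

def annual_twh(gigawatts: float, utilization: float = 1.0) -> float:
    """Terawatt-hours per year for a load of `gigawatts` GW at a given utilization."""
    return gigawatts * HOURS_PER_YEAR * utilization / 1000.0

print(round(annual_twh(10), 1))     # 10 GW around the clock -> 87.6 TWh/year
print(round(annual_twh(7, 0.8), 1)) # 7 GW at assumed 80% load -> 49.1 TWh/year
```

This is why comparisons like "as much as New York City" depend on whether you mean peak demand (a power figure, in gigawatts) or annual consumption (an energy figure, in terawatt-hours); a 10 GW buildout running continuously would draw close to 90 TWh a year.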
And the other thing I want to share with you, which you can go explore on your own, is that OpenAI has started publishing a new series called "OpenAI on OpenAI," in which they share, in videos as well as blog posts, how they are using the technology internally. They talk about go-to-market and give multiple examples showing how OpenAI uses its own technology, which can give you great ideas and inspiration about what is possible with AI today. And then, to end on a really interesting, weird, bizarre, surprising, scary topic: the introduction of Tilly Norwood. So who the hell is Tilly Norwood? Tilly Norwood is a fully synthetic, AI-based actress created by a Dutch AI studio, which offered her as an actress to leading talent agencies. They're basically saying she could be the next Scarlett Johansson or Natalie Portman. She's this cute, sophisticated, young actress, only she's not real. Obviously, that drove a huge celebrity backlash: this is horrible, this is the end of the world, how can people do this, what happened to human connection, and so on and so forth. This came from all corners of the A-list of actors and actresses, as you would expect, as well as from SAG-AFTRA, the actors' union, which includes about 160,000 actors and was very loud against it. You may remember the strike from a year or two ago, when they went on strike and basically shut Hollywood down until the Hollywood studios signed an agreement saying they would not use AI to replace the actors and the writers. Those of you who were listening to the podcast back then know that I said it was the most ridiculous agreement ever signed, because the studios don't stand a chance. And the reason they don't stand a chance is that no matter what they sign, there will come a time.
And that time is coming, probably faster than we think, when new types of studios will show up that can generate full-fledged movies not on a $20 million to $200 million budget, but on a $20,000 to $200,000 budget, all AI-based. There won't be any actors, any cameras, any lighting, filming, editing, or microphones, none of that will be there, and yet a lot of people will want to watch. That can lead to a huge variety of different futures. One possibility is that people will be willing to pay more for human-made movies; I really hope that's the case. The flip side can also happen: we get flooded with a huge variety of new but lower-quality movies, because anybody will be able to create them, and maybe we'll be able to go to the movie theater and pay $2 to watch a movie instead of $25 or $28 or $30, which would then allow more people to go to the movies. So there are benefits to this approach, unless you're an actor and you're afraid for your job. But that is very similar to many other jobs that AI is going to wipe out, or at least impact negatively. Do I think the near future will have at least a hybrid model? I'm a hundred percent sure of it. In the very near future we'll start seeing more and more AI in movies, either AI actors or entirely AI-generated movies, and some of them are going to be blockbusters, movies that a lot of people will want to see that may not have human actors in them at all. Do I think that's going to replace all actors in the immediate future? No. But I think our kids, and definitely our grandkids, won't care. All they'll care about is that the movie is exciting, that it moves them, and that they can enjoy it; it won't matter to them that sometime in the past, humans used to do that. This sounds terrifying and scary to some, and really exciting to others.
I think it's going to be an explosion of creativity, allowing a lot more people to create full movies, TV series, or even short videos and films, which I think is great. I am not happy about the impact it's going to have on the actors, videographers, editors, and all the other people who go into creating a movie, but I don't see any way around it. This is very similar to what happened to cartoons that were drawn by hand: the vast majority are now 3D and computer generated, and I seriously doubt there are many hand-drawn animated movies still being made, and yet that industry is still growing and a lot of people love watching animated movies. This might just be the next variation of that. That's it for today. If you've been enjoying this podcast, please subscribe so you don't miss a single episode. And while you're at it, if you give us a review on your favorite podcasting platform, and click the share button and send it to a few people who can learn from it as well, I would really appreciate it, and they would appreciate it too. We'll be back on Tuesday with another fascinating, incredibly interesting how-to episode that is going to show you several different methods for managing your time better using AI. Time is the only resource you cannot get more of, so learning to manage it more effectively, in both your personal and professional life, is extremely valuable. So come join us on Tuesday, and until then, have an incredible rest of your weekend.