Leveraging AI

243 | Super week! Gemini 3 and Grok 4.1 take the lead in the AI race, OpenAI and Anthropic sound the superintelligence alarm, a 24+ hour coding agent, and more important news for the week ending on Nov 21, 2025

Isar Meitis Episode 243

Learn more about Advance Course (Master the Art of End-to-End AI Automation): https://multiplai.ai/advance-course/

Learn more about AI Business Transformation Course: https://multiplai.ai/ai-course/

Are you prepared for the moment when your AI tools fail—and take 20% of the internet with them?

This week was one of the most explosive in recent AI history. From Google’s jaw-dropping Gemini 3 release to a stealth drop of Grok 4.1, plus the Cloudflare crash that wiped out access to ChatGPT for hours — the implications for business leaders are massive.

In this episode of the Leveraging AI Podcast, Isar Meitis unpacks the seismic shifts that happened across the AI landscape this week—and what they mean for your business. If you're leading a team, scaling a company, or just trying to stay ahead of disruption, this is your AI cheat sheet.

Bottom line: Ignore this week’s AI developments, and you risk falling behind. Fast.

📌 In this session, you’ll discover:

  • Why 20% of the internet crashing should scare every business leader
  • How Google leapfrogged the AI race with Gemini 3 Pro 
  • Why Grok 4.1’s silent release might be the biggest underdog move of the year 
  • AI agents are here: what Microsoft Ignite revealed about the enterprise AI future 
  • Klarna cuts 50% of staff—how AI is creating a new kind of workforce
  • How businesses are hitting $1.1M revenue per employee using AI 
  • The rise of humanoid robots in real-world production lines
  • OpenAI and Anthropic are warning us—are we about to lose control?
  • The executive order that may block AI regulation at the state level 
  • Why you shouldn’t buy your kid an AI toy this holiday season 

💡 Key Takeaway:

AI is evolving at breakneck speed. Leaders who aren’t proactively integrating and planning for redundancy, ethics, and upskilling will be left behind. Fast.

About Leveraging AI

If you’ve enjoyed or benefited from some of the insights of this episode, leave us a five-star review on your favorite podcast platform, and let us know what you learned, found helpful, or liked most about this show!

Speaker 2:

Hello and welcome to a Weekend News episode of the Leveraging AI Podcast, a podcast that shares practical, ethical ways to leverage AI to improve efficiency, grow your business, and advance your career. This is Isar Meitis, your host, and what a week we had. So I will start a little bit with my personal experience this week. I just came back from delivering two different workshops in Europe. Each of them was two days of full training plus a hackathon at the end, in two different locations with about 80 people in each and every one of these workshops. And it's always my favorite thing to do, because most of the people in these workshops, and I always ask people to raise their hand based on their current level, most of the people defined themselves as either total beginners or novice users of AI before the workshop. And at the end of the second day, when they show the outputs of their hackathon, it is just so exciting to see the level of progress and the practical, implemented use cases that they can start using in their businesses for things like financial projections, sales projections, inventory projections, dashboards showing correlations between marketing investment, online purchases, and store purchases, training videos, marketing videos, HR promotional videos, market research reports with very detailed analysis that save them tens of thousands of dollars or sometimes hundreds of thousands of dollars of external providers, automating website content, visual assets at scale for multiple things they need, and many, many more. And all of that after two days of training across every aspect of the business. Now, to make this even more amazing, during the first hackathon, Cloudflare decided to crash 20% of the internet. So we had very limited access to ChatGPT, so people had to figure out other ways to do this, and yet they were able to generate amazing results. Speaking of which, my 2 cents on the Cloudflare crash in general: while I assume most of you experienced issues related to that, let me explain a little bit what happened. Cloudflare went down for a few good hours on Tuesday this past week and took down about 20% of the entire internet, including many, many different websites, including X and ChatGPT. And it all happened due to a simple tweak they made to their ClickHouse database permission system that is supposed to control their bot management tool, basically allowing or disallowing bots to scrape different websites. And that led to a whole set of dominoes falling down, which eventually, as I mentioned, took down 20% of the internet for a few hours. Why is that interesting? Well, you need to remember, we talked about this many times before, and I promise you we'll dive into the episode in a second, but I think this is a very important lesson in how to prep better for our future as we become more and more dependent on AI tools and agents to run critical aspects of our businesses, while over time, most likely, forgetting how to actually do them manually. Redundancy is becoming critical, and when I say redundancy, it's across more or less everything. If you have critical aspects of your business that AI is going to run, then you need redundancy across all different levels of your IT stack, including how you deploy your services worldwide, as well as access to fallback mechanisms and the ability to redirect to other AI models if the models you're running on are not working. You need to be able to test these models in advance, and so on and so forth.
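To make that fallback idea a bit more concrete, here is a minimal sketch of what a provider-failover wrapper could look like. It assumes hypothetical client functions for each provider (the names, models, and the call functions are illustrative placeholders, not any vendor's actual API), and it simply tries each configured model in priority order until one answers.

```python
import logging
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class ModelEndpoint:
    name: str                     # e.g. "primary-model" (illustrative label)
    call: Callable[[str], str]    # function that sends the prompt and returns text

def ask_with_fallback(prompt: str, endpoints: List[ModelEndpoint]) -> str:
    """Try each configured model in priority order; fall back on any failure."""
    for endpoint in endpoints:
        try:
            return endpoint.call(prompt)
        except Exception as err:  # timeouts, outages, rate limits, etc.
            logging.warning("Model %s failed (%s), falling back", endpoint.name, err)
    raise RuntimeError("All configured AI providers are unavailable")

# Usage sketch: primary_call and backup_call would wrap your real SDK calls.
# endpoints = [ModelEndpoint("primary", primary_call), ModelEndpoint("backup", backup_call)]
# answer = ask_with_fallback("Summarize today's sales report", endpoints)
```

The same logic applies beyond code: the backup path only helps if you exercise it regularly, which is exactly the point about manual drills coming next.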
And if you wanna be even safer, you need to provide training to your people on a regular basis, let's say every quarter or every six months, on how to do the manual process that used to exist before AI took over, because you may need to use it. We used to do that in the Air Force all the time and run some segments of large exercises without any computer systems. And that retained our capability to work manually if we had to, and you should probably do the same, specifically for critical aspects of the business that are run by AI. But this was just a very small component of what happened this week. As the saying goes, when it rains, it pours. We got multiple new models this week from multiple vendors. All of them are absolutely incredible, breaking all the different benchmarks and so on. We have a completely new AI leaderboard ranking that has been shuffled from just a couple of weeks ago, we got some very significant warnings about the impact of AI from leaders in the industry, including Anthropic, OpenAI, and leaders in the consulting industry like PwC and Gartner, and it was Microsoft Ignite this week, which had its own new set of announcements on how AI is going to be integrated into everything Microsoft. So we have a lot to cover, so let's get started. The biggest and most exciting release of this week, which was highly anticipated, and we talked about this in the last few weeks, was the release of Gemini 3. It was known, or expected, as I mentioned, that Gemini 3 would come out in November, and there were a lot of rumors about how much better it was going to be, but nobody knew exactly what it was. Now we actually have access to it. It was, as expected, released this week, and it is an incredible model across the board. How incredible? Let me describe this to you just by looking at and analyzing the information from LMArena, which is a platform that allows people to compare different models without knowing what they are, so a blind comparison, and pick the models that perform better for them across multiple aspects. We talked about LMArena multiple times on this podcast. Well, I have a screenshot from it from just a few weeks ago, because I was talking about this in a keynote that I did at a conference. Back then, a long time ago, meaning about two weeks, the first three spots across the board, combining the multiple results together, were held by two different variations of Claude Sonnet 4.5 and by Claude Opus 4.1, with Gemini 2.5 Pro just in the fourth position, followed by ChatGPT, then Claude Opus 4.1 again, and then GPT-4.5, and so on and so forth. But the first three spots were held by Claude. Right now, Gemini 3 Pro is number one overall: number one on hard prompts, number one on coding, number one on math, number one on creative writing, number one on instruction following, number one on longer query, and number one on multi-turn, which is more on the agent side. Number two, and we're gonna talk about this in a minute, is Grok 4.1 Thinking. And when I say it's number two, it is sharing number one with Gemini 3 on hard prompts, coding, math, and multi-turn, and overall it is sharing the first place with Gemini 3. It's number two in creative writing, number two in instruction following, and number four in longer query, followed by Claude Sonnet in third place. So two different models that were both released this week have surpassed Claude Sonnet 4.5. If you wanna dive into some of the specific benchmarks:
Gemini 3 scores 92.1 on the MMLU-Pro reasoning benchmark, compared to Claude Sonnet 4.5 at 89.7 and OpenAI at 88.2. And just like all the previous releases, this is not just a single model release. It is a family of models, from the lightweight Nano model that is built for on-device usage to the ultra-complex reasoning model that integrates advanced video understanding, code generation, and real-time collaboration tools, and overall, as we've seen, amazing capabilities across the board. It was all trained on Google's own hardware, TPU v5p clusters, and this model fully integrates into everything Google. We are going to talk more about that in a minute. And if you remember, or if you want, go back to the early episodes of this podcast in 2023, when Google was very far behind and delivering embarrassing results when it came to AI. I said all the time that Google is going to win this race because they have everything they need in order to win the race. And here we are, where Google is back ahead, and not just back ahead. They're integrating it into more and more aspects of Google. They have the perfect vertical and horizontal integration of these models, from the demand to the data, to the compute, to the people, to the distribution channels, to the tools that people use. Literally everything you want, Google has it, and it was just a matter of time until they were going to integrate it all together. This is obviously not the final step of the race. It's just the current step of the race, where Gemini 3 is ahead. Based on internal information from Anthropic that was shared by The Information, internal tests within Anthropic are showing Gemini 3 outperforming Claude in seven out of 10 categories, including math, vision, language tasks, and other aspects. And this has driven two different things. One is that they are pushing their relationship with Amazon to get more GPUs in order to grow faster, and the other is that Dario Amodei reportedly said in an internal message to different partners: "We're not sleeping on this. Claude 4 drops in Q1." So the race is on. But in addition to the fact that it's another step in the race, this is maybe the most multimodal, most generalized model we ever got. If you want to understand what that means from the horse's mouth, Demis Hassabis, the CEO of DeepMind, said, and I'm quoting: "Gemini 3 isn't just smarter. It's a foundation for agents that reason across modalities, from analyzing live video feeds to co-authoring code in real time." Now, in addition to its incredible capabilities across the board, there was a big focus on red teaming and safety, and a big part of the announcement emphasizes the amount of effort they invested in that. They're saying that Gemini 3's safeguards block 85% more harmful prompts than Gemini 2.5. Now, is that enough? That's a very good question, because what I'm asking myself is: as these models get better, the risk of the damage that they might do gets much, much higher. Because if the original models could write a poem that wouldn't rhyme properly, these models can run your company. They can potentially help generate dangerous weapons or write really problematic, hard-to-contain malicious pieces of code, and so being able to block those is a lot more important right now. And while they didn't share what percentage of harmful prompts it is actually able to block, or which ones still get executed despite these controls, even the 15% that was not improved over previous models might be unacceptable, depending on what the prompts are.
So when you read these announcements from these leading labs, I want you to think about that. Think about what is possible with this new model and how acceptable it is to allow that capability to not be fully aligned and to be usable for negative things. And I'm not sure these two graphs align in a way that is acceptable from a social or business perspective. Another big part of Google's announcement was the release of Gemini 3 Pro Image, also known as Nano Banana Pro, which delivers a huge upgrade over the already amazing original Nano Banana. It delivers studio-quality control while maintaining incredible subject consistency, including using multiple characters in several different images and up to 14 different objects, all in a single workflow, which enables you to seamlessly swap backgrounds or outfits, or blend multiple reference photos into a single unified image. In their release page, which will appear in our newsletter, they are sharing multiple important capabilities. On the page that discusses the capabilities of the new model, they highlight several important upgrades. One is generating clear text, sharp and in exactly the right setup it needs to be in. Some of the examples that they're showing are absolutely mind-blowing, such as an actual comic strip with the images and the text above and the text below in handwritten notes, and multiple other examples of amazing text across different setups, which means you can now generate ads or comics or anything you need with text on top of it, flow charts, et cetera, with Gemini. Another very interesting capability is real-world knowledge. Because it is a part of Gemini, it's not just an image generation tool; it can do really cool things. One of the examples they gave is a prompt that says: high-quality flat-lay photography creating a DIY infographic that simply explains how solar energy works, arranged on a clean, light gray textured background, et cetera, et cetera. And the output is incredible. It looks like somebody did a do-it-yourself arts-and-crafts project to explain how this works, and it shows the sun and solar panels and an inverter, and the house and the electricity and the grid and so on, all with perfect text and 3D cardboard-looking components that show the entire process. They're also showing a user manual, if you want, like a really cool strip out of a cookbook showing how to prepare chai, and then a step-by-step preparation process with really cute drawings and the description below it, all with perfect text and perfect images. And it's absolutely mind-blowing that this is done with a single simple prompt. They also have the ability to translate ideas and place them on whatever you wanna place them on. The example they're showing is taking a can with a full texture and design on it, and taking the text on that can and changing it to other languages, in this particular case into Korean. It overlays it on top of the can perfectly, translating all the text on the can to Korean while keeping everything else consistent. And speaking about product placement, you can take your logo or whatever design you wanna put, and they're showing how you can put it on a bag, on a cup, on a t-shirt, and on billboards, and anything that you want. They're also showing studio-quality control over the image itself: different shot types, whether wide angle, panoramic, or close-ups.
Also the depth of field, what's gonna be in focus versus not in focus, as if you're actually using a real camera. That was available before; it's just getting significantly better. They're showing the ability to completely change the lighting and the environment of an existing photo. So they're showing a photo of an elk standing on top of a cliff on a gloomy day at sunset, and then they're changing it to the middle of a beautiful day with blue skies and a few clouds in the sky, including the shadows on the background and so on. Absolutely incredible, with full control over shadow and contrast and everything you can imagine that you can do in post-production, only done with simple words. And the input can obviously be real images and not just AI-generated images. They're also providing a built-in ability to upscale at 1K, 2K, or 4K resolution, which means you can crop images and then upscale them, crop them again and upscale them, and keep on generating better and better resolution for different components of larger images. There's now full control of aspect ratio. That was my biggest gripe when Nano Banana came out: you couldn't change the aspect ratio. Then they added basic changes, where you could go to like four different aspect ratios, and now you can basically do whatever you want: wide strips, tall strips, 16 by 9, 9 by 16, one to one, whatever you want, with a single prompt while keeping everything else exactly the same. An extremely powerful capability. Subject consistency across multiple images, including multiple objects and/or people. So they're showing multiple examples of either cute, furry creatures in multiple standalone images brought into one single image while making them all look the same, or taking an image of a dress, a person, a chair, and a plant, and making an image of a studio with all these components in it, or showing six different images of tennis players wearing different clothing and putting them all into one single shot while keeping consistency of the people and the stuff that they're wearing. By the way, the stuff that they're wearing is all made out of balloons, like the long balloons that you use to create balloon figures, so all their clothing is made out of that. But the trick, again, is taking six of those images and turning them into just a single image. And they're also adding the capability to ask for multiple frames from the same single prompt. So you can put in one prompt and ask it to generate multiple variations of it, just to get ideas and pick the one you like the most, without having to ask it to generate another one and another one. And they've upgraded their ability to create realistic images of basically everything you want. They're saying landscapes, plants, people, and animals with true-to-life details. Go to their website, see the examples. They are stunning, and there's zero way to tell between them and live photos. What does all of that add up to? It adds up to professional capabilities to generate visual assets at scale for anything that you want, all within Google Gemini, without having to move to a different platform. Since Nano Banana was introduced, I am probably generating 90% of my images with Nano Banana. This will probably push it to the remaining 10%. It is just the best model out there when it comes to real-life usability.
So if you're just looking for a creative outlet, maybe Midjourney is still a little better, and I'm saying that maybe. But as far as usability for business people generating assets for business projects, from my perspective right now, Gemini's Nano Banana Pro is far ahead of everything else out there. Combine it with the fact that it is available across all the different tools in Google, including where you need it the most, such as generating images for PowerPoints, and you understand why I switched to using it almost completely. Even when I use it in complex processes in other tools, Nano Banana is my number two go-to tool because of its ability to keep consistency, to change angles, to change lighting, and so on. And that was before the introduction of Gemini 3. But to put things in a broader context of how far we've come, I want to quote a segment out of Ethan Mollick's latest post. Those of you who don't know Ethan Mollick should get to know him. Ethan Mollick is a professor at Wharton and an AI researcher, and he publishes really brilliant insights, both from the research that he and his team do, as well as what he thinks about AI and its impact on the world. In his Substack, which is called One Useful Thing, he shared the following about Gemini 3. He talked about the fact that he released his first article in the Substack about ChatGPT just as it came out, and how his predictions back then were relevant. But then he thought about what would be the best way to show the growth in capabilities between ChatGPT back then and Gemini 3 right now. So while he was thinking about how to show the difference, he just prompted Gemini 3 to help him figure out the difference and display it in a way that would be relevant. He said: show me how far AI has come since this post by doing stuff. And Gemini responded with the following: I can definitely show you the difference. The tweet you shared is from November 2022, right on the cusp of the modern AI boom. Back then, the wow factor was that an AI could write a coherent paragraph or a simple poem about a silly concept like a, quote, candy-powered FTL drive and escaping otters. To demonstrate how far we've come, I'm not just going to write a text about it. I have built you a fully interactive, playable, candy-powered FTL starship simulator. In 2022, AI could describe the engine. In 2025, AI can code the engine, design the interface, and let you pilot the ship yourself. And then he talks about the big jump in coding tools, from being able to do code snippets to being an actual, quote, worker that is writing code and working alongside you, creating or fixing significant portions of the code. And he continues by talking about the fact that AI is now at PhD level across multiple aspects of work. And how did he measure the PhD-level work? Well, he gave it work that he would give his PhD students and watched how well it performed. On some of the things it performed extremely well; on some others it didn't do that great, but it's definitely a huge improvement over anything we had before. The summary of what Ethan wrote is: three years ago, we were impressed that a machine could write a poem about otters. Less than a thousand days later, I am debating statistical methodology with an agent that built its own research environment. The era of chatbots is turning into an era of the digital coworker.
And while Gemini 3 is definitely the biggest news this week as far as model releases, it is definitely not the only release of a significant model. OpenAI released GPT-5.1 Codex Max, which is their new agentic coding model that can do a few really incredible things. First of all, it is a lot more accurate than the previous one: it scores 77.9% accuracy on the SWE-bench Verified coding evaluation, compared to 73% for the previous model. So almost a 10% increase, while reducing the number of thinking tokens by 30%. So the cost to generate these more accurate, better results was cut by 30%, which is very significant. But what is more interesting, from my perspective, as far as its impact on the broader world of AI and not just on coding, lies in this one paragraph that I'm going to quote from the release: GPT-5.1 Codex Max is built for long-running, detailed work. It's our first model natively trained to operate across multiple context windows through a process called compaction, coherently working over millions of tokens in a single task. This unlocks project-scale refactors, deep debugging sessions, and multi-hour agent loops. In their internal trials, in some of these cases, this new model clocked flawless completion over 24 hours straight. So while most of us are not software developers, this is a breakthrough that we did not have before. What they're basically saying is that the AI on its own knows how to skip from one chat to the other, basically continuing a coherent process across multiple chats, more or less eliminating the context window limitation that models had before. And I need to open parentheses and do a quick explanation for those of you who don't understand what the hell this means. Every chat that you do in each and every one of those tools has a limited memory it can run in a single chat. It is called the context window, and it is measured in tokens. Tokens, for those of you who don't know, are segments of words. This is what these tools actually generate. They're token machines. They don't actually generate words; they generate segments of words that are called tokens. Most of the models so far supported between 128,000 and 256,000 tokens. The biggest outliers were the recent Claude 4.5 with a million tokens and Gemini 2.5 Pro with 2 million tokens. That was the uppermost limit we got out of any model. What GPT-5.1 Codex Max can do is basically do this in an unlimited way, because it can keep on compacting the outcomes of the previous chat to start a new chat while using only a very small portion of its context window, and then start fresh and keep on going, and then it can do it again and again and again, conceptually eliminating entirely the limit of the context window. And while they didn't say anything about this in the actual post itself, this completely changes how AI can work as a whole. Because if they can do this for code, you can do this for anything else, which means you can do tasks over hours, days, and weeks, conceptually. And as long as you can control the drift, you can now do tasks that previously were just not possible for AI and are becoming possible right now. So this is the biggest take for me about this model, in addition to obviously its capability to write better code.
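To make the compaction idea a bit more tangible, here is a minimal conceptual sketch of how such a loop could work. This is not OpenAI's actual implementation; the run_step and summarize helpers are trivial placeholders, and the only point is the pattern: when the working context gets close to its limit, the agent compresses what happened so far into a short summary and keeps going in a fresh context.

```python
# Conceptual sketch of a "compaction" loop. run_step() and summarize() are
# stand-ins for real model calls; only the loop structure is the point.

CONTEXT_LIMIT_TOKENS = 200_000   # illustrative context window size
COMPACT_THRESHOLD = 0.8          # compact once 80% of the window is used

def count_tokens(text: str) -> int:
    return len(text) // 4        # rough stand-in for a real tokenizer

def run_step(context: str) -> tuple[str, bool]:
    # Placeholder: a real agent would call the model here and decide if done.
    return "worked on the next sub-task", False

def summarize(context: str) -> str:
    # Placeholder: a real agent would ask the model for a compact summary.
    return context[-2_000:]

def long_running_task(goal: str, max_steps: int = 100) -> str:
    context = f"Goal: {goal}"
    for _ in range(max_steps):
        step_output, done = run_step(context)
        context += "\n" + step_output
        if done:
            return step_output
        if count_tokens(context) > COMPACT_THRESHOLD * CONTEXT_LIMIT_TOKENS:
            # Compress everything so far and continue in a fresh context
            # that still carries the essential state forward.
            context = f"Goal: {goal}\nProgress so far: {summarize(context)}"
    return context
```

In other words, the hard part is not restarting the chat, it's deciding what survives the summary, which is exactly the "drift" you would have to control for multi-day tasks.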
But as I shared in the beginning of this podcast, the model that came out of nowhere in a completely stealth release, and that is sharing the number one spot with Gemini 3 on most of the benchmarks and most of the leaderboards, is Grok 4.1. So what is Grok 4.1 great at? Well, as I mentioned in the beginning, more or less everything. It has significantly improved, smarter and sharper emotional capabilities. It has better creative capabilities when it comes to writing. It is really good at real-world reasoning and delivering more empathetic chats with significantly fewer hallucinations. It is also very good at doing all of this faster than most models, and it optimizes when to think and when not to think better than other models. I have been using Grok to generate the preparation for these news episodes for a very long time, and every time a new model comes out from any of the other labs, I test Grok against it, and every time Grok keeps on winning, and now they have an even better model that does an even better job at helping me in that process. So while it may not be the best at everything, on several different things it is dramatically better than other models, such as getting live data from the internet and summarizing it in the way that I want. The other interesting aspect: if you remember when Grok came out, it was very edgy in its approach to everything, and that was what helped it stand out. Right now, Grok 4.1 is at the top of the EQ-Bench 3 benchmark that measures understanding of human emotions, and it is responding with more empathy than any other model out there. This is more or less the opposite of what it did in the beginning, when it was very sarcastic and very direct, which is a huge change and big kudos to the xAI team. Another interesting remark about the way they deployed it: they secretly, or if you want, quietly, deployed it to more and more users between November 1st and November 14th to gauge people's responses, how they're using it, and the differences between it and the previous model, and only then did they make the big announcement of the release to everybody else, which I think is a very smart way to release models and test them, not just on LMArena, but on the actual user base at a smaller scale. Overall, a highly capable model, and I will definitely test it across different things that I do. And as I mentioned, right now, across all the different results on LMArena, it is sharing the number one spot more or less on every benchmark together with Gemini 3. Why would I probably still use Gemini 3 more? Well, because it's integrated into my day-to-day life, because I am a Google platform user. Another big, interesting release this week: Alibaba just unveiled a free Qwen app, currently released in China on iOS, Android, web, and PC platforms. The goal is to create a go-to app hub for everything in the Qwen series, or as the company phrased it, a smart personal assistant that not only chats, but gets things done. From Alibaba's perspective, that's a very obvious next step. They're currently controlling a huge part of how people in China engage with the world, but the Qwen models so far focused more on enterprise and not on end users, and this is definitely a push in the opposite direction, towards a consumer-facing application that will integrate everything Alibaba into an AI application. So what kind of features does it have? It allows you to do deep research, AI-assisted coding, smart camera for visual queries, voice communication just like ChatGPT's live voice mode,
and even multi-slide PowerPoint presentation generation from a single prompt. But as I mentioned, Alibaba, being the giant it is, has integrated it with everything Alibaba, so it includes services such as maps, food delivery, travel booking, an office suite, e-commerce, education, health guidance, et cetera, et cetera. Basically, everything that Alibaba knows how to do will be integrated into this app, very similar to what Google is doing, and very similar to what OpenAI is attempting to do. Now, while it's currently released only in China, they're already working on international variations of this, with the goal, obviously, to grow globally with this app. Now, in addition to the big releases, we got a lot of new features this week. OpenAI just released group chats across the board for everybody. Following a one-week pilot in Japan and New Zealand, OpenAI is now deploying group chats for every logged-in user, including Free, Go, Plus, and Pro plans worldwide. What does that mean? It means that you can invite up to 20 participants via shareable links to join an existing chat. These chats live in a separate universe in your ChatGPT application, which means you can see them separately and see all your shared chats, but they also do not impact, and are not impacted by, the memory of the application, which is awesome. It means that you can share whatever you want in these chats, and your personal information and the memories about you and your business are not gonna be shared through this chat, and what you do in these chats is not gonna impact the memory of ChatGPT about you and your business and so on. Another important aspect of this is that the human-to-human communication doesn't count towards your limits or your context window, meaning when you have a regular conversation with people, it does not consume tokens. However, when you ask the AI to do something, it will, which makes it a more efficient way to engage with AI while engaging with other people. Now, this is obviously the future of how we are going to work. AI will be embedded into most conversations, processes, and projects that we're going to have in an organization, and most likely into 100% of work-related conversations, allowing us to tap into this AI capability as we need it, meaning you can collaborate with people and AI agents as needed at the same time. In the work environment, this is going to be absolutely magical, because you can collaborate with other people on your areas of expertise, but every time you need a capability that AI can do better than any of the team members, such as research, data collection from multiple sources, both internal and external, report generation, code writing, generating applications on the fly for things that you need, creative assistance, et cetera, et cetera, et cetera, you can ask the AI to help and be a part of the conversation. And because it was a part of the conversation all the time, it has the full context of what's going on and it can participate in the most effective way. This is the holy grail of collaborative work between humans and AI, and I anticipate this to be, as I mentioned, everywhere, not just in the tools from the big labs like Gemini and ChatGPT and so on, but also available in any other platform that we're using today, such as Slack, Microsoft Teams, et cetera.
It is probably not going to be limited to only chat, which means it will also be available across all the voice and video communication and live meetings, so either Zoom, Teams, Meet, et cetera, or in actual live human meetings where there's a microphone open and/or a camera where the AI can participate. And I feel that this will become more or less natural, at least in some organizations, through 2026. While this sounds a little crazy to many people right now, I do think that this is where it is going, and I think teams and organizations that learn how to work this way will see an incredible acceleration in their ability to do the things that they want and need to do. So I highly recommend to all of you to learn how to do that as well. Staying on the conversation of ChatGPT and new functionality that is coming: a very interesting partnership between OpenAI and Intuit, the company behind TurboTax. Intuit is going to pay OpenAI more than a hundred million dollars per year for integrating and enabling ChatGPT users to tap into TurboTax for instant refund estimates, or Credit Karma credit reviews, directly in their chat. So just as many other tools integrate into ChatGPT, such as travel booking and ordering food and ordering stuff from the supermarket, this is going to be another application. And on the other side of this partnership, OpenAI is going to provide enterprise licenses for internal usage by Intuit employees, over 18,000 of them, and they're gonna be used for everything from coding to customer support, research, et cetera. As I just mentioned, and as we detailed in previous episodes, this is a big push by OpenAI to have multiple applications running inside of ChatGPT, such as Spotify and Shopify and Zillow and so on, and this is most likely gonna be a big revenue channel for OpenAI in the future. Right now it just builds an ecosystem that will, from their perspective, be able to compete with centralized platforms such as Google, in order to drive people to do everything within the OpenAI environment. That being said, this gets a lot more sensitive, right? There's a very big difference between ordering pasta and tomatoes from the supermarket, or even doing research about available flights to a specific destination, and allowing AI to get access to your financial information and taxes. But this is the direction OpenAI is pushing, and if they can make it work, it will open the doors for a lot more personal aspects of communication together with AI, including other sensitive data such as healthcare or legal information and so on. And speaking of travel and app capabilities, Google just expanded access to their AI-powered Flight Deals tool. This tool, which was released earlier this year only in the US and Canada, is now available in beta in over 200 countries, including the UK, France, Germany, Mexico, Brazil, Indonesia, Japan, Korea, and so on, and in more than 60 languages. Separately, in AI Mode, which is currently available through Google Labs on desktop and only in the US, the new Canvas tool that comes as part of it lets users kick off complete trip planning with a single prompt, which will then generate hyper-personalized agendas pulled from real-time Search data, Google Maps reviews, photos, web intelligence, and so on, to create a detailed, step-by-step plan for the trip. And because it's done in Canvas, you can then edit, change, copy, paste, and do whatever you want to every step of your trip.
Now, in the initial step, which is what we have right now, AI Mode can only book restaurant reservations, for US users only, by querying multiple aspects such as party size, date and time, location, and the type of restaurant you want. But in the future, they are planning to integrate it with flight and hotel bookings as well, allowing you to plan, but also book, an entire trip just by having a chat with your personal agent. But speaking of agents, as I mentioned in the beginning, Microsoft held their Ignite event this week, which was all about, as you can expect, AI agents in the enterprise. The keynote was about two and a half hours long, so while I have highly recommended watching keynotes in the past, this is not one I recommend watching, but I will try to summarize the key points that came out of that entire event. In general, as I mentioned, it's about agents in the enterprise, but they added a few very important capabilities to push the concept of enterprise-wide agent implementation to a whole different level. Maybe the most interesting aspect of this to most of you, not being IT professionals and probably working in smaller businesses and not in huge enterprises, is the reduction of the price of Microsoft 365 Copilot for businesses from $30 to $21. That's a 30% discount, and the goal is very, very clear: to make this accessible to companies that have lower budgets for AI implementation, so more and more companies can afford it. They also added several different layers to make the control and management of agent deployment more manageable for large organizations, and they introduced a tool called Agent 365. What Agent 365 is supposed to do is give IT professionals, or whoever is going to manage the AI deployment in the organization, better visibility and control over all the different agents that are deployed across the organization. They've included five key capabilities: registry, which is a single source of truth for the inventory of all agents, including shadow agents, meaning stuff that people develop on their own, so you can see exactly who's running them, where they are being used, and so on; access control, which requires a unique agent ID and enforces the principle of least privilege, to allow or not allow people to access different kinds of agents that can do different things in the organization; visualization, which is unified dashboards to track connections and monitor ROI; interoperability, which allows agents to access organizational context via a new tool that they call Work IQ; and, obviously, security, which is in-depth protection using tools like Microsoft Defender and Purview applied to the AI agents' capabilities. They also created a new program called Agent Factory, which is designed to help organizations get assistance and move faster from ideas to production. It includes several different components. One of them is the ability to switch to a Microsoft agent pre-purchase plan, also known as P3, which is a single metered plan that allows you to use agents across everything that you developed. It uses a new measurement called agent commit units, or ACUs, which replaces the concept of tokens in the agent world, meaning you commit to buying X number of these ACUs upfront, and then you can run agents across your entire organization, dramatically reducing the complexity of the different levels of licensing and setups that you needed before. Which I think is a very smart idea from Microsoft's perspective, driving this to much broader adoption in organizations, because you know what you are capping your organization at, and combining that with the ability to track which agents are actually being used, what they're being used for, and what the ROI is allows you to then make much smarter decisions as far as which agents to develop, how many of them to keep, which ones to tweak, and so on.
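To make the pre-purchase idea concrete, here is a tiny, hypothetical sketch of what tracking agent consumption against a pooled commitment could look like. The unit names and numbers are made up for illustration; Microsoft's actual metering and pricing may work differently.

```python
from collections import defaultdict

class AgentUnitPool:
    """Toy tracker for a pre-purchased pool of agent usage units."""

    def __init__(self, committed_units: int):
        self.committed_units = committed_units
        self.used_by_agent = defaultdict(int)

    def record(self, agent_name: str, units: int) -> None:
        self.used_by_agent[agent_name] += units

    @property
    def remaining(self) -> int:
        return self.committed_units - sum(self.used_by_agent.values())

    def report(self) -> dict:
        # Per-agent consumption is the raw input for ROI and keep/kill decisions.
        return {"remaining": self.remaining, "by_agent": dict(self.used_by_agent)}

# Illustrative usage with made-up numbers:
pool = AgentUnitPool(committed_units=100_000)
pool.record("invoice-triage-agent", 1_250)
pool.record("customer-support-agent", 4_800)
print(pool.report())   # e.g. {'remaining': 93950, 'by_agent': {...}}
```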
This plan also includes an army of what they call forward-deployed engineers, or FDEs, which are experts who are going to work together with clients of this program, helping them accelerate AI solutions and get them to production much faster than organizations can do on their own. The Agent Factory also provides tailored role-based training, both live and instructor-led, to push AI fluency across different kinds of teams in the organization. And as I mentioned earlier, they introduced a new tool called Work IQ, which is the intelligence layer that is going to power future Copilot agents. Work IQ helps Copilot understand the user, their job, their company, the data of the company, emails, files, et cetera, plus memory and inference, all at the same time. These tools thrive on one thing and one thing only, and that is context. The more context they have, the better they work. And the idea of Work IQ is to gather the user's context and provide it seamlessly to the agent so the user doesn't have to, which in return will deliver much better results, which is very smart from Microsoft's perspective and very similar to what Google is doing, only with a much bigger focus on the enterprise. They are integrating all these new capabilities into more or less everything Microsoft. They now have a Teams mode for Microsoft 365 Copilot, which means that your one-on-one Copilot chats can turn into group chats inside of Teams. We just talked about this as a concept, and now you know it is already available and possible. Additionally, there is a facilitator agent in Teams that is now generally available and is helping to manage agendas, take notes, and keep meetings on track. The free Copilot chat integration in Outlook will soon be upgraded to be content-aware across your entire Outlook inbox, calendars, and meetings, allowing you to get information about everything in your day-to-day, and agent mode in Word, Excel, and PowerPoint is coming to all Microsoft 365 subscribers, enabling them to generate complex documents, spreadsheets, presentations, and so on, straight in the apps while using AI. So where does that put us? It is showing how aggressive Microsoft is about becoming the everything-AI for any Microsoft user. And the more they integrate this into the existing tools, and the more they allow it to be context- and content-aware across more and more aspects of the Microsoft ecosystem, the more it will provide value to users, which in return will increase the usage of AI across the board. Definitely a huge step forward from an enterprise management perspective, and from an agent creation and deployment perspective, from Microsoft in this recent announcement. Staying at the enterprise level, Salesforce just announced a new capability called eVerse, which is a simulation sandbox that lets developers stress-test and refine voice and text agents using synthetic data for reinforcement learning. So what does that mean?
It means that previously, you could develop an agent, test it a little bit with people, and then deploy it to the real world, letting it face real-world issues only when it went live. Why? Because that was the only way to test it with live data. What eVerse does is simulate real-world, chaotic data, such as bad connections, different accents, and cross-talking between different participants on a call, to expose the AI to real-world environments and real-world situations, allowing you to test models and then fix them while still in the development phase versus after rollout, which is highly beneficial. It looks at your real data and then creates synthetic data that resembles the real data, so you can test these models against it.
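As a rough illustration of that synthetic stress-testing idea (not Salesforce's actual implementation; the perturbations and the hypothetical agent_reply function are just stand-ins), here is a minimal sketch of how you could mutate real transcripts into messier synthetic ones and check how an agent holds up:

```python
import random

# Simple, illustrative perturbations that mimic messy real-world calls:
# dropped words (bad connection), filler words (hesitation/accents),
# and interjections from another speaker (cross-talk).
def add_noise(utterance: str, rng: random.Random) -> str:
    words = utterance.split()
    if len(words) > 3 and rng.random() < 0.5:
        del words[rng.randrange(len(words))]                  # dropped word
    if rng.random() < 0.5:
        words.insert(rng.randrange(len(words) + 1), "uh")     # filler word
    if rng.random() < 0.3:
        words.append("-- sorry, you cut out there")           # cross-talk
    return " ".join(words)

def agent_reply(utterance: str) -> str:
    # Hypothetical placeholder: a real test harness would call your agent here.
    return "Sure, I can help with that billing question."

def stress_test(real_utterances: list[str], variants: int = 5, seed: int = 42) -> float:
    """Return the fraction of noisy variants the agent still answers non-emptily."""
    rng = random.Random(seed)
    passed = total = 0
    for utterance in real_utterances:
        for _ in range(variants):
            noisy = add_noise(utterance, rng)
            total += 1
            passed += bool(agent_reply(noisy).strip())
    return passed / total

# Example: stress_test(["I was charged twice for my subscription last month"])
```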
This is not a new concept, but the fact that it's now built into Salesforce is obviously a big deal, because it will allow Salesforce users to develop even more agents that are safer and better, and do it a lot faster and in significantly less time. Another enterprise-related piece of news is that GitHub Copilot CLI now has all the latest and greatest tools, meaning the latest GPT-5.1 and Gemini 3 Pro, which will enable significantly faster and bigger code generation and reviews inside of GitHub Copilot. And from all the different releases of this week, which were definitely a lot, let's move on to some scary projections coming from multiple angles as far as the impact of AI on jobs and on our world. We will start with Gartner. At the IT Symposium/Xpo in Barcelona, Gartner shared a somewhat apocalyptic picture of what they're calling job chaos in the near future, reaching a peak between 2028 and 2029. Now, they are assuming that AI will ultimately create more jobs than it displaces, but that in the short term, it is going to create, as they called it, a jobs chaos. You've heard me say, from my personal opinion, many, many times, that I, first of all, do not necessarily agree it will generate more jobs than it eliminates. I don't see how that is even possible. But let's assume they are right. In the short to medium term, or as they specifically said, 2028 to 2029, it is definitely gonna take a lot more jobs away than it is going to create. And what they're saying is that it will force virtually every business to recalibrate how they're running the business: what is the org chart, what is the tech stack, and every other aspect of the business, in order to stay competitive in this AI future. And they're not talking about a very distant future. They're describing four different scenarios that CEOs and leadership teams will have to consider when they're considering this future. One is human oversight endures, which means a lean number of workers who man the fort, basically, while AI is running things, and this group is monitoring and fixing all the small things that AI is not doing well. This idea is obviously keeping humans in the loop, with the goal of preventing mishaps while AI is running most of the show. Scenario number two is AI runs the show: autonomous agents that seize basically full reins of the business functions, with very little or no human involvement in routine workflows and jobs, from data crunching to basic logistics, customer service, and so on. Scenario number three is an augmented speed surge, which is many, many employees still in place, just running significantly faster, generating massively greater output than they can today. Think about it, and the best immediate example is code writing and debugging, right? The ability of code writers to generate code right now is 10x or 100x what it was just a couple of years ago, and the same thing with reviewing code and fixing it. Now take that concept and deploy it across every aspect of the business. And then scenario number four is revolutionary reinvention, which is AI pros harnessing the power of AI to completely change and reimagine aspects of different industries and make significant leaps very, very quickly, not just gradual change, dramatically changing the business landscape of specific industries. Now, what they're saying is that no matter which scenario executive leaders choose, they need to be prepared to support all four of them, because potentially different aspects of the organization, or different industries, will require a mixed approach to all of the above. I agree with most of what they said, other than, as I mentioned, their long-term assumption that it will generate more jobs. But as far as the short and medium term, we're a hundred percent in agreement, and it is crystal clear that organizations must invest in training of both their employees and their leadership teams in order to stay relevant and competitive. This is no longer a question of being early adopters and trying to be geeky about AI. It is potentially a question of survival for many businesses, and I would say most businesses. Now, my company, Multiplai, this is what we do, right? I mentioned in the beginning that I deliver AI workshops. I did six or seven of them in the last two and a half months for different companies from different industries, of different sizes, in different places around the world. They are always tailored to the specific organization, and it is the ultimate accelerator of an entire company into the AI era. Now, most of the organizations I work with are either in the tens of millions or hundreds of millions of dollars in revenue, but I definitely have outliers on both sides. I have several different clients in the billions, and I have several different clients in the single-digit millions. But if you are in a leadership position in your organization and you feel that you're either not moving fast enough, or that you need to get started and you wanna accelerate or kickstart the AI adoption process company-wide, these workshops are the perfect solution for you. It is the ultimate accelerator of adoption. Just go back to the beginning of this episode, where I talked about what people were able to generate in just a day and a half of training, and you understand why this is a complete game changer if you want to push AI faster into 2026. But I also serve small companies or individuals who want to accelerate, and we have two courses for that. One is our AI Business Transformation course, which I have been running for over two and a half years now and which has trained thousands of business professionals and business leaders. It shows you how to effectively deploy AI across multiple aspects of the business. The next cohort starts on the week of January 20th, which is right around the corner, and it's perfect in order to kick off your 2026 with the right foot forward. It is a four-week course, two hours a week, with homework and in-between hand-holding sessions where you can get to learn more, and it is the perfect way to get the basics of AI right. We also just launched an advanced course for integrating AI assistants into workflow automation.
This course is for people who already have solid knowledge in using large language models and other AI capabilities and who want to take that knowledge to the next level: start automating business processes and tying AI into your existing tech stack. This course starts on December 1st, with another session on December 8th. Again, the perfect way to get ready for 2026 and start creating automations and building a much more efficient organization. And all these insights are obviously not just coming from Gartner. The reason I'm saying this is because I've been working with multiple organizations across the board, but also because this week we got big hints from other organizations on how scary the current situation is, both from a workforce perspective as well as from a risk-to-humanity perspective. PwC Global Chairman Mohamed Kande just stated that the rise of AI is likely to lead to fewer entry-level graduate jobs at firms like PwC, as AI is taking over more and more of the tasks that were previously performed by junior staff. Now, he also said that the recent cuts they had were not due to AI, but he definitely sees this as a critical aspect of their future hiring strategy, and their biggest problem right now is struggling to hire skilled AI engineers to implement the technology even faster, which in return will reduce the need for even more entry-level jobs. He did share that PwC abandoned plans to continue increasing its headcount and is now focusing on hiring a different mix of people and skill sets, specifically focusing on AI capabilities and AI engineering. And if you need another example of how significant this push is, it comes from the recent earnings call from Klarna. Klarna is a European company that specializes in financing of online purchases and credit lines, and they have been all in on AI since early 2023. In their Q3 earnings call, their CEO shared that their headcount plunged from over 5,500 people in 2022 to just about 2,900 today. That is about half the employees. He also shared that AI is now handling tasks equivalent to 853 full-time employees, up from 700 just earlier this year, driving the revenue per employee to $1.1 million. Now, how significant is that? At an average SaaS company before AI, the revenue per employee is about $250,000, so this is over 4x the revenue per employee, because they're pushing AI across the board. By the way, to give you the other end of the scale, at companies that are AI-first, basically AI-native companies from Silicon Valley that have been established in the past two to three years and are all AI-focused, the revenue per employee is over $2.3 million, so more than double what Klarna is doing right now, and about 10x the average SaaS company. The other interesting aspect is that they have grown their average salaries from $126,000 in 2022 to $203,000 right now. Per their CEO, and I'm quoting: we have made a commitment to our employees that all of these efficiency gains, and especially the applications of AI, should also, to some degree, come back to their paychecks. So the half of the employees who weren't let go, or didn't leave as part of natural attrition, are making significantly more money. I'm not sure that this average is actually fair to look at, because I'm assuming that they've hired a significant number of AI developers, and because of the demand right now, these folks don't come cheap.
That by itself will dramatically increase the average salary per employee. But based on what they're saying, this is not limited to just developers; all employees are enjoying this benefit as long as they are willing to, and pushing to, use the AI tools that the company is delivering to them. What is that translating to? A 108% revenue increase from 2022 to now. This is insane, and this is, if you think about it, the dream of every CEO, especially of publicly traded companies: rapid growth with no growth in expenses, and in their case, even a dramatic reduction in the workforce. So the bottom line is, this is not slowing down. These tools will do more and will allow companies to cut the workforce, or at least not grow the workforce, while allowing the company to grow. This is obviously not possible for all companies all at the same time, because there's a limited amount of demand for every service or product that companies are selling, which means it will lead to an even bigger reduction in jobs as companies cannot grow any further and have to look at their current costs and look for cost cutting. And this will be a very simple way to do that, which is not a good thing for the global economy as a whole, because then you won't have consumers to actually consume the goods, because people will not have money. But in the short term, this is a huge opportunity for companies to push more AI capabilities (hint: workshops) in order to get a competitive edge. And as individuals, there is a very clear need to know how to use AI if you want to keep your job and potentially make more money in the future. So go check out our courses. Your future self will thank you if you do that. But beyond the impact on jobs, we got two different scary predictions from two of the leading labs. One of them is OpenAI and the other is Anthropic. OpenAI just released a blog post called AI Progress and Recommendations. In this blog post, OpenAI talks about the dramatic reduction in cost per unit of intelligence, estimated at 40x per year based on the past few years and looking into the next few years. This dramatic reduction in cost means that more and more sophisticated capabilities will become significantly more accessible to everyone, large organizations and small organizations, including individual entrepreneurs, which in return will accelerate product development and potentially lower the barrier of entry in every area that AI can be applied to. Now, they're also saying that the current AI capabilities are vastly underestimated by most people, definitely the broader public, as most people are still using AI primarily as basic chatbots or as an improved way to search the internet, instead of enjoying the incredible benefits that AI delivers. And they mentioned that systems that exist today already outperform the smartest humans in some of the most challenging intellectual competitions in the world. We talked about the coding competition and the math competition in the past few episodes, sharing with you that AI is now better than the top humans in the world at both of these. So what they're saying is that for professionals who are seeking career growth, the skill gap between general AI use and expert-level application is immense, and it is growing rapidly, and it requires a significant push towards more training and more adoption of AI in order to remain competitive, either as individuals or as companies. And they also talked about the growth in the type of tasks that AI can do.
Back to that trajectory of tasks: they said AI was initially able to do tasks that would take a person seconds, and then progressed to tasks that would take an average person an hour. This trajectory suggests that companies will soon be able to build AI systems capable of handling multi-day or multi-week projects that run autonomously, and that is not in the far future. Which means companies need to plan and prepare for that across both tactical and strategic investments, oversight, collaboration, and everything else required to make this a successful transition.

And now I want to quote what I feel is maybe the most important and scary aspect of this blog post. They said: although the potential upsides are enormous, we treat the risks of superintelligent systems as potentially catastrophic and believe that empirically studying safety and alignment can help inform global decisions, like whether the whole field should slow development to more carefully study these systems as we get closer to systems capable of recursive self-improvement. Obviously no one should deploy superintelligent systems without being able to robustly align and control them, and this requires more technical work. So what are they suggesting should happen? They are suggesting shared standards and insights across the frontier labs, and they are advocating for agreements on safety principles, for sharing safety research, and for establishing mechanisms to reduce the race dynamics going on in AI right now. They are also acknowledging that the changes driven by AI will most likely require us to change the fundamental socioeconomic contract that exists today in order to support a completely new kind of future that still allows us to sustain the kind of life we expect to live.

A very similar statement was shared by Dario Amodei, the CEO of Anthropic, in an interview with Fortune magazine. He said he is deeply uncomfortable with the fact that there is this much concentration of power and technological capability in a short list of companies, and he highlighted the existential risks of superintelligence, predicting that systems smarter than humans could emerge by 2027 or 2028, which is just around the corner. What is he calling for? US regulation: he proposes a federal AI safety agency, similar to the FDA or the FAA, with red-teaming capabilities for testing new models, alongside international treaties to prevent a dangerous global race. And I'm quoting, "We need to slow down the race a bit." He is also saying that voluntary industry self-regulation has failed because the profit-driven incentives in AI are just too high and they clash with safety priorities in this race to AGI and beyond. Now, I have said this all along: we have to, as a global society, find a way to work together, China, the US, Russia, Europe, Japan, and everybody else, to figure this out together. I actually don't think it should be like the FDA or the FAA; I think it needs to be a body like the international monitoring regime for nuclear weapons, in charge of looking into what every single company and every single country is doing around the world. And if the leaders of two of the most advanced labs are telling us that they think they are running too fast because the incentives are too strong and they are not going to stop, this is not just a red flag.
This is sounding every alarm you can imagine, because what they are saying is based on what they are seeing in-house. These are not abstract concepts, and this is not me making predictions; this is what they are seeing inside their labs, because they can see six or twelve months into the future. And if they are saying that this is a very, very risky point, potentially in the future of the human race but definitely in the AI race, and if both of them are saying they need to slow down, they probably need to slow down. The thing is, because of the business incentives, the financial incentives, and the billions and trillions of dollars involved, they will not slow down without everybody else slowing down too, unless some group, body, government, or other force makes them do so. I truly hope that governments will pick this up and we will figure this out. We'll talk more later about new regulations and laws coming from the Oval Office, and you'll see that this may or may not be the case, at least in the US in the immediate future.

Now, staying on the scary side of AI, and again, the goal was not to create a scary episode, this just all happened this week: UBTech, a Chinese company that has been developing a humanoid robot called the Walker S2, just released a video showing its first full production batch of these AI robots, and the video is scary as hell. The company has signed multiple deals with some of the leading manufacturers inside and outside of China, specifically in car manufacturing, including BYD, Geely, Volkswagen, Dongfeng Liuzhou Motors, and several other companies, driving its orders for these robots to over $113 million. One of the interesting things about the S2 robot is that it carries battery packs on its back and can swap its own batteries, which means it never has to stop: as long as there is a charging station with enough charged batteries, a robot can walk over, pull out the old battery, slot in a new one, and keep working, literally 24/7. The reason the video is really scary, and we're going to put a link to it in the show notes, is that it shows hundreds of robots marching out of the factory together, all synchronized. It looks like a scene out of Terminator, or like Stormtroopers in Star Wars: lines and lines and lines of robots marching together. And while this particular robot was built for factory production floors, military variations are an obvious next step for countries and for manufacturers, which will again drive billions of dollars in revenue. As I mentioned, this is really scary from a very personal perspective, so check out the video yourself and let me know on LinkedIn, or in any other way you want to communicate with me, what you think about it.

Staying on robotics and how these robots actually perform: Figure AI, a US company, just pulled its Figure 02 humanoid robot from the BMW assembly lines where it had been working for an entire year, and shared some very interesting statistics about these robots. In 11 months, the robots helped build over 30,000 vehicles with zero major meltdowns. The Figure 02 clocked over 1,250 runtime hours across 10-hour shifts, Monday through Friday, loading over 90,000 sheet-metal parts into welding machines with 99-plus percent accuracy on the job.
In addition, the robots racked up an estimated 200 miles of walking inside the facility over the past year. The reason Figure pulled them out is that it is now launching a new model and wants to learn from examining the previous one in order to apply those lessons to the 03 model that is just coming out. Their CEO shared a post on X with all of this information, plus some images showing the "war wounds," as they called them, of these robots: small scarring and scratches on their metal bodies. But overall, as I mentioned, 99-plus percent accuracy and reliability across these factories, a very big success for a company that is now rolling out its next model.

I told you before that we're going to touch a little bit on new regulation coming from the White House. The Trump administration is now crafting an executive order to basically block state-level AI regulation, prioritizing federal, national-level innovation over fragmented local rules. This executive order, a draft of which has made its way to The Information, would task the Attorney General with forming a unit to challenge state AI regulations on constitutional grounds, zeroing in on several measures in states like California and Colorado that demand AI transparency and other safeguards that might slow down innovation. The administration does not want this to turn into a 50-state patchwork across the entire country, which would slow down AI innovation. To enforce this, the executive order would instruct the Commerce Department to withhold federal broadband funding from states with, quote, burdensome AI rules that might erode US global AI leadership, which is fully aligned with the White House's July AI Action Plan that I shared back then. In parallel, the Commerce Secretary would assemble a team to collaborate with the Federal Communications Commission (FCC) and White House AI advisor David Sacks to draft a unified federal AI standard, which would pave the way for the right conditions to maintain America's edge in the international AI race.

This does not sound like something that aligns with the alarms being sounded by OpenAI and Anthropic about the risk of continuing to run at the pace we're running right now. It definitely sounds like "we need to beat China or else, and we will do everything to make that happen." Now, I might be wrong, and I hope I am wrong, but currently this is fueled by the current administration's core beliefs, and people like David Sacks are a very clear example of that. In addition, it is driven by super PACs fighting AI regulation; Andreessen Horowitz, for example, just poured $50 million into one such super PAC to reduce the amount of AI regulation under this administration. So on one hand, I am happy that there will not be a state-by-state patchwork of different regulations, on both fronts: a patchwork would slow innovation down and would also increase risk, because you could not take a unified approach to any of it. And I see this as an opportunity to potentially create a more controlled nationwide, and then maybe global, environment, because it would come from the US federal government rather than from specific states. But I don't have the feeling that this is the direction it is going, and that is somewhat, or maybe more than somewhat, troubling, at least from my personal perspective. And now for some holiday-related news.
According to a November 20th release from a nonprofit called Fairplay, parents should stay away from buying AI toys this season due to serious risks around data privacy, emotional harm, and disrupted human play. They claim that some of these toys have already sparked obsessive use, explicit chats, violent prompts, and self-harm nudges in young users. Fairplay warns that these gadgets, often chatbots stuffed into teddy bears and other plush animals, might also erode kids' ability to develop human relationships, as children grow dependent on cute, furry little animals that can now talk to them, which could hurt their long-term ability to build human relationships and even the sensory skills required to grow into healthy adults. While in the future I see these kinds of toys as an incredible way to drive early teaching of different skills, I believe the current versions are very far from that. They are built as toys, with very little supervision and control over how they engage with young kids, and hence I tend to agree with Fairplay's assessment and its recommendation not to buy these toys for your kids right now. Connecting this to the previous point, I really hope that in the future we will have regulation that defines exactly what kinds of AI tools and what levels of engagement can be delivered to young individuals, whether through computers, cell phones, or cute little furry toys. So as you are planning your Thanksgiving or other holiday shopping, take that into consideration.

And on that note, happy Thanksgiving to all of you. We have a lot to be grateful for, and I will start by saying that I am really, really grateful and thankful for each and every one of you: for listening to this podcast, for inviting me to deliver workshops at your companies, for attending my courses, for participating in our Friday hangouts, and for being part of my personal journey of driving toward a future where AI is part of our lives and makes them better rather than worse or putting us at risk. A better future driven by AI. That is it for this week. Have a happy Thanksgiving, and I will be back on Tuesday with another how-to episode, where we will dive into how to implement AI for a specific aspect of your business. Until then, have an amazing rest of your weekend.