Leveraging AI

207 | ChatGPT Agent is now live which changes everything, AGAIN!!! What does this mean for you and your business?

• Isar Meitis • Season 1 • Episode 207

👉 Fill out the listener survey - https://services.multiplai.ai/lai-survey
👉 Learn more about the AI Business Transformation Course starting August 11 — spots are limited - http://multiplai.ai/ai-course/

Is your business ready for an AI that can act — not just answer?

This week, OpenAI dropped a bombshell: a powerful new agentic AI that can think, browse, analyze, and execute — all without you lifting a finger. And they're not alone. China's Moonshot AI also launched Kimi K2, an open-source model that’s not just fast — it’s freakishly cheap and seriously capable.

So what does this agentic evolution mean for your company, your job, and your future?

Host Isar Meitis breaks his own vacation to deliver a special, can’t-miss solo episode where he dissects the monumental AI moves of the week — and how they might quietly rewrite the rules of modern business.

In this session, you’ll discover:

  • The game-changing capabilities of OpenAI’s new Agent and how it combines browsing, coding, analyzing, and executing.
  • How Kimi K2 is shaking up the market with 1/100th the cost of top-tier models — and still competing on performance.
  • Why knowing how to use AI tools properly is now the biggest competitive edge for businesses and individuals.
  • The real difference between agents and regular chatbots — and why it matters more than you think.
  • How agentic AI could disrupt e-commerce, office tools, and even coding teams.
  • The widening AI literacy gap — and why training your team isn’t optional anymore.
  • Real stats from studies showing that untrained AI users can be less productive than non-users.


About Leveraging AI

If you’ve enjoyed or benefited from some of the insights of this episode, leave us a five-star review on your favorite podcast platform, and let us know what you learned, found helpful, or liked most about this show!

Hello and welcome to the Leveraging AI Podcast, the podcast that shares practical, ethical ways to leverage AI to improve efficiency, grow your business, and advance your career. This is Isar Meitis, your host. I'm actually on vacation right now and was not planning to record an episode this week. It is my dad's 80th birthday, and the whole family got together: the nieces and nephews and cousins and my kids and everybody. It's a lot of fun, it's really, really great, and I'm very excited for him and for everybody else getting together. But a few really big things have happened, and I could not leave you in the dark when really big things are happening, so I decided to record at least a short episode. Today we're going to focus on those really big things and dive into what they mean. We're going to touch on how the world is turning agentic, and what that means for you and your business, for your life, and for the world. There's a lot to talk about, even though we're going to dive into only one or two topics. So let's do this.

It all started early in the week, when the Chinese startup Moonshot AI released Kimi K2, an open-source model with 1 trillion total parameters and 32 billion active parameters, and it is a really good model. It's scoring higher than most or all open-source models across multiple benchmarks, and it's also scoring better than some of the leading closed-source Western models. In addition, it is an agentic model, and on top of all of that, it is really fast and really cheap. So the trick here is a few things.
First of all, it's a mixture-of-experts model, like most recent models, which means different parts of the model specialize in different things; each can be trained separately, and the model knows which experts to call for a given request. In addition, they have developed a new way to train the model, which they're calling the MuonClip optimizer. Now, this may sound like Chinese, pun intended, but what they're claiming, and I'm quoting now, is that it "enables stable training of a trillion parameter model with zero training instability." What is training instability? It has been maybe one of the biggest issues with training large models: really large training runs crash or diverge, forcing companies to either restart the run or implement very costly safety measures and accept, in many cases, suboptimal performance in order to avoid the model crashing while it's being trained. Either way, it's a very big tax on training large language models, and with this new methodology they're able to avoid that altogether, which allowed them to train the model much more cheaply than any other model of its size and capabilities, which in turn makes the model itself much cheaper.

So what does much cheaper mean? Through the API, it's 15 cents per million input tokens and $2.50 per million output tokens. Compare that to GPT-4.1, which is $2 per million input tokens (more than 13x) and $8 per million output tokens (about 3x). Compare it to Claude 4 Opus, which is $15 per million input tokens (100x more expensive) and $75 per million output tokens (30x more expensive). Now, is it as good as these models? Maybe. Maybe not.
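To make that cost comparison concrete, here is a tiny sketch that computes the multiples from the per-million-token prices quoted in this episode. The prices are just the figures cited here, not authoritative rate cards, so treat the numbers as illustrative:

```python
# Cost-multiple check for the per-million-token prices quoted above.
# All figures are the episode's quoted numbers, not official pricing.

PRICES = {
    # model: (input $/M tokens, output $/M tokens)
    "kimi-k2": (0.15, 2.50),
    "gpt-4.1": (2.00, 8.00),
    "claude-4-opus": (15.00, 75.00),
}

def cost_multiple(model: str, baseline: str = "kimi-k2") -> tuple:
    """How many times more expensive `model` is than `baseline`."""
    mi, mo = PRICES[model]
    bi, bo = PRICES[baseline]
    return (mi / bi, mo / bo)

print(cost_multiple("gpt-4.1"))        # roughly 13.3x input, 3.2x output
print(cost_multiple("claude-4-opus"))  # roughly 100x input, 30x output
```

The point of the arithmetic is the one made in the episode: even if Kimi K2 is only "close" in quality, a 13x to 100x price gap changes which model wins for high-volume API workloads.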
Either way, it's close, and it's much, much cheaper. Now, the way I always measure these things is by how much people are actually using a model, especially through the API. So I went and checked one of my favorite API tools, OpenRouter, which lets you connect to one API and, through that, get access to more or less every model on the planet. They take a little bit off the top, but you can do one implementation and then get access to all the different models. I've been using it for a very long time, and they have a dashboard that shows how much different models are being used each week. On the programming side, Kimi K2 is now number five, meaning more developers used it through the API this past week than Claude 3.7 Sonnet, Claude 4 Opus, Gemini 2.5 Flash Preview, DeepSeek V3, and GPT-4.1, all of which are very capable models. The only models ahead of it are Claude Sonnet 4, Gemini 2.5 Pro, Gemini 2.5 Flash, and Grok 4. That's it. Now, does that mean it's better than all these other models? No. It means it's more cost-effective than all these other models, which is what really matters: it's good enough for the tasks these developers are using it for, at a fraction of the cost, and that's an important part of the game when you're building applications around APIs from different large language models.

Now, in addition to all of that, it is an agentic model, meaning it knows how to autonomously use different tools, define its own instructions, and pave a path to complete the goal that you set for it, whether that's browsing the internet, writing and executing code, et cetera. It's a very capable agentic tool for a relatively small cost. So I was very excited to report about this, but I thought, you know what, this could wait another week; nothing will happen if I report about it a week later. But then, toward the end of the week, OpenAI introduced their version of the same thing.
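For listeners who haven't tried it, the "one implementation, every model" pattern OpenRouter enables looks roughly like this. The endpoint URL and the model slugs below are assumptions based on its OpenAI-compatible API, so check the current documentation before relying on them:

```python
# Sketch of the "one integration, many models" pattern described above.
# OpenRouter exposes an OpenAI-style chat-completions endpoint, so
# switching providers is just a different model slug in the same payload.
# The URL and slugs here are assumptions, not verified against the docs.
import json

OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"  # assumed

def build_chat_request(model_slug: str, prompt: str) -> dict:
    """Build one OpenAI-style payload; only the model slug changes."""
    return {
        "model": model_slug,
        "messages": [{"role": "user", "content": prompt}],
    }

# The exact same code path serves very different providers:
for slug in ("moonshotai/kimi-k2", "anthropic/claude-sonnet-4"):
    print(json.dumps(build_chat_request(slug, "Summarize this week's AI news.")))
```

This is why per-token price matters so much here: once the integration is model-agnostic, developers can swap to whichever model is good enough for the task at the lowest cost.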
But then something bigger happened. Every time I've reported on these general-purpose agents that can do a lot of stuff based on the instructions you give them, like Genspark and Manus, and I've had several episodes about them. If you want one episode where we talked a lot about these tools and how to run them safely, check out episode 196, titled How to Safely Run Powerful AI Agents like Manus and Genspark with No Risk. In that episode and in many others, I said the same thing: these tools are incredible, they are the future (or the present, if you're a geek like me), and they're changing everything we know, because they're significantly more powerful and capable, and knowing very little, you can generate a lot. But I also said that a lot of these tools are (a) more geeky and (b) a little riskier, and that all of that would change if OpenAI issued their own agentic model that does the same thing, because OpenAI has the trust of about 800 million weekly users right now. With all due respect to Manus and Genspark and tools like them, all of them combined probably drive less than 10% of the traffic that OpenAI gets in a single day, maybe less than 5%.

And that event happened on the 16th, when OpenAI announced ChatGPT Agent, their version of a general-purpose agent that can do a lot of things. Before we dive into what it can do, let's talk about what it is. OpenAI took several capabilities they had developed before and combined them into one very powerful tool that can decide on its own which of those capabilities to use. It's a combination of Deep Research, Operator, data analysis, and a coder, all in one agent. And what this tool knows how to do is research stuff online in its own little browser, which is actually really cool.
The approach they took is that, instead of having it use your browser, which creates a lot of risk because your browser usually has access to all your saved passwords and sometimes credit cards and so on, it has its own little browser that pops up within the ChatGPT interface, and it runs things inside that browser. So it can browse and research the web in similar ways to Deep Research. It can also operate web pages like Operator does, meaning it can click on things, fill out forms, et cetera, which Deep Research does not know how to do. Just this combination on its own is extremely powerful. But in addition, it knows how to analyze data: it can take the data it brings from these different sources, write Python code, put the data in spreadsheets, and analyze it in multiple ways, which gives it an even bigger benefit. And on top of that, it knows how to write code, and it has access to a terminal and even code execution. The combination of all these things makes it an extremely powerful tool for completing more or less every task you can imagine, because it can do the research, figure out what it needs to dig into even deeper, define its own process, write code, analyze data, and do all of these things very, very quickly.

A few additional cool things they added: the ability to interrupt the model in the middle of its work. So you give it something, and then, as you watch the model doing its thing, you get an idea or suddenly remember something you forgot to add. You can add it while the model is working, and it will take it into account. It's also very conversational, meaning it will stop and ask you questions if it's not sure about something, just like an employee hopefully would.
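The "decide which capability to use" behavior described above is, at its core, a loop: pick a tool, act, observe the result, repeat until done. Here is a deliberately tiny illustration of that loop; the tools and the hard-coded policy are hypothetical stand-ins, not ChatGPT Agent's actual internals:

```python
# Toy sketch of an agentic loop: a policy picks a tool at each step until
# it decides the goal is met. The tools and the hard-coded policy below
# are hypothetical illustrations, NOT ChatGPT Agent's real internals.

def browse(query: str) -> str:
    """Stand-in for the agent's built-in browser."""
    return f"results for {query!r}"

def analyze(data: str) -> str:
    """Stand-in for the agent's data-analysis step."""
    return f"summary of {data!r}"

TOOLS = {"browse": browse, "analyze": analyze}

def toy_policy(history: list) -> str:
    # A real agent asks the model what to do next; this stub just
    # browses once, analyzes once, then stops.
    return ("browse", "analyze", "done")[min(len(history), 2)]

def run_agent(goal: str) -> list:
    """Run the plan/act/observe loop; return the (tool, result) trace."""
    history = []
    while True:
        action = toy_policy(history)
        if action == "done":
            return history
        # Feed the previous result (or the initial goal) into the next tool.
        arg = history[-1][1] if history else goal
        history.append((action, TOOLS[action](arg)))

print([tool for tool, _ in run_agent("compare laptop prices")])
# prints ['browse', 'analyze']
```

The contrast with a regular chat is visible in the loop itself: the user supplies only the goal, while the choice and sequencing of tools happens inside `run_agent`, which is exactly the step the user no longer performs.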
And it can connect to the data sources you have already connected to your ChatGPT account, such as Gmail or SharePoint or Google Drive, which makes it even more powerful. Also, because it can write code and knows how to analyze data, it can create spreadsheets that can be exported to Excel or Google Sheets, and it knows how to create PowerPoint presentations, including generating the images for them. So what we are getting is an extremely powerful tool, similar to Manus and Genspark and tools of that kind, only it's coming from ChatGPT. ChatGPT comes with a lot more trust from the population, and it definitely has a bigger footprint and distribution, meaning more and more people are going to be using really advanced agentic tools.

Now, the new agent capability is going to be rolled out to everybody, including Pro, Plus, and Teams users. It works just like picking any of the other modes, like Deep Research: there's an Agent mode in the dropdown menu on the left of your prompt box. I still don't have access to it. I have the Plus license, but I assume it's just rolling out, and, just like everything else with ChatGPT, it will take a few days and everybody will get access. There are very different limits, though: you get 400 runs per month with the Pro license and 40 with the Plus and Teams licenses. But to be fair, 40 is more than one a day, which for the average user should be way more than enough. And if you need more, that means paying the $200 a month for the Pro version makes perfect sense for you, because you're using this a lot more. Now, what can you do with it? You can do the examples they gave in the actual launch, which I highly recommend watching; we're going to drop a link to the video in the show notes. They showed things like doing research for shopping and planning a wedding.
They also showed an example of letting the tool measure itself and report how well it's doing compared to other models. And you can do a lot of work-related stuff with it, obviously: market research, scheduling, analyzing resources, deploying different things, preparing presentations, et cetera. There are probably thousands of business use cases where this tool will be extremely helpful.

Now, before I tell you what I think about it and where this is all going, I want to share one more aspect: rumors coming right now from several reliable sources that OpenAI is planning to take this to the next step and build around it a suite of workspace tools, just like Microsoft 365 or Google Workspace. That means they are going straight after the main driver of business for two of the most successful software companies ever; they're literally going after their bread and butter, the things we use every single day. Are they planning to replace the office suite, or just to complement it? I'm not exactly sure; time will tell. But it is very clear to me that if there were really solid tools within ChatGPT, fully integrated with everything else ChatGPT does, I would reconsider my usage of Google Workspace. Will it replace everything? Probably not in the beginning. Can it replace everything over time? Absolutely. Can it do it better than what these tools are doing right now? Right now, it seems that's the case. Will Microsoft and Google catch up? That's a very big question. From what it seems right now, Microsoft is doing a pretty poor job of implementing OpenAI's models within its workspace as Copilot, and Google, while doing better than Copilot, is still not doing great. I've said that multiple times on this show.
I think both of these companies have an incredible opportunity. I thought they would capitalize on it before the end of 2024; I was obviously wrong. But they need to get their act together and deliver a model that actually looks into everything in their ecosystem. I don't want a Gemini for Slides and a Gemini for Sheets and a Gemini for Docs and a Gemini for Gmail, and the same goes for Copilot. I want just one Gemini that connects to all these tools, knows everything I'm doing, and has access to all the information in that universe, whether it's my Google Workspace or my Microsoft environment, including everything that comes with it: the Microsoft 365 Office suite, SharePoint, Dynamics 365, et cetera. Literally everything Microsoft. I want it to know, and I want it to understand, which of the tools to use and when, in order to be most helpful to me. Because that is how they're going to win against OpenAI. And right now it seems OpenAI is doing it to them, because OpenAI now has connectors to many of these environments, and you can turn them on and off to prevent it from going to the internet and to focus it on where to get the data. And now they have these agentic tools that can generate outputs that compete directly with the outputs generated by the office suite or Google Workspace. So I think this is a serious wake-up call for Microsoft and Google, and I'm very curious to see how quickly and how well they respond to this very big threat.

Now, if you think that's the last component, there are even more rumors that OpenAI plans to integrate a payment checkout system straight into ChatGPT. That will lead to several things. The first is that you'll be able to use the agent to do everything you want it to do: go research a specific topic, compare different options, pick the right one, and then actually go and purchase that option for you.
That could be a trip, clothing, food, booking a table at a restaurant; it could be anything you can imagine, including doing the checkout for you. What OpenAI shared in their launch is that, when it comes to payments, you can decide to put your payment details straight into ChatGPT, or you can ask it to just send you the link so you can check out on your own. I think over time, as we give it more trust, it's going to be a no-brainer. Just like today we're used to saving our credit cards with Google and other services, we will probably do the same with ChatGPT, which means you may want to see what it's about to buy for you, but once you trust it completely, maybe even that becomes redundant, and you just allow it to do your shopping for you across the board.

This kills many, if not all, or at least most, e-commerce websites. So if I were Amazon, I would be thinking very, very hard right now about how to counter this new threat. If I were Shopify? Well, Shopify made the right move. They've done a partnership with OpenAI, and they're going to be the first big partner where you'll be able to shop across all the Shopify stores, which makes perfect sense for Shopify, for Shopify users, for ChatGPT, and for OpenAI. And in addition to providing a great service, OpenAI will take a cut off the top, a few percent of every transaction that happens, which could give them another very significant revenue stream. Again, think about 800 million weekly active users and the opportunity to convert all of them into shoppers, because this will help you shop across multiple platforms, find the best price, compare different options, read the reviews; basically, find the best option for you.
This is way better than any other option out there today, which means more and more people are going to use it, which means less and less traffic to traditional e-commerce websites.

So why is this so important, and why did I step away from my entire family to record this episode? This is a complete game changer from my perspective. The release of an agent by OpenAI is, as I've said all along, a new ChatGPT moment, meaning it's as big and as important as the release of the original ChatGPT, because it changes the way we interact with computers and with the data around us. It allows us to do significantly more with significantly less effort. If you still don't understand the difference between this and a regular chat with ChatGPT: in a regular chat, you have to give it very specific instructions on what to do, step by step, one by one, monitor what it's doing, and correct it as it goes. And when it needs access to a tool, if it needs to write a document, browse the web, or do other things, in many cases you need to do that for it, meaning you need to go do the research and bring back the data, take the output and create the document, and so on. Now you don't have to. The agent does all of that for you. It figures things out and corrects itself as it goes, so you do a lot less and get significantly more. The stuff I've done with Manus and Genspark, which I'll now be able to do with ChatGPT, is mind-blowing compared to the amount of effort I had to put in.

Now, the impact of that on everything we're doing is profound. First and foremost, we will be moving ourselves one or more steps further away from the actual tasks. If right now you have to prompt the AI step by step to do things for you, which already removes you from some of the steps in the task, soon you won't even define the tasks.
You will define the goal, and the tasks themselves will be defined by the AI, which means you won't even know what the AI is doing. Yes, right now you can look at exactly what it did, follow along, stop it, change it, and fine-tune it at any given point. But once these tools evolve, and you consistently see that they're delivering the right results, you will stop doing that altogether, which means you won't really know what the tools are doing. You will just give an input, define what the output needs to be, and then use the output. This will be true for university students, for high school students, for our personal lives, and definitely for the day-to-day in our businesses. Now, is that scary? Yes, probably to most of us. Is it going to be very helpful? It's going to be very helpful if we know how to use it (a) effectively and (b) safely, because otherwise it's a terrible opportunity for really bad things to happen, because we've removed ourselves from the process.

The other thing we need to ask ourselves is: how good is this tool right now? Is it currently a cool demo tool, like the demos OpenAI showed at the launch, or like the thousands of demos I'm sure we're going to see online within the next few weeks? Is it good enough for basic tasks, or is it at an enterprise-grade deployment level? I don't know. If I had to guess, I would say it's probably somewhere between good-demo level and good-enough-for-basic-tasks level, depending on the task. But the reality is, it doesn't matter, for two reasons. Reason number one: once you open this Pandora's box, there's no going back. You cannot put it back in the box. Once more and more people experience agentic capability, they will want more of it, because it really is magical. The other reason it doesn't matter whether it's there yet is that it's going to get there.
In the very short term, companies and individuals will find workarounds for the big issues that are stopping them from using it at a wide level. And yes, it will not be able to do everything, but it will be able to do a lot if you know its limitations and can work around them. Think about our kids in high schools and universities having the opportunity to do the work of an entire course in a few minutes, just by giving the right prompt and letting it run through the entire content of the course, summarize all of it, and create whatever report they're supposed to create. I don't see any student doing anything else, unless they're sitting in a classroom with pen and paper and have to do it by hand, which has its benefits, but it definitely does not prepare that young person to do this work in the workplace or in society. So there are a lot of questions to be asked and a lot of unknowns when it comes to the future of these systems. And the last reason is that OpenAI will now have access to huge amounts of data from actual real-life usage, which will give them more information on what is and isn't working with this tool, and allow them to upgrade and update it into an enterprise-grade tool that can be used for more or less everything in our work.

Now, as I mentioned, I don't have access to it yet, though I've started seeing people who do. But based on my experience with Manus and Genspark, which are similar tools, I can tell you it is a complete game changer, and within the next few weeks we'll start seeing more and more examples from more and more companies and individuals sharing how they're using the tool, how it works, what the limitations are, and so on. But then there is the final question related to all of this: how good of a prompter do you need to be in order to actually benefit from these tools?
What this tool does is open an even bigger gap between the people who know how to use AI and the people who do not. I meet these people every single week; this is what I do. I teach courses and workshops to different companies, and there are many people right now who do not know how to properly use even the basic AI tools, like ChatGPT, Claude, Gemini, et cetera. They use them in a very superficial way, without deep knowledge of how to do this, and this new functionality is just going to widen the gap between the people who know what they're doing with AI and those who don't. If you know what you're doing, that gives you a very significant advantage, both from a career perspective and from a company-wide perspective. If you are in a leadership position and you and other people in your company understand how to use these tools, you can run circles around your competition, and that's going to be even more dramatic now with access to this tool. If you are not one of those people, if yours is not one of those companies, your competition might learn it first and then run circles around you.

Now, if you want proof: OpenAI themselves just shared that they developed OpenAI Codex, their cloud-based coding agent, from scratch in just seven weeks. So one of the most advanced coding tools, developed using AI by people who know how to use AI, was built in just seven weeks. By the way, Codex itself has generated 630,000 pull requests (PRs), a standard step in the code development process, in just 53 days. That's over 10,000 PRs per day, and the numbers are only going up. This is outpacing traditional coding teams by 50%, meaning the people using Codex are creating code 50% faster than people who are not using it. It's very similar to what we hear from other sources: recent information from Microsoft says they have generated 600,000 PRs using GitHub Copilot.
So, similar numbers and a huge spike. They're reporting 30% faster code reviews, and better code reviews, than they did manually with people before, which means they can develop the next version faster, deploy it, and then develop the one after even faster, and so on. We're getting to systems that are basically accelerating their own development, becoming better and better, and the same thing will happen with the companies who figure this out versus the companies who do not.

Now, if you need a little more proof that agents are the next big deal, or that you have to learn this or you'll be left behind: Butterfly Effect, the Chinese startup behind the viral agent Manus, which again I've been using for a while, started as a Chinese company, then opened another office outside of China, and is now closing down its China-based team and moving one hundred percent of its operations outside of mainland China. They have relocated all 40 core engineers to Singapore and established a headquarters in the US. All of that is in order to attract US investors and US users and to disengage themselves from the scrutiny of being part of the Chinese economy. Part of it is just how they want to be seen, and part of it is that the US government is restricting AI investment in quote-unquote countries of concern, China being one of them.

So where are we? We are at a point where, as of the next few days, 800 million weekly users will have access to very powerful agent capabilities that, by the way, did not exist in any tool just four months ago. The Manus moment happened in March; it's very, very recent, and Manus probably has tens of thousands of users, maybe hundreds of thousands. But ChatGPT has 800 million. The problem is that a huge portion of people around the world, even people who somewhat use AI, are completely not ready for this.
They don't understand how the tool works, and they definitely don't have the skills, and the knowledge gap and skill gap are just growing. As I mentioned, I meet these people every single week, and most of them barely know how to properly prompt a basic AI tool. So what are we doing? We're basically taking somebody who's learning how to drive and giving them a Formula One car. That is not a good idea. Now, I know what some of you are thinking: that agents, because they're so sophisticated and know how to define their own tasks, may reduce the need to know how to prompt. In the long run, I would probably agree with you. But in the immediate future, I think there's going to be a huge difference between the people who know how to use these tools properly and the people who don't. And to be fair, I think the people who don't are actually going to waste more time than they gain.

If you want proof of that, look at a new research study by METR, an AI research organization we've talked about before; they've shared a lot of interesting work that we've covered on this podcast. They did a very interesting study of developers using tools like Cursor Pro and Claude 3.5 and 3.7 Sonnet to improve their coding speed and efficiency. They took 16 experienced developers and randomly assigned their tasks to be done with or without AI. The developers expected AI to let them complete the tasks 24% faster; in reality, it took them 19% more time to complete the tasks with AI. And this wasn't a two-minute kind of study. They ran it from February to June, a randomized controlled trial with those 16 people across 246 different coding tasks: bug fixing, features, refactoring, and so on. These people actually saw a decrease in efficiency from using AI. Why is that?
Because these were people who were not trained to use AI tools properly. A lot of the wasted time was spent waiting for the AI to do its thing, instead of developing new processes where you jump back and forth and switch context quickly between tasks, which is something I do all the time. I'll give the AI a task, go answer a few emails while it works, come back when it's finished, continue from there, and keep jumping back and forth. This is not the traditional way of working, but it's what's needed to make the most of these tools. So getting proper training and the right education for yourself, for your employees, and for everybody in your ecosystem, so you can really benefit from this incredible new capability that OpenAI just gave us, has to happen.

How do you train your people? Well, first, you know that we have the AI Business Transformation Course. The next cohort starts on August 11th, so if you want to learn how to use AI effectively, don't miss it, because the next public course will probably happen around November, and that's a whole additional quarter away. We also teach private courses and workshops for specific companies and organizations all the time. If you are interested in our course, you can use promo code LeveragingAI100 for $100 off the price. So take advantage of the fact that you're a listener of this podcast, enjoy the discount, and come join us on August 11th. There's a link in the show notes, or reach out to me on LinkedIn and I will gladly help you figure out the best solution for you. Or you can go with somebody else; but whatever you do, find a way to train yourself and the people in your company to use AI effectively, because the speed at which the people who are using it are pulling away is increasing all the time, and this new agentic capability takes it to a whole new level.
Now, if you want to learn what key things to consider when selecting a course, what kind of training you can deliver to your company, or what you can do as a business leader to drive the most results from AI, we just recorded an episode, releasing this coming Tuesday, that shares all of that in detail, so you can understand your options, which ones are probably best for you, and what actions to take, either as an individual or as a leader of a company.

But there is so much more that happened this week. Some of the big news comes from Meta, with a very interesting interview with Zuckerberg about investing hundreds of billions of dollars in compute, how he sees their future in this race, why they're investing in what they're investing in, and why he thinks a lot of talent is jumping ship to them. And I will tell you, it's not just paying them hundreds of millions of dollars, though that's probably a big part of it. There are a few more interesting model releases, including a very interesting voice model from Mistral, some updates on the new Grok and its unacceptable antisemitic outbursts last week, and many other news items that you can read about if you sign up for our newsletter. So I'm not going to cover them today; I will go back to spending time with my family, celebrating my dad's 80th birthday. But I really wanted to share this with you. If you want the rest of this week's news, there's a link in the show notes; click on it and sign up for the newsletter, where we cover everything else. And while you already have your phone in your hand to sign up, click the share button in your podcast player and share this podcast with anyone you know who can benefit from it.
This is your way of increasing AI literacy and AI education around the world, which is becoming more and more critical. And if you're on Spotify or Apple Podcasts, I would appreciate it if you left us a review as well. That's it for this weekend. Keep on experimenting with AI, keep learning and sharing what you've learned with other people, and I will see you back on Tuesday. Have an awesome rest of your weekend.
