Leveraging AI
Dive into the world of artificial intelligence with 'Leveraging AI,' a podcast tailored for forward-thinking business professionals. Each episode brings insightful discussions on how AI can ethically transform business practices, offering practical solutions to day-to-day business challenges.
Join our host Isar Meitis (4-time CEO) and expert guests as they turn AI's complexities into actionable insights and explore its ethical implications in the business world. Whether you are an AI novice or a seasoned professional, 'Leveraging AI' equips you with the knowledge and tools to harness AI's power responsibly and effectively. Tune in weekly for inspiring conversations and real-world applications. Subscribe now and unlock the potential of AI in your business.
Leveraging AI
254 | Inside Google’s AI Power Play: What Business Leaders Must Know About Gemini 3
📢 Want to thrive in 2026?
Join the next AI Business Transformation cohort kicking off January 20th, 2026.
🎯 Practical, not theoretical. Tailored for business professionals. - https://multiplai.ai/ai-course/
Learn more about the Advanced Course (Master the Art of End-to-End AI Automation): https://multiplai.ai/advance-course/
Is Google about to outpace OpenAI in the business AI race?
With the release of Gemini 3 and its shockingly powerful and efficient Flash model, Google is quietly building the most deeply integrated AI ecosystem in the world — and business leaders need to pay attention.
In this solo episode, host Isar Meitis breaks down how Google is embedding AI everywhere and why this could reshape how you work, create, and compete. From Gemini Flash to Nano Banana Pro to Google Vids, the AI upgrades aren't just flashy, they're practical, affordable, and enterprise-grade.
If you're a business leader in the Google Workspace world (or wondering if you should be), this is your executive briefing on what matters — and what’s next.
In this session, you’ll discover:
- The real-world power of Gemini 3 Flash and how it stacks up against GPT-5.2, Claude Sonnet 4.5, and Grok 4.1
- Why Google’s AI cost-efficiency curve might disrupt the pricing models of all major LLMs
- The new AI-enhanced “Beautify this slide” feature and how Nano Banana Pro transforms presentations
- How NotebookLM now auto-generates infographics and slides from complex docs — in seconds
- Why Google Vids might replace your explainer video tools in 2026
- How Gemini’s agentic browser and scheduling AI push the limits of work automation
- The strategic advantage of Google's fully integrated AI ecosystem for business leaders
About Leveraging AI
- The Ultimate AI Course for Business People: https://multiplai.ai/ai-course/
- YouTube Full Episodes: https://www.youtube.com/@Multiplai_AI/
- Connect with Isar Meitis: https://www.linkedin.com/in/isarmeitis/
- Join our Live Sessions, AI Hangouts and newsletter: https://services.multiplai.ai/events
If you’ve enjoyed or benefited from some of the insights of this episode, leave us a five-star review on your favorite podcast platform, and let us know what you learned, found helpful, or liked most about this show!
Hello and welcome to the Leveraging AI Podcast, the podcast that shares practical, ethical ways to leverage AI to improve efficiency, grow your business, and advance your career. This is Isar Meitis, your host, and today we are going to talk about Gemini, and more specifically all the recent releases and capabilities inside of Gemini 3 and how amazing they are for anybody who lives in the Gemini universe. I know some of you are not in the Google universe and are not Gemini users, and if you are not one of those users, (a) maybe you should consider trying Gemini, and (b) it will give you a great idea of where the world is going. So even if you're using Copilot or Claude or ChatGPT or whatever tool it is you're using, I think it will give you a great understanding of what capabilities exist today and how things start getting integrated into more and more aspects of our daily lives as well as our work lives. And so I thought that, as a heavy Gemini user who uses Gemini every single day across multiple aspects of what I do, it'll be helpful for many of you to understand how I'm using AI in my day-to-day tools, what Google has launched in the past month and a half, and how well it is integrated into the Gemini universe. Again, I assume very similar things are happening with Copilot inside the Microsoft environment. So even if you are jealous at this point and you can't switch because it's part of your enterprise software tech stack, don't worry. I assume you'll get something very similar from Microsoft in the near future. The first thing that I want to talk about is the release last week of Gemini 3 Flash. So what the hell is Gemini 3 Flash? In the recent year or so, every time one of the companies released a large model, they shortly after released a smaller brother, if you want, of the large-scale model, one that is supposed to be significantly faster, significantly cheaper, and in most cases roughly as good as the previous pro model. So the idea is that you can get very capable AI tools built on the new concepts, infrastructure, and model, but significantly faster and cheaper, while still exceeding the capabilities of the previous model. Gemini 3 Flash is most likely the best example of this concept. So right now, if you are a paid Gemini user, which means either you have a Google Workspace account like me, or you pay for the paid version of Gemini as a personal Google user, you now have three options in your dropdown menu: Fast, Thinking, and Pro. Let's break this down. Fast gives you Gemini 3 Flash, which is the fastest, most efficient model they have right now, and a great way to get answers quickly, do quick brainstorming, and handle any other immediate task you need. The second option is Thinking, which is still Gemini 3 Flash, but with its reasoning mode. For those of you who still don't know, there are two kinds of models today. There are the regular models that just pull information out of their memory, which were the only models we had until the end of 2024 and the introduction of OpenAI's o1. But since then, more or less all the models have added reasoning capabilities baked into them.
So right now, the Thinking version inside of Gemini is Gemini 3 Flash with reasoning, meaning it will think about the tasks and the prompts you give it, it will try to understand what you mean, it will consider different options, and it will give you much better answers. The price you pay for that is time: you're gonna wait a little longer to get the answers you need, but you're gonna get much more sophisticated answers with a lot more analysis baked into them. The third option in the dropdown menu is Pro, which gives you Gemini 3 Pro, which offers the most intelligence and the longest thinking time, and you should use it for any larger, more complex task: advanced coding, analysis, et cetera, et cetera. The big deal is how good this model is relative to its price and the speed at which it works. So let's start at the high level. At the high level, Google shared a graph showing on the Y axis the ELO score from LMArena, basically the chatbot arena score, showing how good the model is, and on the X axis how much it costs, but in reverse order: the further you go to the right, the cheaper the model is. So it's basically showing you the ratio between how good people rank this model's capabilities versus how much it is going to cost you. As you go up, you get a better model; as you go to the right, you get a more efficient model. And kind of like the best place to be, if you want, is that 45-degree range where you get a lot of intelligence for less money. The arc, if you want, from all the way at the top to all the way at the right currently has four models on it. All the way at the top is Gemini 3 Pro. The best trade-off right now is Gemini 3 Flash. The next one, not as intelligent but incredibly fast and cheap, is Grok 4.1 Fast Reasoning, and then the most efficient one, but with a lot less intelligence, is Gemini 2.5 Flash-Lite, which is the previous generation of Flash models from Google. So what does that tell us? It tells us that if you want a lot of intelligence that still works fast and cheap, there are two best options right now: if you're looking for more intelligence, it's Gemini 3 Flash; if you're looking for a faster model, it's Grok 4.1 Fast Reasoning. The crazy thing, by the way, looking at this graph, is that Gemini 3 Flash's ELO score on LMArena is higher than any other model's except Gemini 3 Pro's, and it's on par with Grok 4.1 Fast Reasoning. So it's giving you not a compromise on intelligence, but actually the best results as ranked by actual people on real use cases, for less money than the vast majority of the models out there. The other thing that Google shared is a very detailed table comparing Gemini 3 Flash Thinking, which again would be the middle option if you are using the dropdown menu inside the Gemini application, to Gemini 3 Pro Thinking, Gemini 2.5 Flash, Gemini 2.5 Pro Thinking, Claude Sonnet 4.5 Thinking, GPT-5.2 Extra High, and Grok 4.1 Fast Reasoning: a wide range of leading models from Google and beyond. It compares them across a huge variety of benchmarks covering very different aspects, from visual capabilities to text capabilities to coding capabilities to information synthesis and analysis, and so on and so forth. On many of these benchmarks it is ranked first, and on many others it is ranked second, but a very close second.
And as I mentioned, in some cases it's even better than its bigger brother, Gemini 3 Pro Thinking. But it costs only 50 cents per million input tokens and $3 per million output tokens. To put things in perspective, the 50-cent input price is 25% of what you're gonna pay for Gemini 3 Pro, which is $2. Claude Sonnet 4.5 is $3, and GPT-5.2 is a buck seventy-five, still more than three times more expensive for roughly the same level of intelligence. The output price is $3 per million tokens; again, putting it in perspective, that is exactly one quarter of Gemini 3 Pro's $12, with Claude Sonnet 4.5 at $15 and GPT-5.2 at $14. And again, the only model that is cheaper, but does not come even close across the different benchmarks, is Grok 4.1 Fast Reasoning, which is currently, from a cost and speed perspective, the most cost-effective model out there among the bigger, better models. The bottom line is that right now you can use Gemini 3 Flash to do amazing things, and it is receiving amazing reviews online as well. I've been using it since it launched, and I'm extremely happy with the results across multiple use cases, and again, at a much cheaper price point than everything else. So kudos to Google for being able to release such a powerful model at such a cheap price, making more intelligence available to more people through the API. Again, if you're just using it in the Gemini platform, you don't really care, but if you wanna use it through the API for any application, it is a very big deal.
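If you want to try these tiers through the API rather than the chat dropdown, here is a minimal sketch using Google's google-genai Python SDK. To be clear about assumptions: the Gemini 3 model IDs and the thinking toggle below are my best guesses based on how the SDK exposed the 2.5-generation models, so verify them against Google's current model list before relying on them. The cost helper at the bottom just reruns the per-million-token arithmetic from the prices mentioned above.

```python
# Minimal sketch: calling the Fast / Thinking / Pro tiers through the API.
# pip install google-genai
# NOTE: model IDs and the thinking toggle are assumptions based on the
# 2.5-era SDK; check Google's current model list before using them.
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")  # placeholder key

def ask(prompt: str, model: str, thinking: bool) -> str:
    config = types.GenerateContentConfig(
        # thinking_budget=0 turns reasoning off on Flash-class models;
        # -1 lets the model decide how long to think.
        thinking_config=types.ThinkingConfig(thinking_budget=-1 if thinking else 0)
    )
    response = client.models.generate_content(
        model=model, contents=prompt, config=config
    )
    return response.text

print(ask("Give me three taglines for a bakery.", "gemini-3-flash", thinking=False))
print(ask("Compare these two pricing strategies: ...", "gemini-3-flash", thinking=True))

def flash_cost_usd(input_tokens: int, output_tokens: int) -> float:
    # Gemini 3 Flash prices quoted in this episode:
    # $0.50 per 1M input tokens, $3.00 per 1M output tokens.
    return input_tokens / 1e6 * 0.50 + output_tokens / 1e6 * 3.00

# A 50,000-token input with a 2,000-token answer costs about 3 cents:
print(f"${flash_cost_usd(50_000, 2_000):.3f}")  # $0.031
```

The same ask() call with a Pro model ID would give you the Pro tier at the $2 and $12 prices mentioned above.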
On the bigger picture, that shows you where we're going: every new model is gonna give us faster, better capabilities at a much cheaper price than the previous one. So while this is fun and interesting and good to know, it's not very practical just knowing that you have a better, cheaper model. Let's move on to very practical things. The first one is Nano Banana Pro. Nano Banana Pro was released a few weeks ago, and it is an improvement over Nano Banana, which by itself was a very capable model. What Nano Banana Pro allows you to do is combine a lot more consistent elements together: multiple people, multiple faces, multiple devices, multiple products, whatever it is that you want, you can combine into a single image. It is even better at keeping consistency across images. It generates even more lifelike images. It is better at editing existing images. And in addition, they are adding a lot of new cool things. The first thing that I find extremely powerful is that they've built Nano Banana Pro into Google Slides. So how is Nano Banana Pro integrated into Google Slides? There are several different features built into Google Slides right now which are really cool. The first one is that for every slide you create, there's now a button under the slide called Beautify This Slide, and it has a banana logo next to it, which gives you a hint of what it is going to do. Those of you who are watching my screen right now, on YouTube or on any other platform with video, can see a pretty boring table that says Options Galore on top, comparing the core capabilities and features of the main workflow automation tools, such as n8n, Make.com, Zapier, Relay, Relevance AI, and Gumloop. Again, it's just a table comparing them that I created with AI, with Gemini, and then brought into a slide. But then I clicked the button, and it created this new version. For those of you who are just listening, what it did is take the table and make it way cooler. It added a glare to the headline. The lines of the table are now these glowing, laser-blue kind of lines that look very futuristic, which is great for this kind of slide. It changed the headings of each and every one of the columns into glowing boxes. It added, on its own, logos of the different brands; I did not ask for that, I literally just clicked the button, and now it has the logos of the different companies. And it added this cool background that looks like a mix of Tron and kind of like glowing blue computer-chip lines. Maybe a little overboard, but overall it definitely makes the slide look a lot cooler. There are two disadvantages compared to the slide I had before. One is that it's not editable. This is literally a Nano Banana image that takes the information from the slide and makes it look cooler, so I cannot edit what's on the screen right now. The second thing, which is a little depressing, is that it is not as crisp as the original. Because it is rendering the whole slide rather than keeping the lines and the text that were there before, it is not as crisp as the previous version. I assume both of these will improve over time, meaning you'll be able to actually edit after it, quote unquote, beautifies the slide, and it will keep the same exact resolution, because it may eventually use the actual text and lines as they should be. But even now, it is a very cool and fun thing to play with for creating slides that are better than what you can create on your own. The second thing that you can do right now, which is very cool, is that when you click on any image, whether it was created with an AI tool, or it's just an image you got off the internet, or an image you actually took with a camera, if that is still a thing, you can now click on Edit Image with Nano Banana inside of Google Slides. When you click on that, it opens a Nano Banana menu on the right, in which you can do several different things. You can click on the image and describe exactly what you want to edit in it, or, if you just open the menu, you can write whatever you want. So let's say I ask for an image of a Formula One car flying above a busy city. If I click on that, it will do what Nano Banana does: it will create the image for me. But once it creates the image, I can pick what I actually want to do with it, so I can insert it as an image inside a slide, or I can create an entire new slide with the image I created. Why is that important? Because you can now open the Nano Banana menu and describe what you want in the slide, including the text. And now, those of you who are watching can see that after I created this really cool image, by the way, of a Formula One car flying above the traffic of a crowded city at sunset, I have three buttons. One button says Slide; if I click on that, it creates an entire new slide for me. The other one is Image, which will create a new image. And the third one is Infographic, where I can create infographics in all different options.
You can have text and image combined together, defined in as much detail as I want, and embedded into my presentation. It's an amazingly functional capability, especially for somebody like me who needs to create a lot of presentations. Because of these new capabilities inside of Google Slides, I recently canceled my Midjourney membership, because I found out that I can literally create 100% of the graphics I need for slides, which is most of the graphics I need, inside of Google Slides, because then it knows what I'm doing. It understands the rest of the images, it knows the style that I'm using, it understands what's in the slides. I can have a conversation with Gemini inside of Google Slides about what could be great ideas for graphics, for the overall presentation, for a specific slide, for a specific point I'm trying to deliver, and it gives me ideas, we brainstorm these ideas, and then it goes and creates the image or creates the entire slide. So this capability by itself is so powerful that I actually don't need any other image generation tool. In a minute we'll switch gears to other ways Nano Banana is now being used across the Google universe, including a new capability inside the regular Gemini chat. But first, Google also added something else really powerful to Gemini inside of Google Slides, which did not exist until about a week and a half ago and which I find amazing: the little plus button that exists in the regular Gemini but did not exist inside tools like Docs and Slides, and now does. This allows you to upload documents from your Google Drive into the context of Gemini inside of Slides. How does that help you? Well, you can use your brand guidelines in there right now. You can upload multiple documents and have it help you brainstorm on how to create a good presentation from the information you uploaded. You can take a summary of a meeting you have there and turn it into slides, and so on and so forth, without having to go through the regular Gemini or any other tool. You can open it straight inside of Google Slides and use that as context in your slides universe while still enjoying all the benefits of Gemini. I find this really helpful. I've used it multiple times since it came out less than two weeks ago, and I suggest you at least try it and do the same thing if you are in the Google universe. Now to the regular Gemini interface: when you upload an image, in this particular case an image of a Honda Odyssey (staying away from the Formula One car while still doing something cool), or any image you upload, if you click on it, you now have a drawing and annotation capability on the image. You get to pick from five different colors, white, blue, green, yellow, and red, and you can just draw whatever you want on the image. So what I'm going to do is draw four circles next to the wheels of the Honda Odyssey to explain what I'm trying to do, kind of like annotating it, and when I click Done, that is embedded into the image. You can see it in the small image at the top here. And then I will say: replace the car's wheels with enclosed props like a quadcopter, and have it flying above busy traffic going over the interstate in Florida. Now, because I referenced the specific areas where I want the props to be, Nano Banana gets a better understanding of exactly what I want to change.
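By the way, if you ever want to script this kind of targeted edit instead of doing it in the Slides or Gemini UI, the same family of image models is reachable through the API. Here is a hedged sketch with the google-genai Python SDK: the model ID shown is the publicly documented "nano banana" ID as I understand it, the Pro variant's ID may differ, and the file names and prompt are placeholders.

```python
# Sketch: a Nano Banana-style image edit through the API.
# pip install google-genai pillow
# The model ID is an assumption; file names are placeholders.
from google import genai
from PIL import Image

client = genai.Client(api_key="YOUR_API_KEY")

source = Image.open("honda_odyssey.png")  # placeholder input image
prompt = (
    "Replace the car's wheels with enclosed quadcopter-style props "
    "and show it flying above busy interstate traffic in Florida."
)

response = client.models.generate_content(
    model="gemini-2.5-flash-image",  # 'nano banana'; the Pro variant's ID may differ
    contents=[source, prompt],       # one multimodal request: image plus instruction
)

# Save any image parts the model returns.
for part in response.candidates[0].content.parts:
    if part.inline_data is not None:
        with open("odyssey_flying.png", "wb") as f:
            f.write(part.inline_data.data)
```

My guess is the in-app annotation flow does something similar under the hood: packaging the marked-up image and your instruction into a single multimodal request.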
And you can do this kind of annotated edit with any other image. You can use the different colors to tell it to turn the yellow area into this, use the red for that, or add text in a specific area to give it more information. So it lets you annotate and get better results. And the image came out really, really cool: a blue Honda Odyssey with these enclosed circular quadcopter props, flying above traffic in an area that definitely looks like the interstate in Florida. So you can get much more granular with the feedback you give the Nano Banana model in order to get exactly the results you want. But that's not where it ends. Google has added the Nano Banana Pro capabilities into NotebookLM as well. For those of you who don't know NotebookLM, it is one of the most incredible tools that Google gives us for free. It allows you to bring in multiple resources (these could be files on your Google Drive, PDFs, Google Docs, or Excel files, or links to videos, links to websites, and so on) and combine them into a notebook, which is technically just a container of data, and then turn that information into either just a Q&A or many other outputs. In the example here, you can see that I have multiple resources talking about the latest OpenRouter State of AI research, other articles that mention it, and different comments people made about it, from multiple sources. So there are, I don't know, about eight or ten different articles here, and I literally asked it what the findings are, and it gave me a quick summary. But on the right side here, what they had before was Audio Overview, which creates this cool podcast that walks you through what's happening; Video Overview, which is the same thing but with some graphics; and Mind Map, Reports, Flashcards, and Quiz. All of that existed before, but they added two new really cool capabilities: one is Infographic and the other is Slide Deck. What do these do? Well, they take the entire content learned from all the different files you uploaded and turn it into an infographic or a slide deck. If you look at the infographic as an example (those of you who are not watching, I will walk you through it), it created this really cool infographic that says 'The State of AI: insights from 100 trillion tokens, OpenRouter analysis of real-world usage in 2025,' and it has these separate boxes with really colorful graphs and different illustrations, all explaining the major findings from this really long, detailed research. A really amazing infographic that I created very easily. Now, there are two ways to use the infographic function. One is to press the button. The other is the little pencil next to the button; with that, you can actually choose more options. You can choose the language of the infographic, whether it's landscape, portrait, or square, and whether you want it concise, standard, or detailed, and then you can prompt it on what you wanna focus on, what colors you want to use, and whatever other details you wanna provide to give it more guidance on the kind of infographic you want and exactly how you want it. And then it generates the infographic for you, or as many infographics as you want. So you can do one for each chapter, you can do one for each topic.
You can do one for different brands with different color guidelines, whatever you want. It is an insanely powerful capability, especially since it knows how to combine data from so many different sources and figure out what's important on its own. But let's say you want more than just an infographic. Well, you can create an entire deck of slides, and if I click on that, you can see that it has created all these different slides, and if I click through them, it is actually a presentation that I can go through. So again, 'The State of AI,' and it created content for each slide. The first one says the AI landscape is experiencing explosive, unprecedented growth in models, investment, and complexity, and it shows this really cool timeline and information related to it. And you can see that it is creating really cool graphics with very relevant details: perfect slides, just the right balance between the amount of text and the graphics that describe it, and a good flow that you can follow, all without me doing anything other than giving it a short prompt, which I didn't even have to do. All I had to do was click the button to create this outcome. So if you need a presentation that summarizes a lot of information for any purpose, whether it's personal usage, for school, or for your business, you can just upload the information into NotebookLM and then click on Slide Deck, or, as I mentioned, click on the pencil next to Slide Deck, explain exactly what you want, and you will get a presentation within a couple of minutes. Now, the next tool, which Google released earlier but keeps adding more and more capabilities to, and which I find really cool, is Google Vids. Google Vids is Google's new video tool that allows you to edit, create, or manipulate videos. You can use it for either personal or business purposes: to create introduction videos for new employees, onboarding for clients, customer service explanations about your software, product, or service, et cetera, et cetera, whatever kind of videos you want. It is built as separate scenes. They built it, on purpose, to feel very similar to Google Slides, so if you go there, the interface will look very familiar if you are a Google Slides user. You can add new scenes, which are basically just like slides, but with video inside them. And in each and every one of the scenes, you can generate a video with Veo 3.1, so you can generate video clips of up to eight seconds of whatever you want, just by prompting it. You can create an AI avatar, so just like HeyGen or Synthesia or other avatar tools, you can do this, as of now for free, inside of Google Vids. You can convert slides, which I find really, really cool. If you have a presentation, whether you created it or AI created it for you, you can now upload it here, and what it does is actually really cool: it takes the slides, converts them into short videos, and if you have presenter notes under the slides, which I always do, it knows how to convert them into narration done by an AI avatar that explains each slide. So if you use AI to create the presentation, ask AI to create detailed speaking notes for an avatar as presenter notes in the slides, and then bring it here, you can have a video of the presentation walking people through the entire thing. You can obviously also record on your own straight from here.
You can record your screen, you can record your camera, or you can record both, very similar to tools like Loom. You can upload any video that you have, from your Google Drive, from your photos, or from your computer. You can use their templates, and they have really cool templates for creating transitions and delivering a story or a presentation or whatever it is you're trying to deliver. Or you can use their storyboard tool to brainstorm what your different scenes should be in order to deliver the message or the outcome you're going for. And you can pick whether it's gonna be landscape, portrait, or square. Incredibly powerful, completely free, easy to use, and it combines all the different AI goodies and old-school tools that Google knows how to deliver. So what you're seeing overall is that Google is combining more and more of their tools into more and more of their other tools, creating a seamless work environment where you can switch tools to get the best out of all of them while using the resources and capabilities of the other tools. Let's talk about a few other examples that demonstrate what I just said. One of the things they added this past week is that deep research reports can now show visuals inside the report. For those of you who use deep research, and I use deep research all the time, it's an amazing function that exists today in all the large language models, but I find that Google's is actually the best, and I've been using it more than the other tools. When it does deep research, it can now create graphs and charts and flowcharts and other visuals inside the research document, better illustrating exactly what it found in the research, which is something none of the other tools had. So far, the only tool that knew how to combine visuals into text in a seamless way was Claude, in the recent models from Claude, which I absolutely love. And now you can get visuals inside deep research reports from Gemini. This is currently available to Ultra users only, but I assume it will trickle down to the other, more basic licensing tiers as well. Another really cool feature is called Dynamic View. What is Dynamic View? Say you are trying to learn a topic; the example they gave is learning the functions of the cell, using an illustration of how to learn its different components. They asked Gemini to explain the different components of the plant cell, and it created a fully interactive view of the cell where, when you click on different components, it opens these tickets, if you want, on the right, explaining each component. This is created entirely by Gemini, so it didn't exist before; it is created for this request specifically, and you can create similar things to learn how a combustion engine works, to analyze a flow in your company, or anything else. You can explain it with a fully interactive image created by the Google tools. It's a combination of writing code and Nano Banana creating the image, and then it knows how to make it interactive and connect the different components of the image with the explanations on the right. And now, if everything I talked about so far wasn't enough, I wanna talk about four different things that are embedded into the day-to-day tools you use all the time.
The first one is actually working with the Gemini sidebar inside of Gmail. I'm not gonna share my screen because I can't show you all my emails, but Gemini inside of Gmail is absolutely magical, meaning you can find any email, and actually also any calendar event and attachments and files inside your Google Drive, straight from the Gmail Gemini interface, and you can ask it really sophisticated questions and it will still find things for you. I'm sometimes amazed by how good this tool is. You can ask it to find all the emails in which I have open tasks from the past week and show them to me, and it will find them and give you a list of all the open tasks with links to the emails they come from. Find me all the emails from clients who are asking for something that I have not responded to yet, that sound urgent to you, and that are from the past month, and it will do that as well. So literally anything you want. I've asked it multiple times to locate specific details about flights I have coming up and things like that, and it just knows how to pull the information from the emails, and it also gives you the links to the relevant emails so you can verify the information or see more context, and so on. As I mentioned, extremely powerful. Staying in the Google Workspace environment, they now have a capability they call Help Me Schedule. You can have the AI look through your calendar and find the right opportunities to schedule whatever it is you wanna schedule. You can go through an email thread that goes back and forth between you and somebody you wanna meet and say Help Me Schedule in the context of that communication, and it will say, oh, you're asking for one hour, or 45 minutes, or an hour and a half, or whatever it is the conversation mentioned, along with the times the client or the other person suggested they might be available. It will check your calendar and suggest a meeting, and if you say schedule it, it will actually go and create the meeting and invite the other person. So this is the next step, kind of agentic assistant behavior, and it's really helpful if you're not using a third-party scheduling tool. Sometimes it can just save you and the other person a lot of back and forth and find the right times for you to meet with whoever it is you wanna meet, whether external or internal; and if it's just stuff you need to do and you need to block time in your calendar, it can help you do that as well. Another thing they just added to Gemini that I find really helpful is local results. The local results that used to exist solely inside Google Search now show up in Gemini. So if you ask Gemini for a vegan restaurant in your area that is no more than 10 minutes away from the hotel where you are staying and that serves your favorite dish, it will try to find that, and it will show it to you inside of Gemini with the ability to click and see all the details, just like you would on a regular Google search. Again, as I mentioned before, all the lines between the different Google universes are blurring, and it is becoming more and more helpful for us as users. Another cool thing they added just this past week is the ability to reference NotebookLM notebooks inside of Gemini. So if you are in a regular Gemini conversation, inside the plus button you used to have upload photos and add from Drive and Photos, and now there's NotebookLM at the bottom.
If you click on NotebookLM, it opens all the notebooks you have in the NotebookLM environment. What does that help you do? It helps you have a conversation inside of Gemini about a specific content universe that exists in one of your notebooks inside of NotebookLM. As an example, I have a notebook with all my podcast episodes. Now, I can go to NotebookLM and ask questions over there, but inside of Gemini there's more stuff I can do, because it connects to different tools, it has different models I can choose, I can create images, and I can do a lot of things that I cannot necessarily do inside of NotebookLM. So I can now combine the knowledge base of any notebook in NotebookLM into my regular Gemini conversation. As of right now, it works only inside my personal account and not my business account, but again, I'm sure this is coming next and it will connect over there as well. And now, two more things that are more advanced capabilities. One, they now have the agentic browser. Just like Claude now has one for Chrome, and Atlas inside of ChatGPT, and Perplexity's Comet, and so on, you can now click on the top right corner anywhere in Chrome, where there's a Gemini button, and when you click on that, it pops up a Gemini panel that can engage with anything in your browser, across the different tabs, and work collaboratively with you on anything you're working on, and it knows how to fetch information and work with anything in your browser. It also knows how to work with voice, so you can click on Go Live and have a conversation with it about what's on the website, and you can choose the model in the dropdown menu just like you can in the regular Gemini tool. If you have not used any of the agentic browsers, I will say two things. One: absolutely try them out, because they're a complete game changer for how I use browsers and how I do different things in different workflows right now. The other thing is that all these labs are saying these tools are really dangerous, that they're prone to prompt injections and other cybersecurity risks, so do not share with them information that you want to keep safe. But for many day-to-day things, they are very highly recommended tools. I've been using Comet for probably six months now, several times a week; I've been using a little bit of Atlas; and I've used Gemini's and also tried Claude's this past week as well. All very capable and helpful across different kinds of use cases. The last thing that I wanna mention from Gemini is that they now have what they call Gemini Agent. Gemini Agent is currently only available to Ultra users, meaning personal accounts on the top paid personal license of Gemini. If you have that, you can now build really powerful automations that span everything Gemini knows how to do, combined with more or less everything in your Google Workspace. So it knows how to build automations similar to n8n and Make.com and Zapier, et cetera, but it's doing it inside the Google universe, combined with Google Search, and you can build really sophisticated automations that work across your entire Google universe. Is it as capable as n8n or Make? Absolutely not. Is it way easier to use, and does it work really well inside your Google universe? The answer is yes.
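Gemini Agent itself is a closed product, so I can't show you its internals, but the basic mechanism this kind of automation is built on, a model deciding when to call your tools, is plain function calling, which you can try yourself. Here is a minimal sketch with the google-genai Python SDK; the calendar function is a hypothetical stand-in, not a real Google Calendar call.

```python
# Sketch: the function-calling mechanism agentic features are built on.
# pip install google-genai
# create_calendar_event is a hypothetical stand-in, not a real Google API.
from google import genai
from google.genai import types

def create_calendar_event(title: str, start_iso: str, duration_minutes: int) -> dict:
    """Pretend to book a meeting; a real version would call the Calendar API."""
    print(f"Booking '{title}' at {start_iso} for {duration_minutes} minutes")
    return {"status": "confirmed", "title": title}

client = genai.Client(api_key="YOUR_API_KEY")

# The Python SDK can automatically call plain Python functions passed as
# tools: the model reads the signature and docstring, decides whether to
# call the function, and folds the result back into its final answer.
response = client.models.generate_content(
    model="gemini-2.5-flash",  # any function-calling-capable model works here
    contents="Book a 45-minute sync with Dana next Tuesday at 10:00.",
    config=types.GenerateContentConfig(tools=[create_calendar_event]),
)
print(response.text)
```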
So I would say there's room for both of these, and if Google decides to build it beyond that and allow you to connect with third-party tools, which I assume they will, that puts a lot of other companies at risk, including Make and Zapier and n8n and so on, because it will be perfectly integrated with everything in your existing Google universe, including your Drive and your notebooks and your emails and everything else, and also potentially integrated with third-party platforms. So what is the bottom line, and why is this important at the end of 2025 as we look into 2026? First of all, you can see how the Google ecosystem is embedding AI into everything, everywhere, and interconnecting the different components to provide us, the users, more value. This was the dream since day one, right? I don't want to have Gemini for Docs and Gemini for Slides and Gemini for Sheets and general Gemini and then NotebookLM. I wanna have just one Gemini that knows how to do all these different things and can connect the data from all the different sources, and we're coming closer and closer to that point, where Gemini can work across all my different environments and reference information from Google Search and Google Slides and my calendar and my emails and third-party platforms and everything else: an extremely powerful assistant that can also take action. And this is why I kept the agentic part for last. This is the next frontier, right? The ability to tell it to actually do things for you, whether inside the Google universe, like create a summary document based on a list of emails, or write an email based on multiple documents inside your NotebookLM, or whatever it is you're trying to do. Everything will talk to everything in a seamless way. It will understand your entire context universe, and it will be able to (a) help you understand information, analyze information, and make better decisions, but also (b) actually take action for you. And all of this is already available one way or another, and it will become much better going into 2026. As I mentioned, this was Gemini, but I'm sure Microsoft is going to do very similar things in the Microsoft universe. It will be very interesting to see how ChatGPT and Grok and Claude stay relevant in a universe where everything in my ecosystem is connected through a platform that has AI built into it. Unless they provide the same level of connectivity, which they probably won't be able to (they may come close), the same level of security, which they definitely will not be able to, and added value beyond what I can get from Gemini in the Google universe or from Copilot in the Microsoft universe, it will be very hard for them to compete in the business arena. This is my personal opinion, because that's where most of the value is: a tool that has access to all my data, all my context, all my experiences, all my connections, all my emails, and everything I do in my business will be significantly more helpful than a generic model that just has memories of what I've done with it over the past year and a half. If you are a Google user and you haven't tried one or a few of the things I just mentioned, go test them out and find ways to make them useful for you. I promise you, this will make you significantly more effective in 2026. That's it for today. Keep on exploring AI, keep sharing what you're learning with other people,
help us all learn together, and have an amazing 2026!