Leveraging AI
Dive into the world of artificial intelligence with 'Leveraging AI,' a podcast tailored for forward-thinking business professionals. Each episode brings insightful discussions on how AI can ethically transform business practices, offering practical solutions to day-to-day business challenges.
Join our host Isar Meitis (4-time CEO) and expert guests as they turn AI's complexities into actionable insights and explore its ethical implications in the business world. Whether you are an AI novice or a seasoned professional, 'Leveraging AI' equips you with the knowledge and tools to harness AI's power responsibly and effectively. Tune in weekly for inspiring conversations and real-world applications. Subscribe now and unlock the potential of AI in your business.
253 | The AI milestones in 2025 and what they mean for businesses, and the world, in 2026
📢 Want to thrive in 2026?
Join the next AI Business Transformation cohort kicking off January 20th, 2026.
🎯 Practical, not theoretical. Tailored for business professionals. - https://multiplai.ai/ai-course/
Learn more about the Advanced Course (Master the Art of End-to-End AI Automation): https://multiplai.ai/advance-course/
Is your business ready for a world run by AI agents?
2025 changed the game. From reasoning models to real-time context-aware agents, AI didn’t just evolve—it exploded. If you lead a business and you're still thinking in "tools," you're already behind.
In this year-end solo special, we unpack the most critical AI breakthroughs of 2025 and the seismic shifts heading for business in 2026. Consider this your strategy briefing for the year ahead—without the fluff, hype, or hallucinations.
If you’re leading a team, a division, or a company, this episode will give you the competitive lens you must adopt before Q1 gains too much momentum.
In this session, you'll discover:
- Why reasoning models will shape every AI strategy in 2026
- How agents are becoming your next coworkers (or competitors)
- The critical rise of AI infrastructure in China and what it means geopolitically
- What agent-to-agent protocols mean for customer service, commerce, and collaboration
- The real business risks of world models and continuous learners
- How robotics leapt forward in 2025—and how close we are to robots on your payroll
- Why AI is pushing businesses toward personalization at scale
- What agentic browsers are and why they’ll soon take over how work gets done
- The 2026 forecast: AGI, automation, job disruption, politics, and the global AI arms race
About Leveraging AI
- The Ultimate AI Course for Business People: https://multiplai.ai/ai-course/
- YouTube Full Episodes: https://www.youtube.com/@Multiplai_AI/
- Connect with Isar Meitis: https://www.linkedin.com/in/isarmeitis/
- Join our Live Sessions, AI Hangouts and newsletter: https://services.multiplai.ai/events
If you’ve enjoyed or benefited from some of the insights of this episode, leave us a five-star review on your favorite podcast platform, and let us know what you learned, found helpful, or liked most about this show!
Hello and welcome to a special end-of-year episode of the Leveraging AI Podcast. This week we are not going to talk about the weekly news. We are going to talk about the summary of 2025 and my projections, or if you want, my thoughts about what's coming in 2026 based on the trends that we have seen evolving in 2025. In addition, I'm not going to do a news episode at the end of next week. I'm taking a week off with my family and spending some quality time together. There will be a Tuesday episode in between with a how-to episode, as we release every single Tuesday, and we'll be back with a Tuesday episode in the beginning of 2026 and with a weekend news episode at the end of the first week of 2026.

Today's episode is brought to you by the AI Business Transformation Course, which is the course that I've been personally teaching since April of 2023. I've been teaching this course at least once a month, in many months more than once, because I've done a lot of private versions of this course. So if you have not yet taken structured training about AI beyond listening to podcasts and following people on YouTube, it is definitely the time to do that. In the beginning of 2026, we're launching another cohort that starts in the third week of January and goes for four weeks in a row, two hours every single week, plus a one-hour ask-me-anything session on Fridays. So if you, again, haven't taken any such training, this is a great time to come and join us and dramatically accelerate your AI knowledge when it comes to how to use it in a business context. Thousands of individuals, business people, and business leaders from all around the world have taken this course in the past two and a half years and have dramatically transformed their personal careers and/or their businesses using the knowledge they acquired during that course. If you are a business leader and you're looking for structured training for your entire team or your entire company, please reach out to me through my email or LinkedIn. There are links to both of them in the show notes. Most of what I do is private training for organizations through different kinds of workshops and courses and so on, so reach out to me for that. We're also launching the second cohort of our more advanced introduction to workflow automation with AI, in which we build on top of what people learn in the first course and teach you how to apply this and connect this to your entire tech stack and build actual automations that can do stuff in your business. This will start immediately after the first course ends, so you can take both of them back to back, or if you have the basic knowledge, you can join just the second course.

And now let's dive into the summary of what happened in 2025, for a short little while into what is probably going to happen in 2026, or at least the trends that I can see right now, and then a little bit on how you can prepare for what's coming. So just like the previous decade laid the foundations that made generative AI an option for us, such as the transformer architecture, distributed internet and huge data centers, fast internet and so on, all of which played a role in enabling the AI we know today, in the same exact way the things that happened in 2025 are laying the foundations for what's most likely coming in 2026. So the first thing that I'm going to talk about in 2025 is reasoning models.
While they were technically introduced in Q4 of 2024 with the introduction of ChatGPT o1, they became a lot more available in the middle of the year when OpenAI released GPT-5 and baked the reasoning model into the regular model that everybody uses, because before that, based on OpenAI themselves, less than 10% of the population actually used their reasoning models. So if you've been listening to this podcast, and you have been following me and others talking about this, you might have been using reasoning models at the end of '24 and the beginning of '25, but the vast majority of the population did not use any of them before the models themselves started picking the reasoning capability as part of their workflow. And so the usage of reasoning models, and the labs' understanding of how to use them more effectively, jumped dramatically in the second half of 2025, and that's gonna play a very big role going into 2026.

Another big aspect that happened in 2025 is the rise of China when it comes to AI capabilities. For now, mostly software, but hardware is on the rise as well. This started in the beginning of 2025 with the DeepSeek moment, when suddenly there was a model from China that was at par-ish with the Western leading models and really placed China on the map. Since then, we have multiple really powerful, highly capable models from China that are not only really good, they're also, in most cases, much cheaper than the Western models. The two models on the Western side that actually broke that equation to an extent are the most recent Gemini 3 Flash and the one before that, Gemini 2.5 Flash. Both of them are extremely capable models. Gemini 3 Flash is more or less as good as Gemini 3 Pro, and better than, or at least roughly equal to, the leading models out there, but for a fraction of the cost. But there are many more like that from the Chinese side of the world, which has led to a global competition between the US and China when it comes to global domination in the world of AI. More on that later.

The third big component that I want to talk about when it comes to trends in 2025 is agents. While the conversation in 2024 was about better and better models, the conversation about agents intensified and became a lot more real in 2025. And in addition to the fact that more and more tools and platforms enabled building, developing, and deploying agents, and beyond the fact that many more companies, most of them at the enterprise level, started testing and some deploying agents, three more big things happened that will enable the agent explosion we are most likely going to see in 2026.

One of them is the introduction of MCP. We've talked about MCP multiple times on the show, and we even showed a few use cases on how to apply it. MCP stands for Model Context Protocol, and it was invented by Anthropic, but then open sourced so everybody can use it, and it was widely adopted. What it enables you to do is develop an interface to an existing data set and/or an existing tool, and then connect it to any AI tool seamlessly, in a few lines of code that you can copy and paste. That means you can connect to your ERP, to your CRM, to your email platform, to your marketing platform, to any other tool that you want, and to relevant data sets, just by developing the MCP once. That connector to that particular tool, let's say Salesforce, is then available for everybody to use across any AI tool in a matter of seconds.
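To make that a bit more concrete, here is a minimal sketch of what an MCP connector can look like, assuming the official MCP Python SDK and its FastMCP helper; the server name and the CRM lookup are hypothetical stand-ins for whatever system you actually connect.

```python
# Minimal sketch of an MCP server exposing one tool (hypothetical example).
# Assumes the official MCP Python SDK ("mcp" package) and its FastMCP helper.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("crm-connector")  # hypothetical connector name

@mcp.tool()
def lookup_customer(email: str) -> dict:
    """Return basic CRM fields for a customer, given their email address."""
    # In a real connector this would call your actual CRM's API (Salesforce, HubSpot, etc.).
    return {"email": email, "account_tier": "unknown", "open_tickets": 0}

if __name__ == "__main__":
    mcp.run()  # any MCP-capable client can now discover and call lookup_customer
```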
So this is a huge deal, because it saves a crazy amount of work across the board for many, many different companies, which drove adoption of agents and their connectivity to existing tech stacks.

The other development that happened that will enable a lot of growth in 2026 is what's called A2A, or the agent-to-agent protocol, which was also open sourced. It is a protocol that defines how one agent effectively talks to another agent, regardless of who created them, on what platform, with which tools, et cetera. So it standardizes the communication between agents, which then enables a lot of the stuff that we're going to see in the future. More about that shortly.

Another huge deal that happened late in 2025 is the ability to go beyond the current context window limitations. The context window is the amount of data you can put in a single chat, and it has grown dramatically in the past few years; the leading models right now usually have between a one million and two million token context window. What is a token? It's the way these models work. They don't actually know words, they know tokens, and a token is about 0.7 to 0.75 words, which means in the most advanced tools today the context window allows you to bring in between roughly 750,000 and 1.5 million words. That is a lot, but it is not enough to bring in huge context such as your entire database, or your entire code base, or the entire data set of everything in your company.

However, both OpenAI and Anthropic launched capabilities right toward the end of the year to address this. OpenAI released what they call context compaction, which came out together with GPT-5.2 just a couple of weeks ago. What it enables the AI to do is effectively compact the context from a previous chat and push it into the next chat, so the AI can continue working on a much longer context without losing the knowledge it had in the previous conversation. Once this process is perfected, it basically allows it to run indefinitely, rolling information forward from one context window to the next and working across huge amounts of data. Right now it is geared specifically for coding and looking at huge code bases, but this will change and will most likely become available for any kind of long-context, long-duration task or entire project that individuals and companies will want to do.

A few weeks before that, with the launch of Opus 4.5, Anthropic released what they call an agent harness as part of their multi-session SDK that practically does the same thing. It works in a slightly different way. In this particular case there are two agents: one is an initializer agent and the other is the actual coding agent. The initializer agent sets up the persistent environment and logs all the different actions and steps being taken, so the coding agent can continue independently, regardless of the actual context windows and how many times they were swapped in between the different steps. Regardless of the way it actually works, the outcome is the same: we now have, practically, the ability to run extremely long sessions while keeping the context, so the AI agent knows what happened over a very long period of time. In both cases it was developed for coding, and in both cases it will most likely go beyond coding and transition to any other type of knowledge work.
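The exact mechanics of OpenAI's compaction and Anthropic's harness are their own, but the core roll-forward idea can be sketched in a few lines. This is only an illustrative sketch under my own assumptions: call_model stands in for whatever chat API you use, and the token estimate uses the rough 0.75-words-per-token rule mentioned above.

```python
# Illustrative sketch of the "compact and roll forward" idea (not any vendor's actual implementation).
# call_model() stands in for any chat-completion API call that returns text.
CONTEXT_LIMIT_TOKENS = 1_000_000
COMPACT_AT = 0.8  # compact once we're at ~80% of the window

def estimate_tokens(text: str) -> int:
    # Rough rule of thumb from above: one token is roughly 0.75 words.
    return int(len(text.split()) / 0.75)

def run_long_task(steps, call_model):
    transcript = ""  # everything the agent has seen and done so far
    for step in steps:
        if estimate_tokens(transcript) > COMPACT_AT * CONTEXT_LIMIT_TOKENS:
            # Compact: replace the full history with a dense summary of decisions,
            # open items, and current state, then keep working in a fresh window.
            transcript = call_model(
                "Summarize this working session so a new session can continue it "
                "seamlessly. Keep all decisions, file names, and open items:\n" + transcript
            )
        transcript += "\n" + call_model(transcript + "\nNext step: " + step)
    return transcript
```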
The final big change that happened toward the end of the year, which will also enable these systems to become much more widespread, is research from Cognizant AI Lab and the University of Texas at Austin, published in a paper called "Solving a Million-Step LLM Task with Zero Errors." They developed a concept they call MAKER, which stands for maximal agentic decomposition, K-threshold error mitigation, and red flagging. That's a really long mouthful, but what it actually does is take a task and have the AI break it into very, very, very small segments, and then let several different agents solve the same small segment in parallel. Then it compares the results. The process runs basically a voting scheme, where the agents vote on the most likely correct answer based on statistical parameters, so it weeds out the mistakes that a specific agent makes by comparing them to the results of the other agents. So if you have five agents solving a problem and four get to the same conclusion while one gets to a different conclusion, it will continue with the four agents. Now, because they're breaking it into really, really small tasks, and because every task is evaluated separately, they were able to run a process with over a million steps all the way through to the end with zero errors: no hallucinations, no mistakes, no formatting problems, et cetera. To make it efficient, they used GPT-4.1 mini in their tests, which is a cheap and fast model. They could probably use Gemini 3 Flash now, but any of these models that are smaller, fast, and efficient will do the task well. And because they're much cheaper, you can run all these parallel agents and still make it cost effective, especially since you're guaranteeing correct and accurate results over a very long, complex task. A minimal sketch of this voting idea appears a few paragraphs below.

So what do all of these components together give us? They give us the capability to run tasks and entire projects that could take hours, or potentially days, still in 2026, while delivering accurate results. Again, right now many of these tools are built around writing code, but this will go way beyond that, into marketing and sales and strategy and operations and so on. Meaning we will have agents that will be able to act in much more complex environments and take on much more complex tasks and projects independently of humans.

Now, to be fair, we are not a hundred percent there yet, especially not with the regular setup that most people have. And so one of the things that I'm sure will start happening in 2026 is that we'll start seeing companies hire people to evaluate the outputs of AI, or define specific time windows as part of the job descriptions of people currently working in the company to evaluate the output of AI in their area of expertise. Now, this may sound counterintuitive: we're adding work because of AI. No, the reality is you're not adding work. You are saving five hours of work that the AI is doing for you and then investing one hour in evaluating that work. So you're still saving four hours and getting to better results, faster.

Another big thing that happened in 2025 that connects to all of this is real-time context awareness. This comes in several different shapes and forms. The first one is connectors to your entire tech stack. So even out of the box inside ChatGPT, Claude, Gemini, et cetera, there are connectors to many tools that you use in the day-to-day.
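Going back to the MAKER idea above: the paper's actual pipeline is more involved than this, but the basic vote-on-a-micro-step idea can be sketched roughly as follows. ask_agent is a hypothetical stand-in for a call to a small, cheap model, and the stopping rule shown (accept once one answer leads by k votes) is my simplified reading of the approach, not the paper's exact algorithm.

```python
# Rough sketch of voting on one micro-step (simplified, not the paper's exact method).
# ask_agent(prompt) stands in for a call to a small, cheap model; malformed answers return None.
from collections import Counter

def vote_on_microstep(prompt, ask_agent, k=3, max_samples=15):
    votes = Counter()
    for _ in range(max_samples):
        answer = ask_agent(prompt)        # independent attempt at the same tiny step
        if answer is None:                # discard red-flagged / malformed outputs
            continue
        votes[answer] += 1
        top_two = votes.most_common(2) + [("", 0)]
        if top_two[0][1] - top_two[1][1] >= k:   # one answer is ahead by k votes: accept it
            return top_two[0][0]
    return votes.most_common(1)[0][0] if votes else None

# A million-step task is then just this applied to every tiny step in sequence, so one
# agent's occasional mistake gets outvoted instead of propagating down the chain.
```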
Beyond those built-in connectors, there are many others that can go far beyond that, and connecting that with MCP solutions from different providers means the AI can now get access to your real work data regardless of where it is stored. That's obviously not across the board; there are still a lot of issues, and the data is siloed in different places, different schemas, and old databases. I'm not saying it's solved. I'm saying we made a huge step forward in 2025 toward giving these AI tools access to our real business data.

The second thing that improved dramatically is live mode: the ability of AI to have a live conversation with you based on a video feed and/or a view of your screen, which is now available built in to most of these tools. I use it in both ChatGPT and Gemini all the time, across the board, from fixing stuff at my house to solving complex automation problems that I'm building in n8n or other automation tools. These tools are extremely helpful in doing that, and they will keep on getting better, and they fall under the category of real-time context awareness.

The third component is agentic browsers, which are becoming a big deal and are now part of every offering from all the major companies. You can click the Gemini button at the top right of Chrome, there is now a Claude Chrome extension that I've tested in the past couple of weeks, and there are several agentic browsers, with the leading ones being Comet from Perplexity and Atlas from OpenAI. All of these allow AI agents to view your screen in real time, take over your screen, and basically perform most tasks you can perform in the browser, which means most tasks you can perform in the digital world. All of these come with serious warnings about security and prompt injection and so on, so be aware of that if you're planning to use them extensively. But going back to the point of real-time context awareness, these tools are a very big deal, and I'm sure we'll see more and more of them as the security problems they bring right now get solved.

The next component is true multimodality. The models in 2025 are real multimodal tools, meaning they can generate, see, and understand video, images, text, et cetera, all in a single model, which gives them a much better understanding of our real world and how it works. And the next step of that, which is going to be a huge trend in 2026, is world models. One person who's been talking about world models for a very long time is Yann LeCun, who recently left Meta in order to start a company that focuses exactly on that. Another person who has been a big supporter of this is Fei-Fei Li, one of the founding figures of modern AI. And in the recent and fascinating interview Demis Hassabis did on Google DeepMind's own podcast, he talks about how the next breakthrough will most likely come from combining world models with the models that we have right now. And the last person who has talked about learning in real time from the actual world is Ilya Sutskever, who is one of the co-founders of OpenAI and currently the founder of SSI, Safe Superintelligence, who also talked about models that can learn over time from the world and from everything they're experiencing.

So all of these people are talking about world models. What are world models? They are models that can see and experience the world and not just live inside the box of a computer. They have access to cameras. They start to understand physics.
They start to understand the real world around us, and they learn the way babies and kids learn, by experiencing the world around them. This will allow the models to be a lot more grounded in the realities we live in, and it will be a much more solid baseline for robotics, which we're going to talk about shortly.

Another aspect that I just mentioned is continuous learners. Both Demis and Ilya and many others are talking about models that will continuously learn on their own. So instead of doing it the way it's done right now, meaning let's collect all the knowledge in the world and run a training run (which will probably continue to exist as the baseline), you develop a model that is really good at learning and then it keeps learning on its own continuously, meaning you don't need to do additional training runs as the model learns as it goes, which dramatically accelerates the learning. Combine that with what we talked about last week, which became a big deal in the past few weeks, which is recursive self-learning AI systems, meaning AI systems (and, as we spoke about last week, hardware as well) that improve over time on their own in a closed loop without input from humans, and you understand how AI models and AI hardware can grow faster and faster while being continuous learners and understanding the world better. And you can develop much more advanced tools, which, based on all these experts, are the path to AGI and beyond.

Now, from a practical perspective, what we have seen is all of these AI capabilities being more and more integrated into the tools and the hardware that we use every single day. So now you have AI capabilities inside of Microsoft Office, inside of Google Workspace, inside of Salesforce, Notion, et cetera. More and more tools in your tech stack have AI built into them, and it's getting integrated between these tools as well through MCPs and other connectors.

But it's not just software. We have seen AI being integrated into hardware as well, and that will continue moving forward. Right now, as an example, I'm driving a Tesla, and I have Grok inside my Tesla. Initially it was just a fun way to have a conversation while driving, learn what the weather is going to be at the destination, check results, or ask trivia questions with my kids. But right now it is starting to get integrated into the car's systems, such as navigation and so on, and very shortly Grok will be your interface into the car, and I'm sure Tesla is not the last company to do this. We have Microsoft Copilot+ PCs, in which AI is built into the actual hardware of the device. You have the Pixel 10 Pro from Google, a device that has a lot of AI functionality built in. And there is the so far non-existent, but hopefully that will change in 2026, Apple Intelligence, which will enable AI to run on the device. Connecting the dots for a second: think about AI like Gemini 3 Flash, which requires significantly fewer resources and is still a very capable model. That trend will keep on happening and will enable running high-end, advanced AI on device, which provides a lot more security, safety, and speed, and is therefore suitable for many more use cases.

Going the next step from the devices we have today and combining them with AI: robotics made a huge jump in 2025. We now have multiple companies in the humanoid and quasi-humanoid robotics race that are developing amazing robots and starting to deploy them in growing quantities, and the prices are dropping as well.
So while the top models still cost hundreds of thousands of dollars per unit, many of them are now in the tens of thousands of dollars, and smaller variations of them are now in the single-digit thousands of dollars. As an example, Unitree, a Chinese company that makes some of the most advanced humanoid robots out there, started selling the G1, which was their first small robot. It's only four feet tall, but it can still do a lot of stuff around the house or in the business, for $16,000 to $40,000 depending on the variation. And now they have an even cheaper version called the R1 that they're selling for $6,000. Now, can it do everything the big models can do? Absolutely not. But it shows you the trend of highly capable robots that can do more and more things that only humans could do before, and obviously beyond that, because they're stronger and more capable across multiple dimensions, such as accuracy, vision, the wavelengths they can see, the capacity they have, and obviously the number of hours they can work. And you understand, combined with the drop in prices, that we're going to start seeing robots more or less everywhere, starting with factories, then going into service providers across different industries, then in coffee shops and restaurants, and shortly after that in our homes as well. There are obviously safety concerns and other issues currently, but the trajectory is very clear.

The last two components that we have seen as trends in 2025 that will continue into 2026 are model personalization and skills. So let's start with model personalization. The first company that gave us a glimpse into that was ChatGPT with memory, then with the ability to create your own custom instructions for the model, and it became more and more customizable as the year continued. The same thing is now happening in the other models as well. Once long-term memory was introduced, these models became more and more tailored to who you are. They understand your universe, they understand your context, they understand your company, they understand your family, they understand your needs, and they provide more personalized answers. The same thing will happen with the personality of the models, which already exists on a basic level in some of these tools and will probably continue evolving into 2026. What does that mean? It means that if you and your coworker ask the same question of the model you're using, you're gonna get different answers, because you provided it different context over time, and it knows how you like to receive your information, what kind of data you're looking for, and how to connect the dots for you in a way that is helpful for you, which is different than somebody else. This has huge benefits from the value that AI provides, and huge risks from a society perspective, when there is no unified truth or process for doing things and every single individual is seeing their own version of the truth and the process. So there's good and bad in that, like many other aspects of AI.

And the last component before we dive into what the impact will be in 2026 is skills. Skills were introduced by Anthropic, and just in the last 10 days they open sourced skills as well, just like they did with MCP. I think skills are incredible. Skills are the ability to teach the AI how to do something specific and package it in a way the AI knows to pull in only when it needs it, so it doesn't consume your context window.
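To make that concrete: a skill is essentially a small folder of instructions (and optionally scripts and reference files) that the model loads on demand. The example below is purely illustrative, a made-up skill roughly in the shape of Anthropic's published SKILL.md format, where the frontmatter description is what tells the model when to pull the skill in:

```
---
name: quarterly-report-formatter
description: Formats raw financial figures into our standard quarterly report layout. Use when asked to prepare or clean up quarterly numbers.
---

# Instructions
1. Pull the figures from the attached spreadsheet or pasted table.
2. Convert everything to thousands of USD, one decimal place.
3. Output sections in this order: revenue, gross margin, operating expenses, EBITDA.
4. Flag any quarter-over-quarter change larger than 15% for human review.
```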
And the AI can still use it as needed, when it needs it. It can connect to specific tools, it can include specific data, and so on, and it will enhance what AI can do across the board. I anticipate, now that they've open sourced it, that we'll see more and more companies starting to use skills, and that it will become second nature, almost an afterthought, probably by the middle or the end of next year, where these will just happen in the background and we won't even think about it.

I'm gonna connect some of these dots to some key aspects that Demis Hassabis, the CEO of Google DeepMind, shared in the recent interview he did as part of the DeepMind podcast. He mentioned several different things that need to be addressed in order to go to the next level with AI, and that he's expecting in 2026. One is solving what he calls jagged intelligence. What he means by jagged intelligence is the fact that AI can show PhD-level knowledge in specific things but fail miserably at other stuff that a 12-year-old can perform. That inconsistency in its ability to understand, analyze, or deliver data is something that has to be resolved. The broader aspect of this is what he calls the shift to reliable agentic systems, meaning agents that can consistently deliver correct results. And he is talking about the fact that the balance will shift from raw power to consistency and accuracy. Meaning, instead of investing more in growing bigger and better models, which are already really, really good, the focus becomes: forget about making the model better, just give me reliability, give me a hundred percent consistency on the tasks that I'm doing today. Forget about what it can do three years from now, I don't care, just let me solve the current immediate problems. And hearing that from him tells you that's the focus at DeepMind, which tells you that it's most likely the focus in the other major labs as well.

So where is all of this going to lead us in 2026? The first thing is an agentic explosion. We are going to see agents more and more everywhere, first of all in large enterprises for internal processes, but over time in 2026 it will grow into more and more complex tasks because of all the capabilities we talked about before: much longer running agents that can take on much longer tasks and work, again, for hours and potentially complete days on their own on specific tasks. We will see either no errors, or a much lower level of errors, coming out of these models, and significantly fewer hallucinations, either because the models themselves will learn to self-correct or through mechanisms such as the ones I mentioned earlier from the research that was just published. Conceptually, it means that we're going to go from a tool that can take some data in and provide some answers out, or generate images or anything else, to true team members, actual AI collaborators that will work together with humans as part of a team, inside companies and organizations, to achieve the tasks and goals of the organization.

Now, this requires a complete mindset shift, a complete technological shift, and a complete workforce strategy shift from what we know today. Because in the beginning, people will be working with one or two agents, but over time people will manage multiple agents, just like they manage employees right now. This requires teaching people how to do that. This requires developing the right infrastructure and the right guardrails for this to work effectively and safely at large scale.
We're also going to see more and more agent-to-agent interaction, initially more inside of companies, where one agent will work with other agents to achieve different tasks. This is how agentic systems are built. They're built, usually even today, with an orchestrator agent that delivers smaller tasks to very specific agents, and then it orchestrates, as the name suggests, the process between the different agents (a minimal sketch of this pattern appears a bit further down, after the vibe coding examples). But this will grow way beyond that, both in scale and outside the organization, and we're going to see more and more agent-to-agent interaction outside the company, such as agent-based commerce. So you will have an AI agent that will help you shop, and that agent will talk to the provider's agent, and they will together figure out what is the best solution, product, or service for you, and potentially, with the right permissions and safety guards, will also make the purchase and the transaction without humans being involved, or with just human approval. So it will show you the outcome of the conversation between the agents, and you can say yay or nay, or pick between option one, two, or three, or whatever it is that you requested it to do for you. The initial steps of this are happening right now. The same thing will happen in customer service. The same thing will happen with proposals. The same thing will happen in many other aspects of our businesses, where the client's agents will interact with the company's agents to get to whatever step in the process is allowed for that particular task, and that will be executed fully by agents, with humans either just approving or making a final selection between a short list of options.

Another huge trend that I anticipate in 2026 is vibe everything. What I mean by that: the phrase "vibe coding" was coined by Andrej Karpathy in the beginning of 2025 to describe writing code with AI. We went from a world in which only computer science majors and people who actually know how to write code could develop software, to an era, within one year, where literally anyone can write pretty sophisticated applications, and definitely really simple and quick applications. I'll give you three examples from the past few weeks that either I or people I'm working with have developed, that provided an immense amount of value and didn't take too much time. One of them: I am developing a new website for my software company that does agent-based invoice reconciliation and vouching, and I built the website by vibe coding it. It's a really cool, really advanced, sophisticated website, and I haven't used anybody external to do any of it. I haven't used any third-party tool like Elementor or Wix or any of those to create it. I'm literally just vibe coding the whole thing, including the front end, the backend, the infrastructure, the connectors, everything it needs in order to provide value to my clients and potential clients. Another great example: in a workshop that I delivered a few weeks ago, one of the participants vibe coded a tool that we used during the workshop to track people's ideas for the hackathon we did at the end, and to vote on the ideas they actually wanted to develop for the hackathon. That was a very helpful tool that was vibe coded in a few minutes to be used right there and then, a very quick turnaround that was very helpful as far as the results it provided.
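Going back to the orchestrator pattern mentioned above, here is a minimal, hypothetical sketch of how one agent can farm sub-tasks out to specialist agents. call_agent is a stand-in for however you actually invoke each agent, whether a model API call, an agent-to-agent request, or a node in your automation platform, and the roles and prompts are made up for illustration.

```python
# Hypothetical sketch of an orchestrator delegating sub-tasks to specialist agents.
# call_agent(role, task) stands in for a model API call or an agent-to-agent request.
def orchestrate(goal, call_agent):
    # 1. The orchestrator breaks the goal into small tasks, each tagged with a specialist role.
    plan = call_agent(
        "planner",
        "Break this goal into small tasks, one per line, each prefixed with the best "
        "role (research/draft/review) and a colon: " + goal,
    )
    results = []
    for line in plan.splitlines():
        if ":" not in line:
            continue
        role, task = line.split(":", 1)
        # 2. Each task is routed to the matching specialist agent.
        results.append(call_agent(role.strip().lower(), task.strip()))
    # 3. The orchestrator assembles the pieces into a single deliverable.
    return call_agent("editor", "Combine these into one coherent deliverable:\n" + "\n".join(results))
```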
One more vibe coding example from the past couple of weeks: this is the most wonderful time of the year in the US, as you know, which means open enrollment, which means you're gonna get a thousand different options, or X number of options, for medical insurance. And it's so hard to pick, because there are so many options and variations: all the different plans with the different co-pays, the different maximum out of pocket, the different cost of the plan itself, et cetera, et cetera. And it's just so hard to pick. I literally took all that information, dropped it into Gemini, and asked it to create a simulator that would help me evaluate the different options based on different scenarios, and in five minutes it created it. In another five minutes I knew exactly which plan works best for me and my family. These are things that were just not possible a year ago and are easy to do right now, for each and every one of you, for any need that you have.

So this is the end of 2025. But what will happen in 2026 is that these capabilities will be expanded beyond just coding. Meaning you will be able to give it information about anything and ask it to develop that thing, whether it's a marketing strategy or a complete plan, or the actual execution of your social media, or an entire sales funnel process, including the emails and everything in it, or different aspects of your company's operations, or customer service, et cetera. You'll be able to just explain to the AI what you want it to do, and it will figure out the way to do it in an effective way. So that's a big trend that I see coming.

All of this will require a complete workforce change in the way we address how we work. Meaning this goes way beyond the technology; the technology might actually be the easier part. We need to rethink the strategy of our organization: what else can we do with AI that we couldn't do before in a profitable way? How can we go after new markets? What kind of new clients can we serve? What kind of new services and products can we sell effectively that we couldn't before, because the ROI just didn't make sense, and now it does with AI? So going from a mindset of let's build small efficiencies that save us 5% here and 2% there, to a mindset of 10x my business, because AI enables it right now. This is the direction we're going to see in 2026, but it will require a complete rethinking of the organization as we know it, including very dramatic HR changes from a hiring, firing, and structure perspective, very significant investment in training, both initial and ongoing, and investment in infrastructure for the technology and the entire tech stack of the company. That is not a simple task, but we'll start seeing more and more of it happening across the board. And obviously we'll start seeing a lot more success stories from companies that are AI native, that don't have to go through this transformation because they were created with the DNA of using AI, which allows them to grow very fast, with very few people, to very significant scale, while still providing all the services to their clients.

And now let's go to the bigger picture. How is all of that going to impact the world, and not just businesses? How is it gonna impact society? How is it gonna have global impact across the board on many more things? So first of all, I anticipate that the race between China and the US will intensify even further.
I assume we'll start seeing the European Union investing more in actually developing significant, real AI and putting themselves on the map, which they are not right now, other than very specific points where they have some things to talk about. But overall, they're very far behind right now compared to China and the US.

We're going to see a growing negative impact on jobs. While we have talked about recent research showing there hasn't been a significant impact of AI on jobs as of right now, I definitely suspect, with everything we've talked about in the past year and a half or two and a half years on this podcast and in this episode, that the growing capability of AI to do more tasks that are longer and more complex means it will do more of the work that humans do. And since there's a finite amount of work that needs to be done, because there's finite consumption and finite demand for services and products, this will lead to companies having to cut people and completely change the way they work, which will drive unemployment and a lot of unease in the world, starting with the US, where AI adoption is simply faster than in most other places.

The other thing that will continue, and that has a global impact, is the growing need for resources to keep pushing the AI race forward: more and more data centers, usage of natural resources such as water, huge amounts of money poured into this instead of into other things, pollution driven by generating the power these data centers require, and so on. There was a letter sent out last week by Bernie Sanders calling to completely stop the development of new data centers in the US because of the many different negative implications they have, mostly the combination of three things: the negative impact on the environment, the job losses it's going to drive, and the fact that very few people are going to get richer while everybody else gets poorer as a result. I'm expecting that, together with the growing pushback we're already seeing to data centers being built in rural America, this will grow dramatically in 2026, and that will lead into the world of politics, which is already there. 2026 is an election year, and everybody will look for the right messages in order to win more votes, so AI will become very political. So far it is not completely clear which side of the aisle believes in which aspect. On the Bernie Sanders side it is very clear, but there are people on both sides who believe we need to push forward in order to stay competitive with China, and there are people who believe we need to stop because of the risks it produces, whatever they are, or some combination of all of them. Either way, there is gonna be a lot more politics involved in AI.

Talking about this, we have seen the recent head-bashing between the states and the federal government in the US, with the new executive order by President Trump that basically prevents, or threatens with very serious measures, both financial and legal, states that put in place AI regulation that would slow down or stop innovation in their particular states. How will that evolve? Not sure yet, but we're gonna learn a lot more about it in 2026.

We're also going to see bigger, and maybe the outcomes of, really large legal battles around AI. Some of it has to do with the rights to the training data.
Is it fair use or not? Some of it has to do with liability, meaning who is to blame when a model running in your company messes something up, or who is to blame when a self-driving car is in an accident, and so on. The issue of liability in an agent world is currently not solved by the laws we have right now, because we did not have agents. If you remember, about a year ago there was the whole craze around creating rap songs of famous rappers, in their style of music and with their voice, and they couldn't sue because there was no law that says you own your voice; there used to be no way to clone a voice, so it didn't matter, and creating a rule like that did not make sense. I'm sure we're gonna start seeing more and more laws, either at the federal level or at the local level, that define liability and different outcomes when an agent or an AI tool does something. There has also been the psychological aspect of things, where several different people committed suicide or did really bad things to themselves or to others because of whatever an AI tool recommended. So all these things will end up in court, and we'll start seeing the outcomes of these legal battles unraveling, which will give us more guidance on how the future of AI will look.

2026 will also most likely see huge IPOs and more consolidation in that space. The amount of money getting poured into the AI race right now is insane. It's nothing like anything we've seen before, and it's obviously not sustainable in the long run, just because there's not enough money to keep sustaining it, meaning we're going to start seeing more and more consolidation, and we're gonna start seeing big IPOs. There have already been lots of rumors about both OpenAI and Anthropic potentially going public in 2026, or at least getting close to that and maybe going public in 2027.

Another great example of consolidation from this past week is Nvidia basically taking over Groq, Groq with a Q, meaning the hardware company that has been developing chips that specialize in inference, doing a much better job than GPUs on the inference side of using AI. For those of you who are new to the show, there are two types of processes that need to happen for you to use your ChatGPT or Claude, et cetera. One is training the models: taking a huge amount of data and running a training run, and GPUs are still the best option on the planet for that right now, even though there are some other contenders like Trainium from Amazon and TPUs from Google. GPUs from Nvidia are still ruling by a big spread, especially with their ability to produce them at a much larger scale right now and supply the growing demand. The other aspect is when you use the AI: when you write a prompt or ask for an image and so on, there is the process of using the AI to generate whatever it needs to generate, and that is called inference. And Groq, with a Q, is one of the leading platforms out there today to run AI and deliver really fast, effective inference across many different AI tools. Now, Nvidia couldn't buy them outright because the regulator would have slowed it down, so they bought a license to the technology and the team at the same time. So we're gonna see more of these acqui-hires, or other mechanisms for larger companies to take over smaller companies without technically purchasing them, so they can get around regulators blocking the move.
But I anticipate this to go way beyond just projects and specific companies and reach the national and geopolitical level, between different companies, nations, and continents. So we talked about the US-China race and the EU, but we are already starting to see national projects. Last week, the first 24 participating organizations were announced, organizations that will combine their knowledge, power, resources, research, et cetera, across multiple agencies and companies, to use AI to promote national-level projects and research. And I anticipate international collaboration projects like this, where specific nations will not have enough money or other resources to do this on their own and will collaborate with other nations in the region, or globally, in order to achieve bigger goals. And hopefully, and this is more wishful thinking than me actually thinking it's going to happen, I really hope we're going to start seeing broad international collaboration between governments, academia, and companies to start preparing the world, society, and the global economy for the age of AGI and ASI, because it's going to have profound impacts on more or less everything we know. Nobody is ready, and no single organization on its own can either stop it or fully prepare for what's right around the corner. And again, I hope we're gonna start seeing these kinds of collaborations evolving, or at least being set up, in 2026.

One thing that I'm personally really excited about that came out of the interview with Demis Hassabis that I mentioned earlier is what he calls the scientific root node breakthroughs. Basically, what he's saying is that what they did with AlphaFold, which won him the Nobel Prize and is the ability to use AI to simulate the folding of actual proteins, every protein in nature, and understand their exact structure, which really are the basic building blocks of living material, can be applied to more or less anything else; the same concepts that were used to create AlphaFold can be used elsewhere. Or, as Demis said in the interview, and I'm paraphrasing, he hasn't seen anything in the world that is not computable yet. So they're deeply involved in advanced research on energy and fusion. They're working together with Commonwealth Fusion to hopefully accelerate reactors that will deliver fusion power, which means completely clean, endless energy that could solve all the energy problems of the world without creating any pollution. They're working with several different companies on materials science, including room-temperature superconductors, which could completely transform the world of IT as we know it today. They're working on multiple biology-related projects. Demis himself said he believes that within less than a decade we'll be able to simulate every single function of the human cell at an accurate level, meaning we can simulate and develop new treatments for any disease that exists today significantly faster than we can right now. If you haven't watched the movie about Demis's life, you should, because it will show you that he's very serious about this being his life mission: to solve all these really big problems for humanity.

So now you're probably really excited and terrified at the same time, which is the feeling I get every single day when I work with and use AI, and especially when I sit down to think about where the world of AI is going to take the world as we know it. And so, what can you do to prepare?
The first thing, which you're already doing, is teach yourself about AI, and continue teaching yourself about AI. I assume that other than my podcast, you're consuming other AI-related content, whether podcasts or YouTube videos or whatever it is you're consuming in order to teach yourself. But you also need to put yourself, and if you're in a leadership position, the people under you, into more structured training, either workshops or courses, and this could be a combination of external and internal resources to deliver that kind of training. I've done workshops for companies such as Salesforce; they obviously do a lot of internal training themselves, but they also hire people like me to deliver training that complements the stuff they're doing internally. I've done this in small and large organizations as well. Many of them do not have any internal capabilities to do this, but even those that do invite me regularly to do workshops focused on specific capabilities they want to learn, and you need to do the same thing in your organization.

If you are an individual, you have to take care of yourself if your company or organization is not doing that for you. It has been shown in multiple studies from multiple reputable sources that people with AI skills are highly sought after right now, and they can command 30 to 57% higher salaries compared to other people in the same roles without AI skills. It also dramatically reduces your chances of losing your job because of AI capabilities being implemented in your company, because you're gonna be the one that helps implement them. Does that give you long-term job security? I don't think any of us has long-term job security right now with the way AI is moving forward, but at least it gives you much more time to figure this out, from both a financial perspective and an organizational perspective.

The second thing that you can do inside your organization as an individual is start asking questions. What are we doing? What is the plan? Where are we getting training? What are the systems that we can and cannot use, and why? And so on. Ask questions and become a small AI leader inside your organization, even if you are not announced as such. I know multiple individuals who got significant promotions and very important roles in the AI transformation of their companies just by being there, by pushing forward, by sharing their skills, and then becoming the leaders of that transformation, or at least participants in the transformation in their organizations. So that's the other thing you can do as an individual.

We'll end it here. I would like to wish all of you a happy new year. It has been an amazing pleasure serving all of you. It warms my heart with every single message I get on LinkedIn from people who are listening and consuming the podcast and find value in it, and are learning from me in any way, whether through my paid courses and workshops, or through the YouTube channel, or participating in the Friday AI Hangouts, or however it is that you are learning from me. I appreciate every single one of you. I'm learning from every single one of you through the engagements that I'm having with you, and I cannot be more thankful for everything we've done together and the journey I went through with your help in 2025. And all I can say is put a helmet on and strap in, because 2026 is going to be a hell of a ride. Happy New Year everyone.