Leveraging AI

259 | “Claude Cowork” is Claude Code for All Work, Anthropic jumps to a $350 B valuation 💥 Apple swaps Siri’s brain for Gemini, hiring recession hits hard, and more AI news for the week ending on January 16, 2026

Isar Meitis Season 1 Episode 259

📢 Want to thrive in 2026?
Join Isar’s flagship AI Business Transformation course and save $100 with code LEVERAGINGAI100.
🔗 Sign up here and secure your competitive advantage for 2026.  https://multiplai.ai/ai-course/

Learn more about Advance Course (Master the Art of End-to-End AI Automation): https://multiplai.ai/advance-course/

Is your AI stack building your competitive edge — or making you obsolete?

With Claude Cowork's shockingly fast debut and Microsoft pouring half a billion into Anthropic while simultaneously being undercut by them, the AI arms race just got personal. This episode dives deep into what business leaders actually need to know about the evolving AI ecosystem — and how it’s reshaping productivity, platforms, and power plays.

If you're still thinking of AI in terms of chatbots and gimmicks, you're already behind. The new Claude tools are turning file folders into autonomous workflows and redefining what a "knowledge worker" even is.

This episode breaks down the business implications — with a healthy dose of skepticism, strategy, and snowboarding metaphors.

In this session, you’ll discover:

  • How Claude Cowork is redefining the productivity suite — and what it means for Microsoft
  • Why Anthropic’s tooling strategy is a bigger deal than their models
  • How AI agents are handling multi-step tasks — and generating revenue
  • Claude Code’s new search-based architecture and what it means for token usage, context windows, and developer workflows
  • The evolving frenemy relationship between Microsoft and Anthropic
  • How AI is encroaching on healthcare, with both OpenAI and Anthropic launching personalized tools
  • Why AI productivity gains may lead to white-collar job loss — faster than most leaders are ready for
  • The rapid rise (and market share war) between Claude, Gemini, and ChatGPT
  • Real numbers: $10B funding round, $9B ARR, $350B valuation, and what it tells us about where Anthropic is headed
  • Why enterprise adoption and AI tooling are the true battlegrounds for 2026

About Leveraging AI

If you’ve enjoyed or benefited from some of the insights of this episode, leave us a five-star review on your favorite podcast platform, and let us know what you learned, found helpful, or liked most about this show!

Speaker 2:

Hello and welcome to a Weekend News episode of the Leveraging AI Podcast, the podcast that shares practical, ethical ways to leverage AI to improve efficiency, grow your business, and advance your career. This is Isar Meitis, your host, and I just got back from an amazing snowboarding vacation in Austria. It was amazing weather, great snow, great friends. I got to meet one of the members of my community in person for the first time, which is also very exciting. People get to meet on Zoom every single week, and suddenly you get to meet in person. That's always fantastic. So thank you, Catherine, for the amazing hospitality in St. Anton.

But during this week, a lot has happened in the AI space, just like in every other week. The fact that I was on vacation doesn't slow anything down, and so we have a lot to cover today. The first thing we're going to cover is Anthropic, who has been making lots of really impressive moves recently. We shared a lot last week on the huge success of Claude Code with Opus 4.5 and how impactful that has been, so we're going to talk a lot about Anthropic at the beginning of this episode. We're going to talk about the impacts of AI on the workforce overall. We're going to talk about some traffic trends based on SimilarWeb and what happened in the AI space in 2025. And we have lots and lots of really interesting rapid-fire items, including many small updates from OpenAI and Gemini and others. So let's get started.

The biggest news of this week comes from Anthropic with the release of Claude Cowork. So what is Claude Cowork? Well, Claude Cowork is what they're describing as Claude Code for the rest of your work. Now, to be fair, you can use Claude Code for a lot of things other than writing code, and that was very obvious to many people like me, or people who are slightly more technical or more geeky, who are willing to get their hands dirty and try to use it for different things. It's an amazing agentic tool for more or less anything you want.
But because that's obvious, not just to me but to many other people, and definitely to Anthropic, they're now coming out with Claude Cowork. So what is Claude Cowork? Claude Cowork allows you to connect Claude to a folder or folders on your computer, and I'm sure later on to other sources as well, use it in collaboration with the rest of the Claude universe, and make it work for you and do more or less anything you want. As examples, they showed that you can drop multiple invoices into that folder and it will create an Excel file for you that summarizes the information in those invoices. You can drop multiple meeting notes into the folder and it can create a summary document of everything that happened. It stays within that folder or folders: as long as you give it access, it will be able to use the information in that particular location.

Now, a few things on why this is so important and powerful and worthy of being the first piece of news in this week's episode. First of all, Claude Code is an extremely powerful agent. It understands context, it can reason on its own, and it can break things into smaller tasks and execute them. Being able to do that across other aspects of work beyond code is something I mentioned several times before, and we talked about this last week when we discussed how powerful the new coding and agentic capabilities of Claude Opus 4.5 are. I said then that it was just a matter of time before Anthropic expanded that to other aspects of our work. I just didn't think it would take only one week for them to release this tool. But they have, and so they're taking proven agentic capabilities that understand context and that can take actions in a very effective way into the broader aspects of our daily work.
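To make the folder-based workflow idea concrete, here is a minimal sketch of the kind of task Cowork automates: scan a folder of invoices and produce a summary spreadsheet. This is illustrative only, not Anthropic's implementation; in Cowork the model itself does the extraction, while here a simple regex and an assumed plain-text invoice layout stand in so the example is self-contained.

```python
# Hypothetical sketch of a folder-based invoice-summary workflow.
# Assumes text invoices containing "Vendor:" and "Total:" lines.
import csv
import re
from pathlib import Path

def summarize_invoices(folder: Path, out_file: Path):
    """Read every .txt invoice in `folder` and write a CSV summary."""
    rows = []
    for invoice in sorted(folder.glob("*.txt")):
        text = invoice.read_text()
        vendor = re.search(r"Vendor:\s*(.+)", text)
        total = re.search(r"Total:\s*\$?([\d.]+)", text)
        rows.append({
            "file": invoice.name,
            "vendor": vendor.group(1).strip() if vendor else "unknown",
            "total": float(total.group(1)) if total else 0.0,
        })
    # The "Excel file" from the example, simplified here to a CSV.
    with out_file.open("w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["file", "vendor", "total"])
        writer.writeheader()
        writer.writerows(rows)
    return rows
```

The point of an agentic tool is that you don't write this script at all: you describe the outcome, and the agent plans and executes the equivalent steps against the folder you gave it access to.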
Now, the other incredible aspect of this is that according to Boris Cherny, who is the head of Claude Code, his team built Claude Cowork in approximately one and a half weeks, mostly using Claude Code to create the application for Claude Cowork. I want to take a second and connect several dots that we discussed in previous episodes as well. One is something that I've been saying for a while: the tooling of AI right now matters more than the underlying models themselves. The models are all really good, literally every single one of them. Yes, there are nuances and they're getting better at different things, but overall they're very good. And now it's about how you make it easier for people to actually get daily value out of them. This sounds like a very powerful move in the right direction that Anthropic is making right now with Claude Cowork. The other dot that I want to connect is something we discussed last week, where Boris Cherny, the same exact guy, said that he wrote most of the code he wrote in the past month (and he gave exact numbers) with Claude Code. So they're now using Claude Code, which became very powerful, to develop new tools very, very quickly. More about this in a couple of minutes.

Still about Cowork: Anthropic has addressed in their release notes some of the concerns related to security, data, prompt injections, and so on, and they stated, and I'm quoting, "We've built sophisticated defenses against prompt injection, but agent safety, that is, the task of securing Claude's real-world actions, is still an active area of development in the industry." What they're saying is that they're taking every action they can in order to make the usage of this tool safe. However, they cannot guarantee it. That's basically what this statement says. And in general, this is where we are in the agentic AI world right now.
You are getting incredibly powerful capabilities, but you are taking on greater risk when it comes to data security. The good news about this particular implementation from Anthropic is that it only has access to the specific folders you give it access to. So as long as you know that you're taking a risk on the data in that folder, that it might be deleted, changed, manipulated, or anything else, and you have the relevant backups in place, then it should be relatively safe to use, because the only thing at risk is the data in that particular folder.

Another interesting aspect of the release of Cowork comes with the evolving relationship between Anthropic and Microsoft. In several different articles this week, there have been reports that Microsoft is on track to spend more than $500 million annually on using Anthropic models and serving them to its clients. That comes in combination with the fact that Anthropic has committed to use huge amounts of compute and rent compute services from Microsoft. So a lot of that money moves in a circular way, like a lot of the recent agreements between the giants in the past few months: Microsoft is going to spend $500 million buying tokens from Anthropic, which is going to go back to Microsoft as Anthropic rents its cloud services. But when you think about what Cowork does, Cowork competes directly with Copilot on many of the things Copilot is supposed to do. If you want to make it even more extreme, Satya Nadella, the CEO of Microsoft, has many times previously referred to Copilot as a digital coworker, which is the same exact term as Cowork, the new tool from Anthropic.

I had some very interesting conversations about AI in the workforce over this snowboarding vacation with several different people that I either met on the slopes or friends of mine who came snowboarding with me.
All of them are senior in different industries, and I truly believe that OpenAI and Anthropic are going to somewhat replace the need for the office suite that we're so used to using right now. I've said that for a very long time, but the recent moves are making it clearer that this is the direction things are going. Instead of being a copilot to a Word document, eventually I will not need the Word document, because I don't need the tool Word. I need an output. The output is a well-crafted document that achieves a specific goal. If I can do that inside of ChatGPT or inside of Claude, I don't need Microsoft Word. The same thing goes for Excel. I don't need Excel as a platform. I need the ability to push in a specific type of information and get an output in a format that helps me solve problems, understand my current status, et cetera. And if I can do that with a large language model, or, I don't think that's the right name anymore, with an AI model, then I don't need Excel, and so on and so forth.

Now, do I think we're there yet? Absolutely not. I think Excel right now does a lot of things consistently and effectively, like it has been doing for the last 40 years, that the AI models cannot do consistently yet. But the focus is on the word yet. It is something that the big labs will solve, and then the need and the dependency that we have today on many of the components of these platforms will be eroded and then eventually eliminated. And I'm already seeing it today. A lot of the documents that I used to write previously, whether it's proposals, preparation for speaking gigs, or much of the data analysis that I'm doing for myself or for my clients, I'm not doing with the Microsoft or Google suites anymore. I'm doing it straight inside the AI models themselves. Does that mean that applies to every use case in every industry for every person? No. Does that define a very clear path and direction for the future?
I think so. And so this relationship between Microsoft and Anthropic is very interesting, like a lot of the other frenemy relationships that we've seen in the past, such as OpenAI and Microsoft, because on one hand they are competing with one another and on the other hand collaborating with one another. It will be very interesting to follow and see how this evolves over time. But what Anthropic is doing right now is definitely putting at risk a lot of the things that Microsoft is delivering right now as the biggest value to its customers.

Now, overall, if you look at the huge, crazy growth that Anthropic has seen this past year, a lot of it came from side projects, if you want, and not necessarily the main Claude platform. They came up with MCP at the beginning of 2025, which was a huge hit; it now reaches a hundred million monthly downloads, new MCP servers are spun up every single day, and more and more people are using it across everything that they're doing. They also created Skills, which is another breakthrough that I absolutely love and that a lot of people are using more and more. They created Claude for Chrome. They created Cowork just now, and so on. Each and every one of these was a small research project that then became a significant component that helped Anthropic grow, especially in the enterprise space. And Anthropic has noticed the same thing: its side gigs and little experiments are driving huge success and incredible revenue. And when I say incredible revenue: Claude Code on its own generated over a billion dollars in the first six months of its existence. So in order to capitalize on this concept of developing quick experiments and turning them into actual products that generate revenue and grow Anthropic, Anthropic is going through a small but significant reorg where they have established a new department called Anthropic Labs.
So Anthropic Labs is a dedicated team that is going to focus on exactly these experimental products, pushing the boundaries of what Claude AI models can do by building the right tools and components around them, very similar to what they've done before, and perfectly aligned with everything I've been saying all along: tooling is a big part of AI success in the future. In this shuffling of senior executives, Mike Krieger, who is one of the co-founders of Instagram and joined Anthropic as chief product officer less than two years ago, is stepping down from that role, and he's going to join the new Labs team together with Ben Mann, who is one of the co-founders of Anthropic and one of its main product engineers. To fill that void, Ami Vora, who joined the company just at the end of 2025, will lead the product team together with the CTO, Rahul Patil. And if you're wondering who Ami Vora is, she was the chief product officer at Faire, she was VP of product at WhatsApp, and she was a VP of product at Facebook for many, many years. So she has significant experience running large-scale product departments at the highest levels of the technology universe, and I'm sure she'll be just fine in that role.

So how exactly is the Labs team going to work? Well, I think that's exactly the point. I think they want less structure and more flexibility, and if you want the best way to summarize it, let's look at the quote from Daniela Amodei, the president of Anthropic and the sister of Dario. She said: "The speed of advancement in AI demands a different approach to how we build, how we organize, and how we focus. Labs gives us room to break the mold and explore."
Basically, they're building a team of super tinkerers who can experiment very, very fast, build new products, test them, and release them to the world, just like they just did with Claude Cowork in a week and a half. So while Anthropic has been on fire this past year, and definitely this past quarter, this is probably just going to continue, maybe even at a faster pace, because of their new focus on exploration and experimentation with small tools that can provide lots of value. So kudos to Anthropic, and let's see what the next thing they release will be.

And with that in mind, let's talk about another thing they just released this week, which I find really, really cool. For most of you, it's going to make very little difference in the near future, but in the longer term it will probably become the standard of how all these tools work, because it's just beautiful in its simplicity and the value that it provides. So let me explain what they released. They just released a tool search tool inside of Claude Code. Let me explain what the hell that is and why it is important. As we mentioned earlier, more and more MCP servers are available. What is an MCP server? An MCP server is basically a connector that allows Claude Code, or Claude, or any other AI tool to very quickly connect to multiple other environments, data sources, and so on. Basically, it lets you connect to third-party APIs without developing against the API yourself: just by adding several lines of code, you are connected. But when you make that connection, you get exposed to potentially hundreds of mini tools and calls and functions that are available through the MCP server. And if you are using several of those inside of Claude Code, that consumes the context window, because the context window will now load all the different instructions of the potentially hundreds of tools and functions available through the MCP servers you connected to your environment.
And since the context window is one of the limiting factors of how much work you can do, this has negative consequences on how much you can actually use the coding function, because a lot of the context window is being consumed by the data from the MCP servers. So that's the background. What has changed now is that they have created this new tool that does not load the MCP server data if it's going to take more than 10% of the available context window. Instead, the system switches its strategy: rather than dumping the raw documentation into the prompt, it loads a lightweight search index of the raw data, and then, when the user or the coding agent is looking for something specific, it searches for that particular capability, that particular tool, inside the list of potentially hundreds, and loads just that one. So instead of scanning a massive preloaded list of 200 commands, it queries the index, finds just the relevant tool definition, pulls only that specific tool into the context, and thus consumes a tiny fraction of the context window instead of a significant portion of it. By doing that, it solves one of the biggest problems that programmers and developers had while using MCP servers inside of code.

To put in perspective how significant this is: in internal testing done by Anthropic and shared on VentureBeat, they're saying it was able to reduce token usage in their tests from 134,000 tokens down to 5,000. That's a reduction of over 95% in the number of tokens while still achieving the same exact output, because you need just that one tool, or several of them, and it's going to call just those instead of loading the entire documentation of the MCP server. Now, in addition to making it significantly more efficient on token usage, and obviously on speed, it also increases the accuracy of using the MCP, because it uses just very specific portions of it instead of the entire thing.
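The lazy-loading idea behind the tool search tool can be sketched in a few lines. This is a toy illustration of the concept, not Anthropic's actual implementation: keep a lightweight index of all tool definitions, and pull only the matching definitions into the model's context. All names and the keyword-matching scheme here are hypothetical.

```python
# Toy sketch of the "search instead of preload" pattern for MCP tools.
# Imagine hundreds of entries, each definition costing hundreds of tokens.
TOOL_DEFINITIONS = {
    "create_invoice_summary": "Reads invoice files and writes a summary spreadsheet.",
    "send_slack_message": "Posts a message to a channel in the team workspace.",
    "query_database": "Runs a read-only SQL query against the analytics database.",
}

def build_index(definitions):
    """Tiny keyword index: tool name -> set of searchable terms."""
    return {name: set(name.split("_")) | set(doc.lower().split())
            for name, doc in definitions.items()}

def search_tools(index, query, limit=3):
    """Return the few tool names whose terms best match the query."""
    words = set(query.lower().split())
    scored = [(len(words & terms), name) for name, terms in index.items()]
    return [name for score, name in sorted(scored, reverse=True)[:limit]
            if score > 0]

def load_context(definitions, relevant):
    """Only the matched definitions enter the prompt, not all of them."""
    return {name: definitions[name] for name in relevant}

index = build_index(TOOL_DEFINITIONS)
relevant = search_tools(index, "summarize these invoice files")
context = load_context(TOOL_DEFINITIONS, relevant)
# Only the matching definition lands in context; the rest of the catalog
# stays out, which is where the token savings come from.
```

The real system presumably uses far more sophisticated retrieval than keyword overlap, but the economics are the same: the context cost scales with the tools you actually use, not with the size of the catalog you connected.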
This increased the accuracy of accessing relevant tools from 49% to 74% on Claude Opus 4. In concept, this is very similar to Claude Skills. For those of you who haven't used Claude Skills, we've talked about them multiple times on this podcast, but Claude Skills allow you to develop something similar to, let's say, a custom GPT or a Claude project. Instead of having all the instructions in context all the time, Claude can call a skill in real time as it needs it. So you can build small skills and then call them into the context window as needed. Again, a very similar concept with a slightly different implementation, but the idea is the same: better tooling that makes the work of the model faster and more efficient from a token-usage perspective. So again, kudos to Anthropic for solving a really annoying problem and enabling more work to be done in a more effective way. Now, if you're not writing code, this is less important to you right now. But as we just mentioned, they just developed Cowork, which is basically the same exact concept for everything else you're doing: creating documents, analyzing data, reading images, PDFs, and so on. So the same exact tool and the same exact capabilities will roll over to the universe we're all using and will make all work more efficient.

Now, since the Claude Code universe is becoming so powerful, and more or less a necessity for developers (we shared last week that developers from other companies, such as Microsoft and Google, have said how much they use it and how amazed they are by how good Claude Code is), Anthropic just pulled the plug on xAI's team using Claude AI models inside their IDEs, such as Cursor and others. This is not new. It is something that Anthropic has done before to OpenAI, when it cut their developers off from using Claude Code. Well, now they've done the same to xAI.
And to show you how significant that is, there has been a leaked internal email from Tony Wu, who is one of xAI's co-founders, acknowledging the ban in an email to his engineering team: "Hi team. I believe many of you have already discovered that Anthropic models are not responding on Cursor. According to Cursor, this is a new policy Anthropic is enforcing for all of its major competitors. This is both bad news and good news. We'll take a hit on productivity, but it really pushes us to develop our own coding product/models. We are at a time in which AI is now critical technology for our own productivity. This coming year is really going to be wildly exciting for all of us." Elon Musk responded to the ban on X, first of all admitting, and I'm quoting, that Anthropic "has done something special with coding," but then he moved on to talk specifically about the block and said it is not good for their karma. From both of these we understand something very, very clear: the current level of Claude Code with Opus 4.5 is beyond everything anybody else has right now. Across Anthropic's internal teams, which are developing new capabilities, as we just mentioned, and building a whole new department to build features and tools around it, using Claude Code is now a given. Now, the fact that they're pulling the plug on their competitors makes a lot of sense, because you don't want to provide tools that let your competitors chase you more effectively. And the fact that their competitors are admitting the tool is so good just tells you how good Claude Code is right now and how much better it is at this point in time than the competition.

Now, since we're speaking of some bad things that are happening to slow down xAI, another interesting point, and I'm just opening a parenthesis here; we're going to go back to talking about Anthropic in a second. The EPA just updated one of its rules this week.
The updated rule says that companies must obtain Clean Air Act permits to operate turbines and can no longer treat them as temporary, quote, "non-road engines." Why am I mentioning this in the context of xAI? Because xAI set up multiple gas-burning turbines to run its Memphis facility last year using a loophole in the EPA ruling that lets you use them temporarily, even though the rule does not define how long temporary is. Well, that is no longer an option. I'm not a hundred percent sure whether the ones they already set up get grandfathered in or not, but either way, the new expansions, Colossus 2 and 3, one of which is already being built while the other already has the land and the facility and the area where it's going to be built, will not be able to use this loophole. And based on what I shared with you last week, Elon Musk is already saying that electrical infrastructure is the limiting factor right now. They are now setting up huge solar farms, and they're also setting up huge battery farms next to them, so they can consume electricity from the grid at night, store it in those huge battery packs, and then use them during the day. Is that going to be enough to power all the data centers? I'm not sure, but it will be very interesting to see how Elon gets out of this particular pain point.

Back to Anthropic: the push to continue their dominance in the enterprise world is continuing. This week they announced they just signed a mega deal with Allianz, the Munich-based insurance giant that sells insurance all over the world, mostly in Europe. This strategic agreement includes three different components: deploying Claude Code, the coding tool, to all of Allianz's employees (and I assume this now also means Claude Cowork); developing custom AI agents capable of executing multi-step workflows with human oversight, specifically for Allianz;
And implementing a transparency system that logs all interactions to satisfy strict regulatory requirements, because they are in a highly regulated industry. Oliver Bäte, who is the CEO of Allianz, highlighted the importance of safety in the deal, stating, and I'm quoting: "Anthropic's focus on safety and transparency complements our strong dedication to customer excellence and stakeholder trust. Together we are building solutions that prioritize what matters most to our customers while setting new standards in innovation and resilience." While this is very clearly a well-wordsmithed PR statement, it includes an important aspect: A, the enterprise world really appreciates Anthropic's capabilities, and B, it really appreciates Anthropic's push towards higher security and transparency. The combination of these two things is definitely pushing them forward.

I shared with you back in December the survey from Menlo Ventures that revealed that Anthropic now holds over 40% of the enterprise AI market, growing dramatically from the 32% they held back in July and significantly higher than the 23% that OpenAI currently holds in the enterprise market. What's even more impressive is that Anthropic currently controls 54% of the market share for AI coding specifically. Now, the Allianz announcement just follows the blitz that Anthropic has been pushing in the past year, and even more in the past six months, to capture big, important names in the enterprise space, names such as Snowflake, Accenture, Deloitte, and so on, all of these just in the past few months. So it is very clear that Anthropic is pushing the right buttons to capture really big clients in the enterprise space, which is helping them do the following.
They have just announced they are finalizing a monumental $10 billion funding round that is going to value them at $350 billion. To put things in perspective, their previous valuation was $183 billion in September. So in one quarter they almost doubled their valuation. If you want to make it even more extreme, their valuation in March of 2025 was $61 billion, so this is five and a half x what their valuation was just nine months ago. Now, this $10 billion raise is independent of the $15 billion that they got committed from Microsoft and Nvidia, which is another circular deal, like the ones we discussed a little earlier, in which Anthropic commits to purchase $30 billion in cloud compute from Microsoft running on Nvidia chips. So again, they got $15 billion from Nvidia and Microsoft in order to spend double that on Nvidia and Microsoft deliverables. Overall, this has been a spectacular growth year for Anthropic, and they're definitely moving in the right direction for 2026. Based on different reports, they're going to hit about $9 billion in annual recurring revenue, with internal forecasts expecting that figure to nearly triple in the coming year.

Now, in an interesting interview about this with Daniela Amodei, again the president, co-founder, and sister of Dario, she said that their focus has always been to do more with less, or, if you want the exact quote: "Anthropic has always had a fraction of what our competitors have had in terms of compute and capital, and yet pretty consistently we've had the most powerful, most performant models." Now, while they've been focused on efficiency and on finding cool new capabilities, algorithms, and tooling, like I just mentioned in the previous segment, they have also disclosed that they have secured approximately a hundred billion dollars in compute commitments to secure their future growth and stay relevant in this crazy AI race.
In this interview, Daniela also drew a sharp distinction between what is technically possible and what is economically viable, stating, and I'm quoting: "Regardless of how good the technology is, it takes time for that to be used in a business. The real question to me is how quickly can businesses in particular leverage the technology." I've mentioned this multiple times on the show, and more and more in the past six months: the biggest bottleneck for achieving business growth with AI is currently not the AI models. Most of these models exceed most of the needs. The biggest bottleneck is people and the organization, the way it is structured and the way work gets done right now. And the question is: how quickly can you train and re-skill people to use the technology in an effective way? And how quickly can you reorganize your company, its structure, and the way work gets done in order to be best positioned to gain the most out of the technology that already exists, never mind what's going to get developed in the future?

Daniela also said that in many aspects, AI already possesses superhuman skills. She was kind of laughing about the concept of AGI, with the idea that AI is superhuman at multiple things and completely dumb at others. But in some of the areas where it's superhuman, you can already leverage it to get huge efficiencies, penetrate new markets, and sell new products and services that you couldn't previously deliver in a profitable way. That is a complete game changer for companies, if they learn how to do it. Which means, as a company, you need to focus not just on the technology, or maybe even less on which technology to use, and more on how to evaluate the opportunity AI presents for your business, for your niche, in your sector, and maybe slightly beyond, and on how to train your people and organize your company in order to capitalize on that.
And I have some news related to that. Due to some personal reasons, which I'm not going to share right now, I am forced to postpone the beginning of the AI Business Transformation course that was supposed to start this Tuesday, January 20th. Now, while I'm personally sad that I cannot start the course this week, and I apologize to those of you who signed up, don't worry, the course is starting. And for those of you who did not get the chance to sign up yet, it gives you the opportunity to still do that and join the course. The course is a complete game changer, and it has changed the trajectory of entire businesses and definitely the careers of hundreds and maybe thousands of people who have taken it in the last two and a half years. So if you want, make this your New Year's resolution: set up a solid baseline of AI knowledge in order to drive your career, and, if you are in a leadership position, your business forward with AI in 2026.

Now, a few people have approached me on LinkedIn and on other channels and asked me about the price point of a thousand dollars, or $997 if you want to be specific. My answer: come and join us at the AI Friday Hangouts. It is our weekly community AI meetup that we have every single Friday. It is completely free to join, and there are incredible people there. All of them have taken the course, and there are people there from senior positions in large enterprises all the way to really successful solopreneurs. Many of them became solopreneurs leveraging AI after taking the course; it completely changed the trajectory of their careers because of the knowledge they gained. Ask them if it was worth a thousand dollars. So if your career and your business are worth more than a thousand dollars, do not think twice.
First of all, come and join us on Friday and literally ask the people. I have no problem with you being blunt and asking people in the community what they think. But also come and join the course. It is going to be the best investment of the year for you, and if you do this right now, you are setting yourself up for a solid start in 2026 on how to leverage AI from a practical perspective across multiple aspects of the business. And because you are listeners of this podcast, you also get a hundred dollars off, so it's going to be less than $900, and you can do that with promo code LEVERAGINGAI100, all uppercase, one word. Now to our next topic: SimilarWeb just released their January 2026 global AI tracker. For those of you who don't know SimilarWeb, they've been around for many, many years tracking web traffic, and they just released their AI traffic report. You can go and check out the report, the link is in the show notes, but I'm going to share some of the key findings with you. First of all, ChatGPT's web traffic share has dropped to 64.5%, a relatively steep decline from the 86.7% that it held just one year ago. Now, they're still far ahead of number two, which is Gemini. Gemini went up from 5.7% market share last year to 25.5%, so about 4x the scale, and they're definitely growing very, very fast. In the last 12 weeks alone, visits to Gemini have surged by 49%, whereas traffic to OpenAI's platform has declined by 22%. So the vectors right now are definitely pointing in opposite directions, with Gemini growing and ChatGPT shrinking, and yet ChatGPT is still pretty far ahead with roughly 3x the traffic that Gemini has right now. The other area where Gemini is ahead is engagement, with Gemini users spending about seven minutes and 20 seconds on the platform while ChatGPT users are only spending six minutes and 32 seconds on average.
Maybe the most disappointing number there, not for me, but for Microsoft, is that Copilot only accounts for 1.1% of AI web traffic on the open market, falling behind much smaller competitors and declining from 1.5% last year. Now, while the pack has one clear leader and a very clear number two, the rest of the companies are pretty tightly bunched together, with DeepSeek at number three with 3.7%, followed by Grok, Perplexity, and then Claude, all with relatively small single-digit percentages of the web traffic. Another not-too-surprising dynamic is that specialized AI writing platforms, which were a huge success in 2024, took a big hit in 2025. Tools like Jasper fell by 16% and Writesonic dropped by 17% just in the last 12 weeks, and some smaller writing tools have dropped more or less to zero over the past year. Why is that? Because the general-purpose tools became very, very capable in ways they were not in '23 and '24, and they now provide just as good a value, especially when you have connectors into your environment, you can train the model to work the way you want it to work, and you can create projects, custom GPTs, skills, and so on. That takes away a lot of the benefits and the value that these specialized AI writing tools had before, and now they don't have any additional edge, while they definitely don't have the marketing and the brand awareness that the large models do. Now, while these are all interesting statistics, we need to remember that this is open web traffic, and it ignores the enterprise adoption that drives most of the revenue for these companies. But what it is clearly telling us is that there's still one dominant player, a clear number two, and then a bunch of companies that are significantly smaller. Despite the amount of news we are hearing about them, from a scale perspective they're not even in the same ballpark as Gemini, and definitely not ChatGPT.
The next big topic, before we switch into rapid fire items, is AI in healthcare, and specifically the involvement of the big AI labs in healthcare. OpenAI has shared that close to 200 million users globally ask ChatGPT health-related questions every single week, which makes it very, very clear that people have such a need, and they most likely want to know more about their health based on actual data versus random, generic data from the internet. That drove OpenAI to acquire healthcare records startup Torch, which allows ChatGPT to now connect to multiple data sources, including potentially patient data, but also data from different wearable devices, including Apple Health and similar solutions. Anthropic quickly launched Claude for Healthcare just a few days after ChatGPT Health was revealed, offering tools for providers, payers, and patients, including data from wearables, without using any of that data for training. Another OpenAI-related company, Merge Labs, which is competing with Neuralink by developing a brain-computer interface, just raised a $250 million seed round at an $850 million valuation, with backing from Sam Altman and some money from OpenAI itself as one of the new investors. Now, let's analyze a little bit what's going on here. First of all, healthcare is a space that is in very bad shape in the US, and it is literally begging to be disrupted with new technology that will make it significantly more efficient than it is right now, and AI is a prime contender to solve at least some of the problems in the current healthcare space. The second reason is that there is a huge amount of money in healthcare, so being able to solve some of these problems doesn't just solve a problem, it could come with a very significant financial reward. The third is really the need: because many people lack healthcare access in the US, they're looking for alternative solutions.
These AI models, where you can go in and, for free, ask questions about health and get reasonable information, are an easy path for people to take in order to ask about their health, consult about the results they got, and so on. So being able to make that more accurate by connecting into actual data sources is very attractive for these AI companies. The last component ties back to what I said several times, even on this particular episode: tooling is becoming more important than the model itself. So if you can build healthcare-related tools, connectors, and so on that make the investigation of your personal health more effective, easy, trusted, and secure, more people are going to use it. You might be able to charge more money for it one way or another, so you monetize it either through healthcare providers or drug companies, et cetera, or even just through the users themselves being willing to pay for specific healthcare information. So the bottom line: this makes perfect sense, and that's why we're going to see more and more of that in 2026. And now to our rapid fire items. We're going to start with the impact of AI on jobs, and the first topic comes from a report about the US labor market saying that we're entering what they're calling a hiring recession. Based on the report, quoted by CNBC, 2025 marked the weakest year for job creation outside of an actual recession since 2003, with employers adding only 584,000 jobs, and most of it in a single industry: 69% of all jobs created in 2025 were in healthcare. In addition, most of this growth happened in the first half of the year, so even in healthcare the growth was significantly slower in the second half, which made the second half of the year basically flat. Now, based on this report, the hiring recession will likely continue into the first half of 2026.
But they're anticipating a potential improvement for job seekers in the second half of 2026 due to tax cuts, lower interest rates, and a clearer tariffs picture. I'm not sure that's the case, because I think by then we'll see more advanced agents and AI capabilities in general, which may make the additional hiring redundant, or at least not as attractive as it would have otherwise been just based on improved market terms. Another report on a similar topic came from Goldman Sachs Research, which warns that AI is now poised to automate tasks accounting for 25% of all work hours in the US. They're stating that this could lead to a job apocalypse that results in, and I'm quoting, humans going the way of horses. Another report, quoted in The Register, came from Forrester Research, which predicts that AI could eliminate 10.4 million jobs by the end of the decade, which is 6.1% of all jobs in the US. Now, to put things in perspective, Forrester VP and Principal Analyst J.P. Gownder notes, and I'm quoting: to give you a sense of magnitude, the US lost 8.7 million jobs during the Great Recession. The numbers aren't directly comparable, since the jobs lost to AI are structural and permanent, while those lost during a recession are cyclical and macroeconomic. What he's basically saying is that in the recession, those jobs came back, and with AI, they're not going to come back. The other thing that I will add, and I said this previously, is that in many cases the kinds of jobs that are going to get lost are white-collar, high-paying jobs that have an even bigger impact on the economy. The other thing this research finds is that agentic AI is actually accelerating the process and making the prediction even grimmer. Gownder revealed that, and I'm quoting again, where our earlier forecast saw just 29% of US jobs lost to automation
Coming from gen AI, that number is now 50%, which accounts for agentic AI solutions that leverage gen AI as well. Now, the other aspect of this concerns the jobs that are not going to get lost, and they're saying that more or less every job that doesn't get lost will change dramatically. For every job lost, roughly three jobs will be completely transformed and strongly influenced or augmented by AI, nearly a fourfold increase from the previous forecast they released. Now, despite these disturbing numbers, Gownder insists that the future remains, and I'm quoting, largely human, at least for the next five years. And the problem I see with that is that five years are literally around the corner. For society to figure out how to deal with such significant unemployment, restructuring, and reskilling at a very large scale is just not something we know how to do, and society just moves a lot slower than that. And so there's a very serious sense of urgency in all the numbers from the different surveys I just shared with you. And I have zero doubt that this will become a big deal in the upcoming midterm elections, and it will become a much bigger item in the presidential elections of 2028. Now, speaking of job losses, and making it a little more specific, Meta is downsizing its Reality Labs division by a thousand positions.
This is the part of the company that has been focusing on the metaverse, which is the reason Meta changed their name from Facebook to Meta, and now they're cutting a thousand jobs in that particular department, saying that they're going to redirect the savings into developing wearable technologies, focusing more on the glasses, the partnerships, and the success they have seen over there. To put things in perspective, they have accumulated more than $70 billion in losses since establishing that department in 2021. Switching to our next topic: Apple has finally finalized the agreement, rumored for a very long time, to have Gemini power Siri. Apple will pay about $1 billion annually to Google for the AI tech to power Siri. This is obviously a massive win for Google and the success of its AI models. According to people with knowledge of the matter, the Gemini-powered Siri will have the ability to answer factual questions, tell stories, provide emotional support, and help people accomplish tasks such as booking travel. To align with Apple's strict privacy standards, the Gemini-based AI will run directly on Apple's devices without going to the cloud at all. This shows you how advanced Google is right now in minimizing the hardware requirements needed to run these models, and that's obviously a huge win for Google, from a technological perspective as well as from a financial and perception perspective, especially since Siri's initial AI relationship was with OpenAI, and I assume they assumed that Siri would use their models and not anybody else's. Now, is this the end of the Siri AI saga? I'm not exactly sure, it's been going on for a while, but at least in the short term, this is the direction Apple is going, and hopefully we'll start seeing Siri work in a way that is somewhat aligned with what we're used to from models like Gemini, ChatGPT, and so on.
The other interesting aspect of this is that the new Siri will have the context of a lot of information from the Apple device itself, meaning it will know the history of chats and images and locations and so on, and it will be able to provide a lot more personalized information and have a lot more context about who you are, what you've done, who you've communicated with, and so on, which will make it very productive, effective, and personal. Speaking of personal intelligence, Google just officially launched Personal Intelligence, a big beta release that allows Gemini to securely access and reason across your data from Gmail, Google Photos, YouTube, and Search, basically connecting a lot of the stuff that Google knows about you, without training the model on it, but allowing the model to figure out what you want and what you need based on the additional context it can get from that information. The new Personal Intelligence feature is first rolling out to Google AI Pro and AI Ultra subscribers in the US, and then, like everything else, it's going to roll out to other users, free users, and other countries around the world. According to Josh Woodward, the VP of Google Labs, the system is designed with, and I'm quoting, two core strengths: reasoning across complex sources and retrieving specific details. Basically, in the bottom line, it allows the model to solve problems and give you information that the chatbot by itself cannot, because it doesn't have the level of context that the personalized information provides. Google also said, from a privacy perspective, that connecting your apps is going to be set to off by default, and you can choose whether to do it or not. So what's the bottom line of these two pieces of news from Apple and from Google? That the future of AI is a lot more personalized than it is right now. We heard Sam Altman talk about this many times, and other leaders in the space as well.
And just like with many other use cases, Google has a serious edge going in that direction, because they have access to so much information about us, and that's even before tapping into Android devices, Android apps, Android usage, location data, and so on, and I'm sure that's coming next. Why is that important? Because AI thrives on context. The more context it has, the better the answers it can give, the better the solutions it can find, the better it can reason. And context can be given to it by you writing better prompts, or it can find the context in different ways. The very first glimpse of that we saw with memory, right? These tools started to have memory, remembering things about previous chats. But if a model can tap into memories from other sources, it will have significantly more context and will be able to provide us even better information. While we have seen stuff like this on the business side, connectors to your email and your data sets and so on, this is now slowly, or maybe not so slowly, moving into the personal space as well, whether from Google's universe or from Apple's universe, and we're going to see more and more of that. And while in the beginning we will see many people blocking it and not being interested, over time, as you see others getting immense value from it, you'll just stop caring and give Google and/or Apple access to that information. It's just like if I had told you 20 years ago that Google would know every email you receive, everything you have on your calendar, where you are at every second of the day, which applications you use, what you're buying, and so on, you would have thought I was crazy. And yet right now you're doing it without even thinking about it. I'm pretty sure the same exact thing is going to happen with the AI aspect of the same kinds of tools. Staying on Google.
Google just announced the new version of shopping in Gemini, and they're taking a different approach than ChatGPT. ChatGPT's approach is to take a percentage, basically a commission, from every checkout that happens in ChatGPT for any purchase. Google is going to take the path they know very, very well and charge cost per click: if somebody clicks through and purchases something that was recommended in Gemini, the merchant is going to pay for that click. They are experts in that. They have a gazillion data points in their history to know how to do that effectively, and that's the path they are going to take. In addition, Gemini has data on over 50 billion products, including complete descriptions, sizing, availability, where to purchase them, and price comparisons, and that is a huge advantage over OpenAI, which has to start developing, categorizing, and cataloging all of that information, which will take them a while. Some major players have already committed to using Google's checkout in Gemini, including Walmart and Shopify, which by itself is going to give it access to millions and millions of SKUs and products across multiple vendors. And in the same exact week, Microsoft announced the same thing: they're launching Copilot checkout, a new feature that allows you to check out inside of Copilot. Now, according to early data from Microsoft, consumer journeys that include Copilot are 194% more likely to end in a purchase than regular shopping journeys that don't use AI. I think that makes a lot of sense. If you're already doing the research, it means you have serious intent, and if the AI recommends something, it means you have a clear path to what you want to purchase, which will dramatically increase the chances of purchase. I don't think that's specific to Microsoft Copilot. I think that is true for any AI-assisted shopping journey.
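To make the difference between the two monetization models concrete, here is a rough back-of-the-envelope sketch in Python. All the numbers in it (click price, conversion rate, order value, commission rate) are made-up illustrative assumptions, not figures reported by Google, OpenAI, or Microsoft; the point is only to show that the two models reward very different things: CPC pays on traffic, commission pays on completed purchases.

```python
# Back-of-the-envelope comparison of the two AI shopping monetization
# models described above. All inputs are hypothetical illustration values.

def cpc_revenue(clicks, cost_per_click):
    """Google's model: the merchant pays for every AI-referred click,
    whether or not the shopper ends up buying."""
    return clicks * cost_per_click

def commission_revenue(clicks, conversion_rate, avg_order_value, commission_rate):
    """ChatGPT's model: the platform takes a cut only of checkouts
    that actually complete."""
    purchases = clicks * conversion_rate
    return purchases * avg_order_value * commission_rate

# 10,000 AI-referred clicks at $0.60 per click, versus a 3% conversion
# rate, $80 average order, and a 2% commission -- all made-up numbers.
cpc = cpc_revenue(10_000, 0.60)                         # 6000.0
cut = commission_revenue(10_000, 0.03, 80, 0.02)        # roughly 480
print(f"CPC model: ${cpc:,.0f}  Commission model: ${cut:,.0f}")
```

Under these made-up assumptions the CPC model earns far more per click, but it shifts all the conversion risk onto the merchant, which is exactly why Google's decades of click-pricing data matter here.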
And in general, I think this is just a very first step toward a very different online shopping journey than we know today. I think the next steps are going to be a lot more agentic, where we're not even going to engage with the products. We'll have an agent shopping for us, potentially consulting with us, but in many cases knowing exactly what we need and want, and it will just complete the process for us without us being involved, or with us being involved a lot less than we are right now. This is just a step that takes what we know today and makes it easier for us to get used to, but I definitely do not think this is where it's going to end up. Switching to a few interesting releases from OpenAI. OpenAI just quietly released the ChatGPT Translate tool. It is basically similar to Google Translate, and you can get to it by going to chatgpt.com/translate. You will see that it is very similar to Google Translate. It can translate between 50 different languages, and it has the ability to write the output for a specific context, which is really cool and a benefit beyond Google Translate. So you can make it more formal, more childlike, geared toward a university audience, and so on, because it understands context and it understands the target audience and what they might expect to see. There have been some new rumors about the new device that OpenAI is developing together with Jony Ive's team. Apparently the product has a code name, Sweet P. It's potentially going to be up to five different devices, based on instructions delivered to Foxconn, which might be one of the hardware manufacturers, and the five devices might be ready for launch by Q4 of 2028, which is a timeline beyond what was initially communicated. One of the products, at least based on this rumor or leaked information, looks like a tiny little pill that would go behind the person's ear, so a completely different form factor than anything we know.
There are also rumors about a home-based device, similar to Amazon's or Apple's home devices, and there are also rumors about a pen. So I guess we'll just have to wait and see, but the little pill behind the ear looks very interesting, and if that's the direction it goes, it will be something you can probably wear the entire day without even remembering it's there while enjoying the benefits of it. So if this information is actually accurate, this is a new and exciting kind of hardware that we've never seen before. Another interesting release from this week: Slack released a new overhaul of Slackbot. The new Slackbot is, again, a context beast that can connect to everything inside of Slack, understand the entire context of a specific person, and provide them information related to that context. And more importantly, it can only provide context on information that it is actually allowed to see. From a security perspective, the Slackbot will only show individuals information that they have access to through the relevant channels, meaning people will not get information from channels they do not have access to as humans. So it's supposed to cover both sides of the story: give you very quick efficiency in learning what's happening across multiple channels, while preventing you from seeing information that you're not supposed to see. Another interesting announcement from this week comes from Replit. Replit is a vibe coding tool that I use regularly and really like, and they just came out with mobile apps on Replit, which allows you to go from an idea to a mobile app actually deployed on the app store in just a few minutes. This is an expansion of everything they've done so far, now delivered into the mobile app universe as well. Another interesting development this week comes from Cursor.
Cursor released interesting research about how to scale autonomous coding agents, revealing a new way to orchestrate them that allows many more agents to work on much smaller tasks, while coordinating between workers and managers to stitch all these tasks back together. Using this, they were able to have these agents develop a brand-new web browser completely autonomously, writing one million lines of code and consuming trillions of tokens as a single project without any human intervention. The tool was running independently for over three weeks to complete this project. Again, just to give you an idea of how rapidly this technology is moving forward: we were talking about minutes, and then hours, and then days, and now we're talking about weeks. Is it fully baked? No. Is it a research project? Yes. But the fact that there's a research project out there that was able to create a reasonably working web browser on its own, writing over a million lines of code and making hundreds of thousands of edits, just shows you how quickly this technology is evolving. That's it for this week. You can find a lot more important news in our newsletter that we just can't get to, including some senior people from Thinking Machines going back to OpenAI, the fact that Elon Musk's lawsuit against OpenAI and Microsoft is going to a jury trial in April, and many other crazy valuation raises that happened this week. So go and check out our newsletter. You can sign up from the link in our show notes, and while you're there, go and check out the link to the course. I promise you it will change your knowledge of AI and can completely change the trajectory of your personal career and potentially your business, and you just got a small extension. So take advantage of it right now, and take advantage of the hundred dollars off with the LEVERAGINGAI100 promo code.
We'll be back on Tuesday with another how-to episode, and we are going back to our live Thursday interviews. So every Thursday at noon Eastern, we're going to go live on Zoom and on LinkedIn Live for you to come join us with our special guests, ask questions, and participate in the conversation. Check out the schedule in the link in the show notes as well, and if you are already on our newsletter, there are links on how to join there too. That's it. Have a great rest of your weekend, and I will see you back on Tuesday.