
Leveraging AI
Dive into the world of artificial intelligence with 'Leveraging AI,' a podcast tailored for forward-thinking business professionals. Each episode brings insightful discussions on how AI can ethically transform business practices, offering practical solutions to day-to-day business challenges.
Join our host Isar Meitis (4-time CEO) and expert guests as they turn AI's complexities into actionable insights and explore its ethical implications in the business world. Whether you are an AI novice or a seasoned professional, 'Leveraging AI' equips you with the knowledge and tools to harness AI's power responsibly and effectively. Tune in weekly for inspiring conversations and real-world applications. Subscribe now and unlock the potential of AI in your business.
195 | The AI business application battle is intensifying, Self improving AI, AI for early cancer detection, and more AI news you should know for the week ending on June 6, 2024
Is your business ready for the next wave of AI — or about to be eaten by it?
In this week’s episode of The Leveraging AI Podcast, Isar Meitis breaks down the latest tectonic shifts in the AI landscape. From OpenAI's aggressive move into enterprise applications to self-improving AI models and FDA-approved cancer detection tools, this isn’t just another week in tech — it's a glimpse into the near future of business.
AI is no longer just evolving — it's learning how to evolve itself. That means faster innovation, deeper disruption, and greater opportunity for those paying attention. So if you're leading a company, making decisions, or just trying to stay ahead — you can’t afford to miss this.
Recommendation: If you’re relying on dashboards and human analysts alone, it’s time to consider the AI layer that’s changing enterprise strategy across industries.
In this session, you’ll discover:
- Why OpenAI's enterprise push is terrifying startups — and possibly Google and Microsoft
- How Databricks and Snowflake are redefining BI with "systems of intelligence"
- What Mary Meeker's AI mega-report says about tech acceleration — and what’s not accelerating
- Which AI model is rewriting its own code (yes, you read that right)
- How AI just helped the FDA approve a tool for early breast cancer detection
- Why layoffs tied to AI aren’t slowing down — and why most leaders are still underestimating the shift
- What’s brewing at Microsoft, Meta, Apple, and Anthropic in the battle for enterprise dominance
- How new AI agents may eliminate the need for ad agencies and call centers
About Leveraging AI
- The Ultimate AI Course for Business People: https://multiplai.ai/ai-course/
- YouTube Full Episodes: https://www.youtube.com/@Multiplai_AI/
- Connect with Isar Meitis: https://www.linkedin.com/in/isarmeitis/
- Join our Live Sessions, AI Hangouts and newsletter: https://services.multiplai.ai/events
If you’ve enjoyed or benefited from some of the insights of this episode, leave us a five-star review on your favorite podcast platform, and let us know what you learned, found helpful, or liked most about this show!
Hello and welcome to a Weekend News episode of the Leveraging AI Podcast, a podcast that shares practical, ethical ways to improve efficiency, grow your business, and advance your career. This is Isar Meitis, your host, and like most weeks, we have a jam-packed episode for you. We have lots of stuff that didn't make it into this week's news, but you can find all of it in our newsletter, so make sure to check it out. There's a link in the show notes to get that. We are going to cover four main topics today. One is the new glowing-hot battleground of AI, which is applications for the enterprise. That's followed by a fascinating report by Mary Meeker, the Queen of the Internet, about the speed at which AI is moving compared to the internet revolution. We're going to continue, like in many previous weeks, with layoffs across the board and AI's impact on them, and we're gonna finalize the deep dive sections by talking about self-improving AI. And then, as always, there's a very long list of rapid-fire items, including some fascinating and really important scientific discoveries and methods with AI, like the ability to better detect cancer. So lots to talk about. So let's get started. In previous shows, we talked a lot about the fierce battle that is happening right now on the turf of writing code with AI, with all the big players fighting to grab market share: between companies who build the applications, like Cursor and Windsurf (more about them later), and platforms like ChatGPT, Claude, and DeepSeek fighting to show who has the best coding capabilities. Well, there's a new battleground that is heating up very, very fast, which is the battleground of corporate and business applications in general. It's not new, but it's heating up dramatically, and this week was a great example of that. So OpenAI just announced several different new capabilities, which are all focused on that exact topic.
So ChatGPT now started offering meeting recording and transcription, which is something that was reserved for companies like Zoom and Teams and Fathom and many others that are focused on that particular field. They also announced new connectors, connecting straight into ChatGPT and custom GPTs, for Dropbox, Box, SharePoint, OneDrive, and Google Drive, plus the ability to connect MCP servers to the ChatGPT universe as well. Those of you who don't know what MCPs are, first of all, check the episode we released on Tuesday of last week. We did a deep dive into what MCPs are, how to connect to them, and how you can benefit from them in your AI universe. But in general, they allow you to connect very, very quickly to data repositories and tools that you have in your company, such as your CRM, ERP, et cetera, as long as they have an MCP server, and bring them within minutes into your AI universe, whether chats or agents. So OpenAI is focusing very aggressively on building applications for businesses. It's not just that they're a frontier model company; they are an enterprise tool provider that is competing with a very wide range of companies and tools that are out there in the market. Combine that with the fact that they have grown by 50% in just four months to something between 700 and 800 million active users. And combine that with the fact that OpenAI now serves over 3 million enterprise customers, up from just 2 million in February. That 50% growth in enterprise customers in just four months shows you very clearly where their focus is right now. And it's a very serious risk to a huge range of application companies and startups that have focused on building these kinds of applications on top of ChatGPT in the past year and a half or two years. Now, the only question that is obviously on your mind and everybody's mind is: okay, where is my data going, and what's the control over this data?
OpenAI is claiming, and has confirmed to several different news outlets, that they will follow the organization's access control hierarchy. Meaning, if you are going to use ChatGPT in order to connect to company data, different people will have access through ChatGPT to different levels of data, depending on their access control in the organization. So if some people cannot see the salaries of all the employees, as an example, they will not be able to see that through ChatGPT, even though all the data will be connected. Exactly how they're doing that, and whether it's bulletproof or not, I think time will tell. I think we'll know very quickly, because I'm sure a lot of people will start implementing this. I can tell you from conversations with several different people who have tried this in organizations that it actually works well, and they're only seeing the stuff that they're supposed to be seeing. Again, I don't know how bulletproof this is, but in probably a few weeks we'll know whether it's working or not, and I'm sure they will patch up whatever is not working right now, because that's obviously a critical component for any company. Now, in addition to killing many smaller startups who built applications to do exactly these things, they are moving straight into the fields of giants, right? These are exactly the things that Microsoft and Google have promised to deliver: the connectivity of all the information straight into a conversation, so that we can ask any question about any project, about any process, about any tool, about anything that we want, and have the data collected from our documents, from our ERP, from our CRM, from our emails, and so on. And it seems that OpenAI is on the frontier of that, potentially even ahead of Microsoft with their own internal tools and Google with their own internal tools. Google is obviously not staying behind.
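Before moving on to Google, the access-control idea above can be made concrete with a minimal, hypothetical sketch: before any retrieved document reaches the model, results get filtered against the groups the user already belongs to in the source system. All the names here (`Document`, `User`, `permitted_results`) are my own illustration of the pattern, not OpenAI's actual implementation, which has not been published.

```python
# Hypothetical sketch: permission-aware retrieval filters results
# against the user's access groups BEFORE the model ever sees them.
from dataclasses import dataclass

@dataclass
class Document:
    title: str
    content: str
    allowed_groups: set  # groups inherited from the source system (SharePoint, Drive, ...)

@dataclass
class User:
    name: str
    groups: set

def permitted_results(user: User, results: list) -> list:
    """Keep only documents the user could already open in the source system."""
    return [d for d in results if d.allowed_groups & user.groups]

# Toy index standing in for connector search results
index = [
    Document("Q3 roadmap", "...", {"all-staff"}),
    Document("Salary bands", "...", {"hr", "execs"}),
]

engineer = User("dana", {"all-staff", "engineering"})
visible = permitted_results(engineer, index)
print([d.title for d in visible])  # the salary doc is filtered out
```

The point of the sketch is the ordering: the filter runs before generation, so the model never ingests documents the user couldn't open directly, which is what would make the "bulletproof" claim testable.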
Google introduced a lot of new capabilities at their event a couple of weeks ago, and they're now announcing that Google Drive is launching a Catch Me Up feature that allows users to click a button and get a summary, done by Gemini, of what changed in specific files in specific drives. So if you are using Google's collaborative environment, where multiple people can access the same drive and make changes to a document, and a lot of people and a lot of companies are doing that, then you can go into a specific folder, click on the Catch Me Up button that appears in the top section of the Gemini bar on the right side of your screen, and it will tell you exactly what changed in which document. I find this feature to be very useful. I don't have access to it yet, but they're rolling it out, and I've seen examples of people actually using it, and it looks like an awesome feature for, as I mentioned, anybody who's collaborating in Google Drive. They introduced a lot of other capabilities, as I mentioned a couple of weeks ago, and it's supposed to become available to everybody in the next few weeks, as long as you have a Workspace Business Standard or Plus, Enterprise Standard or Plus, or Education license from Google, or of course the Google One AI Premium subscription as well. Microsoft also made a big move with the release of agentic retrieval on Azure. Their new agentic retrieval concept, they're claiming, delivers a 40% improvement in answer relevance and accuracy compared to traditional RAG systems. The difference is, obviously, that instead of just counting on the embeddings and the data in a vector database like traditional RAG, there's an actual agent system that, and now I'm quoting, "autonomously plans and executes retrieval strategies for complex questions by breaking down user questions into focused subqueries that run in parallel across both text and vector embeddings." Basically, what that mumbo jumbo means is the following.
Instead of running one monolithic search, the system tries to understand exactly the information you're trying to find, and then it breaks the question down into multiple additional searches, which does two things. One, it allows the searches to run in parallel, which makes the process significantly faster than doing a traditional RAG search on a big pile of data. Those of you who have tried it know that it's really frustrating, because sometimes you wait five or seven minutes to get the answers, and I don't even have the amount of data that large organizations do. And the other thing is that it actually provides better results, because it understands the context and what you're trying to get, and it runs better queries across multiple sources. This is currently in initial beta testing. It's not available to everybody yet, but it's rolling out, and anybody with Azure access and data on Azure will be able to use it in the near future. But there are other big players in this field that are aggressively shifting into AI applications for enterprises. Two of the most notable ones are probably Snowflake and Databricks. Both companies are enterprise-level database and data management platforms. That's what they were until AI came out. And now both companies are all in on AI solutions for enterprises on top of the data. Just a year ago, Snowflake was not even in the AI conversation (Databricks got a lot more news), but now they're already delivering impressive AI solutions that have pushed their earnings up by 14%, with a projected $4.3 billion revenue guide for 2026, which shows you that they're very confident they're moving in the right direction. Both Snowflake and Databricks are pushing hard on what they call systems of intelligence: basically, a layer of intelligence above your data that allows you to connect the dots in a way that is very hard for humans to do, both in terms of time and in terms of efficiency and understanding which dots to actually connect.
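Going back for a second to Microsoft's agentic retrieval: the decompose-then-parallelize pattern it describes can be sketched in a few lines of Python. Note that `decompose` and `search` below are hard-coded stand-ins for the LLM planner and the hybrid text/vector index; this illustrates the pattern, not Azure's actual API.

```python
# Sketch of agentic retrieval: split a broad question into focused
# subqueries, run them concurrently, then merge the hits into one list.
from concurrent.futures import ThreadPoolExecutor

def decompose(question: str) -> list:
    """Stand-in for the LLM planning step that splits a question up."""
    # In the real system the model writes these; here they are hard-coded.
    return [
        "revenue by region last quarter",
        "revenue by region same quarter last year",
        "notable one-off events affecting revenue",
    ]

def search(subquery: str) -> list:
    """Stand-in for one focused text + vector search."""
    return [f"hit for: {subquery}"]

def agentic_retrieve(question: str) -> list:
    subqueries = decompose(question)
    with ThreadPoolExecutor() as pool:  # subqueries run concurrently
        result_lists = list(pool.map(search, subqueries))
    # Flatten the per-subquery hit lists into one merged result set
    return [hit for hits in result_lists for hit in hits]

hits = agentic_retrieve("Why did revenue dip last quarter?")
print(len(hits))  # one merged list instead of one slow monolithic search
```

The speed-up comes from the `ThreadPoolExecutor`: three focused searches run at once instead of one giant query grinding through everything, and the decomposition step is also what gives the better relevance the announcement claims.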
They're calling it four-dimensional business intelligence, and the idea is to go from static dashboards, which just show you the data, to a system that can answer four critical questions. Question number one is: what happened? Question number two, which is more important: why did it happen? Question number three: what will happen? And question number four: what action should we take? Which basically means that instead of just having data that you have to figure out yourself, you have an in-house, 24/7/365 consultant that looks at everything you're doing and can provide you guidance, across any vector, on what actions to take based on the underlying problems, issues, and opportunities in the data that you have in your business. Now, Databricks went an extra step and acquired Tabular, and that gives them access to a much wider range of data sets and a very strong foothold in the open source universe, which now means they can play in both closed source and open source data management, plus the AI layer on top of that. And we've discussed similar approaches from other giants, like AWS with their SageMaker platform, which allows quickly and effectively creating vertically integrated platforms that look into multiple data points within your data on AWS, and Salesforce's very aggressive transition to Agentforce, basically switching their business model from selling seats as a SaaS company to monetizing the actual feedback loops and actions that are happening in companies. Now, if I go back for a second to Snowflake and Databricks, we're right in the middle between their two big summits. One of them just had its summit this past week; the other one is having its summit next week, back to back, not by chance. And I'll review all the announcements from those in next week's news. But what this is telling all of us is that the battle is now not about the underlying model. That battle is still going on; we're still gonna see OpenAI and Anthropic and DeepSeek and Gemini,
all of those, come up with new models and try to one-up each other. But the real difference to our day-to-day lives is what we can actually do with it, and running our business data through it, getting insights, and making better, faster decisions is the key to success in business. Hence, we are gonna see these companies go more and more into areas that were not their core expertise before, to gain market share in this new future way of doing business. So what we're going to see is the lines blurring between a lot of companies who did very distinct things, where all of them are gonna do a lot more stuff and go into other companies' areas, trying to capture more market share in this new way of doing business, with AI as a critical layer for making better business decisions. And it will be very interesting to see who comes out on top in this new hierarchy. OpenAI is definitely creeping into the world of giants, but some of the giants are creeping into each other's fields as well. And there are obviously a lot of other smaller startups in that whole mix. So it'll be very interesting to watch this field, and we'll obviously keep you updated. And just another anecdote on how hot this field is: You.com, which is an AI search company (we interviewed their CTO and co-founder Bryan McCann back in episode 144; you should go and check that out), has shifted very aggressively from search for the masses to search for the enterprise, and they are now in the process of potentially raising $700 to $900 million at a $1.4 billion valuation, which shows you how much need there is and how much investors believe in the strength of enterprise-level search and AI data analysis. Our next topic is the report from Mary Meeker.
So, those of you who don't know Mary Meeker: she is the founder of the VC firm Bond, and she was tagged as the Queen of the Internet back in the internet boom days, when she was known for releasing really large, really detailed reports about trends in the industry. She hasn't released a report in the past few years, I think since 2019, but she just released one called Trends in Artificial Intelligence. It is a 340-page report, very, very detailed. We are obviously not going to go into all of it, because otherwise it would take three episodes just to cover it; we're gonna cover some of the key findings. The number one thing that she's hammering on is the pace at which this technology is moving and how much faster it is than anything we've seen before. So she's talking about pace and scope: the pace and scope of change related to AI technology evolution is unprecedented. She gives a few examples. ChatGPT reached a hundred million users faster than any other product. ChatGPT reached 800 million users in just 17 months. Nothing even remotely close to this has ever happened before. The number of companies that are hitting very high annual recurring revenue numbers is also unprecedented and never happened before. The pace at which competitors are matching each other's features and capabilities in this industry is also unprecedented. And she mentions that the ability of the open source universe to catch up, and in some cases even overtake some of the closed models at specific things, is also something that never happened before. The only area that is not outpacing previous technology revolutions is returns, right? These companies are pouring billions into infrastructure, into running faster, and into training models, and so far, returns are far behind the returns that we've seen in previous technological revolutions.
However, everybody believes right now that those returns are gonna come, and come at a much higher multiplier, and hence they're pouring all these billions of dollars into AI. A few other key things that she mentions: first of all, the cost of AI usage is plummeting. Inference costs for AI usage have dropped 99% over two years. So the same amount of output that you paid a dollar for two years ago, you're now paying 1 cent for. This is very dramatic. The flip side: training costs are soaring. We know that training costs are going higher and higher and higher. Current frontier models cost around $1 billion to train, which shows you how hard it is to compete in this world, where the rate people actually pay you to use the model is dramatically shrinking while the cost of putting new models out there is increasing. Which connects back to our previous point: I think we're gonna see more and more application-layer innovation and integration rather than just competition on the underlying model capabilities. She also mentions that energy efficiency is improving dramatically. She says that Nvidia's Blackwell GPUs, their latest ones from 2024, use 105,000 times less energy per token generated than their 2014 Kepler GPUs from over a decade ago. Now, while 105,000 times less energy is very promising, the problem is that demand has grown by several orders of magnitude more than the energy efficiency of the underlying infrastructure has improved. And so while, yes, this is great, I think we still have a very serious issue with energy generation to support AI, and with its impact on emissions and global warming. So if you want, this report is free. You can go and find it, drop it into NotebookLM like I did, and you can get a summary, ask questions, or get a quick podcast out of it. It's not that quick, I can tell you that.
But it's still a very good overview of what's happening in the AI industry right now, especially coming from somebody who has done similar research on previous technological revolutions. It seems that we can't have a week go by without talking about additional layoffs, so what's happening right now? Well, massive layoffs in the US this year: 275,000 jobs were cut just in March of 2025. A big part of it, 216,000, so about three quarters, was driven by the Trump administration's Department of Government Efficiency, DOGE, which was run by Elon Musk, who has since left that position and has been completely trashing the administration. But that's a whole different thing we're not going to dive into. Still, 275,000 jobs lost in one month is a lot. And we talked previously about Klarna's CEO discussing their 40% reduction in headcount in the past couple of years because of AI, and Shopify's CEO, Tobi Lütke, with his memo to employees saying that they cannot hire new employees unless they prove that AI cannot do the job. Walt Disney just announced cuts of several hundred jobs globally. Online education firm Chegg cut 248 employees, which doesn't sound like a lot, but that's 22% of their workforce. Amazon is eliminating jobs in their devices and services unit. Procter & Gamble just announced a cut of 7,000 jobs, which is 15% of its non-manufacturing workforce, happening over the next two years. Citigroup is planning to reduce 3,500 positions, specifically in China. And we talked previously about Microsoft cutting 6,000 jobs, Meta cutting 3,600 jobs, Workday slashing 1,750 employees, Salesforce reducing a thousand people from their headcount, Autodesk cutting 1,350, and so on and so forth. The list is long, the numbers are very, very big, and they're staggering. Now, to be fair, this is not just AI, right?
The layoffs align with global economic uncertainty: with Trump's tariff war, with the war between Russia and Ukraine, with uncertainties about what's going on between China and Taiwan and in China's economy. So there are a lot of other factors. It's not just AI, but AI is definitely behind a lot of it, and it's no longer a secret. As I mentioned, many CEOs are saying it out loud: they are increasing the quote-unquote efficiency of the company by making AI take more and more jobs. Now, to be fair, there's an article from this week showing that tech sector layoffs are actually slowing down in 2025 compared to 2024. The midyear numbers right now show that 137 companies have cut 62,000 jobs, which means if we stay at this pace, we're at roughly 145,000 jobs by the end of the year, compared to 152,000 in 2024 and 264,000 in 2023. So 2023 saw the most job cuts, then 2024, and now 2025 at the current pace is actually slightly slower. But the thing is, this is not an improvement, because more jobs are still being lost. It's not that jobs are now being created to offset the jobs that were lost; it's just more jobs lost at a smaller pace, which is obviously still not good news. As we shared last week, Dario Amodei, the CEO of Anthropic, finally came out and said: yes, this is happening, AI is gonna take jobs, and the leaders of the world need to address it. I haven't heard anybody addressing it yet, but I think it'll become more and more of a mainstream conversation. I've started having people ask me about it, because they've seen the news about Dario in every news outlet and want to know what's going on. And most people still don't understand the impact that AI is going to have, because they don't understand what it can do. The fact that you're listening to this podcast, and probably other podcasts, tells me that you're into AI and you're trying to learn and understand, but that's not common in society.
The number of places I go where people have just heard of ChatGPT, maybe played with it to try to answer an email, and that's it, is far greater than the number of people who are all in like me, who test stuff all the time, experiment with AI, and integrate it across everything they're doing. And so we're still very far behind on understanding what the impact is going to be, and I think it's gonna hit a lot of people as a very big surprise. Now, to add on top of that, and this is gonna be our last deep dive, we talked several times in the past about the point when AI acceleration is gonna go through the roof, and that's the point where AI can start self-improving: basically writing its own code and accelerating very, very fast, where every generation of AI can write better code to build the next, better generation of AI. Well, Sakana AI, which is a research company in the AI field, just built what they call the Darwin Gödel Machine, or DGM, and they announced on May 30th that it autonomously rewrites its own Python code, and that it was able to boost its performance by 50% on several different benchmarks between the different versions it creates on its own. Now, as the Darwin in the name suggests, this works like the Darwinian concept of evolution: the machine builds several different variations of the code, and then it tests those variations to see if any of them is better than the existing code. If one is better, it becomes the next version of the AI, all the other ones get trashed, and so on and so forth; it just keeps going. Now, because it can write code faster, deploy it faster, check for errors faster, and find mistakes faster, it can do this process way faster than humans can, across multiple aspects of the code. On one hand, it is really amazing.
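Before getting to the scary part, the evolutionary loop itself is simple enough to sketch. The toy below is my own illustration, not Sakana's code: it evolves a single numeric parameter instead of real Python source, but the propose-benchmark-keep-the-winner cycle is the same shape.

```python
# Toy sketch of the Darwin Gödel Machine idea: propose variants of the
# current "program", benchmark each one, keep anything that scores better,
# repeat. Real DGM mutates actual Python source; here a program is just
# a number and the benchmark is a dummy scoring function.
import random

def benchmark(program: float) -> float:
    """Stand-in for running the agent against a coding benchmark."""
    return -(program - 3.0) ** 2  # best possible score at program == 3.0

def propose_variants(program: float, n: int = 5) -> list:
    """Stand-in for the LLM rewriting its own code n different ways."""
    return [program + random.uniform(-0.5, 0.5) for _ in range(n)]

random.seed(0)  # deterministic run for illustration
current, current_score = 0.0, benchmark(0.0)
for generation in range(50):
    for variant in propose_variants(current):
        score = benchmark(variant)
        if score > current_score:  # survival of the fittest variant
            current, current_score = variant, score

print(round(current, 2))  # drifts toward the optimum at 3.0
```

The unsettling part is exactly what the episode describes: nothing in the loop requires a human. Once proposing, testing, and promoting are all automated, the cycle time is bounded by compute, not by anyone's review.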
On the other hand, it's really scary, because the day when this will be the common thing for most AI platforms is coming. Now, it may not come tomorrow. They're probably doing it on a much smaller scale than, let's say, GPT-4.5 or Claude 4. But the concept is there; it's now being tested and proven. And that's gonna trickle into probably all AI companies. And that, to me, is a very scary thought, because even today we're finding it hard to control AI, to keep it in a box, and to verify what it's doing. We talked last week about the deceptive behaviors, and there's more about that at the end of this episode. But combine that with the fact that it will be able to self-improve and write its next variation very quickly. This, in my little head, means trouble, because we won't be able to control what it's doing and what it's improving; the pace is gonna be faster than we can actually monitor and respond. And the fact that this is a race between companies, with many billions of dollars involved, is gonna drive it forward regardless of the potential implications. And now to the Rapid Fire News of the week. We'll start with Microsoft. A UK government trial with 20,000 civil servants using Microsoft 365 Copilot is claiming that they saved, on average, 26 minutes per day per worker. Put that together and that's roughly two weeks every year per employee. They used Copilot to draft documents, manage emails, schedule meetings, create presentations, and streamline other routine administrative activities across 12 different government organizations. 82% of the employees who participated in the trial reported that they would not want to abandon the AI tools after the trial is over, indicating that most people were very happy with the tools and the way they helped them be more productive.
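As a quick back-of-the-envelope check, 26 minutes a day really does add up to roughly two weeks a year. The working-days figure below is my assumption, since the trial write-up doesn't state one:

```python
# Sanity check on the Copilot trial math: 26 minutes saved per day,
# assuming ~230 working days a year and a 40-hour work week.
minutes_per_day = 26
working_days = 230  # assumption; the trial doesn't state this
hours_saved = minutes_per_day * working_days / 60
weeks_saved = hours_saved / 40
print(round(weeks_saved, 1))  # ~2.5 work weeks, in line with the claim
```

So the headline number checks out, which makes the follow-up question about where those minutes actually go all the more interesting.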
Now, the big question that I ask every time I see one of these surveys, and if you remember, I shared research with you about this topic a couple of weeks ago, is: what was this time used for? Meaning, those 26 minutes per employee, what were the gains and benefits? The fact that you saved time is awesome, but how that time was used to make the organization more effective, more productive, and so on is not mentioned in this research. As I told you in the research that I shared with you two weeks ago, there is very strong evidence that in the current structure of things, this saved time goes to waste, because it's not aggregated into meaningful chunks. If you're saving two minutes here, five minutes there, 20 minutes here, seven minutes there, that's not time you're gonna put together into starting a new task. It might be time you use to go to the bathroom, grab another coffee, or chat with your coworkers, which are all important things, but they don't necessarily drive efficiency for the organization. So what I think will happen is that we'll have to figure out new ways and new processes for working with AI so that the saved time is more effectively aggregated, so we can actually benefit from it. Staying on Microsoft: if you are a video creator, you're gonna love this piece of news. OpenAI's Sora is now available on Bing for free. Now, it's limited: you can only generate five-second videos, and currently only 9:16 vertical videos for TikTok, YouTube Shorts, et cetera, which is the portrait format, with support for the 16:9 landscape format coming soon. As a free user of Bing, you get 10 fast generations for free, and then additional videos cost you 100 Microsoft Rewards points, or they will take hours to render because you will sit in the queue. So there's an incentive to engage in the Microsoft ecosystem to earn the rewards points and be able to generate more videos fast.
Now, these videos carry the C2PA digital watermark that identifies them as AI-generated, which I think is a great step in the right direction. The problem is that right now there are several different standards and not everybody is following them, so it's still very confusing and still not very helpful for knowing what's real and what's not. Now, this obviously doesn't happen in a vacuum. Google released Veo 3 a couple of weeks ago. Veo 3 is mind-blowing and out of this world, and if you haven't seen Veo 3 videos, go and check anywhere on the internet: just search for Veo 3 videos on Google or in any AI tool, and you'll see hundreds of incredible videos that will blow your mind. And so Microsoft didn't wanna stay behind, and they're releasing Sora for free, so you can generate AI videos in the Microsoft universe as well. But while they're moving forward and releasing a lot of amazing stuff with AI, and if you don't know what I'm talking about, go back to the news episode from two weeks ago after their big event, not everything is going great. While Microsoft is making very big announcements and releasing very impressive AI capabilities, there's still a lot of turmoil internally at Microsoft, and they're doing another restructuring of who's gonna manage what. This is the third major reorg of the Microsoft AI team since the beginning of 2024. If you remember, last March Microsoft acqui-hired Inflection's co-founder, Mustafa Suleyman, and most of the Inflection team to create a new Microsoft AI organization. Last October, they hired former Meta head of engineering Jay Parikh to be the AI apps czar, and now there's a whole new change going on as well. The CEO of LinkedIn is taking charge of the team that will build email and productivity apps.
Ryan Roslansky, the professional networking site's chief executive officer since 2020, will tack on responsibility for the teams behind Outlook, Word, Excel, and the rest of the Office bundle. He'll report to Rajesh Jha, a top engineering executive whose organization includes Windows and business software, and Roslansky, as the CEO of LinkedIn, will still report directly to Satya Nadella. So he's gonna be wearing two hats, reporting to two different people. The Dynamics 365 corporate vice president, Charles Lamanna, will also join Jha's organization. So, lots of big shifts in very senior leadership and reporting lines, all tied to how things are developed within Microsoft to connect their Office suite and everything else they're doing to AI. It's never a good sign when you do a lot of reshuffles, especially when these new reshuffles don't mention anything about Mustafa Suleyman and his role in this whole thing. What exactly does he oversee right now? Is it just the development of new models for Microsoft? Unclear. But I will update you as the dust settles. So on one hand, Microsoft is delivering amazing new capabilities and vision, and on the other hand it's not very clear what's happening there internally right now. And from Microsoft to Anthropic. We'll start with some not-very-good news for Anthropic. Reddit just filed a lawsuit against Anthropic for breaching its contract and using user data without consent. The lawsuit claims that Anthropic attempted to access the platform's data more than a hundred thousand times between July 2024 and May 2025, despite the digital restrictions and the formal requests that were sent to Anthropic to stop this behavior. Now, we've had many lawsuits before, mostly from different publishers against the big model labs, but this is the first time that a big tech company is suing one of the giants for training on their data, or at least attempting to train on their data. Where will this go?
I don't know, but this is just another angle where we see this battle between the people who create and own the content and the people who believe they can use the content as they wish. If you want the exact quote from Reddit, they said: despite what its marketing material says, Anthropic does not care about Reddit's rules or users. It believes it is entitled to take whatever content it wants and use that content however it desires with impunity. Now, if you want to make the story a little more interesting, Sam Altman, the CEO of OpenAI, owns 8.7% of Reddit, making him the third largest shareholder, and he was once one of the board members of Reddit as well. OpenAI themselves have a licensing deal with Reddit, and so this might be a play by Reddit. Now, I'm not saying Sam is involved in this, but it's a very obvious play by Reddit to get another licensing deal. As I mentioned, they already have one with OpenAI, and they have a similar agreement with Google, so I believe they'll be able to twist the arm of Anthropic to do the same thing if they want access to Reddit's information. Now, the response from Anthropic was very generic: we disagree with Reddit's claims and will defend ourselves vigorously. What does that mean? I don't know, but as this evolves, I will let you know. Staying on Anthropic, they just announced on June 5th that they're launching Claude Gov, which is a specialized AI model for US defense and intelligence agencies. It is similar to, and competing with, a system that was released by OpenAI called ChatGPT Gov. In their statement, Anthropic revealed that Claude Gov has already been deployed to several different agencies at the highest level of US national security.
Though they didn't specify which agencies it was deployed to or exactly when. And unlike, obviously, the consumer version that we have access to, it has a lot fewer guardrails, and it's running within a secured environment that allows the government to use it with huge amounts of data without the risk of that data falling into the wrong hands. Connecting some of the dots for you: in November 2024, Anthropic announced a collaboration with Palantir, who has been providing multiple AI platforms to the government before, on AWS's secure cloud. So it's a tight partnership between several different companies who are now providing secure AI capabilities for the government. Now, where does that lie with the company's core values when it comes to Anthropic? They're saying it's perfectly aligned, because their current guidelines say that legally authorized usage for things like foreign intelligence analysis is allowed, while it prohibits disinformation, weapons development, censorship, and malicious cyber operations. In my eyes, this is a very slippery slope, especially once you're giving the government access to a secure environment that you may not have access to. Even though it's your technology, you don't really know what they're going to use it for. And so this is a very, very big problem, and I shared with you my thoughts before: just like the race on the civilian side, the race on the military side is gonna be as fierce, because nobody will want to be the one that doesn't have this technology and allows the other side to have it. So having AI autonomous weapon systems and so on is something that's coming. It's coming fast, and it's gonna change future wars dramatically, with AI making decisions instead of humans making decisions. And that opens a whole can of worms that you can think about on your own, or maybe we'll do a whole episode about that. But it's definitely not something that I'm happy about.
Staying on Anthropic, Anthropic just launched Claude Explains, which is a blog primarily written by Claude. There is human oversight that is looking at the outputs, making sure they're aligned, and making final changes, and the humans are in charge of the final product. But most of the blog is written by Claude itself, and it includes articles on a wide range of things, including highly complex topics; one of the articles Claude itself has written is about simplifying complex code bases with Claude. That goes to show you several different things. One, how good these tools are becoming, and I actually really like the way Claude writes blog posts and longer pieces of content, and short pieces of content as well, to be fair. It also shows you the importance of this collaboration between human writers providing oversight and AI writing the initial content, and that's the path Anthropic is following, at least right now, or at least this is what they're saying. I can tell you that as these systems get better and better, I think humans will trust them more and more. We'll verify less and less, and we're gonna get information and assume it is correct, accurate, and aligned with our needs and values, because we're just going to stop reviewing what it's doing. I don't encourage that, I just think that's human nature. But to show you the power of that and the level of content it is generating: Claude's posts are gathering 200,000-plus unique reads in their first week. I don't know many blog writers that ever got that. Now, obviously it's Anthropic, they have a big following, and obviously there's the whole thing of, I wanna read the blog post that was written by Claude to see how good it is and see if I can tell the difference. But it's still a very large number of people who have read a blog that was written not by a human. Staying on Anthropic and its relationships with other companies: Anthropic just cut Windsurf's direct access to its models.
So we talked about Windsurf several times in the past. They are one of the leading professional vibe coding platforms out there. There are serious rumors that they're gonna get acquired by OpenAI for $3 billion. And because this news apparently is materializing, Anthropic decided to cut their direct access to all their models. Windsurf's CEO went to X and basically complained about the whole situation. He said that he does not understand this move, said that they're willing to pay more to get access to the models, and that the fact that they were cut off with only five days' notice is completely unprofessional and unfair. On the other hand, Anthropic's co-founder Jared Kaplan told TechCrunch: I think it would be odd for us to be selling Claude to OpenAI, which makes perfect sense to me. And Anthropic's Steve Mnich said: we're prioritizing capacity for sustainable partnerships that allow us to effectively serve the broader developer community. Now, to be fair, you can still use Claude's models on Windsurf, just not directly from Windsurf. You have to bring your own API keys, which is not a big deal. The problem is it's a lot more expensive than running it through the Windsurf environment, and I'm sure they're going to lose clients to other similar platforms out there, such as Cursor. From Anthropic, let's switch to OpenAI, which is potentially, maybe, acquiring Windsurf. And this topic is just a quick mention: a new book was just released by investigative journalist Karen Hao, and the book is titled Empire of AI, which follows everything that happened from OpenAI's launch to it becoming a $300 billion for-profit giant. Hao began covering OpenAI in 2019 for MIT Technology Review. She has interviewed multiple people, she had access to a huge amount of documents, and she has a very unique view into everything that happened and happens in OpenAI, which is obviously a fascinating read.
I didn't get a chance to read it yet, but I'm letting you know that it exists, and if you want to get a lot more inside information on what has evolved and what has happened in OpenAI, from being a small nonprofit research lab for humanity to becoming one of the most powerful companies in the world, then I think it's a must-read. And now two specific news items from OpenAI. OpenAI just opened its Codex coding agent to ChatGPT Plus users. So if you're paying 20 bucks a month, you now have access to that. Before, it was only available in the Enterprise, Teams, and Pro tiers, so now it is also available to most of the people who are paying for ChatGPT. Codex can now connect to the internet, which is also a new thing, which allows it to find pieces of code and dependencies and instructions and API documentation and so on, on its own. It also allows it to connect to staging servers and run tests with external resources and so on. So it's becoming more and more powerful. Going back to what I said in the beginning, this is starting to look more like an IDE and compete with tools like Cursor and Windsurf, and it's definitely built, as a first step, to compete with the coding capabilities within Claude, which are very impressive. I've actually built several different things in Claude already, mostly simple stuff, dashboards and games with my kids, but the coding and execution capability in Claude is really amazing. And so it's not surprising that OpenAI is now allowing everybody access to Codex. I haven't tried Codex yet, but I am going to compare the two and give you my opinion sometime in the next few weeks.
OpenAI is also planning to launch o3-pro, which is similar to the o3 that we're all using, only it's gonna be offered only to the Pro subscribers who pay $200 a month, and it'll provide additional computing power but also enhanced reasoning capabilities beyond the basic model. Will that be the thing that's gonna drive more people to pay $200 a month? I don't know. I actually really like o3, I think it's very powerful and it's doing a lot of things, but I don't think I would pay 10x just to get a little more of that. And some good news also for the free-tier users of OpenAI. OpenAI is rolling out a lightweight memory feature for the free tier. It works very similar to the way memory works for the paying users, but it only has short-term memory instead of long-term memory, meaning it's gonna bring the context from your recent conversations into your new conversation, but it's gonna forget information that is older. I actually find the memory capability to be very helpful in ChatGPT, and I hope I can have it in some of the other platforms with the same efficiency that ChatGPT does it. So this is really good news for the free users. This is not available to people in the EU, UK, Switzerland, Norway, Iceland, and Liechtenstein due to strict AI regulations in those places. And if you are a free user, you can disable the memory and control what it's actually remembering, just like everybody else. And from OpenAI to Apple, which we'll probably talk a lot more about next week because their Worldwide Developers Conference is happening this coming week. But they just shared that they are testing a large language model with 150 billion parameters, which, per them, and I'm quoting, is approaching the quality of recent ChatGPT rollouts.
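To make that short-term versus long-term distinction concrete, here's a minimal sketch of how a "lightweight" memory could behave: recent conversation summaries are carried into a new chat, and anything older simply falls off. This is purely illustrative; the class, its methods, and the cutoff size are assumptions for the example, not OpenAI's actual implementation.

```python
from collections import deque

class LightweightMemory:
    """Illustrative short-term memory: keeps summaries from only the most
    recent conversations and forgets older ones (hypothetical sketch, not
    OpenAI's real mechanism)."""

    def __init__(self, max_recent=3):
        # deque with maxlen drops the oldest entry automatically
        self.recent = deque(maxlen=max_recent)

    def remember(self, summary):
        self.recent.append(summary)

    def context_for_new_chat(self):
        # Only the recent summaries get injected into a new conversation
        return list(self.recent)

memory = LightweightMemory(max_recent=3)
for s in ["likes concise answers", "works in marketing",
          "uses Python", "planning a trip to Japan"]:
    memory.remember(s)

# The oldest summary ("likes concise answers") has already been forgotten
print(memory.context_for_new_chat())
```

The long-term memory of the paid tiers would, by contrast, persist those summaries indefinitely instead of letting them expire.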
Now, Apple is actively testing different sizes of this model, with 3 billion, 7 billion, 33 billion, and 150 billion parameters, using an internal tool they call Playground that allows them to benchmark their models against other models, mostly ChatGPT. Now, the large 150-billion-parameter model runs on cloud computing, similar to other large language models, and is obviously outperforming the on-device 3-billion-parameter model of Apple Intelligence. We talked about this a lot on this podcast: Apple's performance in the AI field has been nothing short of embarrassing so far. They haven't released anything significant. The things they have released are very small components of the bigger picture that they've promised. A lot of people have bought new Apple devices and have upgraded their iOS platforms to get capabilities that did not show up. There's been turmoil in leadership over there, people were removed, new people were hired, and they're basically scrambling to figure things out. The new Siri that was supposed to be released last year will maybe be released in 2026, and it might get pushed to 2027. This is not looking good for Apple. And as part of these announcements, Google Gemini is expected to join ChatGPT as a Siri backend alternative for iOS 26, with talks also going on with Perplexity to potentially be a part of Siri and maybe Safari search. What does that mean? Well, as users it means you're gonna have more options, which is a good thing. It is also very clear to me that the new partnership between OpenAI and Jony Ive is a big threat to Apple with its iPhone dominance in the market, and I think they're terrified of that. So I think they're looking for partnerships that are not with OpenAI, to have some other differentiators to stay relevant. And there are also two trials going on. One is against Google's monopoly, part of which is trying to break their $20 billion a year deal with Apple to be the default search on Apple's devices.
So that might be another reason for Apple to move away from Google search on their devices to other solutions. And as I mentioned, their largest conference of the year, WWDC 2025, is starting on June 9th, and it will be very interesting to see how much AI is actually gonna be there. I actually believe there's gonna be a lot less than we've seen last year, because of the very serious backlash after last year was everything Apple Intelligence and they failed to deliver on everything they promised. So I think it's gonna be a lot more low-key when it comes to AI this year, but I will update you next week. One additional anecdote on Apple: they just dropped from third to fourth place on the Fortune 500, with UnitedHealth Group overtaking them, following Walmart and Amazon, which are still holding the first two places. And I heard a very interesting podcast this week that is asking if Apple is the next Nokia. Now, I know that sounds insane. They're one of the most loved companies in the world. They have the iPhone and the iPad and the earbuds and the Macs and so on. And yet Nokia was in the same exact situation before its collapse. They were the largest behemoth in the mobile cellular world by a very big spread, and I don't know anybody who has a Nokia phone right now. Why? Because they missed the trend of smartphones and they were not fast enough to adapt, and when they did adapt, it was too little and too late. Now, whether the same fate is gonna happen to Apple, I don't know, but it's definitely not looking good for Apple in the last couple of years. Combine that with the fact that, as we mentioned, Jony Ive now has a partnership with OpenAI to build something that may take away market share from phones.
Combine that with the fact that Meta has actually built glasses that people actually use and love, versus the really sophisticated, really advanced headset that Apple developed that very few people bought because it was $3,000 and really heavy. Combine that with the fact that there's a trial going on against Apple when it comes to allowing people to open a secondary app store that Apple doesn't control and cannot take 30% of the profits off the top of. So they're facing pressures from multiple directions, and this may lead to results that are very different than what we know from Apple right now. That being said, their stock seems to be holding pretty steady despite all these issues. But if you look at Apple in the past decade, there's been very little innovation. The only big thing that you can talk about that was like, oh my God, this is a new thing, was the headset, which was a huge failure. So no big innovations are coming from Apple, the company who maybe had innovation as its main driver, introducing new things to the world that didn't exist before. And now one of the people that helped them lead that is running against them, and they're facing a lot of other issues. So again, I'll keep on updating, but it's a very interesting point of view about Apple's current status. From Apple to Google. Google just announced a cool new feature for NotebookLM, which is public sharing. You can now share your notebooks with anybody in the world, even if they don't have a Google account. What does that mean? It means that one of the best tools in the world today to summarize information and allow asking questions about information can now become a much more collaborative environment. Teachers can create interactive study guides for students. Startups can build product hubs that multiple people can participate in and learn about.
Researchers can share findings through it, and business people can share project data and so on with other people. The tool just became more collaborative, which is great. I use NotebookLM all the time. I think it's a fantastic way to summarize information from multiple sources, be able to ask questions about it, and so on. And from Google to Perplexity, which introduced Perplexity Labs last week, which we talked about, and which is a really cool tool. I still think it's behind the more agentic tools like Genspark and Manus, which have way more advanced agentic capabilities compared to Perplexity Labs. And if you wanna know more about Genspark and Manus and tools like that, and how to run them safely, don't miss the next episode that's coming out on Tuesday. This is exactly what we're gonna dive into. But staying on Perplexity, their CEO Aravind Srinivas predicts that AI agents will redefine how we interact with the web, moving beyond answering questions to taking actions like booking rides, ordering food, and everything else that we do on the internet today. Now, Srinivas aims to disrupt current search by building AI agents that will integrate seamlessly with the apps. And as we mentioned previously, they're coming up with a new browser called Comet that will have everything integrated into it. This browser is supposed to be launched next month, so it's just around the corner. There are many other companies right now that are building quote-unquote AI-based browsers. It'll be very interesting to see how that works, but the direction is clear. We're gonna have a much more agentic approach to interacting with the web. We're gonna see less and less human traffic to websites as time goes by, and companies who rely on human traffic going to their website, whether for e-commerce or for information, need to understand that this is going to change.
It's not gonna happen overnight, but over the next five years there's gonna be a very clear increase in agent traffic to data, not necessarily websites, and a very significant decrease in human traffic to these websites. And that means that companies have to start adjusting to that and figuring out what that means for data structure, architecture, and other aspects of the way they're engaging with their customers. Now, just like the potential iOS collaboration that I mentioned before, Perplexity secured a deal to pre-install its AI on Motorola's new Razr phones, and Srinivas credits the Google antitrust scrutiny for loosening Google's grip on OEMs, who are now allowed to install Perplexity's search and other capabilities on the phones instead of, or in parallel to, Google's offerings. And from Perplexity to Meta. Meta just announced in their shareholder meeting that they're aiming to automate the entire creation of ads by the end of 2026. We shared that with you a couple of weeks ago, but now we got more details on what the plan is, and the plan is very extreme. I'm quoting Zuckerberg from the meeting: in the not too distant future, we want to get to a world where any business will be able to just tell us what objective they're trying to achieve, like selling something or getting a new customer, how much they're willing to pay for each result, and connect their bank account, and then we just do the rest for them. What does that mean? It means you don't need any kind of agency, because it will create the ads. It will create the copy, it will create the text, it will create the images, it will create the video. It will create everything that it needs, and it will optimize automatically down to the individual level, personalizing to a specific person based on their interests, the hours they're active, how they consume, what they clicked in the past, and so on.
This is stuff that no agency and no human can do right now, and it will optimize for all these things. There's obviously a question of whether you're willing to allow your creative to be controlled by a machine, but if that machine knows how to achieve the results better than you can with your creative, then maybe there's no problem. Well, there are a few problems. Problem number one: you lack control over the messaging that is gonna be presented on behalf of your company, which might be a problem, especially if you have a brand name that you're trying to preserve. I'm sure they will have some tools to guardrail what it can and cannot say, and what brand guidelines to follow and so on, but that wasn't clear so far. The other thing is that it's gonna kill, I don't know how many, but many, many different agencies who specialize today in doing exactly that, which is doing social media advertising on behalf of clients. There have been similar announcements already by Google and TikTok saying that they're gonna follow similar concepts. So the idea of an agency that builds and distributes ads for you because they know how to do it better than you might be a thing of the past within a few years from now. Staying on Meta, they just announced that they're gonna switch most of their risk assessment from human review to AI review on Facebook, Instagram, and WhatsApp. Whether that's good or bad, I'm not a hundred percent sure; there are goods and bads to both. I'm sure AI can look at more stuff. I'm sure AI can classify things better. I'm also sure AI is gonna miss things that the humans would've caught, and so I really hope that's not gonna come back to bite us, you know where. But for now, this is the direction that Meta is taking. By the way, similar to Microsoft that we talked about before, Meta is also going through some serious restructuring in its generative AI group, splitting it into two units to address what they call, quote-unquote, internal challenges.
So in the new leadership structure, Ahmad Al-Dahle and Amir Frenkel will co-lead the new AGI Foundations team, focusing on Llama models and AI agents, and Joelle Pineau, VP of AI research, will oversee the team dedicated to AI integration in Meta's consumer apps like WhatsApp and Instagram. So one more on the front end, one more on the back end, if you want. This actually makes sense to me. If you go back to how we started this episode, we were talking about the fact that the application layer is gonna be as important, and I think in the long run more important, than the underlying models, so splitting into two teams makes perfect sense to me. And now to some very interesting announcements from smaller startups and not just from the giants. Phonely, which is a company that generates phone agents, has just achieved 99.2% conversational accuracy, surpassing the previous king, which was OpenAI's GPT-4o with 94.7%. Now, the biggest difference was cutting response time and latency by over 70%. How did they do that? Well, I'm sure they're optimizing the models like everybody else, but they also did it through a partnership with Groq, with a Q. So Groq develops and provides the most advanced inference platform in the world today. These are chips that are not optimized for training AI like the GPUs from Nvidia, but are actually specialized in inference, meaning generating tokens. They have what they call multi-LoRA hot swapping, which enables instant model switching in real time without any latency. And they're also using Maitai's platform to optimize the performance of everything that they're doing. They were able to reduce the response time from 661 milliseconds, basically over half a second, to 176 milliseconds, which is close to human response time. That dramatically reduces the ability of humans to actually know they're talking to machines, based on their own internal research.
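As a quick sanity check on those latency figures, the reported drop from 661 milliseconds to 176 milliseconds really does work out to the "over 70%" reduction mentioned above:

```python
# Verify the latency-reduction claim from the reported numbers
before_ms = 661  # reported response time before the Groq partnership
after_ms = 176   # reported response time after

reduction = (before_ms - after_ms) / before_ms
print(f"Latency reduction: {reduction:.1%}")  # about 73.4%, i.e. "over 70%"
```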
They're seeing that about 70% of people cannot tell that they're talking to AI when they're talking to their platform. That's obviously coming from their own CEO, so I don't know how credible it is, but I can tell you for sure that these agents are getting very, very good. They're doing an incredible job in providing customer service, they're gonna get really good at doing outbound sales and inbound sales and a lot of other stuff, and that's another industry that is at complete risk of elimination, which is the industry of the call center and contact center. I think five to ten years from now, the concept of contact centers will just cease to exist. There are still gonna be people doing maybe higher-level, more relationship kinds of things, or being supervisors who assist with stuff that the AI wasn't able to solve, but that's gonna be single-digit percentages of the number of people currently employed in the call center industry. Another company that made a huge release this week is HeyGen. HeyGen is a platform that enables you to create human-like avatars for any need that you have, whether it's training, customer service, onboarding, et cetera. I use HeyGen all the time. I teach HeyGen in my courses. It's a fantastic platform. Well, they just launched AI Studio, which allows you to take the creation of the videos and the realism of the avatars to a completely different level. It includes multiple components, most of them prompt-based, so you can prompt what the video will look like. But they now have new functionality like Voice Director, which allows much better fine-tuned avatar speech, including the ability to do voice mirroring, meaning you can record your own voice and it captures the nuances of how you speak, and then you can apply those nuances to any voice you pick from their platform. They're also allowing you more gesture control over how the actual avatar moves their hands and what kind of gestures they're going to make.
And they're going to be releasing additional capabilities like prompt control over camera motion elements and prompt-based, streamlined editing, including adding B-roll, all of that with a lot of AI capability. As I mentioned before, I really like HeyGen, I really like their offering, and I'm very excited to test this out. And I promised you in the beginning that there was gonna be some additional news when it comes to the scientific benefits of AI. Well, the big one is that the FDA just approved a tool called Clairity Breast, which is a breast cancer detection platform. The Breast Cancer Research Foundation just announced the authorization of Clairity Breast to be used with actual patients. That's a big milestone for AI, because it is the first AI platform of its kind that will be used with the public. It can predict a five-year risk of breast cancer using only standard mammograms. So today, the way breast cancer risk is analyzed is based on a mammogram, with a human looking at it, combined with other risks such as family history. Well, this tool actually looks at very small, nuanced changes in mammograms, and because it has so much historical information, it can make a much better prediction of future risk. This is fantastic news for women, and it's fantastic news for cancer research, because similar things might be applied to other ways of predicting cancer. And as we all know, catching cancer in its early stages dramatically increases the chances of a successful recovery. And so I find this to be really amazing and great news. And in another breakthrough, in a field that is less relevant to most of us but still very interesting, new AI techniques now allow researchers to do better analysis of cosmological parameters, allowing them to better predict and understand how the universe works.
These new tools were used with huge amounts of data from the Sloan Digital Sky Survey's BOSS dataset, allowing researchers to identify subtle patterns that were invisible to traditional methods and helping increase the accuracy of predictions by 30%, which is a huge increase. Again, it doesn't have any daily application for us, but if you take that to the world of research and combine these last two pieces of news, it shows you that AI's ability to look at huge amounts of data, whether visual, numerical, or other, and make sense of that data is gonna allow us to drive significant new innovations and research in many fields, which is really exciting. That's it for today. We'll be back on Tuesday, as I mentioned, talking about general agents like Manus and Genspark and how to use them safely, so I highly recommend you check that out. I think it's gonna open your eyes to what's possible today, which most people do not know. A quick reminder about the course that is coming up in August: if you're looking to take our AI Business Transformation course, you should check out the link in the show notes right now. And if you're enjoying this podcast, please share it with other people. You can do this right now. When you're done, open your phone, click on the share button, and share it with four to five people who can benefit from listening to this podcast. You'll be helping drive AI literacy, and I'd also be very grateful if you do that. And until next time, have an awesome weekend.