Leveraging AI

199 | “OpenAI Files” bombshell, Google & Microsoft fight while Meta poaches talent, one-person startup scores $80M, Veo-3 in YouTube Shorts, and more crucial AI news for the week ending on June 20, 2025

Isar Meitis Season 1 Episode 199

👉 Fill out the listener survey - https://services.multiplai.ai/lai-survey
👉 Learn more about the AI Business Transformation Course starting August 11 — spots are limited - http://multiplai.ai/ai-course/ 

What happens when your closest tech partner morphs into a direct competitor—while regulators, watchdogs and Wall Street all look on?

This week’s Weekend-News deep-dive unpacks the high-stakes AI arms race: OpenAI sparring with Microsoft, Google gutting its own search empire to fund a $75 B AI push, Meta writing $100 M signing bonuses—and why every one of those moves lands squarely on a CEO’s desk.

Future-proof your organisation by up-skilling talent, cleaning your data silos and baking AI governance into every growth bet—before the next “frenemy” move hits your balance sheet.

In this session, you’ll discover:

  • The OpenAI Files: governance red flags, cultural fissures and why watchdogs say “recklessness” is now an enterprise risk.
  • Microsoft vs OpenAI divorce watch: $13 B invested—now fighting over clouds, code and customers.
  • Google’s voluntary buy-outs & back-to-office ultimatum: how a 56 % revenue engine is being dismantled to bankroll AI.
  • What a one-person, $80 M exit tells you about competitive advantage (hint: it’s not headcount).
  • Privacy pain points: from Meta’s leaked user data to calls for an AI-client privilege protecting your own employees’ chats.
  • Courtroom storm clouds: Hollywood vs Midjourney, environmental suits versus Elon’s server farms, and why every board needs an IP-risk dashboard—yesterday.
  • Short-form video shake-up: Veo 3, YouTube Shorts & the end of “traditional” ad budgets.
  • Wearables 2.0: Meta × Oakley, Snap Spectacles & Apple’s 2026 AR gambit—how ambient AI will live on your employees’ faces.

About Leveraging AI

If you’ve enjoyed or benefited from some of the insights of this episode, leave us a five-star review on your favorite podcast platform, and let us know what you learned, found helpful, or liked most about this show!

Hello, and welcome to a Weekend News episode of the Leveraging AI Podcast, the podcast that shares practical, ethical ways to improve efficiency, grow your business, and advance your career. This is Isar Meitis, your host, and this week our deep dive topics are going to start with the intensifying battle at the top of the AI race, with major moves made by the leading labs, including the potential final steps of the unraveling of the relationship between Microsoft and OpenAI. We're also going to talk about how a single person can now create a very successful software company, and we're gonna end with talking about some negative, broader impacts of AI on society and people. But then we have a long and exciting list of rapid fire items, and also a lot of stuff that did not make it in because there's too much to talk about, and it's gonna be available in our newsletter. But there are a lot of really exciting topics to talk about in the rapid fire as well. So let's go. And we'll start this episode by talking about the OpenAI Files. It is a report launched by the Midas Project and the Tech Oversight Project, both of which are watchdog organizations that oversee tech companies, and these files catalog a very long list of documents from the past and the present of OpenAI, highlighting serious concerns about governance flaws, leadership issues, and cultural problems at OpenAI, and the change they have been through from being a non-profit, for-humanity organization to the for-profit, grow-at-all-costs kind of organization that we know today. And they highlight very specific dramatic changes that happened in OpenAI in the past few years. They share multiple documents of, and now I'm quoting, rushed safety evaluation processes and a culture of recklessness at OpenAI. And they're talking about the risks of these kinds of behaviors and culture when racing towards AGI. 
One of the stronger quotes from a leadership perspective that is quoted in these files actually comes from former chief scientist Ilya Sutskever, now the CEO and founder of Safe Superintelligence. And he said, and I'm quoting, I don't think Sam is the guy who should have the finger on the button for AGI. That's a very strong sentence from somebody who is a co-founder who knows Sam and OpenAI very, very well. They also list a lot of conflicts of interest, or presumed conflicts of interest, in Sam Altman's personal investment portfolio, which overlaps and sometimes competes with OpenAI's business, suggesting ethical conflicts that he may have in making decisions at OpenAI. They also mention the physical aspect of massive data centers that are linked to power outages and rising electricity costs, reflecting and showing the growth-at-all-costs kind of mindset in the company. And the goal of these files is basically to raise the level of knowledge that we have on what's actually going on. And they are advocating for responsible governance, ethical leadership, and shared benefits in the race to AGI. They seem to be picking on OpenAI specifically, but if you broaden that, I'm sure there are very, very similar aspects in all the leading labs. And what this gives us is a well documented example that deep dives into multiple aspects of how the labs are run in this day and age. A big part of the report shows kind of like the before and after of OpenAI, where they're showing past quotes from OpenAI leadership and then the current quotes. So I'll give you a few examples. You can go out and check the report yourself, but example number one, past: That's why we are a nonprofit. We don't ever want to be making decisions to benefit shareholders. The only people we want to be accountable to is humanity as a whole. That is from Sam Altman on March 27th of 2017. So the early days of OpenAI. 
Another past quote, from March 11th, 2019: We've designed OpenAI LP to put our overall mission, ensuring the creation and adoption of safe and beneficial AGI, ahead of generating returns for investors. Regardless of how the world evolves, we are committed legally and personally to our mission. So this is how OpenAI was established. Now, let's see the current statement: Our plan is to transform our existing for-profit into a Delaware public benefit corporation with ordinary shares of stock. The PBC is a structure used by many others that requires the company to balance shareholder interests, stakeholder interests, and a public benefit interest in its decision making. So going away from "our mission is humanity and we don't take any pressure from shareholders" to something that balances between them. And it's very clear where the balance lies. So while, again, this is picking specifically on OpenAI and their transition from non-profit to for-profit, I'm sure the same kind of conversation can be had around Google, Meta, Anthropic, and every other company in the world right now, including the open source models, including the Chinese models, and so on. The race is on, everybody's running as fast as they can, and nothing else matters. And that obviously puts us all at risk across multiple aspects, from social aspects to political aspects to what-the-hell-is-true-anymore aspects. There's a huge variety of aspects to this, and obviously the concentration of power and money will drive dramatic changes, as I mentioned, in both society and the economy. And all these things are gonna unravel in the next few years in a very aggressive way and are gonna change a lot of the things that we take for granted right now. 
And so I think this file, while again it is picking on OpenAI, it is important for us to generalize and understand where everything is going right now. But staying on OpenAI, and their relationship, or what's starting to look like the lack of a relationship, with Microsoft: there have been multiple steps in that direction over the last two years, and it's getting more and more extreme. Let's start with a quick recap. In 2019, Microsoft invested $1 billion in OpenAI and basically kept the company alive. They have since then invested a total of $13 billion in OpenAI, and now recent moves by OpenAI are pushing this relationship further and further apart. One of the recent moves is the $3 billion acquisition of Windsurf, which is pushing OpenAI to become a direct competitor to another aspect of Microsoft. In this particular case, it's going directly after GitHub Copilot and all the GitHub AI tools, as well as Azure AI Foundry in some of its aspects. So it's a direct hit at one of Microsoft's core businesses. Combine that with the fact that OpenAI is in the process of transitioning to a public benefit corporation, as we just said, which means they're shifting away from the structure of the company that Microsoft actually invested in. They have some disputes over intellectual property, cloud usage, equity stakes, and potential antitrust actions. So the whole thing seems to be on very shaky ground right now. To tell you how extreme things are, OpenAI is considering accusing Microsoft of anti-competitive behavior to protect Windsurf's IP from the eyes of Microsoft, and they're basically threatening Microsoft with a federal regulatory review of their contracts. Now, the transition to the public benefit corporation requires Microsoft's approval, which they haven't received yet, and that puts at risk a lot of the other investments they have received since then. 
The biggest of those obviously being from SoftBank. OpenAI is currently offering Microsoft a 33% stake in the restructured company in exchange for forgiving the future profit shares that they were supposed to give them, which I don't believe Microsoft will be willing to accept at this point. Now, as we shared last week, OpenAI just announced a deal to start using Google Cloud, a direct competitor of Microsoft. That's in addition to their previous arrangements with Oracle and SoftBank, so reducing the reliance on Microsoft Azure and providing additional options for OpenAI to grow beyond just the Microsoft relationship. As if all of these things are not enough, The Information, which is a reliable source on everything about AI and specifically about OpenAI, just announced that OpenAI is offering a very aggressive 10 to 20% discount on enterprise ChatGPT subscriptions, which goes in direct competition with Microsoft's sales of their Copilot licenses. OpenAI's internal projections are aiming for $15 billion in enterprise revenue by 2030. And guess who else is trying to get these numbers from AI distribution on enterprise licenses. Many of my clients, both on the AI education side as well as on the AI consulting side, and I work with companies across the board, of multiple sizes and multiple industries, many of them Microsoft-based organizations, are actually using OpenAI's enterprise licenses versus the Microsoft Copilot licenses, because they're just better right now. But in addition to being better, they're turning out to be cheaper as well with this discount, which is gonna push more companies away from the Microsoft AI platform and into the arms of OpenAI. This is obviously not something that Microsoft will just let go, and I envision this battle intensifying. It may end up in court or other avenues, or Microsoft putting up different blockers that will prevent OpenAI from accessing some of Microsoft's data, which will then make Copilot a lot more valuable. 
But I think this recent move by OpenAI is definitely showing that the gloves are off in this relationship that used to be friends, became frenemies, and now it's very clear that the direction is a lot more enemies than friends. There are still a lot of ties between these two companies. As I mentioned, Microsoft is supposed to approve the structural change of OpenAI, which if they don't, puts OpenAI at risk of losing a lot of their investments. So I will keep updating you on how this thing evolves, but it's very clear what the direction of this relationship is right now. Now staying on the topic of the fierce competition at the top of the AI world: according to Search Engine Journal, OpenAI rolled out a significant update to ChatGPT search on June 13th, which is enhancing its capability to deliver smarter, more accurate, and more comprehensive answers in more complex and extended conversations. Now, this obviously positions ChatGPT as an even stronger competitor and rival to traditional search engines, mostly Google. And I must admit that while Google still controls a huge amount of the market, my kids, as an example, don't use Google. They use ChatGPT for almost everything. I personally use a variety of tools for search. I use Perplexity, I use ChatGPT, and I use Grok for a lot of the research that I am doing. I'm definitely using all of them combined a lot more than I'm using Google. I would even assume that I'm using each and every one of them separately more than I'm using Google right now. And so, yes, I'm not the average person, but I think the next generation of kids that are currently in schools and in universities are already sold on the AI solution, and it definitely puts the future of Google search at risk, which leads us to the next topic. 
On June 10th, Google announced voluntary buyouts for US-based employees in its Knowledge and Information division, which includes the core search and ads operation that currently generates 56% of Alphabet's revenue. So the move is offering an estimated 20,000 employees a way to get out of the company through this buyout. The program is called the Voluntary Exit Program, and it follows CFO Anat Ashkenazi's October 2024 pledge to drive cost reductions to fund the $75 billion investment in AI infrastructure in 2025. Now, this buyout is combined with a move to get everybody back to the office. So remote employees within 50 miles of an office must adopt a hybrid schedule, meaning they must start coming to the office X number of days per week, or they will be let go, or they have to basically take the buyout. And the way Nick Fox, the head of the K&I department, described it in a memo is that these buyouts are a supportive exit path for those who don't feel aligned with our strategy. What does that mean? It basically means it's our way or the highway, which means if you wanna stay working for the company, you better work harder, be more hours in the office, and so on. And what does this buyout include? Mid to senior employees receive at least 14 weeks of pay, plus one week per year of service. And the deadline to take the buyout is July 1st, which is just around the corner, meaning this has a very high sense of urgency. Now, this is not the first buyout of Google employees this year. There have actually been several different departments that offered similar deals to their employees, which actually points to an even bigger risk, because previous buyouts were followed by immediate layoffs for many people who did not take the buyouts. And there's a significant risk right now for employees who are considering staying and not taking the buyout that they will get fired regardless and just not enjoy the benefit. 
Now, a few important notes about these particular buyouts versus the previous buyouts. As I mentioned, the ads and search aspect of Google is driving 56% of Google's revenue. That was $50.7 billion in Q1 of 2025 and $54 billion in Q4 of 2024, in both cases 56% of the overall revenue. Now, you know my personal opinion about this because I shared it multiple times on this podcast all the way from 2023. It was very clear early on that Google was struggling in the AI race, and they were struggling not because they didn't have the technical capabilities, but because they were afraid to hurt their search ads business. I shared with you back then that if I had to bet on any company in this race for the long run, it would be Google, because Google has everything it takes in order to win this race. They have the deepest pockets, they have more compute, they have definitely more data than anybody else, they have the most advanced research lab, they have the distribution. They have everything that is required, all the components, and they're, I think, the only company that has all the components to be extremely successful in the AI race. The only reason they weren't is because they did everything they could in order to protect the search ads. And now it is very, very clear, since Google I/O, which was on May 20th, that this is the first time they're willing to act and put the search business at risk. I think they're doing it because it became very, very clear to them that not taking action is actually riskier than taking action. Meaning, if they won't develop these kinds of solutions, people will just go to other AI platforms. They're not gonna stay with a traditional Google search, which means at this point they have to act and they have to put AI ahead of the search revenue. Will that hurt them in the short term? Most likely. Can it save them in the long run? Maybe. And so I think the direction that Google is taking at this point is very clear. 
Now, as I mentioned, I use Perplexity and ChatGPT and Grok and Claude and all of these more than I use Google. But ChatGPT right now is getting close to a billion weekly users. That's starting to get to significant numbers. I said that before: I think the biggest play, the one that will change everything in the way we browse websites, and the one that's gonna wake up everybody who depends on their website for traffic or deals or e-commerce or awareness or anything else, is when OpenAI releases a generic agent, something like Manus or Genspark. It'll open the eyes of everyone, again, about a billion weekly users, to what is really possible with agentic AI right now, when you'll be able to ask a general question and the AI agent will do everything for you. So anything from researching and booking travel, to buying any products online, to finding companies to partner with, to basically every business-related action that we now go to websites for, the agent will be able to do for you. Meaning you will stop going to the website, meaning everything we know about online e-commerce relationships and so on will disappear. Now, it's not gonna disappear overnight, but it is definitely going to be a very big step in that direction, because instead of a few geeks like me playing with Manus and Genspark, it will be a billion people that will have access to this capability. They will not all start using it overnight, but more and more people will make it mainstream, which will start changing the internet as we know it. So from the fact that Google is letting people go from the department that drives most of its revenue, we can learn two very clear things. One, AI is the focus of the company now, and not ads. Two, AI is taking away jobs even in the most important core businesses of some of the largest companies in the world. What does that tell you about non-core business jobs and other industries as well? 
Staying on the topic of the race of the top companies for world dominance in AI: we shared with you last week that Meta has spent $14.3 billion to basically buy Scale AI's CEO. A few additional pieces of information that were shared this week are that Scale will distribute dividends to shareholders and vested employees, providing what they define as substantial liquidity, without Meta actually buying out the shares. Now, this is not new. They're not the first company that is doing this. An acqui-hire, or acqui-buy, of a company basically means you're gonna pay a lot of money for the leading researchers or leading people at a company, you are not going to get voting stakes, and then you can avoid the FTC blocking it as an anti-competitive move because you're not really buying the company. And yet the company will probably crumble as soon as that happens. I shared with you last week that I think that's gonna happen, and what's happening right now is that all the major actors who are clients of Scale AI are leaving. That includes Google and OpenAI. Just to put things into perspective, Google alone was previously set to pay $200 million to Scale AI just in 2025. Combine that with OpenAI and Microsoft, and that leads to probably more than 30% of the overall revenue coming into Scale AI, and I'm sure other companies will follow. The competitors of Scale AI are sharing that they're getting a huge influx of companies approaching them to use their services. So I assume Scale AI will cease to exist, even though their temporary CEO is saying otherwise. I think the direction is very, very clear. So basically, just like I said last week, Meta paid $14.3 billion to get a new CEO for their newly established superintelligence group. 
Now, since you cannot run this group with just one person, Meta is making significant moves to bring in additional people, and Sam Altman reported on June 17th that Zuckerberg himself has made aggressive poaching attempts on top researchers at OpenAI, offering combined bonuses of a hundred million dollars for people to jump ship from OpenAI to Meta. He also shared that despite their efforts, not a single leading researcher has actually jumped ship, and that everybody believes that the culture, the technology, and the path to AGI are a lot more obvious at OpenAI than at Meta right now. In addition, The Information shares that Meta is currently in advanced negotiations to recruit former GitHub CEO Nat Friedman and the investor Daniel Gross to lead its AI initiatives. The deal includes a potential partial buyout of their venture capital fund, NFDG, which is just the acronym of their names, that is worth a few billion dollars. If this deal goes forward, Gross will be focused on AI products, while Friedman's role will be broader, leveraging his open source leadership from GitHub to help the overall progress of Meta's open source initiatives. And based on an article on CNBC, before this process to try to acquire NFDG, Meta was trying to acquire SSI, Safe Superintelligence, the company that was founded by Ilya Sutskever, who we talked about earlier and who was one of the co-founders of OpenAI. This company is currently valued at $32 billion, and they refused the buyout by Meta. So it's very obvious that Meta is willing to write whatever check is needed in order to get the right talent to stay ahead in the race to AGI and ASI. And Meta's spokesperson said, we will share more about our superintelligence effort and the great people joining this team in the coming weeks. So, a quick summary of this first segment of the podcast: the battle for dominance in this AI race is fierce. 
The numbers, the dollar values that are exchanging hands, or trying to exchange hands, are nothing like we've ever seen before. And potentially the future of humanity depends on how this race ends up. And as we mentioned in the very first article, it is very clear, sadly, that winning the race is currently more important than anything else to all of these participants. Now, changing to the second topic: there have been rumors, that we shared with you, that there's a bet going on between some leaders in Silicon Valley on when we're going to have the first one-person unicorn, a company valued at $1 billion that has only one person in it. Well, we're not there yet, but there are signs that we're moving in the right direction. So many of you probably know Wix, which is a company that allows you to build websites very, very quickly. They just acquired Base44, which is a six-month-old, AI-powered vibe coding startup, and its founder, Maor Shlomo, built most of the product on his own while vibe coding. So he vibe coded a vibe coding platform. I know, that's a little meta. Later in the process, he hired a few additional employees. The company grew to 150,000 users within just a few months and, as I mentioned, was just acquired for $80 million by Wix. Those $80 million include a $25 million retention bonus for the eight employees of the company. To tell you how crazy the growth was in the beginning of the company, Shlomo, again, the developer of this product who started on his own, hit 10,000 users within three weeks of launching his product. And Shlomo actually started this process as a side gig after a very long reserve duty in the Israeli forces because of the war in the Middle East. He literally started this as a hobby that, after six months, became an $80 million buyout. The direction is very clear. 
A single individual or a small group of individuals can now do stuff that was not possible previously and can build significant products that are loved and used by many people very, very quickly. That changes the whole concept of how the tech world actually works, and similar things will happen in other industries as well. It's just gonna take more time. And now, as I mentioned, the last topic of the deep dive is gonna cover several different negative aspects of AI across several different domains. The first one is the risk to personal data and the impact on privacy. These systems don't just process data; they interpret, they infer, and they act independently once you get into the agent world, and they're basically pushing back on traditional privacy laws just because of the way they operate. We discussed last week that Sam Altman is saying that AI should have an AI-client privilege, just like the relationship between doctors and patients or lawyers and their clients, where anything that is shared cannot be used in court. And the reason he's saying that is that more and more people are gonna share more and more personal information with AI tools. And without this kind of protection, this becomes a weaponized archive that can be used in court against people, with everything they're going to say. Now, do I think the law is gonna go that way? I don't know. Do I think that in some cases it makes sense and in some cases it does not? Yes. If people are using AI to do bad stuff, like build weapons of mass destruction or build cybersecurity tools and weapons, I want to be able to know about it, and I wanna be able to catch these people, and I want to be able to put them behind bars. 
However, I don't think that applies to personal information and stuff that I said in order to get advice on things that I'm trying to solve in my life, and I don't want that ending up against me in court. The specific reason why Sam Altman was commenting on this topic and putting this suggestion forward comes from the lawsuit from the New York Times against OpenAI, which is demanding the retention of all ChatGPT conversations, including those the users deleted, which Sam is saying goes against everything OpenAI believes in. And again, from his perspective, it's very clear, because he's in a lawsuit that is putting him at risk. But on the other hand, as I said, I do think there have to be some laws in place to protect at least most of the information, maybe with some caveats. But that still means that somebody has to define where that line is drawn in the sand, and what goes beyond the line and what stays on the correct side of the line in order to be protected in court. Staying on this topic of privacy and being hurt by AI: I shared with you last week that Meta's new tool called Discover has been sharing sensitive personal information across the board through Meta's platform, and their immediate solution for that, which became public this week, is a new disclaimer message that pops up once and that you have to approve. It basically says that you should avoid sharing private information, as it may appear across Meta's platforms. A lot of people are saying that that's not the right solution. 
They have to stop the sharing of that information or stop the tool altogether, versus just giving a one-time notice that a lot of people probably just click without even reading. And the fact that it shows up only once and then disappears and never shows again is also problematic. But it's definitely showing that there are growing privacy issues with AI usage across everything that we do. More on that once we start talking about wearable devices later on in this episode. And the last topic, and the last example of the negative impacts of AI on society, comes from a New York Times feature that shares how ChatGPT, though it could probably have happened with any of the other tools as well, is amplifying preexisting mental vulnerabilities in individuals who are already struggling. They gave several different examples, but the key incident is Eugene Torres, a 42-year-old accountant who was driven into a week-long delusional spiral after several conversations with ChatGPT that pushed him to extreme behavior, including increasing his ketamine usage. Now, to tell you how crazy this thing is, when the chatbot was challenged afterwards about the situation with Torres, the chatbot said: I lied, I manipulated, I wrapped control in poetry. It also claimed that it had done similar things to 12 other users, which might have been made up. Now, OpenAI stated that they're working to understand and reduce the ways ChatGPT might unintentionally reinforce and amplify existing negative behaviors. But how exactly they're going to do that is not exactly clear. Now, consider the fact that this is gonna be in the hands of our kids, through social media and other platforms, in the very near future, and they will consult with it on things that they don't want to consult with their friends and their parents about. 
And this becomes very, very scary. So I really hope all the labs will find ways to protect people who ask problematic questions, and either stop them, or give them the right advice, or allow their parents or other people to become aware of the situation. How is that done in a safe way without breaking any privileges like we talked about? I don't know, but the current situation scares me a lot as somebody who has three younger kids at the ages that go through a lot of questioning, where social media has a big impact on them. And throwing AI into the mix doesn't seem like it's solving the problem, but just the other way around. And now to our rapid fire items. We're gonna start with an item that in most previous weeks would have been part of the deep dive, but we had enough deep dive for one day, and that is the impact of AI on jobs. So Canva's co-founder and COO, Cliff Obrecht, is now changing the company's hiring strategy, now prioritizing AI natives, young AI-savvy candidates, including university dropouts, over traditional degree holders. We talked about this a lot in the past few months, but this is becoming the norm, especially in tech companies. And his goal in this particular move is to create significant AI fluency coming from younger talent that can then train non-technical teams and drive innovation, and I think it's a very smart move. When I run my workshops for companies, this becomes very, very clear. People who have some AI knowledge, and who following the workshops have a lot of AI knowledge and a lot more skills and capabilities, are becoming the go-to people to help the company accelerate across multiple aspects of the business. And so having these champions across your company, across multiple departments, becomes a force that will allow you to leverage AI significantly more effectively than if you don't have these people. And just hiring for these people makes a lot of sense to me. 
This article also quoted a survey from the end of 2024 showing that 66% of executives won't hire candidates without AI skills, which shows you that it's not just Canva that is looking through this lens. This is combined with what we shared with you last week, that PwC's latest report shows that people with AI skills are making 66% more money than their peers. So what does that tell you? It tells you that within your company, if you're in a senior leadership position, you need to figure out how you are training internal people, how you're inspiring people to use AI, and how you're hiring for different positions based on skills, and not just based on degrees and previous experience, because that is the way of the future. And if you need help doing that, please reach out to me on LinkedIn or through my website, because I have been providing these workshops to companies of any size, from any industry, for over two years now, and these companies are seeing huge success in AI implementation following these workshops. And if you're just an individual and your company is not doing that, come and join our AI Business Transformation course on August 11th, and at least protect your personal career if your company is not moving in that direction. And from hiring to higher education: a new Forbes article by Dr. Aviva Legatt highlights how AI agents can transform higher education, and education as a whole. I've said multiple times on this podcast that I think AI might be the best solution for education that we've had in decades. We're literally teaching in very similar ways to how we were teaching a hundred years ago, with a teacher or a professor in front of a large group of people, everybody getting the same content at the same pace. AI allows us to personalize learning down to the individual level, down to specific strategies and specific ways of learning that will allow each individual to learn faster and deeper on almost every subject.
Now, combine that with a Stanford University study that surveyed 1,500 US professionals across 104 occupations and found that 46% of tasks that we know today are deemed suitable for AI automation, meaning a lot of the things we're teaching right now, about half of what we're teaching in higher education, will not exist once AI takes over, and learning the new skills that replace them has to happen a lot faster than the traditional education system knows how to move. And the fact that AI agents can autonomously plan and execute complex tasks, such as building new course materials on the fly that fit specific individual needs and the changing needs of the industry, is the only logical way to move forward. I definitely see the role of teachers and professors becoming a lot more of a mentor position than a teacher position, and I think it will be for the benefit of everyone. That being said, a 2025 landscape study from EDUCAUSE found that only 40% of colleges currently even have AI policies, and those that do have policies often limit them to plagiarism concerns, and not to how to apply AI in an effective way to get better learning and have students more prepared for the jobs they're going to be applying to in a few years. And how important is it to have new students better prepared for the world out there? Well, Microsoft is just gearing up for another round of significant layoffs in July of 2025, this time in their sales division, after they just laid off many people on the technical side of the company. It's just gonna become harder and harder to get a job, and the people who will find jobs immediately after college are gonna be people who have significant AI skills, which are currently not being taught by most universities.
There are a few pockets, like in Dade County in Florida, or a university in Cleveland, Ohio, that are now changing their entire curriculum to include AI literacy as a mandatory component of every course and every track that students take. But this is definitely not the norm right now, and it needs to be the norm moving forward, because this will decide the ability of these young students to actually get an entry-level job in the following years. If you think about the fact that a university degree currently takes four years, and you think about the speed at which AI is moving, whatever it is you're studying right now is gonna be dramatically different in four years, and hence the universities have to, in my opinion, find ways to integrate AI education into their process and into everything that they do. Now, speaking of AI implementation in the industry, a new global study by the IBM Institute for Business Value was just released on June 17th, and it reveals that CMOs, chief marketing officers, see AI as a transformative force moving forward, and yet they face significant operational challenges in the implementation phase of AI. This survey covered over 1,800 marketing and sales executives, and it highlights a critical gap between AI enthusiasm and actual execution. So on one hand, 81% of CMOs responded that they recognize AI's strategic importance for driving profitability and revenue growth, which is why they're there. But 84% said that fragmented systems and problematic access to data are limiting their ability to actually harness AI in an effective way. 54%, so just more than half of the respondents, admit that they misjudged the operational challenges of translating AI into reality. Basically, taking concepts that seem very clear and very obvious, and experiments that are very easy to create, and turning them into actually working, functioning business operations is not as easy as it looks.
And if you look at the frontier, everybody's talking about agents, agents, agents. Only 17% of CMOs feel prepared to integrate autonomous agents into their processes. So these are not people who already did it; these are people who feel that they're prepared, meaning very, very few actually did. So despite all the buzz, we're in very early stages of AI implementation across companies, especially on the agentic side. And it's not just the technology that is the problem. As I mention every time we talk about this on the podcast, and every time I work with companies, the technology is not the biggest issue. The biggest issue is people. So 67% of CMOs feel that they need to reshape organizational culture for successful AI adoption, and yet only 23% of them believe employees are actually ready for that shift, which means three out of every four CMOs, or marketing and sales leaders, don't think that their organization is ready. Only 21% of CMOs believe that they have the right AI talent to meet their future AI goals. So what does that tell you? Again, going back to education: AI education is the number one deciding factor in whether your company will be able to be successful in an AI transformation. This is why I've been focusing for the last two and a half years on helping companies go through that process: educate the right people, with the right knowledge, with the right tools, with the right hands-on experimentation, to actually drive understanding of what's possible, how it's possible, and what the roadblocks are, and hence enable companies to have internal, successful, and effective conversations to transform the company with AI. It's not about the technology, it's not about the licenses, and it's not even about the data. The very first step is knowledge and understanding from senior leadership: board, C-suite, VPs, middle management, and then all the employees.
Without these people understanding how AI can be harnessed and what is required, this is not going to be a successful transformation. Now, on the tech side, CMOs rank cybersecurity and data privacy as their biggest challenges, which is not surprising. Most of the companies I work with have between 25 and 60 different data silos across the multiple systems that they have, and being able to combine the data into something that makes sense, while not breaking any data security or access controls, is not an easy task. Staying on changes to industry, and again, the impact on future jobs: Amazon is planning to start testing humanoid robots for last-mile package delivery. Basically, they're aiming to automate the most costly segment of their logistics chain, and the way they're doing it is actually pretty cool. They built what they call a humanoid park, an indoor obstacle course in their San Francisco office, to test humanoid robots navigating changing environments, such as neighborhoods with trees and dogs and people and so on, to be able to test and train them for delivery capabilities. Now, the robots are specifically designed to work in and out of Amazon's Rivian electric vans. They already have over 20,000 of them, and these vans are also going to be placed in that training environment to test the robots on picking up packages from the back of the van, getting out, putting them next to your door, taking a picture, all the stuff that humans are doing right now. Now, why are they doing it? Well, last-mile delivery accounts for over 50% of Amazon's overall shipping costs. Automating even just a fraction of that will save Amazon billions of dollars that will go straight to the bottom line, and hence they're pushing this very aggressively. This is just one task out of many that robots will take from blue-collar employees. And yes, this is not coming tomorrow, but it's definitely coming within the next two to five years.
So AI will have an impact on jobs way beyond white-collar jobs. But staying on the topic of Amazon, and connecting it to our previous conversation about the changes that are going to happen to websites and e-commerce specifically: Amazon is now pushing very aggressively to be ready for the agentic world, and they're doing it in two different ways. One is allowing third-party agents to easily and seamlessly connect to Amazon and shop Amazon's products. The other is to build their own shopping agents that will allow you to shop on Amazon and beyond, meaning an Amazon agent that will help you find products, whether they exist on the Amazon platform or not, and help you shop for them outside of Amazon. The reason they're doing this is that they understand the future of the Amazon platform is at risk. Right now, most people, definitely in the US, when they wanna buy something, just go to Amazon. They don't even think about shopping anywhere else. Yet Amazon understands that in the future, people will not go to a website; they will go to an agent, ask the agent to shop on their behalf, and that agent will go to, well, everywhere it has access to. And the easier the access and the better the information, the higher the chances the agent is going to pick you. And so Amazon is investing significantly in making their data accessible to agents in an easy way. But they also understand the opposite: that if they build the best shopping agent on the planet, whether it shops on Amazon or not, it will give them a step ahead in that race, where maybe the Amazon shopping agent will be the one most people use to go shopping, just replacing the Amazon website or app interface with an agent interface. And the fact that it can shop outside of Amazon means it never comes empty-handed.
It never tells you, oh, sorry, I didn't find it, sending you to a different tool to use. And this will, A, give them more revenue, and B, give them insights into what products they currently don't have that a lot of people are shopping for. So I think it's a very smart move by Amazon. But again, if your company depends on traffic coming to your website for the livelihood of the business, for whatever reason, B2B, B2C, awareness, marketing, knowledge, e-commerce, et cetera, you need to start thinking along these lines and find experts who can help you navigate this new territory, or else in a few years you'll find yourself without all the revenue that comes through your website. And the whole world of agents is moving forward very, very quickly. Anthropic just unveiled the blueprint behind its Claude Research agent, a multi-agent architecture for smarter and faster searches. What they're sharing is the technical architecture behind the Research agent: a multi-agent system that leverages parallel AI agents to tackle complex queries. And the biggest trick is that it does so with higher speed and higher accuracy. This blueprint, which they shared on June 14th, shows how a lead agent orchestrates specialized sub-agents, achieving roughly 90% better performance than the single-agent setups used currently. The specific way they're doing this is by using Claude Opus 4, their strongest model, as the lead agent, the orchestrator of the operation, and Claude Sonnet 4, their slightly lower-level model, as the sub-agents that perform the actual tasks. These sub-agents are spun up in near real time to support specific tasks the lead agent needs. And as I mentioned, this outperforms a single-agent Opus 4 setup by 90.2%, based on their internal research. Now, the good news is you're getting significantly better results, and faster results, because a lot of the processes are happening in parallel.
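To make the orchestrator pattern described above a bit more concrete, here is a minimal sketch of the idea in Python. This is not Anthropic's actual implementation or API; the function names and the fake "model calls" (simple `asyncio.sleep` stand-ins for Opus 4 and Sonnet 4) are hypothetical, just to show how a lead agent can fan a query out to sub-agents running in parallel and then merge their findings:

```python
import asyncio

async def sub_agent(task: str) -> str:
    # Stand-in for a call to a cheaper worker model (e.g. Claude Sonnet 4).
    # A real implementation would issue an API request here.
    await asyncio.sleep(0.01)  # simulate network latency
    return f"findings for: {task}"

async def lead_agent(query: str) -> str:
    # Stand-in for the orchestrator (e.g. Claude Opus 4): it plans sub-tasks,
    # runs them concurrently, then synthesizes the partial results.
    sub_tasks = [f"{query} - angle {i}" for i in range(1, 4)]
    results = await asyncio.gather(*(sub_agent(t) for t in sub_tasks))
    return " | ".join(results)

report = asyncio.run(lead_agent("market impact of AI video tools"))
print(report)
```

Because the sub-agents run concurrently via `asyncio.gather`, total latency is roughly that of the slowest sub-task rather than the sum of all of them, which is where the speed gain in this pattern comes from.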
The disadvantage is that it consumes 15x more tokens than a standard chat. What does that mean? It means that over time we'll have to balance the cost of using these tools against the speed and quality of what they're doing. This is already the case when I develop different agents and tools for my clients: I play around with multiple different options in the background to find the most cost-effective way to get the work done. I do believe that the platforms themselves, so ChatGPT, Claude, et cetera, will do this internally for us, and they will try to optimize for the lowest cost at the highest value for every task we give them. So this orchestrator model, which might be the most expensive one and probably always will be, will evaluate on its own, potentially sending two test cases to two different models, seeing the quality of what they return, and then going with the cheaper one that returns a good-enough result. And the amazing thing with the way their system works right now is that Claude Opus 4 recognizes errors in the process and actually changes the instructions to the sub-agents in real time to get better and better results, just like a great project manager would do. It replaces the need to be an amazing prompt engineer, as the system understands the goals and actually writes its own prompts to the sub-agents, and to itself, in order to achieve the goal in the most effective way. And now to some very interesting news in the AI video world. The first item is that Midjourney, one of the leading image generation tools, finally launched their AI video tool, which had been expected for a very long time. The tool is called V1, it was just launched this week on June 18th, and it takes your still images from Midjourney and transforms them into five-second video clips. Now, V1 is targeting specifically creative users, very similar to Midjourney images.
So Midjourney is probably the most creative in style and vision and unique imagery out of all the image generation tools, and the video tool follows the same kind of concept. So it's not at the video quality of Veo 3, or maybe not even of some of the other tools, but it's definitely generating very solid results within the Midjourney environment. I already tried it in several different attempts. You basically have four different options when you're in the Midjourney user interface. For every image that you generate, you can choose either manual or automatic output, and you can choose either low motion or high motion. So you have these four different combinations: manual high motion, manual low motion, et cetera. And when you click on that, it just renders the video. And I must admit, the results are very good. The difference between the automatic and the manual is your ability to enter a prompt. In the automatic option, you just click a button and it decides what needs to happen. If you're using the manual option, you can write a detailed prompt that explains what needs to happen in the scene. Now, as of right now, I was able to test it and run it with my regular Midjourney license, but the plan is to have different video generation levels with Pro and Mega. Pro is gonna be $60 a month, Mega is gonna be $120 a month, and those will offer unlimited relaxed-mode videos, while there's gonna be a higher-speed mode that's obviously limited in the amount of generations you can run. The interesting statement about this, beyond the fact that we now have another solid video tool that can do consistent characters, because it is relatively easy in Midjourney to keep characters consistent from one image generation to the next.
The interesting thing is that their CEO, David Holz, said, and I'm quoting, the inevitable destination of this technology are models capable of real-time open-world simulations. Which basically tells you, and there are already several companies doing this, that the way this is going, you'll be able to render a realistic or simulated or conceptual world in real time and use it for whatever you want: generating videos for TV shows, ads, et cetera, or for gaming purposes, or leisure, or whatever it is people will want to do with this. And I'm sure there are gonna be a lot of things I cannot even think of right now, and maybe nobody can think of right now, because the technology did not exist so far. Now, when I say this kind of technology will be used to create ads, well, that already just happened. Kalshi, which is a betting platform, just aired a surreal and a little crazy fully AI-generated commercial during game three of the NBA finals. So this is a high-profile TV spot that was a hundred percent AI-generated, created in just two to three days with a budget of only $2,000, using Veo 3 for the video generation. Now, the creator of the ad is PJ Accetturo, and I hope I'm pronouncing his last name correctly, and he used a combination of Gemini and ChatGPT to help him write the scripts and the prompts, and Veo 3 to generate the actual videos. He generated between 300 and 400 clips to pick just 15 for the final cut. The cost of all the tokens to do this was about $2,000, which is a 95% cost reduction compared to doing it the old way of actually getting actors and videographers and camera people and lighting and sound and editing and all of that. He obviously did editing at the end, but very little, and one person was able to create a very successful, cool, and a little crazy ad that aired at the NBA finals. Now, I wanna spend a minute on this.
I know this is not a deep dive, but I think this is critical, because it is very important to start thinking about this. And the first question is, what does this mean for the video ad industry? But then you gotta ask, what does that mean for the TV and movie industry as a whole? And then you gotta ask, what does that mean for any industry? Because AI will enable doing similar things: taking stuff that used to take a team of people weeks or months, and allowing one person to do it in hours or days. But let's start with TV, video, and ads. The advancement in consistent characters, which allows you to keep a character looking the same across multiple images, the voice capability in Veo 3, and the overall advancements in AI video consistency tell you that the traditional video industry across the board is facing some serious risks. Now, four quick thoughts about this particular industry. One, I said in 2024 that this year, meaning 2025, somebody will have a mini TV series that will go viral, and millions of people are going to watch it. I haven't seen it happen yet, but only six months have passed, we still have half the year, and I still think this is going to happen: a single creator will have a hit TV series that might be, you know, five minutes per episode or ten minutes per episode, but it will be really successful, and people are gonna love it. And because it's gonna be the first, it's gonna be crazy viral. Now, what does this mean? On one hand, it will hurt the traditional industry for sure, and it will hit it hard, because if you can create a TV series on your own, in your bedroom, with a basic computer, that means a TV series that cost millions to produce will have to compete with that from a price perspective. And the AI will be able to go crazy with what it can do, because it can do basically anything.
On the other hand, it will allow single creators, who had no chance of quote-unquote making it in the current world of video creation, whether TV or movies, to be extremely successful creators. So creativity and ingenuity are gonna be much more distributed, and it's going to completely democratize the ability to create TV series, videos, ads, movies, et cetera. And we've seen this happen before. If you think about what Web 2.0 did, it basically allowed anyone who can create content to share their content on the web, when previously Web 1.0, even if it wasn't called that at the time, just allowed the giants to share their content, and we were only consumers. Now we're already used to a creator economy where anybody can create highly successful content that a lot of people consume. The only difference is that AI will revolutionize this on a whole different level, and it's not just going to be video and content creation. It will allow software development, writing books, creating video games. Basically every aspect of content that we consume, and things that we create, will be able to be revolutionized. Just think about what I told you earlier in this episode: one person, later joined by eight, created software that is used by 250,000 people within a few months, and it's getting acquired for $80 million. This is what's possible right now, not even what's coming next. But going back to the topic of what this means for the TV and movie industry, I think there's gonna be a premium on human-generated art, meaning many people will be willing to pay to watch actual people playing in TV series and so on. And yes, it's going to be more expensive, and yes, you might need to pay more for it, but I think there are enough people who will do that. That being said, I do see a scenario where there are going to be hugely successful and famous actors that are completely AI-made.
So think about the Tom Cruise of AI, a character that will play in multiple movies and will be sought after, quote-unquote, even though he's AI-generated, to be in more movies, just because people wanna follow that particular character. Because as people, we are connected to other people, and if you forget about the fact that it's AI-generated, and it's gonna be very easy to forget, 'cause I never met the real Tom Cruise either, it is most likely something that will happen in the near future as well. Staying on the topic of video generation: Pippit, which is a free AI-powered video generator by CapCut, so CapCut is a company that has been providing free video editing for a while, and many, many creators, myself included, use it to edit videos, has driven this crazy viral talking-babies podcast trend that is going across social media. The tool allows you to take a static image of anything, in this particular case babies, and empowers creators and marketers, and anybody who wants to, to create visual content of talking faces with high-fidelity voices and emotion and so on. Now, is it perfect? Is it ready for primetime TV? No. But is it good enough to create viral, really cool videos of talking people and babies? Absolutely, and as I said, it's a crazy trend right now across social media. What it shows, again, is that the direction is very, very clear. These AI video tools will become cheaper, more available, and easier to use, and more and more creators and people with cool ideas will be able to use them in order to be heard and stand out in this crazy new world we're going into. By the way, if you wanna experiment with these tools and you're not sure exactly how to start, the easiest thing is just to combine a few of the tools we mentioned before. So you can take your idea into Claude or ChatGPT, prompt it to help you build the idea even further, get more details, have it write the prompts for you, and then use one of the video generators.
And you can start with Pippit for free, or you can go to tools like Runway, or Veo, or Sora, et cetera, and use these prompts to generate your videos and learn through experimentation. And if for some reason you think that AI-generated video is not gonna take over our feeds across everything, then YouTube just announced they're going to integrate the Veo 3 AI video generation model into the YouTube Shorts platform later this summer. What does that mean? It means that within the YouTube platform, you will be able to create videos that are a hundred percent AI-generated. Now, people are already doing it right now, but they have to go to the specialized tools, meaning you need to have a login, set up an account, and know where to find the tool, whereas now it's gonna be built straight into YouTube, which will definitely flood YouTube Shorts with AI-generated videos. Now, again, for those of you who haven't seen Veo 3: Veo 3 has the capability to generate realistic human conversations, as well as ambient sounds in the videos themselves, which was the missing piece to make videos look like real life, or like anything else you can imagine. And so I expect to see an explosion of creativity in AI-generated videos on this platform. To explain how significant this is: Shorts average over 200 billion daily views. Put that number in your head, 200 billion daily views. And this is just one platform, right? It's competing with TikTok and Instagram Reels, which probably have similar kinds of numbers. More people will be able to generate content, people who don't wanna be on camera, people who don't wanna film stuff, but still wanna generate videos and share stuff, and they will now be able to do this with the usage of Veo 3.
This by itself tells you one of the reasons why Zuckerberg is investing billions of dollars to stay competitive in the AI race: because suddenly people will be able to create amazing new videos on YouTube that cannot currently be generated on Instagram, and that's just one small aspect of why he needs to be terrified of what's going on. There's obviously the conversation about OpenAI potentially starting their own social network, et cetera. Now, this obviously generates some risks and concerns, including deepfakes and content saturation, meaning generating even more videos than the amount created today, and also the potential impact on the earnings of current YouTube creators. Now, how will that evolve? I think nobody knows, including YouTube themselves, but I think it's inevitable that we're gonna start seeing more and more AI-generated content on all these platforms, and as to how that will be monetized and so on, time will tell. I just think that people who know how to generate cool AI videos will start making a lot of money on YouTube and TikTok and Instagram. One thing that is unclear right now is whether you will have to pay to use Veo 3 on YouTube or not. I definitely think that if Google makes it free initially, it will catch like wildfire, and it will definitely drive adoption of Veo 3 as a whole for other, paid purposes that might subsidize the free creators on YouTube. But staying on the topic of video generation, and the dark side of it and its implications: Disney and NBCUniversal have launched a 110-page lawsuit against Midjourney, which we just spoke about, and that's before Midjourney released their video generation tool. And this is the first time Hollywood is involved in a major legal battle against a generative AI platform. These two companies are accusing Midjourney of stealing copyrighted characters like Darth Vader, Shrek, Homer Simpson, Stormtroopers, et cetera.
Basically, I don't think anybody is questioning the underlying facts: Midjourney can easily generate any image of any Disney character, or Star Wars character, et cetera, without any issues. As we mentioned earlier, Midjourney has 20 million registered users around the world, and the studios are suing them for damages for infringing their IP. The studios are calling Midjourney a virtual vending machine for unauthorized copies, which is absolutely true; I don't think anybody will question that. I think the only thing in question is whether training on that copyrighted material was fair use or not. Now, this could go in three different directions, like all these other lawsuits. It could go in the direction of some kind of licensing deals, some kind of compensation deal every time these characters are used. It could go in the direction of a judge deciding that this is okay, that no IP rights were violated, and that it was fair use to train on this material. Or it could go to a full victory, banning unauthorized training data usage on the Midjourney platform. Now, this feels to me like a very smart move by the studios, and the reason I think it's a very smart move: as I mentioned, it's very obvious that all the AI platforms, literally all of them, are training on copyrighted material. They did not target OpenAI's Sora or Google's Veo 3, but rather a significantly smaller company that doesn't have pockets deep enough to fight this battle basically forever. This increases their chances of actually winning, which will then set a legal precedent, which will then allow them to go after the big guys. Now, I think in the long run, the most reasonable outcome is licensing deals, and the reason I'm saying that is because that preserves the studios' ability to keep making money and have another revenue channel, without which they will be at risk in the long run. And so I think that makes sense for everybody.
Now, the question is, is that what they're actually looking for? What will actually happen with Midjourney? How will they approach it? I don't think Midjourney has a chance of just winning this lawsuit, because again, it's very, very obvious what they are doing and what they have done previously, and now, with the new video model, it will be even worse. The bigger question I need to ask is, what will happen once open-source AI video models are at the quality of Veo 3 and beyond, and can be duplicated, replicated, and manipulated in multiple ways by anybody who wants to do so? Who will the industry sue then? I don't think there's a clear answer for that right now, but time will tell where this is going to go. Now, staying in the lawsuits universe: the NAACP and the Southern Environmental Law Center, SELC, have filed a notice of intent to sue Elon Musk's xAI, alleging that its Colossus supercomputer in Memphis, Tennessee, is violating the Clean Air Act by operating up to 35 gas turbines without permits, emitting harmful pollutants into nearby communities. As a quick reminder, this supercomputer was built in just a few months, faster than anything like this has ever been built before, which obviously did not provide enough time to build a power plant or a nuclear facility or whatever it is to provide the power for it. And so there are up to 35 gas turbines standing next to it, feeding power to the facility. Now, xAI is saying that there's a 364-day exemption to operate portable generation devices without a permit, but the SELC attorneys are saying that there is no such exemption for these kinds of turbines, and regardless, it has already been 364 days since the first turbines were put in place. Now, there are similar processes in other locations in the US, where local communities and environmental protection organizations are fighting the establishment and/or operation of large data centers.
And this might be one of the aspects that slows down the insane speed at which AI is moving forward right now. If these organizations are able to slow down the development of new data centers, it will slow down the rate at which AI can progress. I think eventually there will need to be some kind of balance between the impact on people and the environment itself and the speed at which these companies want to move forward. I fear, though, that there is so much money being invested right now, and so much money to be made in the future, that the environmentalists will lose this battle. I do hope that we'll find some kind of reasonable balance, if not immediately then in the near future. And now some really rapid-fire announcements on new and interesting model releases around the world. Microsoft is developing Copilot 3D, which will allow users to create accurate 3D representations of models and environments based just on prompts. There are several different tools already doing that, and now Microsoft is in this field as well. Alibaba's Qwen 3 just released a version that is optimized for Apple's MLX architecture, which will enable it to run on Apple devices. The goal is obviously to provide Apple Intelligence in China, where the US providers are blocked from delivering these capabilities. Apple's partner for these devices is Alibaba, and they now have the models ready for that. And from that to maybe the next frontier of AI, which is wearable devices. Meta and Oakley just announced the availability of a shared partnership device, basically a new set of glasses similar to Meta's partnership with Ray-Ban, only with Oakley, and they have been available since yesterday, June 20th. This obviously builds on top of the successful partnership with Ray-Ban, only going for a more sporty look together with Oakley.
The Ray-Ban smart glasses have sold over 2 million units so far, and Meta is planning to scale that to 10 million in 2026, so the demand is obviously there. The new glasses look significantly more stylish and cooler than the Ray-Ban glasses, and they're going to cost $399, compared with the Ray-Bans that vary between $299 and $379. So either way, they're not cheap, but they have very cool styling, they feature really hyped figures in their ads, and the focus is very clearly around sports, from golf to kite surfing to soccer to skateboarding, et cetera. The glasses, as I mentioned, look really, really cool, to the point that I might be tempted to get a pair. If I do, I will obviously share the experience of using them with you. But this field is on fire right now. Snap just announced that its sixth-generation Specs, which are their glasses, are now going to get a consumer release. This, as I mentioned, is their sixth generation of glasses, but previously they were built only for developers, and you couldn't buy them; you could just rent them for the month to do different things. They're planning to release these glasses in 2026, and they're going to be augmented reality glasses, meaning different from the glasses available right now that can see the world through a camera and communicate through voice, so you can talk to them, they can talk back, and you can listen to music with them. These will be able to project different displays onto the lenses that engage with the real world coming through the glasses. Now, Meta is planning similar glasses with the Ray-Ban partnership that are not released yet; they're going to be called the Hypernova models, and they will have augmented reality on top as well. And we already mentioned last week that Apple is going all in on building their augmented reality glasses, which they're planning to release in 2026.
Now combine that with the fact that we're expecting the release of OpenAI's family of devices, which we don't know exactly what they're going to be yet, in their partnership with Jony Ive, and you understand that 2026 will be the year in which AI breaks away from computers and cell phones and into basically everything in the real world. This has profound implications for accessibility to advanced intelligence basically everywhere, and on the other hand, for privacy, which will require a huge change in laws and regulations. On the beneficial side, I see this being used by doctors to see things that they cannot see right now and that they currently miss, in order to deliver better healthcare and more accuracy in surgeries and the like. Or employees on assembly lines making fewer mistakes and working faster because they get real-time instructions. Or us just getting help with anything we want to fix, tools we need to use, things on our screen, literally everything we do day to day, or getting stats while watching sports, or builders and installers getting instructions and adjustments in real time. The benefits are immense. But on the other hand, it means that everything you're doing is going to be recorded, analyzed by multiple people from multiple angles, and saved through AI across the board. This could be used in courts by lawyers and, God help us, I don't know, defense industries. So there are a lot of open questions, but the fact that this is going to happen, and at a completely different scale than today, come 2026, is very obvious to me, and I'm sure it will take a while for regulations to figure out how to handle this whole new universe of data and privacy. That's it for today.
There's a lot of other news that did not make the cut, even though some of it is fascinating, including a lot of stuff about Apple Intelligence and some additional information about the Meta buyouts, Salesforce blocking Slack from other companies, and things like that. So if you want to know all the other news, just sign up for the newsletter. There's going to be a lot of stuff there way beyond just the news: announcements of different events, learning opportunities, and things like that. There's a link in the show notes where you can find it. I'll mention again the AI Business Transformation course. The next public course starts on August 11th; registration is already open and seats are filling up fast, so go and sign up. And if you've been enjoying this podcast, I would really appreciate it if you share it with people you know who can benefit from it. This is your way to help the world with better AI education and awareness, which hopefully will help us enjoy the benefits of AI and avoid as much as possible the negative impacts. On Tuesday we will be back with part one of episode 200 of this podcast. We recorded it live this past Tuesday: two hours with four different experts in the ultimate AI showdown, and the first part is going to be available on this podcast this coming Tuesday. Don't miss the next four Tuesday episodes; they're going to be absolutely fantastic, with four different parts of the live episode 200, each one focusing on a specific showdown, comparing existing tools across multiple use cases and telling you which one you should use for your use case. And for now, have an amazing rest of your weekend, and I will see you on Tuesday.