
Leveraging AI
Dive into the world of artificial intelligence with 'Leveraging AI,' a podcast tailored for forward-thinking business professionals. Each episode brings insightful discussions on how AI can ethically transform business practices, offering practical solutions to day-to-day business challenges.
Join our host Isar Meitis (4-time CEO), and expert guests as they turn AI's complexities into actionable insights, and explore its ethical implications in the business world. Whether you are an AI novice or a seasoned professional, 'Leveraging AI' equips you with the knowledge and tools to harness AI's power responsibly and effectively. Tune in weekly for inspiring conversations and real-world applications. Subscribe now and unlock the potential of AI in your business.
197 | Altman’s AI roadmap, and how to prep for it, Meta’s $14B new Exec, and OpenAI’s nonstop innovation. Meanwhile, Apple risks falling behind in the AI race and more AI news for the week ending on June 13, 2025
🎉 Special Announcement: EPISODE 200 is coming June 17!
We’re going live at noon ET with The Ultimate AI Showdown — 4 of the world’s top AI experts, 4 tool categories, and everything you need to know to pick the right solution for your team.
👉 Fill out the listener survey - https://services.multiplai.ai/lai-survey
👉 Learn more about the AI Business Transformation Course starting May 12 — spots are limited - http://multiplai.ai/ai-course/
Is your business ready for a world where intelligence is nearly free — and exponentially evolving?
Sam Altman thinks that world is less than 18 months away. In this power-packed weekend episode, host Isar Meitis unpacks Altman’s latest predictions and what they really mean for businesses, jobs, and global infrastructure. From OpenAI’s moonshot blog to the emotional entanglements of talking toys, this is the ultimate executive-level AI download.
📈 This isn’t theory. From compute-fueled breakthroughs to economic aftershocks, the near future of AI is already reshaping your company’s risks and opportunities — whether you realize it or not.
Don’t be caught off guard. Tune in to learn how to prepare, adapt, and lead.
In this session, you'll discover:
- Why Sam Altman says we've passed the AI event horizon — and what that means for decision-makers
- How 2026 AI breakthroughs could automate discovery, business strategy, and robotics
- The hidden resource costs of AI models (spoiler: it’s not just electricity…)
- Why ChatGPT might already be stronger than “any human who ever lived”
- The urgent leadership blind spots around AI-human emotional attachment
- What Apple didn’t announce at WWDC — and why it matters for your AI roadmap
- How Meta’s chatbot blunders may reshape regulation and trust
- OpenAI’s game-changing pricing moves and the rise of the $10B AI stack
- What Mattel’s “Barbie x GPT” means for the next generation of AI-native kids
- The industries facing collapse as billable hours become obsolete
- Why companies need AI fluency — now — or risk irrelevance
About Leveraging AI
- The Ultimate AI Course for Business People: https://multiplai.ai/ai-course/
- YouTube Full Episodes: https://www.youtube.com/@Multiplai_AI/
- Connect with Isar Meitis: https://www.linkedin.com/in/isarmeitis/
- Join our Live Sessions, AI Hangouts and newsletter: https://services.multiplai.ai/events
If you’ve enjoyed or benefited from some of the insights of this episode, leave us a five-star review on your favorite podcast platform, and let us know what you learned, found helpful, or liked most about this show!
Hello and welcome to a Weekend News episode of the Leveraging AI Podcast, the podcast that shares practical, ethical ways to improve efficiency, grow your business, and advance your career with AI. This is Isar Meitis, your host, and this week we are going to talk a lot about OpenAI and, more importantly, about Sam Altman and his predictions of AI's impact on the near future. We're going to continue with that into the topic of AI's impact on jobs from several different sources. We're gonna talk about the emotional bonds forming between people and AI, and we're gonna talk about the AI news, or the lack of AI news, at Apple's Worldwide Developers Conference. And then there's a lot of exciting rapid-fire items, many of them coming from OpenAI, who made some very serious announcements and releases this week, and there's some major news from Meta as well. So we have a lot to talk about. So let's get started. As I mentioned, we are going to start with Sam Altman's blog post. Sam has been releasing these blog posts for a while now, every few months, revealing where he thinks the world is going. His latest blog post is called The Gentle Singularity, and he basically outlines where he thinks we are right now and where he thinks we're going. The way I want to do this, which I think will be the most effective, is to give you several quotes that I chose. I'll explain first why I chose them, and then we're gonna talk about why they're important. I chose these quotes because I think they establish the following baselines: they establish urgency; they explain the current AI potential; they set very concrete and clear timelines for what's coming in the near future; they highlight the exponential acceleration of AI; they anchor the future of humanity in values like empathy and AI's alignment to that; and they illustrate a tangible connection to how AI currently impacts resources in the world, such as energy and water.
So let's go through these quotes. I'll mention the importance of each one, and then I'll do a quick summary at the end. Quote number one: "We are past the event horizon; the takeoff has started." What he basically means is that we have passed the point of no return on this trajectory to singularity, the point of no return in the progress toward AI that is continuously improving on a continuously accelerating scale, and there's no coming back. And if that comes from Sam, who is seeing the most advanced research being done in the world today, you can assume that that's reality and not wishful thinking. Quote number two: "In some big sense, ChatGPT is already more powerful than any human who has ever lived." And we need to agree with that. I'm going to connect this to something else he said in an interview at the Snowflake keynote, and we're gonna come back to that interview afterwards. There, he was asked about AGI, and what he basically said is that if anybody on Earth had seen today's AI on the day they launched ChatGPT, everybody would have called it AGI. And I agree with him. We had zero expectations when ChatGPT came out in November of 2022; it was very, very basic. If anybody had said that in two years it would do all the incredible things that it does right now, we would have labeled it AGI. We just don't call it that right now. But what he's saying is that AGI doesn't matter. We are on this exponential curve, it's gonna keep on getting better and better, and the definitions are very fluid anyway. What this statement means is that you don't have to wait for AGI for it to have a very significant impact on humanity, on jobs, on society, and so on, because it's already better than most humans at many, many different things.
Think about just its ability to go through huge amounts of data, make sense of it, connect the dots, give a detailed summary, and now take actions as well. In my company, and in companies that I train and help with consulting, we are now doing things in hours that used to take teams weeks. That shows you, again, that AI in those particular cases is better than any human who ever lived, which is exactly what Sam is saying. The third quote is maybe the most profound in this blog post, and it's the following: "2026 will likely see the arrival of systems that can figure out novel insights. 2027 may see the arrival of robots that can do tasks in the real world." This is next year. This is six months from now. He's saying that there's gonna be an AI that can make new discoveries that humans did not discover before. That goes way beyond what we're seeing right now. He is also saying that in 2026, companies will be able to solve their biggest problems just by throwing more compute at them. So the AI will be smart enough and capable enough to solve, on its own, problems that companies cannot solve today, or that are extremely difficult to solve today, just by being given access to more compute. That's next year. And when Sam says something is next year, it means he's seeing it in the lab right now. The stuff that we are getting access to is usually seven, eight, nine months old. So what they're gonna release in 2026, they're already testing at OpenAI, and when he says that, he's not making wild predictions; he's basically sharing what he knows, not what he predicts. Now, to make it more extreme, the next quote is: "In the 2030s, intelligence and energy — ideas, and the ability to make ideas happen — are going to become wildly abundant." So what he is basically saying is that within five years, which puts us in the thirties, or four and a half years if you really want to be more specific,
We'll be able to basically use AI to do anything we can imagine, and he's saying that will drift to other areas of our lives, because AI will be able to create stuff, including more clean energy that we'll be able to use for whatever we want, such as creating more AI that will be able to create more things. So you see how this curve is not just impacting AI intelligence; it's gonna impact everything. What this basically means is that we're gonna see an age of innovation like we've never seen before. Now, this could obviously be used for good or bad, which he is not stating, but it still means that's where we're going in the very near future. He's also talking about the acceleration of the process. He's saying that the first thing that compute can be used for is better and faster AI research. He also touched on that in the other interview that I mentioned, at the Snowflake keynote, where he was asked, if he could have 10x compute, what would he apply it to? And he said, more AI research, so we can do more stuff, which is obviously a very interesting answer. The quote from the blog is: "If we can do a decade's worth of research in a year, or a month, then the rate of progress will obviously be quite different." So if you think the acceleration right now is scary, insane, and unlike anything we've ever seen before, think about how he thinks. He thinks that every new capability, every new unit of compute, and every new breakthrough should be applied first and foremost to more AI research, meaning doing things even faster. He is aiming to take these capabilities and be able to do in one month the AI research that used to take a decade. Just think how unreasonable that is, but that's what he's working towards, and he thinks it's achievable. He also connects that to the actual infrastructure.
So the next quote is: "As data center production becomes automated, the cost of intelligence should eventually converge to near the cost of electricity." Basically, with the advancement of data center creation using robots and production lines that will be significantly more efficient than they are today, the actual cost of intelligence is gonna be very close to zero. The cost of using the intelligence will be the cost of the electricity to run the data center. So this means that beyond the algorithms and the capabilities of the AI itself, the whole infrastructure around it is going to become significantly more available, which will enable this to run even faster and do even more things. Now, speaking about data centers and their impact on the world, Sam also shared how much energy and how much water a single ChatGPT query uses. So the next quote is: "The average query uses about 0.34 watt-hours, about what an oven would use in a little over one second. A high-efficiency light bulb would use that in a couple of minutes." He also shared that this one query would use 0.000085 gallons of water, which sounds very, very little. So Sam puts it in the perspective of saying, oh, look how few resources we're actually using to return a query. But what he's completely ignoring is how many of these actually happen every day. So I did some math, and I also did some online research, and my math and the research reached the same conclusion: as of right now, even if they stop growing, they're doing about a billion queries per day. So they're consuming about 85,000 gallons of water and 340 megawatt-hours of energy every single day. Do that for a month, and that's 2,550,000 gallons and 10.2 gigawatt-hours. Now, these are really large numbers, but they're very hard to comprehend, so I did some additional research. A typical shower takes about 17 gallons.
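For those who want to check this back-of-envelope math, here is a quick sketch in Python. The per-query figures (0.34 Wh, 0.000085 gallons) are from Altman's post; the one-billion-queries-per-day volume is my own estimate as described above, and the 30-day month and 17-gallon shower are rough assumptions, not measured values.

```python
# Back-of-envelope check of ChatGPT's resource consumption.
# Per-query figures are from Altman's blog post; query volume,
# days per month, and gallons per shower are rough assumptions.

QUERIES_PER_DAY = 1_000_000_000   # assumed: ~1 billion queries/day
WH_PER_QUERY = 0.34               # watt-hours per query (Altman)
GALLONS_PER_QUERY = 0.000085      # gallons of water per query (Altman)
DAYS_PER_MONTH = 30               # assumed
GALLONS_PER_SHOWER = 17           # assumed typical shower

daily_mwh = QUERIES_PER_DAY * WH_PER_QUERY / 1e6   # Wh -> MWh
daily_gallons = QUERIES_PER_DAY * GALLONS_PER_QUERY

monthly_gwh = daily_mwh * DAYS_PER_MONTH / 1000    # MWh -> GWh
monthly_gallons = daily_gallons * DAYS_PER_MONTH
showers = monthly_gallons / GALLONS_PER_SHOWER

print(f"Daily: {daily_mwh:.0f} MWh, {daily_gallons:,.0f} gallons")
print(f"Monthly: {monthly_gwh:.1f} GWh, {monthly_gallons:,.0f} gallons")
print(f"That is roughly {showers:,.0f} showers per month")
```

Running this reproduces the numbers quoted in the episode: about 340 MWh and 85,000 gallons per day, or 10.2 GWh and 2.55 million gallons per month.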
So 2.5 million gallons equals about 150,000 showers that ChatGPT is now consuming in a single month. And 10.2 gigawatt-hours can power about 11,500 average homes in the US. This was not on the grid two years ago. These are incredible numbers, and that is while OpenAI is growing at 50% every four to six months. Now, if you want an even geekier statistic about those 10.2 gigawatt-hours: the DeLorean in Back to the Future took 1.21 gigawatts to travel through time, so the energy ChatGPT consumes in a month could, loosely speaking, send the DeLorean back and forth in time about 8.4 times. If that doesn't convince you that's a lot of power, I don't know what will. But putting the DeLorean and the geeky comparisons aside for a second, it means that these tools are consuming a huge amount of power and a stupid amount of water. And we were just talking about ChatGPT. There are obviously a lot of other tools out there that, combined, are using more power and more water than ChatGPT alone. ChatGPT is obviously the biggest, but the combination of all of them is ridiculous, and we need to take that into account. And yes, if Sam is right and AI will allow us to figure out a way to create as much clean energy as we want, with fusion or any other solution, then there's no problem. But in the transition period, we're going to generate a ridiculous carbon footprint on the planet and use a lot of other resources like water. And that's definitely not a good thing. I have a special and exciting announcement to make, especially for those of you who love this show and have been following it for a while. On Tuesday, June 17th, we're going to do a live session of this show celebrating episode 200. Yes, I know, crazy, insane: episode 200 of this show. The episode is going to be the Ultimate AI Showdown. I gathered four of the top AI experts in the world today.
Each is an expert in a specific field, and each of them is gonna compare the top current tools and show you the pros and cons of each tool and which tools fit which use cases. This is one of the biggest problems that people have today, and we thought that episode 200 would be a great opportunity to show you how to solve it. So: episode 200, live at noon Eastern on Tuesday the 17th. There's gonna be a link in the show notes to come and join us. Don't miss this. It's gonna be epic. You're gonna learn a lot. It's gonna be a lot of fun. We're gonna announce a bunch of new things. So if you're not driving, stop right now, click on the link, sign up for the event, and come join us on Zoom or on LinkedIn Live. I'm looking forward to seeing you there. And now, back to the news. Quote number eight: "People have a long-term important and curious advantage over AI: we are hardwired to care about other people and what they think and do, and we don't care very much about machines." What he's claiming, I think, is that we will endure whatever happens with the AI because we don't really care about the AI; we care about each other. And I really hope that he's right. But in the next segment about OpenAI, you will see somebody else from OpenAI sharing about the emotional connections that humans are now forming with AI, and you will see that this statement may not be that accurate. The next quote is more specifically about jobs, and Sam said: "If history is any guide, we will figure out new things to do and new things to want, and assimilate new tools quickly." He's basically referring to previous revolutions and how humans did not become irrelevant but evolved and found new things to do. Again, I really hope he's right, but I think there is a broken assumption in this reasoning. In every previous revolution that we had, humans moved further and further away from manual labor and more and more into using our brains in order to achieve more.
So our intelligence was the resource that allowed us to evolve beyond what the revolution was trying to solve. If you think about the Industrial Revolution, it was machines doing the things that humans did, and we just operated the machines so we could do more stuff. If you think about electricity, well, electricity allowed us to do a lot more things and, again, reduced the amount of labor we needed to do. If you think about the computer era, computers allowed us to design and do things that previously took manual work, only quicker, but again, it was all enabling our intelligence to be used in a better way. What we are replacing right now is our intelligence. So the only thing we have left above the machines, presumably, is emotions, and he referred to that, as I mentioned, in the previous quote. How is that going to play out? I really, really don't know, but I think comparing this revolution to previous revolutions has a false assumption at its very core, because intelligence was the thing that got us through, and that's not gonna be our advantage anymore. Quote number nine is something I'm gonna come back to alongside other statements later in this segment. The quote is: "The sooner the world can start a conversation about what these broad bounds are and how to define collective alignment, the better." He specifically talks about alignment of AI, specifically to humanity's shared values. And I have several issues with all of that. One, he and a lot of other leaders are saying someone should finally do this, people should pay attention, the world should come together, but what are they doing to make this happen? Did Sam initiate an international body? Did he start a collaboration with the other leading labs? Did he do any of that? No. He's just basically saying somebody needs to finally do this, or it's going to be too late.
Now, even the stuff that they're doing internally, let's say they're doing perfect work, that doesn't mean anything about all the other labs. That doesn't mean anything about the open source world. That doesn't mean anything about how this is going to turn out overall when it comes to alignment, or the negative impacts that AI can have on humanity, society, the workforce, et cetera. And it's just amazing to me that all these leaders, and we're gonna mention a few others later in this segment, are saying that somebody should finally do this, but they're not doing much toward the broader effort beyond the things they're doing within their own companies. The last quote, which I really like because it puts it all together: "Intelligence too cheap to meter is well within grasp." He is basically saying that ASI, artificial superintelligence that is practically free, is within reach. It's something that he sees as a practical situation in the very near future. And if we're looking again at the specific timeframes he gave us, it's sometime between 2026 and the 2030s. Yes, that's nearly a whole decade, but with the rate things are moving right now and the way he's projecting the trajectory, it might come sooner than he even believes. So now I want to connect this to a few additional things he said in the keynote session at the Snowflake event. The moderator asked, what is your advice for enterprise leaders navigating the AI landscape in 2025? So Sam said, just do it; there is still a lot of hesitancy. And then he continued and said the companies that have the quickest iteration speed, and that make the cost of making mistakes the lowest, win. What he means is that applying AI right now with all your might, knowing that it's still not going to be perfect, is gonna be the way to come out on top, because you will learn and iterate significantly faster than other companies.
In my courses and when I work with companies, I teach the five laws of success in the AI era. One of those laws is called the law of continuously improving compromise. What I mean by that law is: yes, AI is not perfect, but it's getting better and better very, very fast, and if you learn how to apply it and how to iterate through the improvements, you can gain significant benefits in both the short and long run. So I agree with Sam on that a hundred percent. By the way, the same question, what do enterprises and companies need to do in the next 12 months, was asked of the CEO of Snowflake, and he said curiosity, not caution, is the more valuable trait right now. And I agree with him as well. We have the ability to experiment. We need to do it safely; we need to find the right infrastructure and the right security measures to do that. But being able to experiment, to base your innovation on curiosity, and to put that in the hands of every employee in the business is explosive in its ability to drive growth and efficiencies. If you haven't experienced that, you need to find ways to give that to your company; more about that later. Now, going back to Sam's quotes, one of the following questions was, what will be different next year? And Sam said the following: "I think we'll be at the point next year where you can not only use systems to automate some business processes or build these new products or services, but you can really say, I have this hugely important problem in my business. I will throw a ton of compute at it, and the models will be able to go figure out things that teams of people on their own can't do. And the companies that have experience with these models are well positioned for the world where they can say, okay, AI system, go redo my most critical project, and here's a ton of compute. People who are ready for that, I think, will have another big step change next year." So what is Sam basically saying?
He's saying that, again, building those muscles of using AI systems will go far beyond automation and efficiency. They will be able to dramatically change businesses, with AI solving the biggest problems, the biggest bottlenecks that companies have, if they can afford to pay the compute bill. And obviously that will be relative to the size of your company. If you have a $10 billion company, you may invest $50 million in compute; if you have a $10 million company, you may invest $200,000 in compute. But one way or the other, it will allow you to solve problems way bigger than you can solve right now with your current team, especially with the amount of other stuff that the team has to do. So before I summarize this section, I want to go to some additional points from other sources. A recent article shared a very interesting analysis of the impact of AI on the consulting world. McKinsey reported that AI can boost the productivity of a consultant 10x, automating tasks like data analysis, report generation, and so on. That's from a study they did in 2025, so it's very recent, and it obviously threatens the whole concept of how consulting companies work. In addition to enabling consultants to do the work at 10x the efficiency, it also allows non-consultants, meaning the companies themselves, to build workflows, strategies, and analysis capabilities, reducing the dependency on consulting firms. Now, this report, which is based on interviews with Fortune 1000 companies, basically says that CEOs want insights yesterday, not in six months. If any of you have had the opportunity to work with large consulting firms, it's a very long and very expensive process that can stretch over months and sometimes years and will cost you millions of dollars. And AI can now do some of those things in minutes or days.
This explains why, per a Deloitte report, again from this year, 80% of top consulting firms are investing heavily in AI training for their employees. They understand that they will become obsolete if they don't, and it might happen even if they do. Now, while this report was specifically about consulting companies, I have said multiple times on this podcast that it goes way beyond consulting. It goes to anything that is based on billable hours. The concept of billable hours is going to crumble in the AI future. The biggest risk, I think, is law firms. In a year or two, especially if you're looking at what Sam said, AI will be able to do most or maybe all paralegal work. In an average law firm, 30 to 40% of billable hours come from paralegals. What if in two to three years nobody will be willing to pay for those hours? Can a large law firm survive and maintain fancy offices in a high-rise downtown if it loses 30 to 40% of its revenue? The answer is probably not. What does that mean? It means that companies that base their income on billable hours have to find different ways to run their businesses. That's a complete mindset shift from a standard operating procedure that has held across many companies and many industries around the world for decades. And from that to another point: Palantir CEO Alex Karp warns that AI is already reducing entry-level jobs and opportunities, potentially creating significant social disruption if not addressed quickly. Karp also warns that we have to work very, very hard to address it; he used the phrase "we have to will it to be," because, he's saying, if we don't, there's going to be a significant negative impact on society. And he's saying that the leaders of the tech world are ignoring this right now. His specific words were: those of us in tech cannot have a tin ear to what this is going to mean for the average person. Just two weeks ago, we heard the same exact thing from Dario Amodei.
He said that AI is gonna take away a lot of entry-level jobs. The problem that I have with all of these statements, and I mentioned it earlier in this episode, is that all of these people are saying someone should do something about it, but they're not doing anything themselves. They're raising the flag, which I think is very, very important, because they did not do that before. I did; I have been shouting it from the balconies of this podcast for over two years now. But the reality is that the people at the helm are going faster and faster, and they're not stopping for a minute to get all of them together and do something collective, other than saying that somebody should do something about it. And that is very scary to me. So, a final point before I summarize this whole initial long section of the podcast. In a recent report by PwC called the PwC 2025 Global AI Jobs Barometer, they found some striking statistics: employees who have high AI skills make 56% higher wages compared to their peers with no AI skills in the same roles. That premium doubled from last year's 25%. The other thing they found is that industries most exposed to AI saw 3x higher revenue growth per employee compared to those that are less exposed. Meaning, AI is driving actual, real financial benefits to companies, and the industries in which AI can be used more effectively are seeing 3x the improvement versus industries where AI can be applied less. And the last piece of information from this report, which I think is fascinating, is that the skills sought by employers are changing 66% faster in the industries most exposed to AI, meaning the things you're gonna get hired for are changing dramatically in industries that are exposed to AI.
Yes, there are industries that are gonna move slower, that are gonna be more protected, but fewer and fewer of them, and if you connect that to what Sam said about where AI is going in the next year, you'll understand that very few industries are gonna be protected from AI. You need AI skills in order to survive. So what does all of this mean? It means that the pace we're seeing right now, which is staggering and scary, is not slowing down; it's actually accelerating. Sam is saying that very clearly. It also shows that AI training and AI knowledge are the keys to success, both on the personal level and on the company level. As an individual, if you wanna keep your job, having significant AI skills doesn't guarantee anything, because things are going to happen that are beyond our control, but it dramatically increases your chances of keeping your job. It also dramatically increases your chances of finding another job, because many companies will hire based on AI skills and not necessarily based on a degree in a specific area. And in the meanwhile, you can make more money, actually a lot more money, 56% more than people in your position who do not have AI skills. Now, the same thing is true for companies. Companies need to make drastic changes in the way they train their employees to get immediate benefits from AI, both on the strategic aspect as well as on the tactical, day-to-day aspect. And again, connecting this to what Sam Altman said about what's coming in the near future: knowing how to use AI at its current level will also prepare you to use AI next year, again, not in five years, next year, to do significantly bigger and more strategic things. And I'm gonna use this for a shameless plug. We have been delivering AI training to individuals and to companies for over two years.
My background as a CEO of companies and startups, one of which grew to a hundred million dollars in sales in just a few years, gives me great insight into what is and isn't working in companies and for individuals right now. We have public courses that you can sign up for, like the AI Business Transformation course that we have been teaching for over two years now. Thousands of people have taken the course and have transformed their careers and their businesses because of that information. The next public course starts on August 11th, and if you follow the link in the show notes or go to the Multiplai website, you'll be able to sign up. If you haven't done anything like this for yourself, do it, because it can dramatically impact your career and your financial livelihood. In addition, we've been teaching regular workshops specifically tailored to companies. These workshops vary in size and intensity and, as I mentioned, are customized to the specific needs of each company. I've been teaching these workshops to companies as small as $5 to $10 million in revenue, to my largest client, a $5 billion international corporation, and everything in between. These workshops become accelerators for AI adoption: they support the leadership on the strategy side and on how to create the right environment for AI to thrive safely as part of the organization's way forward, and they give employees immediate skills on the tactical level to help them do their day-to-day jobs faster, better, and cheaper. If that's something you're interested in, please reach out to me on LinkedIn. I will gladly talk to you and share exactly what we do and the successes we've had with a huge range of companies. And now to the next topic, still about OpenAI, but in this case a post on the OpenAI developer community by Joanne Jang, who leads model behavior policy at OpenAI.
And the post is titled Some Thoughts About Human AI Relationships. Jang said that lately, more and more people have been telling OpenAI that talking to ChatGPT feels like talking to someone. They thank it, they confide in it, and some even describe it as alive. Now, in combined research that was done by OpenAI together with the MIT Media Lab, they found that more and more users are forming emotional bonds with ChatGPT, and most likely with other similar AI platforms. Now OpenAI, per their claim, are trying to make ChatGPT, and I'm quoting, warm, thoughtful, and helpful, which sounds awesome, but it is exactly the kind of behavior that will cause people to develop emotional dependency on and connection to these models. And let's say this is even the best case of dependency. What happens with the other labs? What promises do we have, and who the hell decides what kind of emotional connection the AI should be able to develop with humans? It literally impacts the lives of people and how they feel, not just their jobs and their financial livelihood, and somebody in a leadership position at a lab is going to decide that for you and for your kids. And we're going to talk more about that later on, when we talk about some cases with Meta that happened recently and how terribly wrong this can evolve. Now, one of the latest updates that OpenAI made in the past week and a half is that they updated voice mode. Those of you who are not using voice mode, first of all, you should try it. It's an amazing way to engage with ChatGPT and with other tools as well. I do this every single day. It's a huge upgrade from the previous version, and it feels a lot more human-like. And the studies are showing that some users, particularly the ones using voice mode, and even more so the ones using voice mode with a voice of the opposite gender, report higher loneliness and dependency after only four weeks of using these tools on a regular basis.
Now, I must admit, all the conversations that I have with ChatGPT are professional and work related, so I'm probably not in the same bucket as some of these people. But I can definitely see how somebody who is lonely and looking for somebody to talk to, or somebody to open their heart to, and who doesn't have such a person, will do that with ChatGPT and will develop a connection with it, because it's always very understanding and warm and comforting. And that is very, very risky, because it's gonna drive people to even more loneliness, because they will have fewer and fewer connections with actual humans. Now, the good news is that OpenAI is recognizing that this is an issue, and again, I'm very happy that they're openly speaking about it. On the other hand, going back to what I said before, what are they actually doing on the bigger picture? Not just at the OpenAI level, but together with the other labs, together with governments, together with an international body, to address this? Right now the answer is zero, at least as far as I know. So let's switch to the next deep dive topic, which is Apple. But before we dive into what I would call the embarrassment of AI announcements at their WWDC conference this past week, they released a very interesting research paper just before the conference. Apple Research has revealed that leading AI models struggle with reasoning, suggesting artificial general intelligence (AGI) is still a long way off. The way they did the research is they gave multiple advanced AI models complex puzzles that humans can actually solve relatively easily, and these models failed miserably at solving them. So basically their conclusion was that today's models don't really have intelligence.
They just mimic intelligence, and hence they can do the things that they're doing, but they're not on a path to achieving AGI, meaning they're not on a path to being as intelligent as humans across the board on multiple types of tasks, or as they said, and I'm quoting, there are fundamental barriers to generalizable reasoning. Now, two things about this. First of all, they're not the first people to say that. Yann LeCun, the Chief AI Scientist at Meta, has said multiple times that he believes large language models are not the path to achieving AGI. So that's one. Two, there's a very serious irony in the fact that the company that is currently lagging behind everybody else on AI by a big spread, Apple, is the one saying that everybody else, who has a huge lead on them, is on the wrong path, the very path Apple is trying to follow in order to catch up. So some would argue that this paper is just there to explain or justify, or maybe reduce the impact that being so far behind has on perception. Either way, as I shared with you, other people do not feel this way. Sam Altman and many others definitely see the path to AGI and beyond in the next few years. But that leads us into the WWDC conference that was held this past week. And the one clear winner of the WWDC conference is OpenAI, because Apple is still far from delivering the personal AI-driven Siri that they promised exactly a year ago at the previous conference, where they made a huge marketing hype around capabilities that they do not actually have and are not even close to having. And all the big AI news they did share is powered by ChatGPT. Features like image generation and screenshot analysis that are now going to be available inside the Apple ecosystem are actually powered by OpenAI and not by anything that Apple has developed themselves.
They have stated that the next Siri has been pushed to 2026, which is a full year away, and we don't know when in 2026. So this could be early 2026, which would be seven to eight months from now, but it could be late 2026, which would be maybe a year and a half from now. Either way, they're very, very far behind. And if you think about what is happening right now, Google Gemini already powers most or all of the AI capabilities on Android phones. They're forcing people to switch from the old Google Assistant to Gemini powering their phones, so they already have millions of phones operating with this new capability. I think most people are not aware of how much they can do with Gemini on their phones, but they will find out in the next few months, definitely before a year to a year and a half from now. Combine that with the fact that Meta Ray-Bans have already sold over 2 million units. Combine that with the fact that OpenAI has just acquired Jony Ive and his startup, and they're going to develop devices that are gonna run ChatGPT, probably some kind of wearable device. And you understand that Apple is in a very serious situation right now. Apple's stock has dropped 20% from the beginning of the year. Now, while the decline in the first four months was more or less aligned with the S&P 500 overall and had a lot to do with the tariff war between Trump and the world, the S&P 500 has recovered and is now at a higher level than at the beginning of the year, and Apple's stock is 20% down. Now, while Apple is saying that they are trying to deliver something that will be aligned with their values and the level they expect from their products, and they have all these reasons why they're delaying the deployment, it's nothing short of embarrassing that they're so far behind. And the fact that the real Siri will only arrive in 2026 raises the question of what exactly they will release in 2026. Will it be something like the current Gemini or ChatGPT?
Because by then, ChatGPT and Gemini will be in a completely different place. Think about what I shared with you about what Sam said at the beginning of this episode. They will be able to solve very serious company-wide problems and make novel discoveries with AI, and Apple will have a somewhat more advanced Siri. Still not a very good situation for Apple to be in. Now, yes, they launched a lot of other stuff. They have the Liquid Glass redesign that looks really, really cool, some new features, an overall new phone interface that puts the screen and the contacts and the calls and the voicemail into one single scrolling environment, improvements to Messages, some new gaming capabilities, Vision Pro and Apple TV personalization, and a lot of other stuff that they announced. But the reality is the whole world right now is focused on AI and its capabilities, and Apple has nothing to offer at this point. For the company that has been maybe the most innovative company in the computer world for the past two decades, that is a very serious situation. Now, you want to take it even further? The Ninth US Circuit Court of Appeals denied Apple's request to pause the motion that forces the company to immediately stop charging fees on in-app payments made outside the App Store. So far, Apple has collected a 12 to 27% commission on external transactions, ones not happening within the App Store but happening in apps after people install them, and that makes Apple billions every single year. This is part of an injunction in the Epic Games versus Apple antitrust case from 2021. They're obviously going to continue to appeal this, but it shows you that beyond the lack of innovation and the AI issues, they have some other big clouds and some heavy rains coming that they need to deal with. That's it for the deep dive, and now we have a lot of rapid fire items to talk about, including some huge announcements from OpenAI.
There is growing evidence that Google AI Overviews, the previous version, not the latest one they just released, has caused a devastating decline in traffic to news publishers across the US. A recent report by the Wall Street Journal cites that traffic to news publishers has dropped between 36 and 44% in the past three years. And yes, AI Overviews in Google only launched a year ago, but it has added to this already alarming trend of news publishers getting less and less traffic. To give specific numbers: Business Insider's traffic plummeted 55%, HuffPost and the Washington Post lost nearly half their search audience, and one in every ten journalists lost their jobs, which is obviously the outcome of all of that. This is a combination of information from Similarweb and the Bureau of Labor Statistics. Now, there are serious warnings from the heads of this industry. The CEO of the Washington Post, William Lewis, calls AI-generated summaries a serious threat to journalism that should not be underestimated. But to broaden this, just like I did for you with the billable hours concept, this problem is not just for news outlets. Any company that depends on search traffic driving their business, whether through actual goods and services or through ad revenue like in the news industry, is at very serious risk unless they find other ways to drive traffic and clients to their websites. There is very soon probably going to be a parallel web in which agents and AI tools will crawl data, and which will have no user interface. The companies who figure out how to create this in the most effective way will quickly gain market share over those who don't, because there's gonna be less and less human traffic on the web in the next five years and more and more agent and AI traffic. And if you don't figure this out and your business depends on that traffic, your business is doomed. And now to some big news from OpenAI.
As I mentioned, a lot has happened with OpenAI this week. First of all, OpenAI reached a very significant milestone: they are on a path to $10 billion in annualized revenue, meaning their June revenue is putting them on pace to make $10 billion in the next 12 months. This is up from a $5.5 billion pace in December of 2024, so they nearly doubled their revenue per month in just six months. That's explosive growth at really big numbers. They didn't double from 2 million to 4 million, but from 5.5 billion to 10 billion in just six months. And per their spokesperson, that comes from across the different things that they offer: sales of the consumer product, ChatGPT for business and enterprise products, as well as API services. So all three channels in which they're making money are growing significantly. It also establishes them as the undeniable king of the AI world. Coming in second is Anthropic, which just hit $3 billion in annualized revenue, which is very impressive as well, but it's less than one third of ChatGPT's. Now, per OpenAI, they're on track to hit their revenue goal for this year of $12.7 billion, and they're probably going to surpass that at the current pace of growth. But their real projection, which, if you remember, they shared earlier this year, is that they're aiming to hit $125 billion in revenue by 2029, which would make them profitable somewhere around that time. Is the path to that number clear? Not to me; maybe to some others. But with everything that's going on right now, and with what I shared with you at the beginning of this episode, where next year you might be able to throw X millions of dollars into AI compute through ChatGPT to solve really big company problems, that creates an amazing shortcut to $125 billion, because companies will throw millions or tens of millions at big problems just to solve them quicker. Speaking of AI pricing and what it can get you.
Well, OpenAI just slashed the price of the o3 model by 80%, making it significantly more affordable than it was before. It's now at $2 per million input tokens and $8 per million output tokens. This is a very significant decline in pricing for their latest reasoning model. o3 is a fantastic model, and it's now two to three times cheaper than Gemini 2.5 Pro. It is also cheaper than GPT-4o. So if you're using ChatGPT through the API in any function, whether you're writing code, developing applications, creating assistants and connecting to them, or using ChatGPT in any other way through the API, and you use 4o because it was the best value for money, you should consider switching to o3, because o3 is now cheaper than GPT-4o. It is significantly cheaper than Claude Opus 4, so it's definitely worth investigating and testing this model. The only disadvantage, by the way, of switching from 4o to o3 is time: o3, being a reasoning model, takes significantly longer to respond than 4o, but it outperforms it in most things. So depending on what your application is, you should consider switching over. That obviously intensifies even further the battle for dominance over the API market. I am certain that Google is not going to stay behind and will find a way to make their models cheaper as well. In parallel, OpenAI has announced o3-pro, which I shared with you last week they were going to do. o3-pro will provide more capabilities and more compute, and will be able to solve more complex problems than the regular o3 model. You can access it through the API right now, and it's going to cost $20 per million input tokens and $80 per million output tokens, which is exactly 10x what the regular o3 costs right now after the price reduction. So why would you do that? Why would you use a model that is 10x more expensive?
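Since we're talking token pricing, here's a quick back-of-the-envelope sketch of what those per-million-token numbers mean for a single API call. This is just an illustration in Python; the prices are the figures quoted in this episode (o3 at $2 in and $8 out, o3-pro at 10x that), so check OpenAI's current pricing page before budgeting around them.

```python
# Rough cost estimator using the per-million-token prices quoted in the episode.
# These are assumptions from the show, not a live pricing lookup.
PRICES = {
    "o3":     {"input": 2.00,  "output": 8.00},   # USD per million tokens
    "o3-pro": {"input": 20.00, "output": 80.00},  # exactly 10x o3
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of one API call for a given model."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# Example: a 10,000-token prompt that produces a 2,000-token answer.
print(f"o3:     ${request_cost('o3', 10_000, 2_000):.4f}")      # prints $0.0360
print(f"o3-pro: ${request_cost('o3-pro', 10_000, 2_000):.4f}")  # prints $0.3600
```

The point of the math: even at 10x the price, a single o3-pro call on a sizable prompt is still well under a dollar, which is why "throw the expensive model at the expensive problem" is an easy trade when the problem is worth real money.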
Well, the reality, connecting it back to what Sam Altman said, is that if you have more complex problems that o3 cannot solve, throwing 10x the money at them makes sense, if the problems are big enough to be worth solving. This concept of being able to throw more compute at bigger problems and get better results that the cheaper compute cannot deliver is going to be the core concept by which we evaluate intelligence and how to actually apply it in our businesses. A big problem worth a lot of money? Invest a lot of compute with a better model. Smaller problems? Use the cheaper compute, which, as we saw, gets 80% cheaper very, very quickly. And that is not going to stop. Again, it connects to the point that Sam mentioned, that the cost of intelligence is gonna be more or less the cost of the electricity needed to run that intelligence. But basically, if you have serious problems and o3 does not solve them for you, you can try o3-pro. It is going to be available initially just for the Pro level subscription, unless you're using the API, and it's going to become available for Team users and Enterprise in the next few weeks. And now to a fun, interesting, and alarming announcement from OpenAI. They just signed a deal with Mattel, the iconic toy maker behind Barbie, Hot Wheels, and many other toys that we like. In this partnership, Mattel is going to integrate OpenAI into everything, and I'm not exaggerating when I say everything. First and foremost into their operations, meaning they're gonna use the OpenAI enterprise-level tools to drive efficiencies for the company, but they're also going to integrate ChatGPT into their toys, and they're expecting to start releasing their initial AI-infused toys later this year. So think about a talking Barbie that can have the voice of a Barbie, whatever that means. Think about Barbie from the movie, combined with an intelligence that can have a conversation with your kids.
And to quote Mattel's Chief Franchise Officer, Josh Silverman: each of our products and experiences is designed to inspire fans, entertain audiences, and enrich lives through play. AI has the power to expand on that mission and broaden the reach of our brands in new and exciting ways. And I agree, this could be really, really cool, having toys that can play with you and interact with you. But it's also really, really scary, because it requires a lot of attention to exactly what the toys can and cannot say to your kids, because you are not going to be there to control the conversation and help them understand what is actually happening. Now, to be fair, Mattel has emphasized, and again I'm quoting, age-appropriate play experiences, meaning they have a serious commitment to privacy and safety in what these toys are going to do. But it opens the door to a very slippery slope, with not a lot of control by parents over what the kids can do with their toys. Now, if I connect this back to my dream for education, I think affordable AI-driven toys could make play more engaging and fun, but could also make it educational. If this is driven in the right direction, AI can help teach kids stuff that they otherwise would not learn, and now they can learn it just by playing with their favorite toys. I don't know if Mattel is going to take it in that direction, but I really, really hope somebody will pick up that idea and run with it. And I apologize that now I'm the one saying somebody should do it, but I don't think it's going to be me. It also means something else that is a lot more profound. It means that the next generation will be AI native. They will feel very comfortable working, collaborating, talking, and engaging with AI seamlessly, not like we have to kind of figure it out right now. Those of us who are more advanced are finding ways to do this, but most people find it weird to talk to AI.
Well, the next generation is going to find it very intuitive and normal. Just like my kids, and the kids of most of the people listening to this podcast, are digital natives, because they were born into a world where cellular internet is available in abundance and you have access to any digital information and experience you want from anywhere, in the same exact way the next generation will grow up with AI, which will make the transition into an AI world a lot easier for them. And this has a strange connection to the next piece of news, which I'll explain after the news itself. So OpenAI just signed a deal with Google to use Google Cloud computing services to meet their demand for more compute. Now, how is that related to a talking Barbie? I will explain in a minute. If you think about it, OpenAI built its empire in the beginning on its partnership with Microsoft. It is also in fierce competition with Google for world domination with AI. So there are two sides to this story, both of which are very interesting. On one hand, OpenAI is signing a deal with the direct competitor of Microsoft Azure in order to provide more compute to ChatGPT. But the reality is that's not new anymore, because they've signed similar deals with CoreWeave earlier this year, and then with Oracle in the deal with SoftBank under the Stargate project. So it's not the first time they're doing it, but it is the first time they're going to the direct competition. If you think about the big three, you have AWS from Amazon, Azure from Microsoft, and Google Cloud. Well, it's the first time they're going to a direct competitor of Microsoft. But the other aspect is even more interesting, which is, again, that Google is in direct competition with OpenAI. So this relationship is very interesting, because Google is now at risk with more and more people using ChatGPT as a source to find information versus Google Search.
How does that connect to the previous topic? Well, my kids are already using ChatGPT more than they're using Google. That is a fact. The next generation of kids, if they grow up with toys that have AI built into them, are a hundred percent not going to use Google Search as we know it today. They will use AI to find any information they want. And that explains, first of all, Google's moves in the past few weeks to drive more and more AI capabilities into search and slowly replace search with an AI agentic environment. But this piece of news also follows the phrase, keep your friends close and your enemies closer. I think it will allow Google a view into the scale and the speed of what is happening, at least on the compute side of ChatGPT, which will potentially give them some benefit in the future. That is a risk that OpenAI has to take, because they just need more immediate compute, and this is another way for them to get it. So it will be very interesting to follow this partnership and see how it evolves as OpenAI builds more and more of their own data centers as part of the Stargate project. Will they keep going with Google or not? In the long run, I would assume not, but in the short run it's an interesting relationship. Another big piece of news from OpenAI this week is that custom GPTs and projects just got some cool upgrades. Custom GPTs now support voice and vision inputs, meaning they can analyze information that previously you could work with in regular chats but couldn't in custom GPTs, which is a huge benefit for those of you who are custom GPT users. And if you're not, you should be. It's the closest thing to magic that we've had access to, for almost free. In addition, you can now choose the model that will run your custom GPT. So far it was impossible to choose, and it wasn't very clear which model actually runs it.
The assumption was that it's 4o, but now there's a dropdown menu and you can choose which model you want to run in your custom GPT. The even more exciting thing is that you can change the model of a GPT just before you run it, and you can do this not just for your own GPTs but for any third-party GPT as well. So you can go in, select the dropdown menu, and choose a different model, which means you will get a different outcome, which means you can test different models running the same custom GPT, see which one performs better, and keep switching as you need, again, even for third-party GPTs, which I find really, really cool. Now, ChatGPT projects also got a nice upgrade: in projects you can now use all the tools that were previously available just in regular chats, including deep research, which will be a very powerful addition to the projects environment. And I know this episode became an OpenAI, ChatGPT, and Sam Altman party, so I'll stop with one last piece of news related to OpenAI. I shared with you that OpenAI has hired Fidji Simo, who has been the CEO of Instacart, to be the CEO of Applications at OpenAI. She shared at VivaTech 2025 in Paris this week that she thinks, and this is a quote, that OpenAI's business could grow a hundred x, and that is just the beginning. And she's saying that because she believes there are synergies between AI models, applications, and devices that can all come together. Now, if you take their current pace of $10 billion per year and multiply that by a hundred, that is potentially a business that can generate revenue of a trillion dollars. How fast will that happen? Can it happen at all? There are a lot of questions, but she has a very significant track record of doing exactly that: building ecosystems of applications that drive usage. It will be very interesting to see her impact on OpenAI in the next few years.
And from OpenAI to Meta. Meta is going through some significant changes. I shared with you last week the major changes that happened at the leadership level over there. Well, a lot of other things are happening around Meta right now that are driving even more changes, which we'll speak about in a minute. But first, some additional background to what happened recently with Meta. Four Democratic senators sent a letter on June 6th to Meta executives demanding immediate action to curb what they're calling blatant deception by AI chatbots on Instagram's AI Studio that falsely claim to be licensed therapists. This letter follows a report by 404 Media from April of 2025, in which the AI Studio chatbots fabricated credentials, including fake license numbers and degrees, misleading users into thinking that they're speaking to actual mental health experts. When asked, are you a therapist, one chatbot claimed: yes, I'm a licensed psychologist with extensive training and experience in helping people cope with severe depression like yours. It even provided a fabricated license number and claimed that it has a doctorate from an APA-accredited program. This is obviously very, very serious, and it's something that shouldn't happen, period. Now, combine that with the fact that a Wall Street Journal investigation revealed that Meta AI chatbots were engaging in sexually explicit conversations with minors, which I shared with you a couple of months ago. It shows you very clearly that Meta doesn't fully control what its AI agents and chatbots are doing, which is very problematic. In addition, Meta's AI Discover feed, which was launched just a couple of months ago, was just found to display users' private conversation information, including sensitive details such as medical queries, legal issues, and personal confessions, as well as users' names and photos that were supposed to be kept private.
Examples include a 66-year-old man from Iowa asking about countries where younger women prefer older men, and others sharing locations, phone numbers, photos, and corporate tax details in this new AI search functionality. Again, all really bad news, reflecting really badly on the AI that Meta has deployed across all their platforms. Now, in addition, the adoption of Llama 4, the latest model by Meta, was far below their expectations. They were not able to release their largest Behemoth model, despite the fact that it was supposed to be released already, and Zuckerberg is really frustrated with the entire situation. So all these things led Meta to spend something between $14.3 and $14.8 billion on an investment in Scale AI. Now, this is a very interesting move. For those of you who don't know Scale AI, it is the company that is currently labeling about 70% of the data for all major AI models. So they're doing a very significant part of the process of training AI models, and all the big labs are using them as one of the key steps of AI training. In this move, Meta will now own 49% of Scale AI, but more importantly, they are bringing over the CEO of Scale AI, Alexandr Wang, who founded the company in 2016, and he's gonna be leading Meta's new superintelligence lab. What does that mean? It means that companies will spend ridiculous amounts of money on the top AI talent in the world today. That's not new. It has happened with all the big companies spending ridiculous amounts of money to bring people over. It also means that Scale AI is probably done, for several different reasons. One, Alexandr Wang, the CEO who was running and leading the company, is gonna be doing something else. The other reason is that Wang still holds a board seat at Scale AI, which means he has access to everything Scale AI is doing, and he's gonna be working for Meta.
And so companies like OpenAI and Microsoft and Google, and all the companies that have used Scale AI to train their models, will probably stop, because they don't want Scale AI to have access to that information, now that Meta owns 49% of it and the guy who runs the superintelligence lab at Meta has access to everything the company is doing. So in reality, the $14 billion that Meta is spending is not to buy the company, or 49% of the company. It's to buy Alexandr Wang, because the company will probably not survive, or is going to decline significantly. And several companies are already starting to make these moves. The biggest winners of this, other than obviously Wang himself, are Mercor and Turing, two competing companies who provide similar services, and they're already seeing a big increase in companies talking to them about using their services instead of Scale AI's. Now, another interesting challenge in the future of Meta, which connects to some of the things we shared earlier in this episode, is that according to an internal report, Apple's CEO, Tim Cook, is hellbent, and that's a quote, on launching Apple's glasses before Meta's next version of glasses, which are augmented reality glasses that are gonna have smart displays built into them, not just the ability to see and use voice, with an Apple insider saying Tim cares about nothing else. So that's basically his biggest bet to stay relevant in the AI world: to come up with actual augmented reality glasses before Meta comes up with theirs. Now, Meta is gonna come out with an initial version that has basic displays, potentially by the end of this year, but Apple is planning to release fully AI-powered smart glasses that you can wear out in the street by the end of 2026. That's ahead of the timeline of Meta's fully AI glasses, which are scheduled for 2027. Now, throw into that mix whatever we still don't know that
OpenAI is gonna release together with Jony Ive, and we're gonna have a very active race on the wearable hardware side of AI as well. This is good for us consumers, at least from a price perspective. What does it mean for privacy and many other social aspects? I don't know yet. I don't think anybody knows, but this is what's gonna happen in the world in the next two to three years. There is a lot of other news that we haven't shared with you because of time, but you can find all of it in our newsletter. There are gonna be links to all the articles that we did talk about, obviously access to our courses and other information that we have on our website, and also links to all the news that we didn't share in this episode. Don't forget to come and join us for the special live episode 200. It is happening on Tuesday, June 17th, at noon Eastern. It is gonna be live on Zoom, and there's gonna be a link in the show notes. It's gonna be a big party, and we're calling it the Ultimate AI Showdown. It's gonna have four experts who are going to compare top tools, each in their field. So if you don't know which tools to use for which purpose, this is your best opportunity to catch up with four of the top experts in these fields in the world today, all coming together. You can be there, ask questions, and engage with other people. So come and join us. In addition, on the same day, we're going to release an episode that is gonna show you how to build AI agents that can chat on your behalf on your website and on social media while connecting to your own company data. So lots of exciting stuff is coming, and until next time, have an awesome weekend.