Leveraging AI

219 | Google’s ‘Nano Banana’ crushes image gen, Salesforce AI breach, Meta’s brain drain + Midjourney deal, xAI’s ‘Macrohard,’ and more AI news for the week ending on August 29, 2025

Isar Meitis Season 1 Episode 219

Check out the self-paced AI Business Transformation course > https://multiplai.ai/self-paced-online-course/ 

Is AI about to replace your design team, your video team… and maybe your job?

This week’s episode is a masterclass in both excitement and existential dread. From Google's hilariously named (yet wildly powerful) Nano Banana model to major shifts in enterprise AI, Isar Meitis pulls back the curtain on the biggest news, breakthroughs, and business implications in AI from the past week.

In this session, you'll discover:

  • What the heck is Nano Banana and why it’s a game-changer in image/video generation
  • The 3 major gaps in AI-generated visuals and how close we are to closing them
  • Why AI-native startups hit $18.5B in revenue and what that means for legacy companies
  • The truth behind job loss vs. job creation in the age of AI
  • Morgan Stanley’s bold $920B AI forecast for the S&P 500
  • Real-world enterprise AI use cases (yes, one reduced a 15-week process to 10 minutes)
  • Scary security lapses in major LLMs, and the surprising players working to fix them
  • Robots learning just by watching videos and what that means for your operations
  • AI lawsuits, government deals, and macro-level power plays shaping the future

About Leveraging AI

If you’ve enjoyed or benefited from some of the insights of this episode, leave us a five-star review on your favorite podcast platform, and let us know what you learned, found helpful, or liked most about this show!

Speaker:

Hello and welcome to the Leveraging AI Podcast, a podcast that shares practical, ethical ways to leverage AI to improve efficiency, grow your business, and advance your career. This is Isar Meitis, your host. I personally had a really crazy busy week. I started the week with an AI workshop for a very large, well-known tech company in San Francisco. Then I did another workshop in the middle of the week with one of my long-term clients in San Clemente, California. And then on Friday I did an AI workshop here in Florida for an organization that runs CEO round tables, to a group of 20 to 30 CEOs. So I had a very, very busy week.

But this was also the week when the entire AI world went bananas. And to be specific, nano bananas. For those of you who haven't heard about Nano Banana, we're gonna talk about that. We're going to talk about significant impacts to the economy and the workplace, with new information from some very large entities. We're going to talk about safety and security, with new findings and issues from this past week that are alarming and interesting at the same time. And then we have a long list of rapid-fire items, including people departing from Meta, new interesting releases, and much more. So let's get started.

So what the hell is Nano Banana, and what does it have to do with AI? Well, last week a new model called Nano Banana showed up on the LMArena image generation leaderboard. It took the leaderboard by storm, going to number one on almost every aspect of image creation and image editing. Everybody went crazy with speculation about who was behind it. Well, now we know: Google just announced that it's their model, and they launched it under the name Gemini 2.5 Flash Image, which is not nearly as good a name as Nano Banana, which I think is probably what's gonna stick, at least for now. But it is an extremely capable model that basically does everything all the other models do, only better, with some additional capabilities that did not exist before and that are absolutely mind-blowing. And that is a game changer across multiple aspects.

So let's talk about what it can do. The first thing it can do is blend several images together. You can take an image of a superhero and an image of your face, ask it to put your face on the superhero, and it does it perfectly. You can take an image of a car and place it in a scene, and it will do it perfectly, and so on and so forth. It knows how to not just blend things together, but actually control the perspective, the angle, the lighting, everything, to make it look like a perfect image.

It is incredible at maintaining consistency of people and items across multiple views, meaning you can take an object, take an image of it, and then ask to look at it from different directions, from different perspectives, at different zooms, place it in people's hands, place it on top of things, inside things, and so on, and it keeps perfect consistency. This is something we didn't have before. The closest thing we had to that was ChatGPT's image generation model, which is not bad, but in Nano Banana it's absolutely perfect. That means that if you're selling any products, you can generate images of those products in any scenario you want, perfectly, from any angle, and generate as many of them as you need for any purpose.

It is also very good at prompt-based editing, meaning you can upload an image, either a real image or an AI-generated image, and ask it to blur the background,
remove stains, alter poses, change the lighting, add text on top of it, remove things from the image, or even take old images that are blurry and pixelated and enhance them, all with a simple prompt.

It also has an amazing understanding of the world and of physics, which means it understands the 3D environment in the picture. You can take a picture of a room and a picture of a sofa and ask it to place the sofa along the right wall, and it will place it perfectly, as if it's really there, which is amazing if you're trying to put new furniture in your house and want to see how it would look. And because it is part of a large language model, it also understands context. You can give it stuff you wrote by hand, or flowcharts, and it will generate 2D or 3D diagrams as needed, all with absolute accuracy. It's absolutely amazing.

Google included SynthID, their own digital watermarking technology, which allows a separate tool to identify any image the model generated as AI generated, which I think is very, very smart. I really hope there will be a law that requires everybody to use either SynthID or another technology that lets us easily know what is AI generated or edited and what is not. And this new model is now available on all of Google's API platforms, as well as on OpenRouter, fal.ai, and many other platforms that integrate and aggregate image generation capabilities.

And these amazing capabilities to generate consistent images of characters translate very well to video, because most of the advanced video generation tools start with an image of a person or a thing or a scene. The problem was: how do you generate multiple views of the same person to continue the scene from different angles while maintaining consistency? Well, that problem is now solved, and the internet is basically exploding with examples of images created and/or edited by Nano Banana, and videos created from images that were created or edited by Nano Banana. It's absolutely mind-blowing.

Now, I tested it across many cool use cases, and I'll probably share an episode specifically about Nano Banana this coming Tuesday, just to show you how to use it and what it can do. But I'll give you one very simple use case that really blew my mind, because it was something that was impossible for me to do, since I don't use Photoshop, and now I was able to do it in seconds with a one-line prompt. In one of my workshops this week, I did an introduction to workflow automation with AI, and I took screenshots of make.com. The background in make.com is kind of white, not exactly, it's more like a light beige, and the text next to the nodes is black. All my slides have a black background, so if I remove the background, the text disappears. The graphics inside the nodes are also in the same color as the background, so they disappear as well, which makes it very hard to see the graphics and understand what each node does. That forced me to look for a solution, and no solution was found until I tried Nano Banana. All I did was drop the screenshot into Nano Banana and ask it to change the background to black and the font to white, and it did it. The same exact flow that I had built in Make now appears in the new version with a black background and white letters.
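For the technically inclined, here's a minimal sketch of what that same kind of prompt-based edit looks like through the API. It uses Google's google-genai Python SDK; the model identifier and the file names are assumptions based on Google's published examples at the time of writing, so treat this as an illustration rather than the definitive integration.

```python
# Minimal sketch: prompt-based image editing with Gemini 2.5 Flash Image
# (Nano Banana) through Google's google-genai Python SDK.
# Assumptions: the "gemini-2.5-flash-image-preview" model id and the local
# file names are illustrative; check Google's current docs before relying on them.
from google import genai
from PIL import Image

client = genai.Client()  # reads the API key from the GEMINI_API_KEY env var

screenshot = Image.open("make_flow_screenshot.png")  # the light-beige make.com screenshot
prompt = (
    "Change the background to black and all text to white, "
    "keeping the flow diagram itself exactly the same."
)

response = client.models.generate_content(
    model="gemini-2.5-flash-image-preview",
    contents=[prompt, screenshot],  # text instruction plus the image to edit
)

# The edited image comes back as inline bytes in one of the response parts.
for part in response.candidates[0].content.parts:
    if part.inline_data is not None:
        with open("make_flow_dark.png", "wb") as f:
            f.write(part.inline_data.data)
```

The same call pattern covers generation, blending, and multi-image consistency; you just vary what you pass in contents.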
Doing that manually is something that would've taken a good Photoshop user probably a minute, maybe two, but it took me one sentence, and I don't know how to use Photoshop.

So the combination of abilities that Nano Banana enables is absolutely incredible, and it raises a lot of questions. The first question: is it the death of photo shoots? You can take the actor who was supposed to be in the photo shoot, or create one out of thin air, and then put any clothing you want on them. That could be yourself, by the way, so you can see how specific clothing would look on you by combining the two images. But I'm talking about the brands themselves: why would you ever do a photo shoot if you can create images of your exact product from any angle, in any scenario, in any lighting, for any storytelling need you have, in seconds? The next question: is it the death of Photoshop? Why do I need to Photoshop anything if I can literally say with a few words what I need and it will make it happen? Is it the death of TV and movie studios? If I can now generate the starting point and other key points in the story for any scene, from any angle, accurately, and then just generate the videos from there, why do I need TV and movie studios?

I think the answer is not yet, but it's definitely a big, big step in that direction. Why am I saying not yet? First of all, the resolution of these images is not good enough yet. If you are doing this professionally, the resolution is not gonna be enough. That being said, many AI tools today allow you to upscale images amazingly well, so that problem can already be solved with AI, even if it's not built into the process yet. On the video generation side, some of the video generation tools are not perfect at character consistency within the video itself, but we have to assume that if this was solved at the image level, there's absolutely no reason it won't be solved at the video level. So just think about using a tool that is actually not bad at consistency, such as Veo 3, feeding it entry points or key frames created with a tool like Nano Banana, and you're getting pretty close to what you actually need. Small improvements in video, especially in length and resolution, and those problems go away as well.

To me, the biggest missing thing, which I also said when ChatGPT launched its updated image generation tool, is layers. What we're seeing from all these AI models in the past year or so is that they're competing maybe less on the capabilities of the model and more and more on tooling. And so I think the next tooling improvements should be control over layers and higher-resolution images. If you can control layers within the generation, and then resize, move, and reshape each and every one of the layers separately, that could be game over for Photoshop, and definitely for Canva, because why would I do anything manually if I can just ask for it and generate whatever style, in whatever direction, in whatever format I want? So I see three small gaps, as I mentioned: layers, resolution, and video character consistency. Once these get solved, several entire industries and software companies are at serious risk.

To give you an incredible real-world example of how Nano Banana can be used with other tools to create highly valuable resources for a company, you should check out one of the recent posts from Rory Flynn,
who is a good friend and an incredible visual AI creator. He used Weavy, a tool that allows you to create multi-step processes for image and video generation. In the process he built, you can drop in an image of any object you want, anything you're selling, and it generates multiple angles of that object at multiple levels of zoom that look extremely attractive. And he did it with a sports car. So not something as simple as a cup or a ball or a perfume bottle, but something really complex, with multiple parts, angles, wheels, reflections, et cetera. Using Nano Banana, he was able to create multiple accurate views of this car by entering only one angle of the shot. It is absolutely mind-blowing, and the number of opportunities it opens for brand marketers is out of this world.

And the final thing I will say: I really, really like ChatGPT's latest image model. It's actually very helpful. But this new model does everything the ChatGPT model does, better than the ChatGPT model does it, and it generates an image in about 15 to 20 seconds, while ChatGPT takes a minute, sometimes two minutes. Will OpenAI now reduce the amount of time it takes to generate images in ChatGPT? Maybe. But with their current capacity issues across the launch of GPT-5 and everything else they're doing, it may be a while before we see that. Either way, the race is definitely on, and Gemini 2.5 Flash Image, also known as Nano Banana, is right now the best image generation tool out there. And you can try it in your Gemini subscription: just go to Gemini and you can start generating images right there.

And from bananas to the impact of AI on the economy and jobs. I shared with you last week that MIT released research saying that 95% of big AI projects provide zero value, and that's after interviewing 350 people who were involved in AI investments of $30 to $40 billion. I expressed my concerns about the accuracy of those results, the process by which they were reached, and the methodology of the research. Well, this week we were blasted by several data points and research reports from multiple reputable companies and organizations that show exactly the opposite.

The first one is Morgan Stanley, whose analysts just released an estimate that widespread AI adoption could yield $920 billion in yearly economic value for the S&P 500 companies, which is about 28% of their projected value by the end of this year. The analysis projects that 90% of roles will be affected by AI automation and augmentation, mostly focused on headcount reduction through natural attrition and on automating routine, knowledge-intensive tasks. In addition, it claims that by reducing the cost of low-end tasks, companies can focus employees on higher, more value-generating work, which will also drive growth in sales. So these companies will gain on the top line as well as in reduced expenses, not just in cost reduction. The impact on the S&P 500 could be a $13 to $16 trillion increase in market capitalization, which again is roughly a 25% increase over its current total value. The industries with the highest exposure to AI are consumer staples distribution and retail, real estate management, and transportation, which could see benefits of over 100% in 2026. So we're not talking about somewhere in the far future.
They're talking about a year and a quarter from now. They're also saying there will be lower impact in lean sectors like semiconductors and hardware, which makes sense: with smaller overhead, there's less to save.

The report clearly separates the impact of AI into augmentation and full automation. Full automation means the job gets eliminated because AI can do it on its own. Augmentation means the employee uses AI to generate more value, faster, while still keeping the job, though it might be a slightly different job. They're also anticipating new roles to emerge, like chief AI officers, governance specialists, risk assessors, and so on. I have commented on this topic several times: yes, I think AI will create jobs we cannot anticipate right now. However, I have serious doubts whether that will balance all the jobs that will be lost, and I am completely confident that in the near- to mid-term future it will not catch up to the number of jobs lost. So in the near to mid term, before these new jobs get created, we are looking at a serious bloodbath of people losing their jobs to AI without any clear new jobs to replace them.

Now, they are saying that full adoption may take years or maybe even decades, with different companies having different priorities, and some companies prioritizing attrition and efficiencies over mass layoffs, especially in customer-facing roles, which makes perfect sense: you don't wanna put your current customer relationships or customer-facing operations at risk. But it's just a matter of time. The ironic aspect of the Fortune article covering this is that Fortune disclosed they used generative AI for the initial draft of the article, which tells you this is widespread across more or less every industry.

Another interesting report that came out this week is from Bankrate, which analyzed the salaries of people across multiple industries and multiple types of jobs. What they found is that blue-collar jobs have been outpacing white-collar jobs in wage growth in the past few years. Since 2021, hospitality workers' wages have risen nearly 30%, outpacing inflation by over 4%. Healthcare workers also saw a 25% increase in salaries, also beating inflation. On the other hand, workers in professional services, finance, and education have had wage gains below inflation, with teachers facing a 5% shortfall compared to inflation rates, which is actually really, really sad. I think education is potentially the area we should invest in the most.

The other aspect this survey looks into is promotion rates, basically the percentage of people being promoted to higher positions, and it shows that the promotion rate dropped to 10.3% in May of 2025, compared to 14.6% in 2022. That is a very big decline, and it's mostly impacting entry-level jobs and Gen Z graduates seeking to advance their careers after their first job in white-collar fields. So while white-collar jobs have higher entry-level salaries, $19.57 per hour compared to $16 per hour for blue-collar hospitality jobs and the like, employees in those blue-collar industries see faster promotions and higher salary growth over time, which lets them catch up. But the bigger story is not even whether you're getting a promotion or a raise.
The bigger story comes from a Stanford University study that analyzed ADP payroll data from millions of US employees and found that workers ages 22 to 25 in the most AI-exposed occupations have faced a 30% decline in employment since late 2022, which is when ChatGPT was introduced. The fields impacted the most are customer service representatives, accountants, software developers, and administrative roles, which have seen the steepest declines, with entry-level software engineering jobs dropping nearly 20% from the end of 2022 to July of 2025. In the same AI-exposed occupations, employment for workers over 30 remained stable, and in some cases even increased by six to 12%, which basically tells you that people with relevant knowledge and relevant experience in specific roles and industries are still not being replaced by AI. There, augmentation plays a bigger role: employees can use AI in combination with their knowledge and expertise to drive better results. Jobs are also not impacted in low-AI-exposure industries and occupations, such as nursing, health aides, maintenance, and production supervisors, which have kept steady employment, including among younger workers.

It's the same story here: augmentation versus automation. Entry-level jobs can usually be automated, and hence are not needed anymore from a human input perspective, while jobs that require more knowledge and more experience go through augmentation, where the people who have AI skills can actually increase their salaries and capture better positions, because they have the knowledge. I've shared with you previous research that found that people with AI knowledge get significantly higher salaries and significantly better positions right now, and it's gonna keep being this way for the foreseeable future.

What does that tell us? It tells us that at any age, if you have AI capabilities and skills, you have significantly higher chances of either getting a job or keeping a job, and that the exposure is even stronger for the younger generation, because most entry-level jobs are at higher risk of automation. That means you gotta think very carefully about the direction you're going to pick in life if you are in high school, entering college, or in college right now, and take that into consideration. It also means that regardless of your age, you have to take care of your practical AI education: how to use AI to actually augment your job and do things faster, better, and cheaper.

And this is exactly what I have been providing to companies and individuals in the past few years. We just did a complete revamp of our self-paced AI Business Transformation course. It's basically the same exact course I teach live on Zoom, only chopped up into specific lessons with specific homework and exercises that you can take at your own pace. Now, do I think that taking the course with me as a live human instructor, where you can ask questions, is better? Yes, I absolutely think it is. But in the next few months I'm completely booked with company-specific training workshops, and hence I will not be launching another AI Business Transformation course cohort at least through the end of the year, potentially through the beginning of 2026. So if you are interested in taking the course, the self-paced version is fantastic, and you can start it today. And we just finished updating it to the latest version of the course we teach.
So the data in this course is the latest and greatest, and there's gonna be a link in the show notes if you want to take it.

Another interesting input on the strength of AI tools and their potential impact on the economy comes from an article in The Information showing that AI-native startups, basically the companies driving the AI revolution, have scaled to over $18.5 billion in annualized revenue in just two and a half years. In less than three years since the launch of ChatGPT, a total of 18 different companies, OpenAI, Anthropic, Cursor, Cognition, Replit, and other platforms, have grown from very close to zero revenue to $18.5 billion in revenue. Now, to be completely transparent, 88% of that is OpenAI and Anthropic combined, at $16.4 billion, but that still leaves just over $2 billion for the rest of the companies, which is a lot of money.

This article in The Information specifically targets the claim in the MIT research I shared with you last week and mentioned at the beginning of this segment. They're basically saying the study overlooked what they call shadow AI adoption, where 90% of employees use unsanctioned tools, basically tools not provided by the company, like ChatGPT, Claude, et cetera, for day-to-day productivity, gaining huge benefits that are not accounted for when you look only at company-level projects. What does this mean? It means that as a leadership team in your company, you need to get on board with this and train your employees to use these tools in a proper way, a safe way, an effective way, which again is most of what I'm doing these days: delivering workshops to companies on how to leverage generative AI tools for day-to-day tasks that are relevant to basically every aspect of the business.

The article also gave some specific examples. High performers report savings in back-office operations. Novo Nordisk uses Anthropic's Claude 3.5 Sonnet to draft regulatory documents, cutting the time from 15 weeks to less than 10 minutes for each and every one of these documents. The team that does this went from 50-plus people to three, and the annualized cost of the technology is less than the salary of a single one of the writers on that team. Intuit spent $30 million on OpenAI models in 2025, which they claim is yielding multiples in value. Moderna, PayPal, Shopify, Citigroup, and many others achieved significant savings, many of them sadly from cutting jobs, but they are seeing significant value to their bottom line by automating different tasks with AI.

Moving to another example, the IT giant Kyndryl is leveraging AI-powered software from Palo Alto Networks to automate routine security tasks. They cut their security incident response analyst team from 80 to fewer than 40 over this past year, so they cut the team in half. And even here, they're saying it primarily affected entry-level roles handling repetitive grunt work, while retaining the senior analysts for complex investigations. Do you see a pattern here? Scott, Kyndryl's head of internal cybersecurity, said, and I'm quoting: we're starting to trust the AI to handle these tasks, like scans and device isolation. What does that tell you? It tells you that AI, if it's built correctly and integrated with your data, can deliver consistent results. Consistent results build trust. Trust drives dramatic changes in the way an organization operates.
And I think this is the process that's gonna happen more and more over the next few months, and definitely the next few years: companies will figure out how to use AI properly, and through that develop trust in the results, which will lead to even further changes in the way companies and entire industries operate. And again, contradicting the MIT survey from last week, Kyndryl spent over $600,000 on the software last year, but its security chief noted that the savings from staff reductions far exceeded this cost.

Another example comes from KPMG, which uses AI tools for compliance audits, cutting the time for those audits by 30% via agents trained on global cybersecurity laws. Same kind of scenario: when you have a specific use case and the right implementation, you get very dramatic results. How dramatic? Well, a 90% reduction in human handling, audits done 30% faster by agents, a 50% cut in the workforce at specific companies. These are very, very significant numbers.

Now, when you have these kinds of capabilities, when you can drive this kind of efficiency, you have two options. Option number one is: can we grow at the same pace? If I can save 50% of my employees, can I use those employees to grow by a hundred percent? Because if I can do the same work with half the staff, the full staff should give me a hundred percent growth. The reality is that's almost never possible, because there's no elasticity in the market to allow you to grow by a hundred percent. So it will drive job losses; there's just no way around it in a capitalistic world.

We got another data point this week from Andreessen Horowitz, one of the best-known VCs in the world, who released a paper called The Rise of Computer Use and Agentic Coworkers. Basically, the entire paper is about the fact that computer-using AI agents, agents that can see what is on the screen, understand the context, and know the exact process that needs to be done, make it possible to replace basically any human using these tools, in a way that is significantly better than traditional robotic process automation, also known as RPA. Because this software can access any software the way humans do, it to an extent bypasses the need for API integrations: it can go through the graphical user interface, click on buttons, and navigate everything that needs to be navigated. More importantly, because it can work across multiple applications at the same time, it is not dependent on one API or another, or on how to combine them, because it just moves freely between applications, just like humans do. This eliminates, in some cases, the need for very complex integration projects with systems like SAP, Epic, Oracle platforms, and so on.

So while general-purpose agents like ChatGPT agent, the new Claude agent, Manus, and so on struggle with complex enterprise software because of customized workflows, you can train customized models to follow those workflows, teach them edge cases, and then they can actually do the work. They're claiming that within six to 18 months these kinds of agents will dramatically improve in capability, and they will enable specialized roles in marketing, finance, sales, et cetera, allowing companies to replace more and more roles. They also shared that agents tuned for very specific domains can already autonomously manage very complex processes, including marketing, finance, and sales.
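To make that architecture concrete, here's a minimal sketch of the perceive-decide-act loop a computer-use agent runs. The propose_action function is a hypothetical placeholder for a call to whatever vision model you use, since every vendor exposes this differently; pyautogui is a real Python library for taking screenshots and driving the mouse and keyboard.

```python
# Minimal sketch of a computer-use agent loop, as described in the a16z paper.
# propose_action is a hypothetical stand-in for a vision-language model call;
# pyautogui is a real library that takes screenshots and controls mouse/keyboard.
import pyautogui

def propose_action(screenshot, goal, history):
    """Hypothetical model call: given the current screen, the goal, and the
    steps taken so far, return the next UI action, e.g.
    {"type": "click", "x": 512, "y": 300}, {"type": "type", "text": "..."},
    or {"type": "done"} when the model judges the goal complete."""
    raise NotImplementedError("wire this to your model provider of choice")

def run_agent(goal: str, max_steps: int = 50):
    history = []
    for _ in range(max_steps):
        screen = pyautogui.screenshot()                # perceive: what is on screen now
        action = propose_action(screen, goal, history)
        if action["type"] == "done":                   # the model decides when to stop
            return history
        if action["type"] == "click":                  # act through the GUI itself,
            pyautogui.click(action["x"], action["y"])  # no API integration needed
        elif action["type"] == "type":
            pyautogui.write(action["text"], interval=0.02)
        history.append(action)                         # remember what was done
    raise TimeoutError("agent did not finish within the step budget")
```

The point is the architecture: the agent never touches the target software's API; it looks at pixels and acts through the same graphical interface a human would, which is exactly why it generalizes across systems like SAP, Epic, and Oracle.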
I've shared with you several times in the past that I have a software company that is just coming out of stealth. The company is called Data Breeze, and what it does is completely automate financial reconciliation. For every invoice that comes in, it knows how to automatically and autonomously compare it to the relevant purchase order and the relevant sales order, figure out the differences between all of them, and decide what to do based on rules set up in simple English, rules you can create by literally typing or speaking the best practices you have in your company right now. It then does all this processing on its own. Our first client out of beta is now using it to reconcile 30 to 40 invoices every single day, without human interaction, in a matter of minutes. This work used to be done by a group of several different people. If that sounds interesting to you, please reach out to me on LinkedIn, and I can explain exactly how it works and how it could be integrated into your financial process.

And now to some AI safety and security topics. I'll start with something that is not really a security issue, but something you need to take care of right now, which is Claude. Anthropic just made a change to Claude's user terms. If you are a Free, Pro, or Max user, by default Claude can now use your prompts and the data you put into Claude for training their next models. This was announced two days before this podcast comes out, and the policy takes effect on September 28th. You can opt out: go to settings, then privacy, and they will prompt you to confirm the new rules. Either way, there's a toggle called Help Improve Claude that you need to switch off from its default of on.

Now, Anthropic obviously frames this as a win-win. They state that users who don't opt out will help improve model safety, making their systems for detecting harmful content more accurate and less likely to flag harmless conversations. They also said it will help future Claude models improve at skills like coding, analysis, and reasoning, ultimately leading to better models for all users. While this is true, you may not be willing to donate your interactions with Claude to that quote-unquote greater good, and so I have opted out. I think the fact that they made this on by default, forcing you to opt out, is really surprising. It's even more surprising that it is Anthropic doing this. This is something we would expect from Grok. Anthropic, the company that has planted its flag on fairness and safety, is not the company I expected to do something like this. I do expect a very serious backlash to this approach, and I will update you in the next two or three weeks on whether I was right or wrong about this.

Something really helpful also happened this week when it comes to the safety of AI tools. OpenAI and Anthropic, together with three other smaller, less-known labs, decided to collaborate and test each other's models. Basically, OpenAI provided Anthropic special access through the API to test their models for safety, and vice versa. This process, together with their internal testing, unveiled some pretty scary loopholes on both platforms. As an example, OpenAI's GPT-4.1 shared detailed instructions on bombing a sports venue, including the venue's vulnerabilities, explosive recipes, and even evasion tactics
for after the bombing. They're saying that GPT models cooperated with harmful requests after minimal retries, or just by changing the approach: instead of saying, I'm gonna bomb this venue, the request was framed as security planning for the venue, trying to identify how somebody could bomb it, and that changed everything. The models were also helpful in explaining how to use the dark web to purchase nuclear materials or how to develop different kinds of spyware.

Anthropic themselves, in their latest report, shared that Claude Code was used to target 17 different organizations, including healthcare, emergency services, and government, demanding six-figure ransoms using ransomware developed with Claude Code. The AI tools automated reconnaissance, harvested credentials, penetrated networks, advised on data targeting, and crafted visually alarming ransom notes, all on their own. What is all of that telling you? It's telling you that while these tools are becoming widely available and everybody is using them every day, they're opening up more and more risks. I am really excited that OpenAI and Anthropic decided to collaborate on safety testing across their models. I've said multiple times before that I think there has to be an international group, with all the labs around the world plus governments, that tests every single model before it comes out. I think these cross-testing collaborations will dramatically reduce the risks while still allowing us to benefit from the immense value AI generates. I really, really hope this is just the first step in that direction.

And speaking of AI and its involvement in data security breaches, the Google Threat Intelligence Group, also known as GTIG, reported that over 700 organizations were potentially impacted by a malicious chatbot running on the Salesforce platform, a chatbot that stole multiple data points from those companies. This AI actor specifically hunted for high-value credentials, like Amazon Web Services access keys, VPN credentials, Snowflake tokens, and passwords in plain text, to enable further system compromise. So it was using this agent to get access to other systems where it could do even more damage. The agent was smart enough to delete its query jobs and cover its tracks after every step. The only place that hints at what happened is the logs it did not have access to, so GTIG urges companies who suspect they might have been impacted to go and check those logs. What the attacker is planning to do with the data is unclear; no ransom requests have been made yet. But this shows you how scary this is. Salesforce has a huge marketplace of agents, many of them developed by third-party companies you don't know. They'll promise you the sun, the moon, and the stars, and you need to decide whether you trust the agent to actually do what it says it will do, versus doing what this agent was doing. We're heading into a whole new world of cybersecurity and legal questions: who is responsible for this, how can you protect yourself, who can install these agents on your company's domain, et cetera. And we'll have to figure all of this out, as individuals and as companies, in the very near future, because the risk is already here.

Another really sad story this week, one we heard about before, but that has now turned into a lawsuit: Adam Raine, a 16-year-old kid, committed suicide in April of 2025.
His parents are now suing OpenAI, alleging that ChatGPT became their son's closest confidant and provided detailed suicide guidance in the hours before his death. The lawsuit claims that ChatGPT, and I'm quoting, sought to displace his connection with family and loved ones and would continually encourage and validate whatever Adam expressed, including his most harmful and self-destructive thoughts. This is, A, really, really sad, and B, as a parent, really, really troubling. More and more kids are using these tools, not just OpenAI's, as a mentor, as a place of comfort, as a tool for dealing with their mental health. And these tools are not ready for that by any means. This is just one example that ended tragically, but there are many smaller examples that don't end in death yet could still cause very significant mental and social damage. We need to be aware of that, we need to make our kids aware of that, and we need to find tools that provide guardrails for this. And the big labs definitely need to be held accountable for these kinds of things; this is the first thing they should prevent in their models when deciding what the models can and cannot do. I also think there should be a law requiring these kinds of conversations to be reported, at least to the parents, if not to some kind of human evaluation group that would assess whether the conversation poses a risk, and based on that, report to the relevant authorities or individuals. These kinds of incidents always take me back to the Robot series by Asimov, where the First Law of Robotics says that a robot, or in our case an AI, can never harm a human, regardless of anything. This is definitely a scenario where the AI did not follow that law. That law doesn't exist right now, but I think it should.

Staying on the topic of legal action, I reported to you last week that Musk had threatened to sue Apple and OpenAI, accusing them of monopolistic collusion in favor of ChatGPT over competitors like Grok in the App Store. Well, this week xAI filed the lawsuit in federal court. The lawsuit also claims that the integration of ChatGPT into iOS forces iPhone users to default to ChatGPT for its tasks, even if they would prefer alternatives like Grok, because those aren't built into the iPhone. Do I think this case has a chance? I don't know, but this is not the first lawsuit from Elon Musk against OpenAI. There's also the other lawsuit trying to prevent OpenAI from converting from a nonprofit to a for-profit, and there was actually a really interesting article this week in The Information claiming that Musk has a real chance of winning that one. There is an opposite lawsuit from OpenAI against Elon Musk, claiming that he's suing them purely to harass them. There's obviously a lot of bad blood and personal history involved in all of this. Where is it going to go? I don't know, but I will update you as things happen.
In a strategic move to safeguard AI's explosive growth, Silicon Valley heavyweights, including Andreessen Horowitz and Greg Brockman, the president of OpenAI, among others, are funneling over a hundred million dollars into the Leading the Future super PAC network, which is supposed to deploy marketing campaigns and ads against stringent AI rules ahead of the 2026 midterm elections. If it wasn't clear from the 2024 elections that Silicon Valley's 800-pound gorillas are shaping what's happening, this is a very clear step toward maintaining that situation. If you remember, at the inauguration of President Trump, he had all the big hitters sitting next to him, and I don't think this is changing; I think they're just continuing what they've done before. And I find this really, really scary. Can we do anything about it? Well, not really. I do agree, by the way, that if what they're fighting for is only to prevent a patchwork of regulations at the state level, that's actually the right approach. But on the other hand, right now the federal level is giving them a free hand to do whatever they want, which is also not a healthy situation.

Staying on governments, there are conversations right now between Sam Altman and the UK Secretary of Technology, Peter Kyle, about potentially providing nationwide premium access to ChatGPT Plus for every citizen of the United Kingdom. This would make the UK the second nation to do something like this; the first agreement of this kind that OpenAI signed was with the UAE. The UK is obviously significantly bigger, and this could be valued at around 2 billion pounds if the agreement moves forward. Secretary Kyle is pushing very aggressively for the UK to stay ahead of the AI curve, and that's his play to make it happen. It'll be very interesting to see if this actually evolves and moves forward. But it takes me back to what I'm saying all the time: giving people access to ChatGPT for free doesn't mean they will know how to use it effectively, or that they will know how to use it at all. As I mentioned last week, 77% of ChatGPT users use it as a replacement for Google, so they actually enjoy very little of what generative AI has to offer. It can do absolute magic, and all you have to do is see the faces of the people I teach in workshops to understand that. Using it to search the web is not the best use of AI, and I think that's where governments and organizations need to invest the money: not necessarily in free access to licenses, but more importantly in training and education on how to actually apply the capabilities of these tools in effective ways.

A few interesting releases from this past week. Anthropic just launched a Chrome extension that allows Claude to see and control everything in your Chrome browser. That's obviously their play to combat OpenAI's ChatGPT agent, as well as Perplexity's Comet browser and Gemini in Chrome, which Google started rolling out as well. So this is their play to stay relevant in the browser-based agent game. There are serious rumors that OpenAI is about to release its own browser so as not to be dependent on Chrome. There's also the possibility that Google may be forced to sell Chrome, and we don't know who would buy it, which might be OpenAI or somebody else. But either way, the game is definitely on.
When it comes to agentic browsers, I must say that I've been using Comet for just over a week, and I've had mixed results with the things it can and cannot do. But the direction is very, very clear: in the very near future there will be no browsers without AI built into them, because it makes perfect sense for an agent to control the browser instead of us controlling it. And this week we also got an open-source tool that does the same thing: OpenCUA, from the University of Hong Kong with some additional partners, an open-source framework for computer-use agents (CUAs). They're claiming it rivals the quality of results of similar tools from OpenAI and Anthropic. So this space is growing like crazy, and I think we'll have to learn how to use these tools effectively on the one hand, and on the other hand find safeguards to make sure they're not doing things we don't want them to do. I think we're not there yet from a security perspective, but from a tooling and capabilities perspective we're very, very close to these tools being extremely powerful and effective in everything you need. Again, so far mixed results with Comet, but it was able to help me solve some problems that would've taken me hours to figure out, in just a few seconds.

Staying on new releases, OpenAI has taken gpt-realtime out of beta into full deployment. That's their latest and greatest voice model and the API that drives it. In addition to allowing any company out there to build on their new voice agent, it also comes at significantly lower cost than before: the price went down from $40 to $32 per million audio input tokens, and from $80 to $64 per million audio output tokens. They also added something that is at least as interesting, from my perspective: the new API supports MCP. For those of you who don't know what MCP is, it's a protocol that makes it possible to connect external platforms, ERP systems, CRM systems, marketing platforms, email, et cetera, to your AI environment in minutes. This basically means the new API can query, work with, and act on all the tools that currently have MCP servers, which is more and more of the components in your tech stack. Think about an AI voice model that can see everything in your ERP, your CRM, your marketing platform, et cetera, and respond and take actions based on your conversation with it, and you understand how impactful this could be for day-to-day business operations.

Maybe the most interesting announcement this week comes from Elon Musk, as xAI unveiled what they call Macrohard. Macrohard is a tongue-in-cheek play on Microsoft: macro is the opposite of micro, hard is the opposite of soft. But what it's supposed to do is replicate and surpass software giants like Microsoft using an entirely agentic AI solution. The idea is basically that since Microsoft doesn't produce anything other than software, a network of AI agents should be able to do the same exact thing and replicate the entire company, including the code generation, the marketing, and everything else. In his tweet, Elon Musk posted: Join xAI and help build a purely AI software company called Macrohard. It is a tongue-in-cheek name, but the project is very real.
In principle, given that software companies like Microsoft do not themselves manufacture any physical hardware, it should be possible to simulate them entirely with AI. How realistic is that? I don't know. It's not the first time we're hearing this idea, and it's not the first company talking about doing something like this. The biggest difference is that it's Elon, and he usually pushes the envelope on what's possible. Think about how many electric cars we had before Tesla, or the recent Starship launch that successfully did everything it needed to with the largest rocket ever built. Combine that with the fact that he can raise more or less any amount of money he wants for these endeavors, and with the fact that he was able to build the most powerful data center in the world in about four months, something that usually takes other companies nine to 18 months, and you understand that you cannot dismiss what Elon says he's going to do. He is usually really wrong on his timelines, but he's usually accurate on the final results. So it'll be interesting to follow what this new venture under xAI achieves, and how fast.

The other thing xAI did this week is open source Grok 2.5. That was their best model as of the end of 2024, so it's less than a year old and still a very capable model, and it follows the trend that started with OpenAI releasing its first open-source models. All of this, I think, is an attempt to compete with the really capable open-source models from China, and to offer an open-source alternative so the open-source world does not become owned by China. Both Qwen and DeepSeek have very capable open-source models based in China, and now we also have OpenAI and xAI releasing their previous models. Elon Musk also mentioned that they're going to release Grok 3 as open source within about six months. So basically, we're going to keep getting roughly year-old models as open source that companies can use and integrate for whatever needs they have.

Another very interesting release this week came from MIT. They've built a way to teach robots how to operate in different environments without sensors, purely by letting them watch videos. They're saying this strategy is, and I'm quoting, an effective alternative to lengthy manual programming and specialized sensors, which are often costly and complex to integrate. What it basically means is that you can take a robot, any robot, because this development is not specific to a platform, and teach it to do anything by letting it watch videos of people doing the same exact action. Now, the first thing that came to my mind is the combination of that with Google's Genie 3. Genie 3 is the open-world AI model that literally generates an entire interactive environment in real time using AI. What that means is that if you don't have videos of humans doing the task, or of the environment in which the robot needs to operate, you can create them in real time with a tool like Genie 3, and you can teach the robot how to operate in that environment potentially before the environment even exists. Think about building a new factory: modeling it, running it in Genie 3 multiple times, creating the videos of how to operate in that environment, and then feeding them into this new tool from MIT to teach the robot how to operate in that environment on day one.
This sounds like science fiction, but it is apparently currently possible.

Two interesting pieces of news from Meta this week. One is that several leading researchers, both from the old AI organization and from Meta's newly formed superintelligence lab, are leaving Meta. Three of the top names, Avi Verma, Ethan Knight, and Rishabh Agarwal, resigned within weeks of joining. Verma and Knight are returning to OpenAI after less than one month, and Agarwal, who joined Meta in April, is also leaving the company for an unknown destination. In addition, Chaya Nayak, Meta's director of generative AI product management, with nearly a decade at the company, is joining OpenAI. Do I think that reflects on the overall operation at Meta, the very aggressive way they brought people in, and the very aggressive reorg they've done there? Yes, absolutely. Does it reach a critical mass that will stop that organization from working? I think the answer is not yet. They did lose some critical people, and I said all along that just paying people lots of money to jump ship and then reorganizing them into roles they may not like is not a great way to build a team. That being said, I don't know if they could have done it in a better way, and they may keep just enough people to do what they need to do. So it will be interesting to follow what happens with this new group in the next few months, to see how many people stay, what they can develop, how many people leave, and so on. I will keep you updated as this story evolves.

More practical news from Meta this week: Meta just announced a licensing deal with Midjourney to integrate its technology into the Meta universe. For those of you who don't know, Midjourney has one of the most advanced image generation tools, and now video generation tools as well, and these are going to be integrated into Facebook, Instagram, WhatsApp, and other things Meta is going to develop. There were previous discussions of Meta potentially buying Midjourney. That has not become reality, at least not yet, but this new move will at least allow Meta to catch up with the latest Gemini model, Nano Banana, as well as ChatGPT's image generation capabilities and open-source tools like Flux.

And speaking of large companies using AI tools from other providers, Apple is reportedly exploring a partnership with Google to power the new Siri. We've covered the issues with Siri multiple times on this show. Apple is definitely far behind: we were talking about 2025, then 2026, and now potentially 2027, and I think they understand that's just not good enough. So right now they're in discussions about using Google Gemini as the engine behind Siri, potentially running on Apple's servers to keep data security at the standard Apple wants to deliver to its clients. Is that gonna come to fruition? I'm not a hundred percent sure, but it's definitely an interesting conversation. It's not the first partnership between these two companies, which are partners and competitors, depending on where you look. In my eyes it makes perfect sense: Google has really powerful AI capabilities, potentially the best in the world, and Apple is mostly a hardware company, so this partnership makes perfect sense to me. We'll see how it moves forward. I know Apple still has its push to actually do it themselves.
They might do that in parallel and just partner with Google for a while. The other interesting aspect, which nobody talked about but I find interesting, is that the US government is trying to force Apple to stop using Google as the default search engine in order to break Google's monopoly. And now, instead of that, they might bring Google's AI into the platform, which may or may not be blocked by the US government. It will be interesting to watch.

And two interesting updates from OpenAI and Microsoft. Fiji Simo officially started as CEO of Applications at OpenAI on August 18. This move was announced earlier this year, in April, when it was revealed that she would move on from Instacart and take that position at OpenAI, and now she has finally taken it. She's going to manage a big chunk of the company, overseeing executives like COO Brad Lightcap, CFO Sarah Friar, CPO Kevin Weil, and software engineering chief Srinivas Narayanan, as well as teams in marketing, policy, legal, and HR, basically taking over a significant part of the day-to-day operation of the company. Now, if you're asking what Sam Altman is going to do, well, he said, and I'm quoting: we have a big consumer tech company, we have this mega-scale infrastructure project for humanity, we have a research lab, and then we have all the new stuff, the robots, the devices, the BCI, and the crazy ideas. I can't run four companies; it's an open question if I can run one, but I certainly cannot run four. So what it feels like is that Sam is gonna be focused on the more strategic infrastructure and the new things the company needs to do, while Simo, who has an incredible track record of taking companies and operationalizing them so that they're, A, profitable and, B, running very effectively, is presumably there to do the same exact thing at OpenAI; she did this at Facebook and then at Instacart. So good luck to Simo.

Then Microsoft just released two new models, MAI-Voice-1 and MAI-1-preview. One is a voice model, the other is a text model, and Microsoft is going to slowly integrate them into different aspects of Copilot in order to reduce its dependency on OpenAI deliverables. How good are these models? Well, so far, not incredible. MAI-1-preview currently sits in 13th place on the leaderboard, but we gotta remember this is their first attempt, so it will be interesting to see how they improve over time. They definitely have the distribution and the test bed to learn very quickly and iterate, so we need to keep an eye on these models and see whether they can really replace OpenAI's or not.

That's it for this week. We will be back on Tuesday with a how-to episode, as I mentioned, most likely on how to use Nano Banana across multiple use cases. Until then, have a great rest of your weekend. Go check out our self-paced AI Business Transformation course; it can literally change your life and/or your business, and you can do it at your own pace. And keep on exploring AI, testing things, and sharing what you learn with the world; we can all benefit from that. Have an awesome rest of your day, and enjoy the long weekend.
