Leveraging AI
Dive into the world of artificial intelligence with 'Leveraging AI,' a podcast tailored for forward-thinking business professionals. Each episode brings insightful discussions on how AI can ethically transform business practices, offering practical solutions to day-to-day business challenges.
Join our host Isar Meitis (4-time CEO) and expert guests as they turn AI's complexities into actionable insights and explore its ethical implications in the business world. Whether you are an AI novice or a seasoned professional, 'Leveraging AI' equips you with the knowledge and tools to harness AI's power responsibly and effectively. Tune in weekly for inspiring conversations and real-world applications. Subscribe now and unlock the potential of AI in your business.
233 | ChatGPT porn chat is coming, Job quake: 100 million roles at risk this decade + Mechanize’s “total workforce automation”, the biggest lessons from AgentForce, AI cancer-fighting novel breakthrough, and more AI news for Oct 17, 2025
Are you prepared for an AI future where your job, your utility bill, and even your government are all impacted?
AI is reshaping the world faster than most business leaders are ready for and this week, things got real. From Bernie Sanders’ stark warning about 100 million lost jobs to OpenAI’s surprising shift into personality-driven chatbots and possible erotica, the lines between innovation and disruption are blurring fast.
In this news episode, Isar Meitis takes you on a deep dive into the top three stories shaping the AI-business landscape right now — with a rapid-fire roundup of everything else you can’t afford to miss.
In this session, you'll discover:
- Why Bernie Sanders’ AI job-loss report might be politically motivated and still dangerously real
- The true cost of data centers (think: a hundred thousand homes’ worth of electricity)
- Why bipartisan protests against AI infrastructure are gaining traction and why they may still fail
- How OpenAI’s new direction is raising eyebrows from erotica chatbots to creator IP fights
- Why Mechanize believes AI workforce automation is inevitable, and what that means for business leaders
- A16Z’s latest insight: Software ate the world, now AI is eating labor
- What Salesforce's “AgentForce 360” is really about and whether it can save them from irrelevance
- The parallel between AI’s IP wars and Napster’s battle with the music industry
- Why regulatory changes in California might be too little, too late
About Leveraging AI
- The Ultimate AI Course for Business People: https://multiplai.ai/ai-course/
- YouTube Full Episodes: https://www.youtube.com/@Multiplai_AI/
- Connect with Isar Meitis: https://www.linkedin.com/in/isarmeitis/
- Join our Live Sessions, AI Hangouts and newsletter: https://services.multiplai.ai/events
If you’ve enjoyed or benefited from some of the insights of this episode, leave us a five-star review on your favorite podcast platform, and let us know what you learned, found helpful, or liked most about this show!
Hello and welcome to a Weekend News episode of the Leveraging AI Podcast, a podcast that shares practical, ethical ways to leverage AI to improve efficiency, grow your business, and advance your career. This is Isar Meitis, your host, and like every week we have a lot to talk about. We are going to cover three deep dive topics today. One is going to be the impact of AI on jobs and the political aspect of it as it is evolving ahead of the elections that are just around the corner. We're going to talk about the new release by OpenAI, which is very interesting, and how they're getting a little complicated with the direction that they're taking right now compared to what they promised the world. And we are going to finish the deep dives by sharing what happened at AgentForce, Salesforce's big event that took place this week. And then we have a long list of really interesting rapid fire items, including new releases from Anthropic, from OpenAI, from Google. And we're gonna end with some really good news. I always like to end on a positive note, so if you don't have time to listen to everything, jump to the end to listen to the final segment, because you will find it very interesting: some hope for humanity of potentially curing cancer with AI. But that's all the way at the end, and we're at the beginning. So let's get started. A Senate Democrat report that was just released paints a very dark picture of AI's potential to disrupt the US job market, potentially taking away almost a hundred million jobs in the coming decade. This is a significantly higher number than any other projection that we've seen before. This report was spearheaded by Bernie Sanders, and he is highlighting an urgent need to create policies to protect workers against the AI, automation, and robotics that he says are going to take their jobs.
Now, the report itself was actually based on a ChatGPT-driven analysis of various jobs that exist today and AI's opportunity to actually automate them, across different types of AI, including humanoid robots and other robotics. And now I'm quoting Sanders himself: "Artificial intelligence and robotics will allow corporate America to wipe out tens of millions of decent paying jobs, cut labor costs, and boost profits." Now, the report argues that AI concentrates wealth and power, with tech executives and tech companies obviously leading this charge as they're investing billions of dollars to reduce labor costs and boost productivity. Now, on the other side of the aisle, Republicans are advocating that US leadership in AI is a necessity for the future dominance of the US, and they're warning that overregulating this field will cut our ability to be competitive and we may lose the race to other countries, mainly China. Now, the report by itself is obvious. The direction is very clear. The numbers are questionable because of the way the research was done, since it's just relying on ChatGPT analyzing the current state of jobs, but the direction is very clear: AI can take jobs. Yes. Is there a political aspect right now? A hundred percent. There are elections just around the corner. We are just a few weeks away. And job loss is a big thing as far as driving people to change the way they vote, and job loss is always a very good threat to dangle in front of people to scare them into making one vote versus the other. Now, the interesting thing here, again, to make it as broad as possible, because this is a political play, is that they have mentioned several different industries that would be hit very hard, and many of them are not tech jobs or leadership jobs. They mentioned fast food as an industry that's gonna be hit very hard, along with accounting and trucking. So you can see that many of these are blue collar jobs and not white collar jobs.
And the direction is very clear. Again, this is showing you that the report is trying to be as broad as possible in its impact on society. Despite the fact that I think we are a while out from humanoid robots serving drinks at Starbucks, the fact that they mention it as a very high likelihood that this is going to be dramatically disrupted, I think, shows that there's a very clear agenda behind this report. I'm not saying it's wrong, by the way. I think the timeline might be a little compressed; at the end of the day, they're talking about a decade, and there are basically going to be many other elections, and midterm elections, between now and the time it starts to be critical. But I'm not saying that it's totally wrong, meaning I do believe that AI and robotics will take a lot of jobs away. I think the system will have to change to deal with that. And we're gonna talk about other people's opinions, but before we get to other people's opinions, I wanna share my own opinion on the topic. My opinion is that the state or country level of AI adoption is very similar to the company-level decision making that has to happen, and I will explain. I just came back from MAICON, which is an AI conference, and it was fantastic, and I got to have a lot of AI related conversations with people. And because of this podcast, a lot of people know me and came to me to ask me big questions. Many of them are around job loss and what decisions companies can make to stop that. And I don't think we can, and I'll explain what I mean. I've said it multiple times on this podcast: if you are running a company, you cannot decide not to run fast with AI. I mean, you can, but what is going to happen? So here is my answer to these people when it comes to the impact of AI on companies and how they can approach AI without cutting jobs. There are many business leaders out there who truly, deeply care about the human aspect of their company, who truly, deeply care about their employees and want the best for each and every one of them. And they don't want to let any of them go. And I think this can last a relatively short amount of time. I mean, caring about them is obviously something that will last forever, but being able to keep all of them is not something that will last forever. If you can grow faster than the efficiencies that you're getting, meaning if you get a 30% efficiency gain and you can grow your company by 50% because you freed up time for your employees and they're doing higher level tasks and you've learned how to really leverage AI to grow faster than your competitors, then go right ahead, hire more people to do that. However, the total size of the pie, all the goods and services that we can sell in a specific industry, is finite. Will it grow with AI? Maybe in some industries. Definitely not in all of them. Let's say you're doing plumbing. The amount of plumbing needed in the US or in the world is the same. I mean, it's growing a little bit because we're building more homes, but nothing dramatic, nothing that changes from what happened in the past few years. And so once everybody finishes going through this transition, it will be very hard to take more market share, to take a bigger chunk of the pie, and then you're competing against other companies who are all using AI, maybe more effectively, maybe less effectively, but you won't be able to continue growing at the pace that AI's benefits enable. Your competitors will be able to sell the same goods and services you are selling at 10 or 20% less while still making more money, because their cost structure, thanks to AI, is gonna be significantly better than yours. And in that case, you have two options.
Option one is to lose your entire company, because you will run out of business, because your competitors are gonna be significantly more efficient and provide better and faster goods and services than you can. That means your company is done, and that means you're not taking care of any of your employees. The other option is to start cutting people to stay competitive, and then you can save some of the employees. I think the same exact logic applies at the national level. If we try to keep all the employees and stop AI, then other countries will become better and more efficient than us in literally anything. And then we are going to lose everything the US has built in the last 300 years, and definitely the dominance it has developed since the end of World War II. And so if you are excited by this move by Bernie Sanders, you gotta keep this in mind. I'm not taking sides here. I'm not saying these guys are right and those guys are wrong. I think the right way is probably somewhere in the middle, as I believe in many, many different cases when it comes to politics, because politicians are taking more and more extreme positions on more or less every topic. The reality is very clear, and as I mentioned, the same concept that applies at the company level applies at the national level: we will get to the point that we just don't have a choice. Now, what does that mean? It means, as I mentioned, that we'll get to the point where we don't have a choice, and it doesn't matter whether you're a Republican or a Democrat, we will face that fact. And yes, it may take three years, five years, ten years, but we will get to that position, and that means the entire system of how we live will need to change. Now, how exactly will it change? I don't think anybody knows at this point, but we will have to figure it out. More about that, from a slightly different angle, in the next few segments.
Now, staying on the impact of AI on the country, there is a nationwide backlash growing against AI data centers in multiple different locations, highlighting the cost and impact of these data centers on local communities. To put things in perspective, a typical data center consumes electricity equivalent to a hundred thousand households. Now, in most cases, it doesn't come with building new power plants to deliver the power for an additional hundred thousand households, which means it's using the existing grid and the existing power generation. In some cases it is using generators, which have their own issues, because they produce significantly more pollution than a large power plant, and definitely more than a nuclear power plant. Now, the largest data centers are 20x that, meaning 2 million households' worth of total electrical power consumption. In addition, they're consuming a lot of water. So in Mansfield, Georgia, Meta's data centers have left many residents having to buy bottled water for drinking, because they are claiming, and again, I didn't verify that, that the local wells became murky and the local water became undrinkable because of the amount of water that gets consumed by the data center. Now, is that, again, a political statement? How much of the wells becoming murky is really because of the data center versus just the overall population growth and the amount of rain and many other aspects? I don't know, but it doesn't matter, because this is the mindset that people have and this is what they feel that data centers are doing. Looking back at the electrical aspect of it, Mike Jacobs, who is the senior energy manager at the Union of Concerned Scientists (you can tell by the name what they are concerned about), found that data centers added $1.9 billion to Virginia households' electric bills in 2024 alone. Now, again, I wasn't able to find exactly how they did the math, but it doesn't matter.
And again, the reason it doesn't matter is that it is obvious, on one hand, that AI data centers are consuming a lot of electricity from the existing grid. That means there's more demand, which means prices are going to go up, and whether the number is $1.5 billion or $2 billion doesn't matter, because the direction is clear, and that's what people care about. And to show you how extreme this situation is right now: in Virginia's 30th House District, both Republicans and Democrats are campaigning against local building of data centers. And the quote from John McAuliffe, the Democrat who is leading this together with a partner from the Republican Party, and I'm quoting: "We need to make sure Virginians are benefiting, not just paying for it." Now, when was the last time you saw unified bipartisan support of anything in the past few years? This just shows you how critical this is becoming, again, especially going into elections, and how much politicians care about job creation and impact on lives when it comes to basically everything just before the elections. And so we're going to hear more and more about this in the next few weeks as the election comes up. Will the volume of this kind of protest continue after the elections? I'm not a hundred percent sure, but right now there are many, many fronts on which local organizations, communities, and so on are fighting against building data centers in their area. Now, do I think this is a justified battle by the local people of these different communities? I a hundred percent think the battle is justified. I do truly believe that data centers consume a lot of energy and a lot of water, and that will take its toll on the local population. Do I think the local people have a chance to win these battles? I think they can win a few, but I am a hundred percent certain they're going to lose the war. I'm not saying that I'm happy about it.
I'm just saying there's way too much money and there are huge interests involved, and there is a government right now that sees the race with China as critical. And so I believe that most of these data centers will get built in the locations where the companies want to build them. Again, good for the local population, by delivering jobs, et cetera, or bad for the local population? I think time will tell; we'll be able to analyze this a few years into the future, but right now, again, I think other than maybe a few local battles, the bigger companies are going to win. Now, continuing on the topic of the impact of AI on jobs and US society: a16z, which is one of the best-known and most successful VCs in California, and obviously deeply involved in the AI universe, has released a very interesting episode on their own podcast. It is a presentation by Alex Rampell, who is a general partner at a16z, which he delivered at their LP Summit, and it's titled Software Is Eating Labor. Now, first of all, a little bit on the title: Marc Andreessen, a co-founder of this VC, had an essay in 2011 called Why Software Is Eating the World, so this title is an homage, if you want, to that particular piece. And he makes some very interesting and troubling claims, but they all make perfect sense. So what he mentioned is that the current worldwide SaaS market is generating about $300 billion in revenue every single year. This is definitely a really big pile of money for AI to go after, but what he's also saying is that the US labor market alone spends $13 trillion per year on salaries. That's a much bigger pile of money, more than 40 times bigger. Now, Rampell does an interesting history review where he's basically showing that the growth of SaaS companies, or software companies in general, came from literally taking files and digitizing them.
He gave multiple examples, but just to name a few: Sabre, which is still the backbone of much of the way we book airline travel, and which was developed by IBM, literally digitized the paper slips that previously held all the information on which airplane takes off at what time and who sits in what seat and so on, and made it into a digital environment. Salesforce, and before that on-premise CRMs, basically took the notes that people took and the Rolodexes that people kept about sales and made them digital. And Epic, which is a huge company that holds most of the US healthcare records, basically took, well, guess what, healthcare records that were held in huge organizers, binders, and drawers in really, really large rooms, and digitized all of that. So that was the software and SaaS revolution, and what he's saying is that AI will come and perform the operations that are currently done by people. So it's gonna be booking and rebooking flights, it's gonna be drafting contracts, it's going to be negotiating on your behalf, it's going to collect payments by following up with people who haven't paid on time, et cetera, et cetera, without any human intervention. He gave some very specific numeric examples, and one of them was customer service. The traditional SaaS model charges per seat. So if you are using Zendesk at about $115 per seat per month for the professional suite, a large company will pay Zendesk about $1.5 million just for the seats to do your customer service. But if you have a thousand-person support team, the cost of that thousand-person support team is $75 million, compared to the $1.5 million of the software. However, if Zendesk provides you the ability to actually deliver the customer service itself, to answer phone calls, to answer chat, to answer emails, to close tickets, to connect the dots of everything the clients need, Zendesk can charge you significantly more, potentially 3x, at $5 million in software spend.
But you will not need the $75 million spend on people. Zendesk will make 3x the money they're making right now, and the company will be able to let go of a thousand people, because you will not need them to do customer service anymore. Now, in many cases, it's not gonna be an all-or-nothing kind of game. Rampell highlights as an example that nurses in the US are earning $650 billion annually. That's more than twice the entire global SaaS market. Now, can AI replace nurses? I don't see that as likely in the near future. However, it can replace some of what nurses do, such as follow-ups, prescriptions, or post-surgery pain checks, things that right now consume a lot of nurses' time and that they will not need to do in the future, which means we will need fewer nurses than in the other scenario. So it's not an all-or-nothing game, but it will still require us to have fewer nurses, or at least in the short term, it will allow us to overcome the need for additional nurses. Rampell also played an audio clip of a negotiation between two different voices that are negotiating and closing a deal on shipping costs for a shipment. And it's practically impossible to tell which one of these two voices is an AI versus an actual person, but it got to a final deal, meaning we're at the point that AI voice and AI understanding of the situation is good enough to have a voice negotiation between suppliers and the people who are buying these supplies, whether goods or services, and both sides in theory could be AI, and there's no reason why they wouldn't be. Now, beyond the cost savings of AI, Rampell stresses that there are many other advantages to using AI. So, a few examples that he gave. One is handling demoralizing roles. Think about doing collection calls. How many times do you get really angry responses? And that becomes your day-to-day. That becomes your life.
People just being angry at you and shouting at you and hanging up the phone on you and calling you different names. And now AI will do this, and the AI doesn't care. It will just keep on doing this again and again and again until it gets a response, and until you actually talk to it and hopefully pay what you owe. He also talked about regulatory compliance. At least in theory, the AI will always be compliant. It will never go off script. It will never do things that it's not supposed to do. It will never cut corners, and then you won't get in trouble as a company. He was talking about multilingual support, and the 24/7, 365-days-a-year aspect of AI that humans just cannot match. He was talking about seasonality. So how do you scale up a lot of your staff around Black Friday and the holidays, then let them go, and then have to hire and retrain people again in September next year to get ready for the holidays again? All of that goes away when you're using AI, and hence there's a very, very strong incentive to use AI, and that's why there's a very, very strong incentive for companies to develop AI as a solution to replace labor versus just to replace software. Now, you wanna take it more extreme? I'm going to remind you of a company that we talked about a few months back, and that company is called Mechanize. They raised a lot of money from a lot of very interesting people, and the goal of Mechanize, as they claim, is to automate the entire workforce. That's their goal. They're not hiding it. And they just released a paper called The Future of AI Is Already Written. The blog post that they released provides a provocative analysis of the future as Mechanize sees it, and they're basically saying that we don't have an option. They are claiming that the full automation of jobs is coming regardless of whether they do it or not, just based on everything that has happened in history before.
And they made several very relevant historical comparisons. The first one that they mentioned is that every time there's been a useful technology in the past, it was adopted regardless of what people thought about it or felt about it. The other thing that they mentioned is that in many cases in history, there have been parallel discoveries of new science, new tools, new technologies, and so on in different, independent places around the world. So before we had the internet, where everybody knows what everybody else is doing, it was still happening, and they're claiming it was happening because once something became feasible, it was discovered and then adopted in different societies and civilizations around the world. And they gave several different examples of that. One of them is that the Aztecs in the Americas discovered ways to do irrigation and use currency that were very similar to those of the Spanish in Europe, and that's before the Spanish came over and found them. So, very similar solutions in two different places around the world, and there are, by the way, many other examples of that. Another example is obviously nuclear weapons, which were developed and put to use in different places around the world despite the risks that come with them, and yet we know today that many, many countries in the world have nuclear weapons and definitely nuclear power. So what is the claim? The claim is that every time in history something became helpful and useful, however you define that, the world started using it at a large scale, regardless of what people felt about it and regardless of how much it challenged the equilibrium that existed before. And what Mechanize is saying is that the full automation of the workforce by AI tools, and then robots and so on, is basically inevitable, so they might as well be the first ones to do it and benefit from it. And hence, if you're looking for a job as a software engineer, go and work for them. That's the bottom line of this paper.
So what do I think about all of this? I sadly think that they are right, in the current way of things. And what do I mean by the current way of things? We now take for granted democracy, and we take for granted capitalism. Both of these things did not exist not that many years back in human history. Now, this sounds like a catastrophic change of the world as we know it, but it doesn't have to be. If you think about the life that everybody knew in the Renaissance, that everybody knew in the Middle Ages, that everybody knew in the Stone Age and so on, all they knew was that way of life, and change always looked really scary. And yet, in each and every one of these cases, what came afterwards was significantly better. The vast majority of people today in the world have significantly better lives than more or less the entire population of the world 200 years ago, unless you were extremely wealthy. And that is just 200 years; you go back beyond that and you get my point. Now, do I think this has to follow the same trend? No. There are huge risks and we don't have a clue what will follow, and I think UBI is bullshit, but will we find a solution that might enable us to live better, more fulfilling lives than we have today without having to work for a living? I don't know. I would like to stay optimistic based on history, but I do see very, very significant risk, especially in the shorter term, because in the shorter term we will not find a solution. This technology and its adoption, even though the adoption is significantly slower than the technology itself, will put us in a situation where we will have to face realities that we're not used to facing. There's gonna be a lot of unemployment and there's gonna be a lot of turmoil from a social perspective, and this will lead to something different in the long term. But in the short term, I think we're heading into some really turbulent times.
And from that to the second topic: the recent releases by OpenAI and how they're being perceived by others. We'll start with the announcement from this week, and I'm quoting Sam Altman's post on X, or tweet, or whatever it is called now that it's X and not Twitter: "We made ChatGPT pretty restrictive to make sure we were being careful with mental health issues." And that obviously relates to the suicide that they were sued over, and other situations in which they were really harming people who were using ChatGPT and got the wrong advice. So this is me now continuing the quote: "We realized this made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right." So this goes back to much of the turmoil and the negative feedback that became very, very loud as they released GPT-5 and took away GPT-4o, where a lot of people said they lost their best friend and their advisor, and their personal relationship with the AI was completely destroyed, because GPT-5 doesn't have a personality, and a lot of that was done on purpose. So now, back to the tweet: "Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases. In a few weeks, we plan to put out a new version of ChatGPT that allows people to have a personality that behaves more like what they liked about 4o (we hope it will be better). If you want your ChatGPT to respond in a very human-like way, or use a ton of emojis, or act like a friend, ChatGPT should do it (but only if you want it, not because we are usage maxing). In December, as we roll out age-gating more fully and as part of our 'treat adult users like adults' principle, we will allow even more, like erotica for verified adults."
Okay. First of all, let's talk about what happened on X and on Reddit as a response to this announcement. A lot of people went very, very negative, with somewhat nuclear responses to this announcement. The general consensus was that OpenAI said in the beginning, and then later on as they're trying to shift into a for-profit organization, that their main goal is making the world a better place with AI, and yet now they are giving us another social media platform, one that might be even worse than the existing social media platforms (and here they're talking about the release from last week of Sora 2 as a social app), and a porn chat. One user on X said, and I'm quoting, "I don't remember Pornhub claiming they're going to make the world a better place." Another user shared a video interview of Sam Altman from 10 weeks ago, in which he said he's really proud of OpenAI for not falling into the trap of trying to capture people's time and attention on ChatGPT, but actually trying to provide as much value as possible. And when pushed by the interviewer for an example of what that means, he literally said that he's very proud that they still did not create a sex chatbot on top of ChatGPT. And here we are, 10 weeks later, when they are building an app that will capture as much of your time as possible and releasing a sex chatbot on top of ChatGPT. Now, all of that is true, but I wanna give you my personal opinion. I shared at length last week what I think about the social media app that they released with Sora 2. It scares the hell out of me, and it scares the hell out of me because it is the first time that a social media platform can generate the actual content. If you think about it, yes, humans are asking Sora to generate it, but Sora can generate it on its own. It doesn't need you to tell it.
All it has to do is follow your behavior and, as I mentioned last week, potentially use the camera, seeing your face and your expression and what you're about to do, and then serve you a new piece of content that is perfectly aligned with your personal needs, wants, and wishes at that exact second. Now, is the technology there right now to do this? Not quite, because it still takes a little while to render the videos, but this is what's coming, and that is really, really, really scary: personalized, fabricated content tailored to the wishes and needs of individuals, completely ignoring everything else, just to keep you on the platform as long as possible. This is a glimpse of where this is going. But the flip side is the adult chat question. I have no problem with that aspect, as long as they can really verify that it is adults. Now, we have had porn sites since more or less the beginning of the internet, and it doesn't matter whether I agree with the fact that we have porn sites in modern society or not. This is the reality: they are there and they're available. And in most cases, any 13-year-old kid that has a phone, or 9-year-old kid or girl that has a phone, can access them, because there's no real verification of age when you go to most of these websites. By the way, a friend told me that; I don't really know it myself. And the other aspect of this is that I have a very serious issue with OpenAI, Anthropic, Google, or any other large language model lab being the moral guardians of the world. Why should they decide what is acceptable and not acceptable, especially across the entire world, with different societies, with different beliefs, with different moral codes and so on? Why should they decide what is right and what is wrong?
So I have no problem with them allowing people to do whatever they want within what is legal, not within what they think is moral, because this is how modern society works: we treat adults as adults and we allow adults to make adult decisions. We allowed it during the internet era, and there's absolutely no reason we will stop doing it in the AI era. And I would love to hear your opinion about this. I would love to hear the opposite side of this conversation. So find me on LinkedIn. Again, you know my name, I'm Isar Meitis, I'm on LinkedIn every single day, and tell me what you think about it. I'm obviously going to post about it as well, so we'll have the opportunity to respond to that. This wasn't the only issue with OpenAI in the last couple of weeks. As we mentioned, they released Sora 2, and Sora 2 has an opt-out approach from an IP perspective. So you can create videos of any person, any known actor, any character from cartoons around the world, et cetera, and if companies or people do not want to be included, they need to send a formal request to OpenAI saying that they want to opt out of their likeness and IP being available on Sora 2. That obviously raised hell from many different directions, especially the Hollywood creators. The Motion Picture Association, agencies like WME, CAA, and UTA, which represent many, many different big stars, and SAG-AFTRA as well have all been going very loudly against this approach by OpenAI. Their claim is that OpenAI does not own the rights to any of that IP and hence should stop this immediately and replace it with an opt-in option, where companies will be able, if they want, to add their IP into the Sora universe, but that the way it is right now is a clear infringement of existing copyright law. Sam Altman said a few things.
One of them is that there are different organizations complaining to OpenAI that their IP is not being shown enough, basically seeing it as promotion. Is that true or not? We will never know, but that was the claim that Sam Altman made. The other one: he said that they're going to be releasing an update to Sora 2, offering the rights holders of this IP, and I'm quoting, "more granular control over character generation," whatever that means. But this is potentially coming around the corner. As a reminder, Disney, Universal, and Warner have sued Midjourney for very similar issues. And when they did, I told you it's a very smart move, because they have a much better chance of beating Midjourney in court than OpenAI, because OpenAI has significantly deeper pockets to fight them. And if they win against Midjourney, then you have a precedent that you can use in a trial against OpenAI or Veo 3, et cetera. Now, how will this resolve itself? I think this will make its way to the Supreme Court, because I don't think any of the sides will stop before that. And I'm not sure that the Motion Picture Association or any of the other groups, and they're probably unified to go at this against OpenAI and the other labs together, will win. The reason I don't think they will win is because this has been the norm forever, meaning the user of the tool is the one responsible for whatever they are creating. If I now use Canva, copy pictures of known characters off the internet, and create an ad that I run on social media without getting the copyrights I need to create that ad, and let's say I've used Disney characters, is Disney going to sue me, or are they going to sue Canva? They're going to sue me. They're not going to sue Canva. Now you can take it further back in time.
When people were creating images with paints and pencils: when I create an image and I sell it, and let's say I'm making a lot of money from it, and it's an image of Mickey Mouse, again, sorry for sticking with the Disney example, I'm from Florida, that makes it very easy for me to think about. But let's say I created a really beautiful painting of Mickey Mouse and I'm selling it. Will Disney go after me, or will they go after the people who created the paints, or Hobby Lobby for selling me the paint? No, they're going to go after me, the person who is using the tool in order to infringe on their copyright. I have a very strong feeling that this is exactly the card that OpenAI and the other labs are going to play. And to be completely honest, I think it's the right interpretation of the situation. The fact that I have a tool that allows me to break copyright law doesn't mean I have to use it to break copyright law. And again, this is nothing new. I could have done this in Canva. I could have created the video before, just with a lot more work. I could have created the images before, just with a little more work, using Canva and existing pictures from different places around the internet. So this is my view of it, and stay with me for a minute, I promise it'll make sense: I think this is a cross between Spotify and Web 2.0. Let me explain what I mean. Think about Spotify as we know it today, as the place we consume music, and it doesn't have to be Spotify, this could be Apple Music or Google Music or Alexa or wherever it is that you consume music through streaming. This is the current state of things, and everybody is more or less happy with it. The creators have a huge distribution channel they never had before, and they're making money every time their songs are played.
Their music is played to a very wide audience and people get to know them, and because they get to know them, they buy tickets to their concerts and so on and so forth. Is that very, very different than what it was before? Yes. If you remember, we used to buy albums and CDs from these creators, and the first generation of distributing music digitally was the scariest thing that ever happened to them. Those of you who are old enough, like me, remember the peer-to-peer services, the P2P services that allowed people to exchange music online, and the BitTorrent files and servers where you could go and get literally any song you wanted, without paying anybody anything, just by downloading it. And then obviously the biggest name that many of you probably know is Napster. Napster took that concept, which was kind of an underground thing, and made it into a company where you could go to their website and get any music you wanted, completely pirated. It took a while, but it all went away, and we found a business model and a usage pattern that satisfied both sides, I think. Now, what does that have to do with Web 2.0? Well, Web 2.0 was the first time that anybody could create content on the web. Again, those of you who are old enough to have lived through the early days of the internet remember that the entire content of the internet was created by really large companies. There was no social media and you couldn't post anything; you just consumed content off the internet. For those of you who are younger, that doesn't make any sense, but that was the first decade of the internet: no individual created content and posted it online. But now it is the norm. So what I hope will happen, and to be fair, what I think will happen, is that the owners of the IP and the creators of the platforms will find a way to satisfy everyone's needs.
In other words, I will be able to create with AI anything that I want, and I will pay a little bit of money to the AI platform, and some of that money will go to pay the owners of the IP of the things that I am creating, because the platform will be able to track it and distribute the revenue accordingly. This means there are going to be more Disney characters out there, more songs by specific musicians, more pictures by known artists, and so on and so forth. And the creators will make money off of it, they will become more famous, and people will then consume more of their stuff. Now, does that put the overall mechanism at risk, because then people may consume only this? Maybe. Do I think there is a way around this? No, I don't think there's a way around this, and the reason is the open source universe. Even if we block Sora 2, the open source tools will allow people to do the same exact thing. And even if you block this variation or that variation of an open source tool, any individual can download the open source platform to their own computer, remove the limitations on IP creation, and then use it as they wish. And so I don't think there's a way to block it, and I think the IP holders will have to work together in this new universe, just like they did with Napster and the BitTorrent servers. And we will get to the Spotify era of creation of everything. So instead of just consumption, anybody will be able to create anything, and if there is IP involved in it, you may need to pay a little more to create it, and part of that money will go to the owners of the IP. Now, will this happen? I don't have a clue. Does it make sense to me that it will happen? Yes. Do I want it, do I wish it will happen? I do. I don't know about the big owners of the IP right now, but again, as I said, I don't think they have much of a choice.
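To make that revenue-sharing idea concrete, here is a minimal sketch of the kind of accounting a platform could do, assuming it can attribute each AI generation to a rights holder. This is entirely hypothetical, no platform has announced this exact mechanism, and all names and numbers below are made up for illustration:

```python
# Hypothetical sketch: splitting a royalty pool pro-rata among IP
# owners, based on how many generations used each owner's IP.
from collections import Counter

def split_royalties(generations, royalty_pool):
    """generations: list of IP-owner names, one entry per generation
    that used that owner's IP. Returns a dict of owner -> payout."""
    counts = Counter(generations)
    total = sum(counts.values())
    return {owner: royalty_pool * n / total for owner, n in counts.items()}

# e.g. three generations used Owner A's characters, one used Owner B's
payouts = split_royalties(["A", "A", "A", "B"], royalty_pool=100.0)
# payouts -> {"A": 75.0, "B": 25.0}
```

The hard part in reality is not the arithmetic but the attribution, reliably detecting whose IP appears in a generated video, which is why the labs and the rights holders would have to build this together, Spotify-style.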
Now, staying on this topic of Sora 2, Japan's government just issued a call for OpenAI to, and I'm quoting, "refrain from copyright infringement" in Sora 2, related specifically to anime characters, which they defined as, and I'm quoting again, "irreplaceable treasures" that Japan boasts to the world. They're emphasizing what they're calling the irreplaceable role of anime in global culture and economy. So where do we stand right now on OpenAI? We stand in a situation in which, within two weeks, they literally went from "we are all for the future good of humanity, we are not going to do things that just try to keep you on the app as long as possible" to, as I mentioned, giving us a new, really scary social app and potentially the ability to have porn chats on ChatGPT. I already told you what I think about it. But since we mentioned a government, in this case the Japanese government, approaching OpenAI, I will mention one more thing in this segment that is not completely related, but is tied to regulations: California just passed Senate Bill 243, which was signed into law on October 13th. What this bill does is require chatbots to clearly notify users that they are AI and not human, and it mandates annual reports on the safeguards they have in place for harms such as suicidal ideation. The idea here is obviously twofold. One is to make it very, very clear to users that they're talking to an AI versus a person, in order to reduce potential dependency on, or misplaced trust in, these tools. The other is to create a way to monitor what's actually happening on those platforms, especially when it's life-threatening, meaning suicide prevention efforts. Now, do I agree with that, and do I think it will play a significant role in the future? I obviously agree with it, right? It puts reasonable guardrails in place without really stopping the technology from moving forward.
And I think it makes perfect sense. Do I really think it will play a significant role in doing what it is trying to do? I think time will tell. I think telling people it's an AI versus a human doesn't matter. People who chat on these platforms know it's AI, they don't think it's a human, and even if you keep reminding them, it doesn't matter, because if you develop emotions as an outcome of a conversation, the fact that somebody tells you those emotions came from something that's not human will not make a difference. We get emotional when we watch a Disney cartoon movie. Just think about how you felt when you saw Bambi for the first time, the fire happening and what happened to his mother, right? We shed a tear and we become very, very emotional, and we know it's not real, it's a cartoon on a screen, and yet it drives an emotion. If that emotion continues for a long period of time, you will develop some kind of emotional attachment to that thing, regardless of whether it is human or not, despite the fact that you know it's not human. So I think the law is not going to prevent that; it has no chance of preventing that, just because of the way we are wired. As for showing people what the impact is, and forcing companies to share publicly the overall situation from a social impact perspective, especially at its extremes, I think that is very, very important, because it will force companies to take the right measures to prevent these things from happening. And now to the third deep dive of this week, and that's what happened at Agentforce this week. Agentforce is what used to be Dreamforce, Salesforce's largest conference of the year, where they make all their biggest announcements. They changed the name last year because they wanted to show the world that they're all in on AI and focused on developing agents. Now, why is this very important, and why is it a deep dive topic?
Because it connects directly to the first segment we had about how AI will change software and how we work. At the first Agentforce, Marc Benioff was very loud and vocal about how fast this shift is going to happen, how quickly it will take over the world, and how generic AI tools like ChatGPT are bullshit and have no value in the real world. This year he acknowledged reality and completely shifted his rhetoric. He is now stating that utilizing AI effectively and implementing it at an enterprise level takes time for companies; it's not a quick flip of a switch and then you're up and running. He also agrees with what is, I think, a consensus: that AI technology is definitely outpacing consumer adoption. And as I mentioned, he acknowledged, to an extent, the real competition. That was kind of the clear underlying tone: it is very obvious that Salesforce sees the moves by OpenAI and Anthropic as a threat to their business, as those labs grow their influence and integrate into more and more tools. It's now more or less an "if you can't beat them, join them" kind of approach, versus "this is bullshit, it will never be relevant, you have to stick with what we are giving you." Now, Salesforce shares are down 26% year to date, and a lot more than that since their peak on January 28th, which is definitely not a good sign as far as what people think about the chances of Salesforce figuring out how to deliver value and grow sales in the current state of things. What Marc Benioff suggests is that Salesforce's competitive advantage lies in its deeper customer relationships and the fact that they hold, and I'm quoting, the "fundamental backbone of their mission critical operations," end quote, which led them to announce the new version of their product, which they call Agentforce 360. So what is Agentforce 360?
It is basically their way to integrate everything Salesforce, all the different components, under one umbrella from a data perspective, allowing users to build agents that can connect and talk to all these tools at the same time, or as they're calling it, the connective tissue that brings together Sales, Marketing, Commerce, Slack, Tableau, MuleSoft, and everything else Salesforce under one agentic layer. This is obviously an extremely powerful promise, if it is actually doable. So let's talk first about the goal of Agentforce 360, and then about what is actually included in it. What they are creating is an end-to-end agentic enterprise solution with all the data and all the different tools that they have, which is a very powerful promise that I think every enterprise using Salesforce, or any other set of tools, would like to have. Now, is it complete? No, it's not, because there are many tools organizations use that are not Salesforce. You still have an ERP, you still have Outlook or Gmail to send your emails, you still have databases on other platforms, and you have other tools. But at least for the stuff that is within the Salesforce universe, it is supposed to unify all of it. So what comes with it? It comes with many different tools. One of them is called Agent Script, which lets users write scripts that create agents. There are many different ways to create agents; I'm not a hundred percent sure of the differences yet. Underneath all of this are the reasoning models from Anthropic, OpenAI, and Gemini, which can power your agents to be very capable because of their thinking capabilities. They also created Agentforce Builder, another new tool, and most of these tools are going to be released in beta in November, so they're not released yet.
This tool allows users to build, test, and deploy agents from a single place. They are also releasing Agentforce Vibes, which lets you vibe-code different applications that talk to all these underlying tools, capabilities, and agents. They're also releasing Data 360, a new variation of Data Cloud, which is an intelligence layer for their data platform that gives agents access to clean, reliable data and context for everything you're building. And they're also releasing more than 300 industry-ready agents, so stuff you don't have to develop; you can just take them off the shelf. The goal is to make them plug-and-play for different industries like healthcare, financial services, and manufacturing. Now, about that "if you can't beat them, join them" point: if you go back to last year's rhetoric about OpenAI and ChatGPT versus what they announced this year, you see that there's a very, very big difference. What they announced is that ChatGPT users who have Salesforce licenses can now connect their ChatGPT and give it full access to everything in the Salesforce app data, including CRM records, and it can control Agentforce tools straight from ChatGPT. Or, as they put it, and I'm quoting, ChatGPT will be able to "tap into powerful enterprise grade AI, right at their preferred surface environment." Basically, you'll be able to use ChatGPT to control, understand, and connect with everything in Salesforce. Now, they have positioned Slack as the home for all these agents. So basically Slack becomes your communication hub, with the agents behind the scenes. Any communication you want to have with any data will be done just like you're doing it today, but instead of talking to humans, you will talk to agents behind the scenes. I think this makes perfect sense.
Instead of trying to introduce a whole new way of communicating, let people communicate as they're communicating today, and just provide them significantly more value through an existing platform. I think that's a very smart move by Salesforce. I already have several different integrations of AI into Slack, not natively, just stuff that I created, including with n8n and Make, that talk into specific channels I have in Slack. It's very useful and intuitive, and making it work out of the box makes a lot of sense. Now, the current statistics are not very promising, which might be one of the reasons why their stock keeps coming down this year. They currently have 12,000 customers that are using or experimenting with Agentforce, and we don't really know how many of them are in a mature phase of actually using it. That sounds like a pretty large number, but the reality is they have about 150,000 customers. So after over a year of trying to deploy this, they've reached less than 10% of them, and again, we don't know how many are just experimenting rather than actually deploying Agentforce. That's not a very positive sign as far as adoption of this new technology. But again, Agentforce 360 is a whole different beast, both in terms of its ability to connect to the data and in terms of the organization's ability to easily create and deploy agents, so this might change things moving forward. Now, to show how amazing Agentforce is, they gave multiple examples of large organizations who are already deploying it and getting benefits and immediate ROI, including Dell, FedEx, Williams-Sonoma, Pandora, PepsiCo, and others. So they are showing specific cases in which large organizations have deployed it and are seeing real ROI.
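The kind of homegrown Slack integration I mentioned can be as simple as posting AI output into a channel through a Slack incoming webhook, which is a real Slack feature, though the sketch below is illustrative and the webhook URL is a placeholder you would get from your own Slack app's settings:

```python
# Minimal sketch: post a message into a Slack channel via an
# incoming webhook. The URL in the example is a placeholder.
import json
import urllib.request

def build_slack_payload(text: str) -> bytes:
    """Build the JSON body that Slack incoming webhooks expect."""
    return json.dumps({"text": text}).encode("utf-8")

def post_to_slack(webhook_url: str, text: str) -> int:
    """POST the message to the channel wired to this webhook."""
    req = urllib.request.Request(
        webhook_url,
        data=build_slack_payload(text),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:  # Slack replies "ok" on success
        return resp.status

# Example call (placeholder URL, do not use as-is):
# post_to_slack("https://hooks.slack.com/services/T000/B000/XXXX",
#               "Agent run finished: the weekly report is ready.")
```

Tools like n8n and Make wrap exactly this kind of call in a visual workflow; a native Slack-as-agent-home, like Salesforce is proposing, just removes the glue work.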
We're obviously going to see more and more examples like this moving forward, but as of right now, I think Salesforce is making the right moves to provide value. And if we connect this to the very first segment, in which we shared that in theory Salesforce will be able to be your sales team and your marketing team, then if they move in that direction, and if they do it effectively, they can not just save Salesforce, but make it one of the most successful companies in history. Time will tell if they are going in that direction and how effective they will be in actually deploying it successfully. But they definitely have the opportunity, because they own the data and they're connected to everything across many, many different companies around the world. Those of you who have taken my course know I have my five rules for success in the AI era, and one of them is that there are two ways to win in the AI era, one of them being having access to and owning proprietary data. In this particular case, it is very clear that Salesforce owns the proprietary data of a huge number of companies around the world, which puts them in the perfect position to leverage that data to allow these companies to be more effective with it, which may or may not come at the price of those companies' employees. Now to our rapid-fire items. And I'm going to start with Apple, which took another hit this week. Ke Yang, who was just recently promoted to head Apple's AKI team, has left Apple for Meta after just a few weeks in the new role. The AKI team, which stands for Answers, Knowledge, and Information, was tasked with enabling Siri to fetch real-time web data and handle conversational queries, basically making a smarter, AI-based Siri. So the person who was just put in charge just left for Meta. This is just another nail in the coffin of Apple's AI initiatives. It literally just doesn't look good, again and again and again.
And this will eventually come back to bite them and will start impacting Apple's stock price. Because yes, it's been going up so far, and I don't really understand why. Yes, they can release new devices, and the devices are okay, I guess better than the previous models, but this is not the new game. And in the new game, they have lost time and time again. I'm really surprised they haven't bought one of the leading labs, or at least Perplexity, which was in conversations with them, in order to solve this problem. I don't think they will have a choice. The other option would be to say, "We are not in the AI game at all," basically just announce it and say, "We are going to keep on building some of the best hardware on the planet, and we will integrate other people's AI into it in the most effective way." I think that would take a short-term hit on their stock, but in the long run, that may be another viable approach: to basically say, we're not even trying to compete on this, we will just keep building hardware, that's what we know how to do. Or, like I said, go and buy Anthropic, which is going to be very, very costly, but doable for Apple. But that's not the only person who has left a known company to join Meta this week. Andrew Tulloch, one of the co-founders of Thinking Machines Lab, the company that was founded by Mira Murati from OpenAI and raised a crazy amount of money before even announcing exactly what product they're going to release, well, he's leaving for Meta as well, for, as a spokesperson for the company said, "personal reasons." Now, this is not Meta's first attempt to bring him over to their team. The first attempt was in August, with a crazy offer that was presumably about $1.5 billion in stock and performance incentives over time. Meta said that figure was inaccurate and ridiculous when the Wall Street Journal article shared that amount, but at the time, Tulloch refused the offer, whatever it was.
It obviously was very significant, and now he is moving over. So those personal reasons might be "I'm going to become a billionaire overnight," which is a good enough personal reason to change from one company to the other, especially when the other company already has a product, distribution, and a lot more compute than your current company. Now, a little bit of history on why Tulloch is so important: he spent 11 years at Meta's Facebook AI Research, then at OpenAI, and then obviously at Thinking Machines. So he brings serious depth, experience, and knowledge about Meta's universe back into the team. In a very interesting piece of news for this week, OpenAI and Broadcom announced a multi-year partnership to deploy 10 gigawatts of custom AI accelerators for OpenAI. So these are now custom chips. This is not OpenAI buying standard off-the-shelf chips from Nvidia; these are going to be built based on OpenAI's blueprint, and deployment is supposed to start in late 2026 and continue through 2029, with new investments coming from sources undisclosed at this point. This is going to be the first time that OpenAI will have its own processor, after trying, and announcing that they're trying, to do this several times in the past. Now, this is a really broad partnership, with Arm designing specialized CPUs to pair with these Broadcom chips, TSMC fabricating the chips on their latest process technology, and SoftBank obviously playing a role in financing the whole thing. So a very large partnership that is built, in part, to hedge against the complete dependency on Nvidia for future growth. We have seen other similar partnerships announced by OpenAI in the past few weeks with other companies, but this is the first time the chip is going to be developed to a blueprint that OpenAI puts together. But OpenAI is not the only company that made such an announcement this week.
Meta is partnering with chip design company Arm to create AI capabilities and AI chips tailored to Meta's needs. This obviously aligns with the crazy development of multiple new data centers right now. Meta itself has several different large projects. The two biggest ones are Prometheus, a huge multi-gigawatt data center that is supposed to go online in 2027 in Ohio, and Hyperion, a five-gigawatt facility in northwest Louisiana slated for completion by 2030. Having their own custom-designed chips will obviously allow them to do things their way, reduce cost, and reduce dependency on other companies. I think we're going to start seeing more of that across the board, with the large labs developing their own chips in order to have less dependency on Nvidia and third-party providers. There was a very interesting article this week in The Information, which is an online magazine that I really like; they bring a lot of good food for thought and information from different sources, including about AI. They're reporting on the huge dilemma that Amazon has right now. I shared with you last week that Walmart signed a multi-year agreement with OpenAI, becoming the first really large company to allow checkout of any of its products on the ChatGPT platform. Basically, you can search for any product Walmart carries on ChatGPT, buy it, and check out right there and then, without ever visiting the Walmart website or app. This took Walmart's stock up almost 5% on the day of the announcement, which at Walmart's valuation is a crazy amount of money. At the same time, Amazon's stock fell 1%, and if you look at a longer period, Target's stock is down 34% year to date. So where does that put Amazon? Amazon today is definitely the kingpin of e-commerce in the US; they hold about 40% market share of US e-commerce, which is a crazy number. And now they need to make a choice.
They either do the same thing that Walmart did and allow people to shop Amazon right in ChatGPT, or they leave ChatGPT's 800 million weekly users, a number that is growing every week and will probably be a billion-plus as we get into 2026, able to shop only from Walmart with no access to Amazon, which will definitely eat into their market share. The biggest problem I think Amazon has with this is that a big part of Amazon's revenue doesn't come from the revenue share they get on every specific product, but from ads on the platform. People who want to promote their products on Amazon pay a lot of money for placement, and if Amazon goes down the path of allowing people to shop Amazon products on ChatGPT, and obviously later on other platforms as well as via agents, they will lose part or maybe all of that ad revenue over time, which is definitely not good from a stock price perspective. Now, how will this turn out for Amazon? I don't know. It will be very interesting to see how they play this. I will say something about Amazon's previous attempt at this kind of shift. If you think about it, before Amazon, most people were shopping for goods by searching on Google, and many still do, and then going from Google to the different sites. Amazon was able to change that paradigm for many, many people in the world, myself included, and definitely in the US: people go to Amazon to look for stuff, and they're not looking on Google at all. So it's maybe the only case in the Western world where people go to a platform other than Google to search for stuff. Can Amazon do this again in the OpenAI and agent era? Time will tell. I would argue that in the long run they don't stand a chance, because once everybody's used to using their own agent to do everything for them, and when I say everything, I mean everything digital.
Then they will go to that agent to find stuff to shop for, and that agent will have to find it on the internet, and Amazon will not have a choice but to shift to that model. But in the short term, it will be very interesting to see how they play this. Now, last week I shared with you that OpenAI is very clearly going after world domination across literally everything they can put their hands on. Well, this week it became known that they are pitching many companies to partner with them on "Sign in with ChatGPT." So, similar to how we have "Sign in with Google" or "Sign in with Microsoft" or "Sign in with Amazon" today, with the biggest one obviously being Google, you will be able to sign in to third-party platforms using your ChatGPT account. Now, connect that with last week's announcement of apps inside of ChatGPT, and you understand where this is going. If by default you sign in to applications with ChatGPT, and behind the scenes you can connect to those apps and use them within the ChatGPT ecosystem, that will evolve more and more. Then OpenAI has its fingers in more and more places, and they become the solution people go to for basically everything they need. There are similar solutions in China, if you think about it: single large apps that you can use for everything, including payments and logins and transportation and literally everything you want. And it seems like OpenAI is moving in that direction in a very aggressive way. Connecting back to what we talked about in the beginning: Zendesk is moving very aggressively to develop agents on the Zendesk platform, and at their AI summit they shared that their new autonomous AI agent is designed to independently resolve 80% of the customer support issues they are seeing right now. So think about it.
They have the data of exactly what is happening from a customer service perspective across a huge number of companies that use the Zendesk platform, and now they can build agents to handle these situations, built to the spec of exactly what the needs are. That will allow them, going back to what we discussed in the beginning, to become the workforce for customer service rather than just the SaaS platform that holds the data for customer service. Now, in addition to being able to handle that 80%, they're also rolling out copilots that will allow people to collaborate with AI on the remaining 20% to deliver customer service resolution faster, better, and cheaper. They've also announced an admin layer to control all the agents, voice-based agents, and a lot of analytics in the backend to show exactly what is happening. So whether we like it or not, this is coming, and it is coming very, very fast. And even if they're exaggerating by 100% and it only solves 40% of use cases across the entire Zendesk user base, it is still very, very significant, and it will obviously improve over time. A company called DFINITY (spelled with a capital D, then "finity") has just released Caffeine, which is another vibe coding platform. They're claiming it is slightly different from the existing vibe coding platforms; they have had their own development language for a while, so this is not a new company, and the goal is to allow people to very quickly build web applications and deploy them with an enterprise level of certainty and security. Whether that is really different from what we have right now, I don't know, but it's another vibe coding solution, and we are going to have a bunch of these be very successful. My go-tos right now are mostly Lovable and Replit, but there are many others out there, and this is just another contender in this very hot part of the AI market. Staying on new releases:
Elon Musk's xAI just released Grok Imagine version 0.9, an image and video creation platform. The interesting thing about it is that it generates videos extremely fast from either text or images; from a speed perspective, you can generate video on Grok faster than on any other platform out there. And the very initial examples I've seen show very solid realism and motion as far as the quality of the video it generates. Now, I shared with you in the last couple of weeks that Grok 4 Fast, their smaller, faster model, is currently the most efficient model more or less across the board. From a speed and cost-to-value perspective, they are alone in the top-right side of the quadrant, able to deliver value fast and cheap, and this is just another example of that, only on the image and video creation side of the universe. Will it be able to compete with Sora 2 and Veo 3, the next generation of tools? Time will tell, but they're definitely doing the right things to position themselves as the most cost-effective, fast solution in the AI race as of right now. And speaking of Veo 3, Google just unveiled Veo 3.1, a jump forward from the previous model, which is now really, really old: it was released just a few months ago. They actually released two models, Veo 3.1 and Veo 3.1 Fast. Just like the previous version, they are available through the Gemini API, Google AI Studio, and Vertex AI, as well as in the regular Gemini and Flow apps. Veo 3.1 provides enhanced audio and visuals compared to the previous model. It lets you use up to three reference images to create character and style consistency across videos and shots. It includes the ability to extend scenes, so you can take an existing video and extend it over time, up to a minute or longer, by generating new clips that continue the original clip.
So it basically allows you to create longer and longer videos, which is something most of these tools did not know how to do. You had to create separate cuts and then bring them together in an external editor; now you can create much longer sections and segments, which was not doable before, definitely not at this quality. It also creates a smooth transition between the first frame and last frame of a video and knows how to work with them, including the audio that has to come with it. And there are a few new features that did not exist before on the tooling side of things, with the ability to crop and zoom and do things that were not possible before. That goes back to what I said many times before: we are getting to the point where these tools are all really, really good, and what's going to make the difference is not necessarily the marginal difference in the quality of the output, in this case video, but how easy, effective, and useful the tooling is going to be. Meaning, how easy will it be for a user to get the output they want, versus how good the quality is, because the quality is going to be more than acceptable across all these different tools. These are the first moves by Google in that direction. Now, the cool thing: if you want to learn how to use Veo 3.1 very quickly and get very powerful capabilities, Google also released a Veo 3.1 prompting guide. You can go to the guide, copy and paste examples, and use their five-part prompt formula to create incredible outputs that look completely realistic, or not realistic, but basically look however you want. There's going to be a link to that in the show notes and obviously in our newsletter as well.
Now, Microsoft is rolling out a big update to the Copilot app on Windows, and the idea is that it will be able to seamlessly produce new documents out of thin air instead of just helping you edit existing ones as before. This includes full Word documents, Excel spreadsheets, and PowerPoint presentations, as well as, later on, PDF files, directly from chats and prompts, basically bypassing the current apps. This is released for initial testing and is still not widespread yet, but this is the direction it is going. So the dream that I thought would come true sometime in late 2024 might finally happen in 2026: one unified solution for Microsoft and one unified solution for Google, versus standalone apps that are somewhat useful for very, very specific use cases but definitely not as useful as they could be if they could access everything across the board. I am really excited to see it finally coming together. As a Google user myself, I'm obviously more interested in the Gemini side of the unified platform, but there's definitely a huge audience for a unified Copilot that can get to all your data across everything Microsoft and create outputs across the entire Microsoft universe. Hugging Face did an interesting analysis of everything currently available on the platform. They released it on October 13th, and it covers the platform's 50 most downloaded open source entities, which account for over 80% of all downloads on the platform. So the 80/20 rule works, right? Just 50 entities account for 80% of the downloads across the thousands or tens of thousands of things you can currently download from Hugging Face. They come from 20 companies, 10 universities, and 16 individuals, and together they drove 36.45 billion downloads.
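As a quick aside, this kind of top-50 concentration number is easy to verify yourself if you ever pull raw download counts. Here is a minimal Python sketch; the helper name `top_k_share` and the toy data are my own for illustration, not Hugging Face's actual figures:

```python
# Hypothetical sketch: what share of total downloads do the top k entities hold?
def top_k_share(downloads, k=50):
    """Fraction of total downloads captured by the k most-downloaded entities."""
    ranked = sorted(downloads, reverse=True)
    return sum(ranked[:k]) / sum(ranked)

# Toy, made-up counts with a long tail (not the real Hugging Face data):
counts = [1_000_000 // (rank + 1) for rank in range(5_000)]
print(f"Top 50 of {len(counts):,} entities hold {top_k_share(counts):.0%} of downloads")
```

On the real platform data, the 80/20 claim from the report would simply show up as `top_k_share(counts) > 0.8`.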
Now, small models are crushing it, with 92.48% of model downloads going to models with fewer than 1 billion parameters. That shows you that while we're all watching and chasing the big models, under the hood, people building applications on open source tools actually prefer much smaller models because they're fast and cheap to run. From a modality perspective, computer vision is at 21%, audio is at 15%, and multimodality, meaning tools that can do all of them, is at 3.3%; I think it's just not as developed in the open source world yet. On top, obviously, are text models, and English still dominates the languages of these models. US entities lead in the number of entities on the list, way more than everybody else, but this was the first month ever that open source Chinese models were downloaded more than US models. What does that tell us? It gives us an idea of what's actually happening behind the scenes when developers choose open source models for the things they're developing: there's a very lively and healthy open source environment, and China is rising very aggressively in that field, on purpose. All their big releases have been open source, and they have positioned themselves as the leader in open source model development. And now for the final and really exciting news that I promised you for the end of this episode: a Google Gemma-powered AI tool, custom created for this purpose, has made a breakthrough that could potentially provide a new path for cancer treatment. I am not a scientist, and I'm definitely not an expert on cancer, but I'll try to explain what I understand from reading this paper.
So the model is called C2S-Scale 27B, and it is designed for single-cell analysis. What it's trying to do is the following: apparently there are two types of tumors. There are "hot" tumors that are visible to the immune system, and "cold" tumors that evade detection by our immune system and T cells. The idea was to come up with chemical compounds that would trigger these cold tumors so they can be detected by human T cells and hence attacked earlier in the process, potentially helping fight cancer in earlier stages. What this model was able to do is identify specific candidate compounds that could do this, checking over 4,000 different options and then recommending the ones most likely to work. The compounds the AI found were then tested in an actual lab with actual cells and were proven to actually work. This is a true new scientific breakthrough with the potential to make a huge difference in human life, something AI was able to do perhaps for the first time ever. Now, is this really going to help us cure cancer? That is unclear; this was one building block out of many, and there are a lot of other pieces in this puzzle to take it from a concept to an actual solution that can be used. But what it shows is that novel scientific discovery can be done either completely by AI, or definitely with the assistance of AI, which can then help solve some of the biggest problems and issues we have in our world today, such as cancer, other diseases, global warming, power consumption, et cetera. So kudos to Google for creating this model and working with scientists on this, and let's really hope that together we can create a better future with AI. We will be back on Tuesday with another how-to episode, where we'll teach you, with some of the best experts in the world, how to do something in your business with AI.
If you are finding this podcast helpful, please hit the subscribe button so you don't miss any episodes. Do it right now: pull up your phone and do it, and while you're at it, share it with other people. There's a share button on your podcast player; just click on that and add a few people you know who can benefit from this. They will learn a lot, the world will have a better chance of being successful with AI, and I will really appreciate it. Until next time we talk, have a great rest of your weekend.