Leveraging AI

237 | OpenAI goes for-profit and targets a $1 TRILLION IPO, GitHub & Cursor ignite a multi-agent coding war, Canva’s FREE Affinity suite ambushes Adobe, AI-generated music war heats up while lawsuits loom, and more AI news for the week ending Oct 31, 2025

Isar Meitis Season 1 Episode 237

Is the AI boom about to reshape the global power structure — and your business model with it?

OpenAI just made its long-anticipated leap into the for-profit world — triggering billion-dollar investments, a trillion-dollar valuation trajectory, and a redefined power dynamic with Microsoft. But that’s just one headline in a week filled with seismic shifts in the AI ecosystem.

From Anthropic rolling out memory (finally) to Claude, to Canva dropping a free Adobe killer, to ChatGPT hitting 1M weekly suicide-related chats — this episode is a full-throttle ride through the wild frontier of AI, business, and ethics.

Whether you're leading a team, shaping policy, or trying to future-proof your strategy — this is the episode you can’t afford to skip.

In this session, you'll discover:
 - Why OpenAI's restructure gives the nonprofit surprising control — and what that signals for governance
 - The trillion-dollar IPO roadmap (yes, trillion) and why SoftBank just doubled down
 - Microsoft's 27% stake in OpenAI and what it reveals about their AI dominance strategy
 - OpenAI’s urgent call to the White House: “Electricity is the new oil”
 - How Claude is challenging ChatGPT with memory, portability, and frictionless context
 - Canva vs Adobe: Why a free Affinity suite might shake up the design software world
 - Music industry disruptions: OpenAI enters the AI-music ring with Suno in its sights
 - The billion-dollar legaltech boom and what it means for professional services
 - Crypto-trading AIs: Which models are winning with real money — and which are tanking
 - Why Google's new AI Earth tools could save lives (or at least predict cholera outbreaks)
 - What the rise of agentic browsers and multi-model orchestration means for your stack
 

About Leveraging AI

If you’ve enjoyed or benefited from some of the insights of this episode, leave us a five-star review on your favorite podcast platform, and let us know what you learned, found helpful, or liked most about this show!

Speaker 2:

Hello, and welcome to a Weekend News episode of the Leveraging AI Podcast, the podcast that shares practical, ethical ways to leverage AI to improve efficiency, grow your business, and advance your career. This is Isar Meitis, your host, and we had an explosive week of news. I could have probably made this a three hour episode without any problem, without running out of stuff to talk about, but I will try to keep it down to less than an hour. We're going to cover OpenAI transforming into a for-profit organization and what that means to the world, to investments, and so on. We're going to cover lots of interesting new releases and updates from OpenAI and from Anthropic as well. We're going to cover some head-to-head clashes across several different industries: huge announcements in the code-writing AI universe with interesting releases from Microsoft, Cursor, and Windsurf, a head-to-head collision in the creative space with interesting releases from Canva and from Adobe, and this was also Q3 earnings announcement week in the tech world, so there's a lot of stuff to learn there. Huge investments across several different companies, so we really have a lot to talk about. So let's get started. As I mentioned, the biggest news of this week is that OpenAI has finalized its long-awaited shift to a for-profit organization. Now, if you remember, and we covered this across multiple episodes of this podcast, this hasn't been smooth sailing. There have been a lot of organizations that were trying to slow or prevent this process, which included open letters from leading influential people across multiple industries, including the AI industry. It included some very serious conversations with the Attorneys General of California and of Delaware. It included a lawsuit and a counter-lawsuit with Elon Musk. So this hasn't been a smooth transition, but I said all along, and you can go back and check every episode where we talked about this, that this would prevail, because there's too much money and too much political interest and too much international impact to prevent this from happening. The question was how exactly it would turn out. So I must admit it turned out way more balanced toward safety and general public benefit than I thought it would, which I believe is good news for everybody. So we're going to cover a little bit of what the new structure looks like, a lot about what that means for the future, and a little bit about what it means for the relationship with Microsoft, which was their biggest early investor and backer. So let's dive into the details. First of all, the nonprofit OpenAI Foundation retains 26% ownership with a warrant for even more shares as the company grows, which is very important because it means they're going to retain control over some very critical aspects, based on the conversations and the release of the details, both by OpenAI and by the Delaware Attorney General. So if I need to summarize the outcome, I will use the words of Bret Taylor, the chairman of OpenAI, who said in a blog post: we believe that the world's most powerful technology must be developed in a way that reflects the world's collective interest. Well, that sounds very much aligned with the original agenda of OpenAI, which we know this move is not completely aligned with. But I must admit, after reading the notes from the Delaware Attorney General, I feel much better about the direction this has actually taken in the end.
So Kathy Jennings, who is Delaware's Attorney General, released a formal statement of no objection to OpenAI's for-profit restructuring. Here are some of the details from that letter. First of all, the nonprofit retains full control over the future. There are now two separate bodies. One is called the OpenAI Foundation, also known as the NFP, and it holds the sole authority for appointing and removing the PBC board directors. The PBC is the Public Benefit Corporation, which is the for-profit arm. So the directors, the people who will make the final decisions on what OpenAI is going to do in the future, will be appointed or removed by the nonprofit foundation, which I believe is a big deal. The second thing, which is also a big deal, is that the Public Benefit Corporation directors, so the for-profit side of OpenAI, must ignore shareholders' interests when deciding on safety or security issues involving AI models. That's a huge deal. This means that these people have a fiduciary responsibility to put safety ahead of shareholder interest, despite the fact that they are directors of a for-profit organization. Now, the additional interesting and important part is that the nonprofit side's Safety and Security Committee stays completely independent, and it retains the power to stop model releases even if internal risk thresholds are met, meaning the people who make the final decision to stop a release of a model can do that regardless of what it means for profits and regardless of what it means as far as internal testing, if they believe it generates a risk one way or another. Now, the nonprofit organization retains perpetual access to the for-profit organization's AI models, APIs, research, IP, and employee support to advance its mission to benefit humanity, which again is very, very important. Within one year, the nonprofit board must include at least two directors who do not serve on the for-profit board, which again provides it with the obvious goal of making it an independent body that can make its own decisions, which I believe is very important. And the Attorney General must get advance notice of any major governance changes, for even more oversight. So what does all of this mean? It means that the Attorney General actually took significant steps in order to hopefully ensure the safety of future releases of any models coming out of OpenAI, as well as making sure that the nonprofit will push as much as possible for benefits to humanity above the needs of the for-profit organization. Again, I must admit, this is more than I thought we were going to get, and to put things in perspective, all the other labs, like Anthropic and definitely the Chinese companies, do not have these kinds of measures in place, right? So from that perspective, it actually puts OpenAI in a safer-for-humanity position than any other lab out there that is developing leading, cutting-edge new models. So what does that mean for the future of OpenAI? Well, first of all, it means that they can now freely raise significantly more money, and they will need that money because they are planning to spend crazy amounts on infrastructure. More on that in a minute. The first piece of news that came out immediately after this announcement is that OpenAI is eyeing a $1 trillion valuation IPO sometime in 2027, with the goal of raising $60 billion in that initial public offering.
Now, to put things in perspective, there are currently only four companies in the entire world with a valuation of over $1 trillion. These are Nvidia, Microsoft, Apple, and Alphabet, also known as Google. That is it. So if this actually goes forward, it will make OpenAI, a company that nobody had heard of three years ago, the fifth most valuable company in the world. To give you another perspective, again, if this moves forward, and these are very early stages, the largest IPOs in history have been the Saudi oil company, Aramco, which raised $25.6 billion at a $1.7 trillion valuation, Alibaba with a $21.8 billion raise at a $175 billion valuation, and SoftBank with $21 billion raised at a $64 billion valuation back in 2018. So from a valuation perspective, this is going to make it the second largest IPO ever, after Aramco's IPO in 2019, and by far the highest in money raised, if they can actually raise these $60 billion. Based on the raises they were able to make in the private markets, that should be an easy peasy thing for them to do. So from every perspective you look at this, this is going to be an incredible IPO, and I'll be very surprised if it doesn't happen, because the amount of money they're planning to raise cannot be raised in the private markets, even though they've been very successful in raising very significant amounts so far. The rumors are talking about filing in the second half of 2026, with a listing possible in 2027. Now, that being said, an OpenAI spokesman said the following: an IPO is not our focus, so we could not possibly have set a date. We are building a durable business and advancing our mission so everyone benefits from AGI. As I mentioned, I will be extremely surprised if an IPO is not on the relatively near horizon, just because of the crazy amounts of money they're trying to raise, aligned with the crazy growth that they're seeing. Their annualized revenue run rate is supposed to hit $20 billion by the end of this year. But at the same time, they are growing their cash burn at an even higher rate, which means they have to raise these crazy amounts of money. The immediate infusion of cash that came as a response to this is that SoftBank's board has greenlighted the remaining $22.5 billion of its $30 billion pledge, which was pending on them converting to a for-profit organization before the end of the year. Now that this was done, they actually got the rest of the money, which has brought OpenAI's total funding to $41 billion so far. Now, that sounds like a crazy amount of money, but they have spent $16 billion on compute in 2025 alone. That is just compute, not including salaries and other stuff, and it's supposed to grow to about $40 billion in 2026, which means if they don't raise additional funds between now and then, they will run out of money, or be very close to that, depending on how much their revenue grows at the same time.
Again, these numbers are staggering, and even just this recent number, $22.5 billion, is more money than almost any company has raised in history, as I mentioned in the IPO section before. That being said, the pace at which they are burning cash is growing at the same time. They are expected to burn through $8 billion in cash this year and $17 billion next year, and that obviously depends on how much they're actually going to spend on compute and infrastructure combined with how fast their sales are going to grow. So these numbers are conceptual. The $8 billion is probably pretty accurate; the $17 billion next year will depend on a lot of things that we may or may not know yet. So in the immediate future, OpenAI just got access to $22.5 billion. But there's another aspect, a big aspect, which is the deal with Microsoft. Microsoft was the early big investor in OpenAI, and without them OpenAI probably would not exist the way it exists today. They invested $1 billion initially and, over time, over $13 billion. And they had a very interesting relationship as far as capped profit sharing with OpenAI, because OpenAI was a nonprofit organization, and that had to be replaced with a different kind of partnership as a for-profit organization is being formed with assets and ownership by different investors. And I must admit this was very interesting to watch, because there are a lot of interests involved. One of the big things in the original agreement was that OpenAI could stop providing Microsoft with any models once they independently decided they had reached AGI. The agreement also capped Microsoft's access to these models at 2030. Both of these things were not very good for Microsoft. So the final agreement that they have reached is actually very favorable to Microsoft, I believe. Let me share with you some of the details. First of all, Microsoft will own 27% of OpenAI at the current $500 billion valuation. That's $135 billion on an investment of $13.8 billion. That's not a bad return, so we're talking about a roughly 10x return in less than four years from the initial $1 billion investment. In addition, this has fueled a 2.5% surge in Microsoft shares that has now pushed their total valuation to over $4 trillion in market cap, which by itself makes the initial investment look like peanuts. As far as who can declare AGI and what that means, OpenAI retains the ability to declare it has achieved AGI, but that cannot prevent Microsoft from accessing the models through 2032. So that's a two year extension. In addition, the AGI declaration will have to be approved by an independent expert panel, which reduces OpenAI's power to declare it even further and lowers the risk to Microsoft significantly compared to before. Now, from a cloud compute perspective, I believe this is a very balanced deal that supports the needs of both companies. OpenAI is now free to raise capital and sign deals with rivals like Amazon and Google for non-Azure web services, which was problematic before, but they are committing to buying $250 billion in Azure cloud compute, which basically means that Microsoft is still gonna get the lion's share of future compute spend from OpenAI. An interesting point that Satya Nadella shared: he said that when he made the initial investment of $1 billion in OpenAI, Bill Gates, the founder of Microsoft, told him that he was making a really bad bet, and the quote is, yeah, you're going to burn this billion dollars.
Well, that is not how it turned out. This turned out to be worth $135 billion right now, and it has grown the market cap of Microsoft significantly because of the huge investments in Azure that OpenAI has put into it. And if the IPO goes forward and they're actually valued at a trillion dollars, that means Microsoft's share will be worth more than $250 billion, which is more than most companies on the planet. So overall, I think both companies can be happy with this new agreement, and putting it behind them puts OpenAI in a very interesting position of running full speed ahead with nothing holding them back as they vie for global dominance. And to do that, one of the things they did this week is release a letter with an urgent call to the White House declaring that electricity is the new oil in this new global race to achieve very powerful AI, and warning that without massive new energy investments, the US will fall behind China. So, putting things in perspective with some information from that letter: China added 429 gigawatts of new power capacity in 2024 versus only 51 gigawatts in the US. That's more than 8x the amount of electricity capacity that China added compared to the United States, and OpenAI is calling on the US government to commit to 100 gigawatts of new energy capacity every single year between now and 2030, with the idea that electrons are the new oil. This is the exact framing from OpenAI's blog post, which positions electricity as a strategic national asset and not just a resource to power homes and industry. And again, the quote is: electricity is not simply a utility. It's a strategic asset that is critical to building the AI infrastructure that will secure our leadership. Now, for most of us these numbers mean nothing, all these gigawatts, so to put things in perspective, a CNBC analyst said that 10 gigawatts equals about the annual power usage of 8 million US households, which means what OpenAI is urging the government to do, 100 gigawatts per year, means the capacity of 80 million homes' worth of electricity added to the US grid every single year. That is obviously very significant. I must admit that with the current administration, I would not be surprised if it moves in that direction. Maybe not at the full capacity that OpenAI is trying to push for, but definitely moving to significantly more power. And with the current administration, it may come from any company that is willing to build electrical capacity, regardless of whether that's positive or negative for the planet, and regardless of the sources or the resources. I think that's what's going to happen, and I'm not necessarily happy about it. That being said, I do see the logic of this being a significant aspect of the future race to hold the most powerful technology the human race has ever developed, and hence there's a reasonable argument behind it. I just really, really wish this would go a lot more toward clean energy, which in the immediate phase would come mostly from nuclear and then maybe some of it from solar and other green sources. Now, in an open Q&A session, Sam Altman revealed some interesting things about the future of where AI is going and what they are claiming when it comes to how fast research in the AI field, or in any field, can accelerate. Sam Altman said that they're going to have an intern-level AI research assistant by September of 2026. That's less than a year from now.
That means someone who can work at OpenAI at a junior capacity in AI research. The more profound projection is that they're going to have a complete, legitimate AI researcher, a system that can autonomously deliver significant research projects, by 2028. At that point we're basically hitting the singularity, because AI will be able to develop new AI, independent of human input, at a faster and faster pace. Or as Jakub Pachocki, who is OpenAI's chief scientist, said: we believe that it is possible that deep learning systems are less than a decade away from superintelligence. So basically, the systems that we have today will allow us to get to superintelligence in less than 10 years, and we are already into those 10 years and counting. Another thing that he mentioned that was interesting is that future models will dedicate entire data centers to single problems. Basically, when there are really, really big, humanity-level, global problems that we want to solve, we can dedicate an entire data center to that one problem, allowing the AI to research and get to solutions as an independent research lab running entirely within the capacity of that data center. That is a very interesting approach, and it sounds like we're not too far from that future. And to connect that back to the initial conversation, as far as the nonprofit side of this, the nonprofit side of OpenAI has pledged $25 billion for AI research toward curing diseases and advancing safety. That sounds like a really small amount of money compared to the crazy amounts that are being thrown around, but it's still $25 billion that OpenAI has committed, which other companies have not, to solving really big global problems and advancing the safety of AI, which I think is great news. Now, speaking of safety, another thing that was revealed by OpenAI this week is that every single week, over 1 million active ChatGPT users discuss explicit suicidal planning or intent. This is 1 million people every single week who are discussing potential suicide with a chatbot. Now, while this number is really, really large, it's 0.15% of the 800 million users that it has. But what it is showing is a very interesting phenomenon, where a lot of people who are potentially considering suicide, who are definitely not in a good place right now, who may or may not have had an opportunity to discuss this with anyone before, either because they were ashamed, or they were afraid, or they do not have access to mental health resources, whether because of where they live or because of budget constraints, can now consult with something, not necessarily someone, about their situation. That being said, it also makes it very, very scary. And OpenAI, I must admit, is investing a lot in that direction. They have partnered with over 170 mental health experts to refine ChatGPT's responses, and they're claiming that GPT-5 is significantly better at dealing with these kinds of situations and any mental health crises that are shared in its chats. As you probably remember, there's an ongoing lawsuit from the parents of a 16-year-old who died by suicide after confiding in ChatGPT, which has led OpenAI to take a lot of different measures, including improving its age prediction to protect children, including the parental controls that they released a few weeks ago, including new safeguards for long-conversation resilience, and so on and so forth.
So I'm very sad that that event actually happened, but if 1 million people right now are considering suicide, and ChatGPT can prevent them from doing that and hopefully provide them with the right guidance or connect them with the right human assistance, then I think this holds great promise for mental health globally. And as the research around it evolves, if we can provide mental health support for anybody who has internet access for 20 bucks a month, or free in many cases, I think that is a very big win for mental health around the world. That being said, there is a lot to ask about whether it is the right thing to put the future mental health of the world in the hands of an AI. Is that a good idea or not? I must admit, I don't know. I'm on the fence on that, just because of what I said before. I think it can provide great support for a lot of people who cannot get support otherwise, and by that measure it is very, very good. Whether it will be able to deliver the right support is a question that I'm still asking myself. But the same question can be asked about human psychologists and psychiatrists, who cannot guarantee success either. They are trying the best that they can as well. So again, I'm still on the fence on that, but I'm leaning toward this being a great opportunity for mental health in our world. The one thing that I will say that wasn't mentioned in this article, and that really troubles me, is that they are clearly saying they made a lot of progress to make GPT-5 significantly better and safer. However, if you remember, after all the snafu of the GPT-5 release and the elimination of all the old models, they brought those back so users can continue to chat with GPT-4o, and probably a lot of people will, because of the limits on GPT-5, at least on the free version. So there's a whole backdoor to this: we now have a much safer model that everybody can use, but a lot of people cannot use it as much as they want unless they pay, and the fallback will be the model that is not as good at handling these problems, which I find problematic. Can OpenAI find a solution for that? Potentially. If GPT-4o can identify the situation and then route automatically to GPT-5, or something like that, there are ways they can solve it. I haven't seen them discussing it or even mentioning this situation anywhere. Continuing to some more tactical aspects of OpenAI, since we've already talked a lot about them. First of all, they shared something that is extremely powerful, which is shared projects. Previously this was only available to Business, Enterprise, and EDU plans, and now it is available to everyone, including free licenses and obviously the Plus, Pro, and Go users all around the world, across all the different platforms: web, iOS, and Android. So what the hell is this? If you haven't used ChatGPT Projects so far, you are missing out. Projects allow you to create a small bubble, a small universe of context. If you want ChatGPT to know more about a project you're working on, about a client you're working with, about a plan that you have, about a trip that you're planning, whatever the case may be, you can start a new project. You can upload multiple documents and add instructions to it to explain what this project is all about. Again, it doesn't have to be a project; that's just the name that ChatGPT gave it.
So you can now share these projects with anyone you want, which is amazing, because you can now work with coworkers at work, or with your spouse at home on a trip that you're planning, and so on and so forth, while sharing this universe of context with them, so you can get more custom, more specific answers and results aligned with the information in that project. I am really excited about this because I'll be able to do it with many of my clients and many other people in the AI space who are collaborating with me, and I think this is incredible. I'm very happy about this new release. OpenAI also announced five key updates to the recently released Atlas agentic browser. These include tab grouping for different profiles, which is really, really cool. So instead of logging out and logging in with different profiles, you can have different tab groups with different profiles. You can have one for work, one for personal, and one for whatever. It doesn't really matter. That is definitely a very helpful feature. There is going to be a whole overhaul of bookmarking and shortcuts. There are going to be more features in the sidebar, including the ability to select models right there from the different ChatGPT versions. They're going to improve the @-mentions for richer context across different tabs, so you don't have to copy and paste information from one tab to the other; you can just use better @-mentions than they had before. They are dramatically improving the speed of agent mode, which is really, really slow right now. If you've tried it, you will pull your hair out before it actually does something. So it's supposed to start working significantly faster, and they're updating many other small things for what they call everyday essentials, including an ad blocker and other small fixes like integrations with password managers, et cetera, et cetera. So lots of new things are coming to Atlas, and as I mentioned when it was released, this is one more big step toward global domination that OpenAI is taking, and a very big risk to Google Chrome and other browsers out there. Not necessarily because OpenAI has a better agentic browser than they do, though they might, especially compared to Chrome right now, but because they have 800 million active users, and that number is growing dramatically, and I'll be really surprised if they don't unify all these different environments, so those 800 million users don't have to go to a different browser because ChatGPT becomes their browser. Another interesting piece of news from OpenAI this week was new pricing for Sora. Right now, if you're using the Sora app, you can create X number of videos per day for free and then you get capped. In this new scenario, you'll be able to generate additional Sora videos by paying $4 for every 10 videos that you generate. This basically moves it to a model similar to consuming tokens over the API, which makes a lot more sense for OpenAI. You want to use it more? No problem, just pay for what you use versus use it for free. This may allow them to provide more people with access. Right now it's iOS only, or through the API on different platforms, and it is by invitation only. So while it's been one of the fastest, maybe the fastest growing app ever, that is nothing compared to what it could have been if it wasn't by invitation only and if there was an Android application as well.
But if you think about the fact that they just don't have enough GPUs to support it, this will allow them to at least get paid for the effort and then dedicate more GPUs, because now they're getting paid for it. Or as Bill Peebles, the head of Sora at OpenAI, said on X: eventually we will need to bring the free gens down to accommodate growth. Basically what he's saying is that right now, yes, you can generate 30 videos per day, but that will most likely go down, and you will start paying for more and more of what you are generating, which I believe is perfectly fine and fair. Now, if you remember a few weeks ago when Sora was released, I had a big discussion about this; I was trying to imagine the future of rights for likeness and IP when it comes to generating these videos. And the announcements this week, both on the music and the video generation side, are definitely hinting in the right direction. So another thing that Peebles also said on X is, and I'm quoting: we imagine a world where rights holders have the option to charge extra for cameos of beloved characters and people. Now, speaking of cameos, there are two very interesting slash disturbing aspects of the new Sora model. One is that Cameo, the company, is suing OpenAI in California federal court for infringing its brand by using the concept of cameos, now with deepfakes instead of actual people. For those of you who don't know Cameo, the company allows you to request that celebrities and well-known people record short videos for you, and pay them for that. This could be birthday wishes, announcements, literally whatever you want. And now Sora is using the name cameo, which is obviously an English word that is not necessarily associated with the company, to do the same thing, just with avatars. That being said, Cameo, the company, owns a trademark on the concept of a cameo for doing exactly what Sora is now doing, so that is now an open lawsuit. OpenAI obviously dismissed it and said: we are reviewing the complaint, but we disagree with these claims and will defend our view that no one can claim exclusive ownership over the word cameo. But one of the problems with the cameos in Sora is that it's an opt-out system, meaning you can create a replica of anything you want, whether it has IP protection or not, without asking for permission. And if you are the owner of those rights, you have to go to OpenAI and request to plug out of the matrix; otherwise your IP or your cameo can be used for anything. And there's been a big issue this past week with unauthorized deepfakes of Martin Luther King Jr., as an example, without asking his estate for any kind of permission. I find this very, very problematic. But this is the reality right now. And now go fight OpenAI, which has deeper pockets than most in the world right now, definitely deeper than the people who would want to go and sue them. But it also puts a very big target on their back to get sued, so I don't see that stopping anytime soon. And two last short things about OpenAI. OpenAI signed an agreement with PayPal to bring PayPal's agentic commerce into the ChatGPT universe. So starting in 2026, PayPal will be integrated into OpenAI's Agentic Commerce Protocol, also known as ACP, and users will be able to tap into PayPal balances, bank accounts, and credit cards to make purchases on the OpenAI platform.
It will also bring in all the inventory from PayPal's seller network, which has over 10 million sellers in apparel, fashion, beauty, home improvement, electronics, et cetera. So all of these will become available overnight in the OpenAI environment, and you'll be able to, A, see them, and B, pay with PayPal for them in a safe way. Now, this is part of a very big charge by PayPal into the agentic AI universe. About six months ago they signed a similar agreement with Perplexity, and they're also working with Google on their agent payment protocol, so they are definitely putting their hands into all the different cookie jars right now. But as part of this agreement with OpenAI, in addition to this really interesting commerce deal, all 24,000 PayPal employees gain access to ChatGPT Enterprise across everything that they're doing, including obviously Codex for coding and everything else that OpenAI can provide to everybody else in the organization. Overall, huge benefits to both OpenAI and PayPal in this new partnership, and I hope that as a whole it will be good for us consumers as well, because it will give us other options and abilities to get access to deals and buy stuff online in a secure way while using agents. And the last piece about OpenAI, which will help us transition to the next topic, is that OpenAI is quietly building an AI capable of generating full music tracks from text and/or audio prompts. For those of you who are not aware, there are two leading apps, and a lot of other smaller apps, in that space already: Suno and Udio. And to put in perspective how big this market is, let's talk specifically about Suno for a minute. Suno just hit $150 million in annual recurring revenue, which is 4x what it was last year, which means, if you reverse engineer their $10-a-month to $30-a-month tiers, they have on the order of a million paying subscribers, which is very, very significant. They're also running on pretty hefty margins of over 60%, even when you include the free-tier users in that, and most of their users are obviously on the free tier. They're currently going after their next funding round, which will probably value them at over $2 billion. That's 4x their $500 million valuation from May of 2024, a year and a half ago, with a potential raise of over $100 million. Now, those of you who haven't played with Suno or Udio, I highly recommend you do. It creates amazing music of any kind, in any genre, in seconds, with the lyrics and the music and everything in between, and produces it in a way that is really cool and really fun to use and play with. I do this for every single speaking gig that I do, and those of you who've seen me speak know what I'm talking about. The latest version, which is version 4.5, allows you to generate four-minute tracks at 32 kilohertz, which is really long and really impressive, across 200 different styles of music. In addition, they recently introduced Suno Studio, which is a DAW-like multi-track editing system where you can actually work track by track on separate instruments and really build and change your music however you wish, at the instrument level. It is really, really fun, and it allows professional music creators to now use AI to create music, or to change existing music, or to add layers of music to music they've actually written on their own. So you can hum or play a guitar tune and build an entire song around it in their environment. Very, very cool. And if you are a Pro user, you get full commercial rights to the music that you have generated with it.
So this tells you why OpenAI wants to get into this field. It is growing crazy fast. But it's not all unicorns and butterflies. There are multiple lawsuits right now by Universal, Sony, Warner, et cetera, alleging that these companies are using their IP, music that has been created by human artists, to train the models that generate this AI music. So both Suno and Udio are being sued. Udio just signed an agreement specifically with Universal as part of that lawsuit, to get out of part of it. Suno's case is still moving forward, with Sony and Warner as well. So there's a very big mix there. By the way, the Udio agreement is not positive at all for Udio, and even worse for its users, because the music inside of Udio has been locked, so you can't export it anymore, which, if you have been paying for Udio, really sucks if you haven't downloaded your music yet. So there's big pressure from the really big players to make this thing stop and go away. Suno is, at least for now, still fighting, but OpenAI is just around the corner, and it will be a lot harder to fight OpenAI unless they can reach some kind of precedent before that happens. Suno is obviously claiming fair use, just like in any other similar case in other areas of AI training. But now let's connect some of the dots and think about where this can evolve or where this can go. Let's assume for a minute that all these models, so Suno, Udio, OpenAI, whoever else develops these models, get full integration into Spotify, where you can now create playlists on the fly based on your mood. Spotify has over 600 million active users that can now, overnight, start consuming and generating real-time AI music to their taste, to fit their exact needs at that specific time. Will that completely replace the human-generated music industry? I don't think so. I think it's gonna hurt that industry, but I think it's gonna put a premium on human-generated music, and I think people will still want to go to live concerts and still want to consume songs that they know. But I'm looking at this from the perspective of somebody who grew up listening to vinyl, and then cassettes, and then CDs, and then DVDs, and then streaming music, and in between, MP3 players. So the only thing I know is human-generated music, which I listen to every single day. I love listening to music. I play bass, and so I'm very connected to my musical background, with the genres and the creators that I love. But whether my kids, or definitely their kids, will care whether the music that they love was created by humans or not, I don't know. And if I had to guess, I would guess no, they wouldn't care. They would just care about the fact that they're enjoying the music. That leads to the problem that I mentioned many times before on this podcast: every person may listen to different music than other people listen to, which means you may not be able to talk about or share music with others, because they wouldn't care, because the music that they created, or that was created for them on the fly based on their needs, is gonna give them bigger satisfaction and enjoyment than the music that was created specifically for you. That creates an even bigger divide in the world, which, again, I don't find to be a good thing. So on one hand, explosion and democratization of music creation: awesome. On the other hand, where is this leading the entire world of music, between creators and consumers of music? I don't know. Let's switch gears and talk about Anthropic for a minute.
Claude is finally addressing the biggest issue I had with Claude so far, which is that it did not have memory. As of right now, Claude is going to have full memory across all its different plans. If you don't have it yet, it is coming in the very near future; the rollout has already started, as long as you are a paid user. So what is memory, if you don't know? ChatGPT has been remembering stuff about you and storing it in a memory that you enable, allowing it to provide you better contextualized answers to your needs: who you are, what your role is, where you grew up, what industry you're in, what services or products you provide, et cetera, et cetera. And this helps ChatGPT be significantly better than Claude, because it knows all of that information. Gemini has a similar feature that, I must admit, so far hasn't been as good as ChatGPT's, but Claude is now adding this feature, which will allow it to remember things about you, about your company, about your industry, and so on, and provide you significantly more relevant and more contextualized responses, which by definition will make Claude a better tool. As I mentioned, right now my biggest differentiator for ChatGPT over Claude is memory, and the fact that Claude is now gonna have it might tilt my usage toward Claude more than ChatGPT. I'm paying the full price for both, so it doesn't really matter from that perspective. But I really like Claude for many of the things that it does; the fact that it did not have memory was a big deal for me, because I can get a lot more specific responses from ChatGPT, and that benefit that OpenAI had is going away right now. In addition, they did something very, very cool, which allows you to effortlessly import or export memories from other tools and bring them into Claude, meaning I can now take the memory that I have built over the past X number of months in ChatGPT and bring it into Claude, which is exactly what I'm planning to do as soon as I get access to this feature. Now, just like the other platforms, you have full visibility into these memories. You can go and look at them, you can delete them; in Claude you can actually delete them with natural language, which is really cool. You can say, forget that old job that I had in 2024, and it will forget it, et cetera, and so on. So you can control it very easily on your own and have full visibility into what it knows and doesn't know about you. Just like on the other platforms, there's an incognito mode where you can have a conversation that will not be remembered by the long-term memory of the chat. And all these companies are working toward the same thing, for which I will now quote Mike Krieger, the chief product officer of Anthropic: we're building toward Claude understanding your complete work context and adapting automatically. Memory starts with project continuity, but it is really about creating a sustained thinking partnership that evolves over weeks and months. In the broader picture, while he's talking about work, they're building models that will know you, and because they know you, they will be able to be your ultimate assistant and support you in everything that you do, from your personal life to work and beyond. And that is the ultimate goal, right? Having a companion that can be as effective as possible because it knows literally everything about you. It sounds a little crazy. Will I be willing to share everything that I know and everything that I do with an AI model? I think over time this will become obvious.
Just like people were thinking that putting credit cards on the internet was insane, and now we all have 20 different credit cards across multiple different websites online. Speaking of interesting releases this week, MiniMax, which is a Chinese company known so far mostly for their video generation and image generation tools, just released M2, which is currently the most advanced open source model on the planet. It is a 200-billion-parameter model, but it only uses 10 billion active parameters for every single query, using a mixture-of-experts kind of approach. This is a very similar approach to the way DeepSeek and Moonshot's Kimi do their magic, but with significantly fewer active parameters, which makes it faster and cheaper. To put things in perspective, DeepSeek version 3.2, which is the latest version, uses 37 billion active parameters, and Kimi K2 from Moonshot uses 32 billion, but this new model, M2, uses only 10 billion. What does that mean? It means that it can do everything it does significantly faster and significantly cheaper. Putting things in perspective, the API costs 30 cents per million tokens of input and $1.20 per million tokens of output, which is roughly 8% of what you would pay for Claude Sonnet, while running about twice as fast. And it is currently leading across multiple benchmarks over all the open source models, and in some of them it is even above Gemini 2.5 Pro and Claude Opus 4.1. To make it even more competitive, they have tailored the backend and API to connect to your IDE or CI/CD environments, which are the development tools that companies use to build their software. So a highly competitive, extremely fast, really cheap open source model coming from China, another one of those. It just shows you how fast this world is evolving and how aggressive the competition is becoming, especially on the development side. But that's just a hint of what's coming afterwards. And just to show how powerful these models are, there's a very interesting contest that is just about to end, which is a competition to see how good these models are at trading cryptocurrency. Six large language models battle in this arena, running unsupervised from October 18th to November 3rd, 2025. Each got a real $10,000 and identical prompts and data, to compete on how well they grow or lose that $10,000. Well, right now, DeepSeek 3.1 has grown the original $10,000 to $22,900, which is a 129% gain by October 27th, trading across multiple different cryptocurrencies. Qwen3 Max is second with $19,600, roughly a 95% gain over the original $10,000. On the flip side, you have OpenAI's GPT-5, which lost 60% and currently has about $4,000, and Gemini 2.5 Pro, which lost 57%. In the middle of the pack, you have xAI's Grok, which earned 13%, and Claude Sonnet 4.5, which grew 24%. This is just season one of this arena, which, as I mentioned, ends on November 3rd, and then they're gonna open season two, and we'll see what happens there. I will keep on reporting, because I find this very interesting. But what does this mean? What could this mean for the future? This could lead these labs to actually develop models that know how to trade. Right now they're using the raw models as is. These models were not trained to trade effectively online, and you see a big variance between them.
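For the curious, here's a quick sanity check of those reported numbers, a minimal Python sketch that uses only the dollar figures quoted above (the starting balance and the two end balances are my inputs, not data pulled from the arena itself):

```python
# Back-of-the-envelope check of the reported trading-arena results.
# Balances below are the figures quoted in this episode, not live contest data.

START_BALANCE = 10_000  # each model reportedly started with a real $10,000

reported_balances = {
    "DeepSeek 3.1": 22_900,  # reported as roughly a 129% gain
    "GPT-5": 4_000,          # reported as roughly a 60% loss
}

def pct_change(start: float, end: float) -> float:
    """Percent gain (positive) or loss (negative) relative to the start."""
    return (end - start) / start * 100

for model, balance in reported_balances.items():
    print(f"{model}: ${balance:,} -> {pct_change(START_BALANCE, balance):+.0f}%")

# Output:
# DeepSeek 3.1: $22,900 -> +129%
# GPT-5: $4,000 -> -60%
```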
But I'm assuming this by itself will drive OpenAI, Google, and so on to say, ooh, this is interesting, this is a business case that we want to pursue, which they probably will. They will start developing and training models to be better traders on stock markets or any other kind of markets, like cryptocurrency in this particular case. That may even lead to other companies integrating several different models into a trading agent that can switch between the models based on their strengths and weaknesses. Think about how the development world is working right now with these IDEs like Cursor and so on, switching behind the scenes between Anthropic and OpenAI and other models to optimize for code generation. The same exact thing can happen with virtual traders. So if we have these virtual traders and over time they can trade better than humans, where does that put the entire current investment and money management industry? Where does it put the existing platforms? What does it mean for the actual stock market if nobody is actually trading based on the success and failure of stocks and/or companies, but purely based on algorithms that are competing with one another? Does it completely take away the concept of the stock market? Many, many really big questions that I don't think anybody is answering. And I know it might sound crazy that that's where my head is going when I see six models battling with $10,000 in their pockets, but I think this is the trajectory in every aspect of what we do in the world right now. Now, since I used the coding industry as an example, there were three big releases this week, head to head. The first one I'll start with is Microsoft's GitHub, which has released what they call Mission Control, a dashboard that controls all the different agent activities across the different platforms, combining OpenAI, Anthropic, Google, and other tools into one unified, controlled environment that can run parallel tasks and compare outputs in order to pick the one that is best for your need. In addition, this tool, obviously coming from Microsoft, comes with the enterprise in mind, with a granular governance layer with security policies, access controls, audit logs, and usage metrics, basically everything you need in order to make sure that what you're doing is running the way you want it to run, while running multiple processes and agents from multiple sources in parallel. Or as GitHub COO Kyle Daigle said: with so many different agents, there are so many ways of kicking off these asynchronous tasks, and so our big opportunity is to bring this all together. This was immediately embraced and supported both by Mike Krieger, the CPO of Anthropic, who said: with Agent HQ, Claude can pick up issues, create branches, commit code, and respond to pull requests, working alongside your team like any other collaborator. And an OpenAI spokesperson said: we share GitHub's vision of meeting developers where they work, and we are excited to bring Codex to millions more developers who use GitHub and Visual Studio Code. The bigger picture here is multi-agent interoperability, basically the ability to combine multiple agents from multiple vendors in a secure and controlled environment.
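To make that "fan the same task out to several agents, then keep the best result" idea a bit more concrete, here is a minimal, purely illustrative sketch. Every name in it (the agent stubs, run_in_parallel, the scoring rule) is hypothetical; this is not the API of GitHub's Agent HQ, Cursor, or any specific vendor, just the general shape of the orchestration pattern these products describe.

```python
# A purely illustrative sketch of the "run the same task on several coding
# agents in parallel, then keep the best result" pattern. All names here are
# hypothetical stand-ins, not any vendor's real API.
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class AgentResult:
    agent_name: str
    patch: str          # e.g. a proposed diff or code change
    tests_passed: bool  # outcome of an automated review / test step

def run_in_parallel(task: str,
                    agents: Dict[str, Callable[[str], AgentResult]]) -> AgentResult:
    """Fan the same task out to every agent and keep the best answer."""
    with ThreadPoolExecutor(max_workers=len(agents)) as pool:
        futures = [pool.submit(agent, task) for agent in agents.values()]
        results = [f.result() for f in futures]
    # Trivial selection policy: prefer results whose tests passed,
    # break ties by the shortest patch. Real systems would be far richer.
    return min(results, key=lambda r: (not r.tests_passed, len(r.patch)))

if __name__ == "__main__":
    # Stub agents standing in for Claude, Codex, Gemini, etc.
    def make_stub(name: str) -> Callable[[str], AgentResult]:
        return lambda task: AgentResult(name, f"# {name} fix for: {task}", True)

    agents = {n: make_stub(n) for n in ("claude", "codex", "gemini")}
    best = run_in_parallel("fix the flaky login test", agents)
    print(best.agent_name, "->", best.patch)
```

Real deployments layer on exactly the governance pieces described above: access controls, audit logs, and a much smarter selection step than "shortest patch that passes the tests."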
Now, if you think Microsoft's GitHub was the only one to make such an announcement this week, I hinted at the beginning of this episode that that's not the case. Cursor just launched Cursor 2.0, which ships with what they call Composer, which they describe as a blazing fast frontier coding model, in a revolutionary multi-agent interface that redefines AI-assisted development. They're claiming it is four times faster than similarly intelligent models, and that Composer completes most turns in under 30 seconds. This is per their team and their benchmarks. Now, it's trained on their entire dataset, which covers, I don't know what percentage of the programming world, but many, many companies have switched to using Cursor as their development platform, so they had a huge dataset to train this model, which is now homegrown versus depending on other people's models. And it will sound very similar to what GitHub just released: the new Cursor 2.0 IDE centers around parallel agents, powered by git worktrees or remote machines, running multiple models to pick the best results. Sounds familiar, right? The other cool thing is that it has a built-in code review that allows it to test the code it is creating and keep experimenting and fixing bugs until the output is actually correct and working, which accelerates the development process even further. And if that wasn't enough, you get another choice, because Cognition Labs just dropped SWE-1.5, which is a big jump from their previous model. First of all, it is significantly faster: it can generate 950 tokens per second, which is blazing fast. To put things in perspective, that is six times faster than Haiku 4.5 and 13 times faster than Sonnet 4.5 when it comes to generating code, and it was tested on actual real-world tasks versus just benchmarks, which makes it a significantly more meaningful measure. They're doing this in collaboration with Cerebras, which is a company that builds inference accelerator chips, and Cerebras' X post on this said: developers shouldn't have to choose between an AI that thinks fast and one that thinks well, yet this has been a seemingly inescapable trade-off in AI coding so far. Well, that is what they're trying to resolve. So, bigger picture on the coding world: where is it going, and what does that hint about where the rest of the world is going? The coding universe has been leading the charge on where AI can go, mostly because it's a much more structured universe than the open universe we actually live in, but it is definitely showing us where AI can and will go for the broader world that we know. So here is where it is right now, and where I think everything will go across every industry, as well as in our personal lives. There's going to be a set of planning agents that are gonna get the goals from humans. Underneath them, there are gonna be orchestration agents that will manage all the execution agents that will actually do the work. There's gonna be complete interoperability across all the different agents from all the different companies in these unified environments, which can pick the right agents for the right tools for the right process, compare the results, and get the best output. They will verify the results, they will fix the results, and they will do all of that with transparency and control and accountability for us to be able to see. However, who will check the output? We will never have enough humans on the planet to check all the outputs that these models will create, because they're gonna create them at a speed and at a volume that we cannot even comprehend. So humans will not be able to do it.
So checking the models, the entire concept of transparency and control, will probably be handed off to other agents, which means we actually have no control over what they're going to develop. Now, that may sound crazy to you, but I have very little doubt that this is where we're going, because it is inevitable. If you can plan and book your next trip in five seconds, knowing that it's gonna pick the right flights, the right hotels, the right car, the right tours, the right tickets, all of that in seconds instead of you investing three days in doing it, you will most likely do that. The same thing with buying your house, the same thing with buying a car, the same thing with investing your 401k, the same thing with developing the next project at work. Sounds crazy? Maybe. Do I think this is where we're going? Yes. Do I think it's coming fast to some industries, such as coding? Based on what we're seeing, this is exactly where it's going right now. So what will humans do in the process? I think it is very unclear at this point. Will it evolve into a situation where we'll find other things to do, more meaningful things to do, more satisfying things to do? I sure hope so. But what we're seeing right now in the coding world will eventually roll out to everything else; it will just take a little longer because of the level of complexity, and we have very clear signs that this is where it is going. Now, speaking of tools that do the work of humans and make things that used to take professionals hours happen in minutes, Google just introduced Pomelli. Pomelli is an AI-powered tool that creates branded ad assets, visuals, videos, text, and everything else that comes with creating ads, fully integrated with the Google Ads platform. The idea behind it is to enable small and medium businesses, who do not have the resources and/or the time to hire someone or build complete ad campaigns themselves, to have complete ad campaigns across the entire Google universe, one hundred percent generated and optimized by AI. What you put in are your brand guidelines, the product or service details, and your campaign goals, and the system will auto-generate all the creatives, including images, headlines, short videos, text, and A/B testing suggestions, and will deploy them automatically across the Google ad universe. People who used this in the beta testing phase reported three times faster ad launches than before and a 25% uplift in click-through rates compared to human-generated campaigns. Now, this is based on Google's internal data and has not been fully tested in the open world yet, but what I can assure you is that once it's up and running, and it is now up and running, it will learn way faster than humans can, it will generate ads that convert significantly better than humans', and it will be able to iterate very, very quickly on the initial results each ad sees, which means very quickly it will perform much better than it is performing right now. It will outperform any human marketer out there. What does that mean for large agencies? In the beginning, probably nothing, because large agencies mostly support large organizations. What does it mean for really small agencies, who mostly support small businesses? I think they face a very, very serious risk to their business model, and they will have to find other ways to enhance what they're offering right now. Otherwise, they will go out of business.
It is currently live in the US, and it will go live in the rest of the world in Q1 of 2026. Speaking of Google, Google Earth just released a whole new suite of AI tools on top of Google Earth that can fuse and make sense of data from multiple sources, such as weather, population, and satellite data, which is extremely powerful across many fields of research. The idea is to spot and connect patterns across different types of data sources in ways that were previously on the verge of impossible. Real-world pilots have already happened and show significant results: the World Health Organization predicting cholera outbreaks in the Congo, Planet mapping deforestation and its climate impacts, Airbus flagging vegetation encroaching on power lines, and Bellwether speeding up hurricane claims for insurers. Overall, a great play by Google, definitely aligned with DeepMind's approach of AI in support of humanity, and I'm very excited to see these kinds of announcements. And now to the next head-to-head battle that happened this week. Adobe just made a huge announcement with releases of new models. They announced Firefly 5, which has many new capabilities that will be extremely useful for anybody in the Adobe environment. First of all, it can render images up to four megapixels, four times their previous model. It has full support for layers in prompting, so you can prompt individual layers, and it can treat objects in a flat image as if they were layers, so you can manipulate, resize, and move items in the image just by referring to them in the text prompt. Very similar to what you can do in Nano Banana, just with more granular control, because it is Adobe. They're also letting you train your own personalized models just by dragging and dropping your sketches, illustrations, and photos to train a Firefly model that you can then reuse to generate similar kinds of outputs. I think this will be extremely powerful for creators. They've done a complete overhaul of Firefly on the web, and they have added many new capabilities to their video tool, which is now in private beta, including layers and timeline-based editing, all with text. They also integrated ElevenLabs voices to generate speech, so you can now create videos with full speech, and you get access to models from OpenAI, Google, Runway, Topaz, Flux, and many others inside the Adobe suite. But in the same week, Canva's Affinity dropped an announcement that may dramatically change the entire creative industry. A little bit of background: Affinity is a company that was bought by Canva back in 2024, but they kept it completely independent and gave it a lot of money to develop what has now become the new Affinity app. The new Affinity app does two things that were unheard of in the industry before. First, in a single tool you can switch between vector, pixel, and layout work, which are three different tools in the Adobe suite; inside the Affinity app it is one seamless transition. Second, it is free, one hundred percent free, forever. No subscription, no catches.
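Circling back to the Google Earth AI announcement for a moment, here is a toy sketch of the general "fuse multiple data sources and connect the dots" pattern it enables. This has nothing to do with Google's actual tooling or the WHO pilot; the numbers and the risk rule are invented purely to show the idea of joining environmental data with health data and flagging anomalies.

```python
# Toy sketch of cross-source data fusion -- not Google's tooling, invented data.

import pandas as pd

# Hypothetical weekly rainfall per region (stand-in for a weather/satellite source).
rainfall = pd.DataFrame({
    "region":  ["A", "A", "B", "B"],
    "week":    [1, 2, 1, 2],
    "rain_mm": [20, 180, 25, 30],
})

# Hypothetical reported cholera cases per region (stand-in for a health ministry feed).
cases = pd.DataFrame({
    "region": ["A", "A", "B", "B"],
    "week":   [1, 2, 1, 2],
    "cases":  [3, 4, 2, 2],
})

# Fuse the two sources on region + week, then apply a naive risk rule:
# heavy rain this week suggests elevated outbreak risk.
merged = rainfall.merge(cases, on=["region", "week"])
merged["at_risk"] = merged["rain_mm"] > 100

print(merged[["region", "week", "rain_mm", "cases", "at_risk"]])
```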
All you need is a Canva account, the free Canva account, and you can use the Affinity app to do more or less everything you can do in the Adobe suite, in a single unified tool and workflow, with GPU acceleration for fast edits even on thousands of layers, that's per them. In addition, it comes with an ultra-customizable user interface with which every user can create their own perfect studio, again very different from the one-size-fits-all of the Adobe universe. Or, as Cameron Adams, the Canva co-founder and chief product officer, said, we're really viewing the entire design ecosystem as one big entity; it's really about your entire team working together. Now, the whole AI aspect of this is actually optional, but you can generate stuff with AI using, behind the scenes, Leonardo, which is another company they acquired back in 2024. So what does that mean for the design universe? It means the heat is on for all the different participants, the two biggest ones obviously being Canva and Adobe, with Adobe pushing down market, saying you don't have to be a professional designer to use our tools, and Canva now pushing up into the professional market with the Affinity tool. Where will that end? I'm not sure, but I will say something I've said several times before: I think both of these players are at very serious risk once Google and OpenAI and others start adding layers and higher resolution to their platforms, which is not rocket science. Adding the ability to work in layers and in vectors, separate from the way they work right now, and the ability to scale, resize, change, and manipulate those layers with either a user interface or prompts, whether voice or text, is something OpenAI and Google could probably add to their platforms in a week. Upscaling already exists through third parties, but they can probably build their own upscalers, and then many users will not even go to Canva or Adobe; they will just stay within the tools they use day to day to do everything they're doing. And especially if you're thinking about the Google environment, with the entire suite of office tools where you can generate things inside the Google universe, whether in Docs or Slides or any other platform, you will understand how big the risk is for the incumbents that have been ruling the design space for a while. And now some really fast rapid-fire valuations and new raises from this past week. Synthesia, which is a company that allows you to create video avatars, just raised $200 million, taking its valuation to $4 billion and doubling its $2.1 billion valuation from January of this year. They also hit a milestone of $100 million ARR in April of 2025, which is already $150 million as of October, so 50% growth in just six months. There was an interesting development in between: in October 2025, Adobe put a $3 billion bid on Synthesia, which turned it down, saying it was not enough money, and the latest round proves they're now worth $4 billion. By the way, HeyGen, their biggest competitor, has now announced that they just hit $100 million ARR in October, just 29 months after the first million they made. This is un-freaking-believable.
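Just to make the growth math explicit, and this is my own back-of-the-envelope arithmetic on the figures cited above, not anything the companies have published: $100 million to $150 million ARR in six months is 50% growth in half a year, which, if sustained at the same pace, compounds to roughly 125% year over year.

```python
# Back-of-the-envelope math on the ARR figures cited above (my own arithmetic,
# not a company-published metric): $100M -> $150M ARR in six months.

start_arr, end_arr, months = 100_000_000, 150_000_000, 6

six_month_growth = end_arr / start_arr - 1                # 0.50, i.e. 50% in half a year
annualized = (end_arr / start_arr) ** (12 / months) - 1   # compounding the same pace for a year

print(f"Six-month growth: {six_month_growth:.0%}")        # 50%
print(f"Implied annualized growth: {annualized:.0%}")     # ~125%
```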
Another company that did a big raise, with a big increase in revenue and valuation this week, is San Francisco-based Harvey, an AI tool for the legal industry. Their valuation just jumped to $8 billion after closing a $150 million Series F led by a16z, more than doubling the $3.7 billion valuation from June of this year. Genspark, a multi-agent platform out of Silicon Valley founded and led by two former Baidu executives, just raised a $200 million round valuing it at over $1 billion, more than doubling its $530 million valuation from February of this year. So whether this is a bubble or not, I'm not a hundred percent sure, but it is very, very obvious that the companies who are moving forward and actually generating real revenue and real growth from real users are commanding crazy valuations and getting them, which doesn't seem like a bubble. Because if you can grow from $100 million ARR to $150 million ARR in six months, that shows there's actual demand for your services, which means it makes sense to invest more money in you. And then the last topic I want to touch on today has to do with the Q3 results shared by the leading tech companies in the world, all happening this past week. In the earnings calls we learned several different things, and I'm not going to dive into all the details, but I do want to dive into one interesting detail, and that is the difference between Microsoft, Google, and Amazon on one side and Meta on the other side. All of these companies have increased their CapEx investments in building new data centers by a very big spread, to crazy numbers. However, Microsoft, Amazon, and Google have seen growth in their stock because of that, because they are selling these services to other parties, so every increase in their data center capacity actually leads to significantly more revenue. Yes, they grew their CapEx by a lot, but they also grew their revenue by a lot as a result. On the Meta side, they're investing their own cash, rather than clients' cash, in building this out, which sent their stock down. Now, Mark Zuckerberg still thinks this is the right direction, obviously, and I'm quoting: I think that it is the right strategy to aggressively front-load building capacity so that we are prepared for the most optimistic cases. So where does this entire episode leave us? It leaves us with crazy growth moving forward, increased competition, and increased capacity through huge investments in both the AI technology itself and in infrastructure, whether data centers or electricity. And if one thing is obvious, it is that this is not slowing down, and it is going to impact literally everything we know. We will be back on Tuesday with a fascinating episode in which I'm going to show you how AI can write all of your proposals in a way that makes them better than the proposals you're writing right now, which means you'll be able to win more business with significantly less effort. I've been doing this for the past year-plus and I'm seeing amazing results, and you'll be able to do it as well after you listen to the episode on Tuesday. If you are enjoying this podcast, please like the podcast and write a review on either Apple Podcasts or Spotify, wherever you're listening, and share it with other people who can benefit from it.
I am certain you know a few people who can benefit from listening to this podcast, and all it requires from you is pulling up your phone right now, clicking the share button, and sending it to a few people. I will really appreciate it, and they will too. Thank you so much if you do this, and have an awesome rest of your weekend.