Leveraging AI

121 | $2,000 per month for the next version of ChatGPT, Ilya Sutskever raises $1B just a few months after leaving OpenAI, Salesforce pivots to Agentforce to focus on AI agents, and more important AI news from the week ending on September 6, 2024

Isar Meitis Season 1 Episode 121

Is OpenAI About to Unleash GPT-5? The Billion-Dollar Race for Superintelligence

What happens when one of AI’s most powerful minds leaves to form his own company with a billion-dollar seed round? And why is the next version of GPT poised to be 100 times more powerful than its predecessor? 

In this episode of Leveraging AI, we dive into the game-changing news around OpenAI co-founder Ilya Sutskever’s new venture and the jaw-dropping advancements in AI expected in 2024. From billion-dollar funding rounds to the secrets of superintelligence, we break down the latest developments that are reshaping the future of AI and business.

Get ready: the AI revolution is picking up speed, and staying ahead means understanding these new dynamics in the AI landscape.

In this episode, you’ll discover:

  • How Ilya Sutskever’s new company, Safe Superintelligence, raised $1 billion and its goal to develop safe AI.
  • What makes GPT-Next (or GPT-5) *100 times more powerful* than GPT-4.
  • Why OpenAI’s potential $2,000/month subscription tier could change the enterprise AI market.
  • How smaller, specialized models are revolutionizing data generation and training for larger AI systems.
  • Key developments from competitors like Anthropic, Nvidia, and even Elon Musk’s XAI.


Whether you’re intrigued by billion-dollar valuations, groundbreaking AI models, or how to position your company in the AI arms race, this episode is packed with insights that matter to leaders navigating the future of business.

About Leveraging AI

If you’ve enjoyed or benefited from some of the insights of this episode, leave us a five-star review on your favorite podcast platform, and let us know what you learned, found helpful, or liked most about this show!

Hello and welcome to a weekend news episode of the Leveraging AI podcast, the podcast that shares practical, ethical ways to leverage AI to improve efficiency, grow your business, and advance your career. As every week, there is a lot of exciting news, including a serious funding round by a known name who started a new company and raised $1 billion as his seed round, which is not something I remember anybody ever doing in this new era of AI other than Elon Musk, but he's Elon, it's not like anybody else. There's also some interesting news on regulation, as well as some additional hints on what we should be expecting from GPT-5, or whatever it's going to be called, and when it might come out. So let's get started.

We'll start with Ilya Sutskever. If you have been listening to this podcast for a while, or just been following AI news some other way, you'll remember that Ilya is one of the co-founders of OpenAI. He was the chief scientist of OpenAI for many years, and he was with the company since its inception. He is the one who initiated the ousting of Sam Altman back at the end of last year, because he might have seen something suggesting that OpenAI, from a safety perspective, was not moving in the right direction. He then went completely undercover when Sam came back, and we didn't know exactly what he was up to. Nobody referred to what he was doing in the company, and there was complete radio silence, including from Sam Altman, who was asked about it several times. But then Ilya left the company and shortly after founded a new company called SSI, Safe Superintelligence, with the goal of creating a safe superintelligence, an AI that would be more capable than humans. So there's AGI, artificial general intelligence, which is supposed to be an AI entity that is as good as humans in all aspects.
Or, more precisely, as good as humans across all or most cognitive tasks. Superintelligence, on the other hand, surpasses human capability across multiple aspects of cognitive tasks.

So Ilya founded the company with two other partners: Daniel Gross, who is going to be the CEO of the company, and Daniel Levy, who is supposed to be the principal scientist, with Ilya as the chief scientist. They have just announced that they are raising $1 billion at a $5 billion valuation from some large names like Andreessen Horowitz and Sequoia, among other big names. This funding is supposed to allow them to buy the computing power they need, as well as attract top talent to the company. But they're taking a very different path from the big players such as OpenAI, Anthropic, and so on. Their plan is to, as they put it, develop a safe superintelligent solution. They're planning to stay a small team, split between Palo Alto, California and Tel Aviv, Israel, and focus on R&D, at least for the next few years. The idea behind this is that they can focus on developing a safe system without the pressures of building a product that needs to make money. That being said, as I mentioned, they were able to raise a billion dollars as their first round of funding, so there are a lot of people who believe that path will lead to some serious revenue in the future.

Now, if you ask yourself how a company that is just getting started, with no clear horizon on how, or if, it's going to make money, raises a billion dollars at a $5 billion valuation, there are two answers. One is the people, right? You bet on the jockey, and all three founders are highly capable AI leaders who have done significant things in this field. Daniel Gross has been leading AI initiatives at Apple, and Daniel Levy, together with Ilya Sutskever, was at OpenAI for many years. So it was pretty obvious they were going to raise a significant amount of money.
It just wasn't obvious that it was going to be that much money, but that's the direction. The other interesting thing they mentioned is that they're planning to scale their AI in different ways than the traditional ones. We all know the scaling laws, which basically say that the more data and compute you give these models, the better they become. What Ilya is saying is that they have a different path to take, one that will prove it can be done without following the regular scaling playbook of just throwing more compute at the problem to get better models. What does that mean exactly? They didn't share. But Ilya being Ilya, again, one of the top minds in AI of the past decade, I'm sure he has an interesting path to go after, and it will be very interesting to follow how SSI does in the next couple of years.

Now, from Ilya, who left OpenAI, to OpenAI themselves, and specifically to OpenAI Japan. If you've been following the AI news this week, or the internet in general, you've seen a lot of coverage of Tadao Nagasaki, the CEO of OpenAI Japan, who made a significant announcement at a KDDI summit. At that summit, he shared a slide about the growth of ChatGPT. The graph shows exponential growth, suggesting that "GPT Next" (that's the name that was on the chart), to be released in 2024, is going to be a hundred times more powerful than GPT-4, which is their leading model that some can now finally compete with. But GPT-4 was created two years ago. It was in red teaming for a while and then out in the world for a while, but it's a two-year-old model. What this graph is potentially showing is that the next version of GPT, whatever it's going to be called, whether it's going to be GPT-5, GPT Next,
Orion, Strawberry, Banana, Apple, or whatever they decide to call it when it comes out, is going to be a hundred times more capable than GPT-4. That's a very significant statement, and it generates a lot of excitement, obviously, across many people in the AI world, myself included. And the fact that it's going to be released in 2024, which is what the graph shows, creates even more excitement. We're recording this episode on Friday, September 6, so there are less than four months left for it to be released if they're actually going to release it this year. Later in this episode, we'll talk about why there are other reasons for them to release this model in the next few months, in addition to the fact that everybody has been waiting for it for a while.

Another interesting piece of news about OpenAI comes from The Information. For those of you who don't know The Information, it's a digital magazine that has been very successful in getting scoops and being the first to share news about many different topics, definitely in the AI industry, and even more specifically about OpenAI. They're saying that OpenAI is considering a significantly higher-priced subscription for the next models that are going to come out, and there have been rumors about a price tag of $2,000 a month. To put things in perspective, you can use ChatGPT for free, the basic subscription is $20 a month, and then there are the Team and Enterprise tiers, which are slightly higher than that but still in the double digits of dollars per month, and this is talking about $2,000, which might be the monthly price tag for using Orion, or, as I mentioned, whatever they're going to call their next model. Now, none of this has been confirmed, and this isn't a final price tag yet, but there are a few interesting aspects to this. Number one: from the current subscriptions of $20 to $30 a month, plus the API fees, OpenAI manages to make above $3 billion this year.
And the number keeps on growing. We're going to talk in a minute about how fast it's growing, especially on the enterprise side. So think about how much money they can make if they start charging $2,000 instead of $20. Add two orders of magnitude to that revenue, and that becomes a very interesting number indeed. That being said, there are obviously going to be significantly fewer people paying that amount of money, so it will be interesting to see what tiers come out and what benefits you get for that extra money. But that is at least a conversation they're having internally.

This comes in the wake of what we talked about last week, the next round that they're seeking to raise, which is rumored to be a few billion dollars in investment at a hundred-billion-dollar valuation. I must admit that raising just a few billion doesn't sound like a lot in the current state of things, where the claims are that they're losing about $6 to $8 billion every single year with the company's current cost structure, given the training needs of GPT-5 together with their current expenses on HR and compute for inference. So raising a few billion doesn't make sense to me. I assumed they were going to raise a much larger sum of money, probably at least in the double-digit billions, maybe $50 to $100 billion, but that might be the next step, and this might just be a temporary bridge round to do whatever it is they're trying to do in the short term. Now, the logical way to raise that kind of money, $50 to $100 billion, would obviously be an IPO, but their current structure, a nonprofit that owns a for-profit organization, may prevent them from doing that.
And there have been a lot of conversations about the fact that Sam Altman wants to change that structure so they can be more flexible in how they grow the company, but that's not happening, at least as of right now.

Now, I just mentioned that there has been significant growth in OpenAI's enterprise subscriptions. According to VentureBeat, OpenAI just crossed 1 million paying business users across the ChatGPT Enterprise, Team, and Edu licenses. As I mentioned earlier, there are five tiers, one free and four paid. On the professional side, they have three different types of licenses: Enterprise, which launched in August of 2023, Team, which launched in January of 2024, and Edu, which launched in May of 2024. Across all three of them together, they now have more than 1 million users, which is extremely impressive for such a short amount of time. The other interesting parameter is that more than half of those licenses are outside of the U.S., and the biggest markets outside the U.S. for OpenAI are Germany, Japan, and the UK. They have a big variety of users across different industries; some notable names include Arizona State University, Moderna, Rakuten, and Morgan Stanley. So completely different industries and different places around the world, but huge organizations using OpenAI across the different things these organizations do.

Another very interesting reference point for the success of OpenAI, ChatGPT specifically, but AI in the professional market in general, is that API usage has doubled since the launch of GPT-4o mini just a couple of months ago. In July of 2024, OpenAI announced GPT-4o mini, which is a smaller, faster, and much cheaper version of GPT-4o, and it has taken the AI world by storm because it provides a very capable API for significantly less money while still being very fast, similar to what other small models have done. And, as I mentioned, they just announced that API usage has doubled since its launch in July.

What this shows is that there's a very lucrative and ripe enterprise market for AI capabilities, and OpenAI is obviously not the only player in the game. Based on TechCrunch, Anthropic has just introduced Claude Enterprise, a new subscription plan that brings its very successful and very capable AI chatbot, Claude, to enterprise customers. The first benefit of the enterprise license beyond the regular paid license is a much larger context window: enterprise clients get a 500,000-token context window, where the regular paid subscription gets 200,000. The more important comparison is that ChatGPT's context window is currently 128,000 tokens, so Anthropic's enterprise window is nearly four times larger. If you have large files or large amounts of content you want to upload, that can make a very big difference. You also get Projects and Artifacts, their collaboration and code-running capabilities, which I absolutely love. That comes with collaborative environments where multiple people can use the same Projects and the same Artifacts workspaces, which I think is very helpful for companies. They provide GitHub integration for engineering teams, and they provide various managerial, security, compliance, and monitoring capabilities to manage the entire environment, which do not exist in the regular paid license. The only thing they didn't share is the pricing of the enterprise platform, but I assume it's going to be close to OpenAI's, which they're not sharing either; it's probably around $30 to $35 a month per user.
That's most likely what it's going to be, but they're jumping all-in on that very lucrative market in order to capitalize on the very significant need, drive, and willingness to spend money that exists right now when it comes to applying AI at the enterprise level.

Since we're already talking about Claude, and we mentioned Artifacts, their setup that lets you see the outcome of what the AI is doing on the right side of the screen, which I absolutely love: I use Artifacts every single day. It allows you to see the code you're generating actually run, the documents it generates, graphs if you're generating graphs, and so on. It's a very useful tool. What they've added now is the capability to highlight just one segment of what's in Artifacts and ask the AI to change that, and just that. Previously, you had to regenerate the whole thing, and it may or may not have kept everything else the same; usually it would not. Now, if you want to change just one line of code, or one aspect of your graph, you can highlight just that section and ask the AI to change only it. There are actually two options once you highlight a section: one is Improve and the other is Explain. Both are very useful, especially when you're writing code. Improve will let you fix or upgrade the code you highlighted, and Explain will explain what that segment of the code does. I find this very cool. A similar feature has existed for a while in ChatGPT and Gemini, mostly for other kinds of results and not specifically for code writing. In ChatGPT and Gemini, you can select a section of an output, let's say a document that was written, click the little quotation-mark icon that appears, and then provide another prompt that will only impact that segment.
The model will keep everything else exactly the way the original version was and change only that part. Now a similar capability exists for code writing in Artifacts in Claude, which I'm sure will be extremely useful for people who write code with these tools.

The next big giant we want to talk about today is Nvidia. Nvidia's stock took a nosedive after their earnings earlier this month, declining 9%, which led to the biggest value loss in the shortest amount of time in the history of the stock market. It even took $10 billion out of the net worth of Jensen Huang himself: one person, the CEO and founder, lost $10 billion of his own net worth. So lots of money disappeared in just a few days. That being said, the stock is obviously still up 160 percent year over year, and a lot more than that over the last three years.

In addition to that, NVIDIA revealed what they call Eagle. Eagle is their next-generation, high-resolution visual AI analysis capability, and it allows users to process images at a high resolution of 1024 by 1024. It can be used for multiple tasks, including very capable OCR across multiple types of documents, as well as deep understanding of what's happening in images. That obviously has applications across legal, e-commerce, education, accessibility, research, and many other fields, because being able to understand images and documents in a very deep way is very helpful. This is part of NVIDIA's push to be more than just a hardware company: they've been releasing more and more software platforms, infrastructure, and capabilities for people to use, hopefully on top of their chips.

Now, since we've mentioned most of the big players, there's one player we haven't mentioned yet, which is xAI, Elon Musk's AI company. Recently I shared with you that Grok 2, the latest version that was released, has been significantly better than Grok 1 and 1.5.
And it's the first time that people started taking it seriously, because it provides results that are on par with many other tools. But when Grok 2 was released, I told you that Elon Musk said this was just the beginning, because he's building the biggest AI training supercomputer that anybody has. And it's now online. This supercomputer, called Colossus, has been built in Memphis, Tennessee, and it has 100,000 NVIDIA H100 GPUs in one cluster. No other company has that, but that's just the beginning, because the plan is to expand it to 200,000 GPUs, out of which 50,000 are going to be the new H200s versus the H100. So while today it's the most powerful AI training platform in the world, it's going to become even more powerful. The incredible thing is that they built this data center in just four months, which is absolutely unheard of. Now, there's a lot of analysis, based on the amount of electricity and cooling they have, suggesting that there is no way they can currently run all 100,000 GPUs, even if they are on site and the site is running. But it doesn't really matter whether they're currently running all 100,000 or whether that happens in a month or two: they are going to have the most capable AI training platform on the planet. There are already rumors of conversations within OpenAI, and between OpenAI and Microsoft, fearing what that means and looking for ways for Microsoft to provide more computing power to OpenAI in order to stay on par with these new capabilities of Grok. Elon promised, when they released Grok 2, that Grok 3 is going to be a completely different kind of animal because of this amazing data center. So all we have to do now is wait and see what that yields. They're saying the next version is coming within just a few months, assuming that nothing extreme happens on the regulation front between now and then.
And we're going to talk about that later in this episode.

I want to pause the news for just one second to share something exciting from my company, Multiplai. We have been teaching the AI Business Transformation course to companies since April of last year, running two courses every single month, most of them private courses, and we've had hundreds of companies take the course and completely transform their businesses with AI based on what they've learned. But this course is instructor-led by me, and it requires you to spend two hours a week, four weeks in a row, at the same time of day, working with me and my team, which may or may not work with your schedule and other commitments. So I'm really excited to share that we now have an offline, self-paced version of the same course. There's a link in the show notes, so you don't have to look for it or try to remember what I say: literally just open the app you're listening on right now, and the link in the show notes will take you to the course. The course is eight hours of video of me explaining multiple aspects of how to implement AI in your business, going from an initial introduction to AI, if needed, all the way to hands-on experimentation and exercises across multiple aspects of the business, from data analysis to decision making to business strategy to content creation, literally every aspect where you can use AI in your business today, including explanations, tools, and use cases for you to test, as well as a full checklist at the end of how to implement AI successfully in your business.
So if this is something that's interesting to you and you don't have the time to spend with me in an instructor-led environment, you can now do this on your own and drive significant efficiencies in your business using the step-by-step course that we now have available. And now back to the news.

Now, from computing capabilities and model capabilities to very interesting research that was just released by Microsoft researchers. They found a way to combine small language models (SLMs) with large language models to detect, and potentially resolve or at least reduce, hallucinations. The goal is a two-step framework where the small language model performs an initial, rapid hallucination detection, and then the LLM takes that and completes the second step, explaining why the hallucination is happening, potentially also preventing it in the future. They claim they are achieving significantly better results with this architecture than with a large language model on its own, and it shows very significant potential for one of the uses of small language models, in this particular case, to dramatically reduce hallucinations. There have been discussions that Strawberry, the new project by OpenAI, can also do something like this: evaluate its own answers in order to figure out whether they're correct, or at least what the chances are that they're hallucinated and not actually based on real information. Either way, whichever direction this goes, making a big step toward reducing hallucinations, hopefully close to zero, will have a very significant impact on the ability to use AI for many more business applications. Now, since we mentioned one use of small language models, let's look at another.
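Before we do, here is a rough sketch of what such a two-step pipeline could look like. To be clear, this is not Microsoft's actual implementation: the real framework uses trained models, while the "SLM" below is a toy keyword heuristic and the "LLM" step is a stub, purely to illustrate the cheap-detector-then-expensive-explainer control flow.

```python
def slm_detect(answer: str, source: str) -> bool:
    """Cheap first pass: flag the answer if any of its content words
    never appear in the source text. This is only a stand-in for a
    small, fast detector model."""
    source_words = {w.strip(".,").lower() for w in source.split()}
    for sentence in answer.split("."):
        content = [w.strip(".,").lower() for w in sentence.split()
                   if len(w.strip(".,")) > 3]
        if any(w not in source_words for w in content):
            return True  # possibly hallucinated
    return False


def llm_explain(answer: str, source: str) -> str:
    """Expensive second pass (stub): in the real framework, a large
    model explains why the flagged answer is unsupported."""
    return f"Flagged: parts of {answer!r} are not grounded in the source."


def check_answer(answer: str, source: str):
    # Run the costly LLM explainer only on answers the SLM flags,
    # keeping the average cost of the pipeline low.
    if slm_detect(answer, source):
        return llm_explain(answer, source)
    return None  # passed the cheap check


source = "The meeting is on Tuesday in Berlin."
print(check_answer("The meeting is on Tuesday.", source))  # grounded -> None
print(check_answer("The meeting is in Paris.", source))    # flagged -> explanation
```

The design point is the same one the researchers describe: the small model acts as a fast filter so the large model only spends compute explaining the suspicious cases.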
Google DeepMind just released a study showing that using smaller and weaker models to generate synthetic data for training bigger models actually provides a much better solution. So they're now using various small models that specialize in specific topics to generate synthetic data in order to train bigger models. They tested this against the old way of training across multiple datasets, such as MATH and GSM8K, both of which are benchmarks commonly used to evaluate AI systems. In both cases, models trained on synthetic data created through this methodology performed better than models trained the traditional way, with a 6 percent increase in accuracy when testing the model after it had been trained. So there are multiple research efforts, with different companies addressing different steps of the process, in order to reduce the hallucinations these models have. As I mentioned, in the long run, that will be beneficial to everyone.

And from all the U.S. companies to a Chinese company. I've shared with you in many recent episodes that there are huge improvements across the ocean, with models coming out of China. The latest release is a new model from Alibaba called Qwen2-VL, a new multimodal model that surpasses OpenAI's GPT-4 and Anthropic's Claude 3.5 Sonnet across various benchmarks, outperforming both of them on visual data analysis. So when you upload an image and ask questions about it, it actually does better than the leading models from the two largest companies in the West. In addition, it does very well, meaning on par with those big models, across nine of the other 13 benchmarks in MMMU, which is maybe the most widely used multimodal benchmark and stands for Massive Multi-discipline Multimodal Understanding.
So a highly capable model is coming out of Alibaba and is now available to Alibaba's users in the Far East. It also supports multiple languages, including, obviously, Chinese and English.

Now the flip side of that, as far as successful models and new releases go: Business Insider is sharing that Amazon's Q product, which was released in April, is facing serious difficulties across multiple aspects. Clients are complaining about a lack of certain features compared to other large language models, higher costs than other models, difficulty integrating with other software, and an inability to process images embedded in PDF files, plus some other general problems with Q as a whole. Based on this Business Insider article, there's serious concern within Amazon that they're going to lose many clients to the Microsoft, and maybe Google, platforms because they are behind on their AI capabilities with Q. This comes on top of the fact that just last week they announced that the new, more advanced Alexa is not going to run on top of Q, but on top of the Anthropic platform, because Q just wasn't good enough to do this. So this is not good news from Amazon. I must say I'm really surprised by what's happening to Amazon, because Amazon was the first company to release a voice-activated assistant, Alexa, and they were the first to run large-scale cloud computing for companies, with huge amounts of AI running in the background. And now they find themselves in not a good position in the race. I don't know where that's going to go, or what's going to happen to their leadership because of it, but they're definitely in trouble.

And now there's a very interesting piece of news that I want to talk to you about, because it touches on something I believe every single company in the world needs to do right now.
Salesforce CEO Marc Benioff has announced that they're doing what he calls a hard pivot from being just a CRM company to a new platform they call Agentforce, a new AI agent creation platform they're about to debut at their Dreamforce conference, which is just around the corner. Agentforce will allow users to build and deploy autonomous, AI-powered agents that can make decisions and complete multi-step tasks, going way beyond simple chatbots, and this is going to be the focus of Salesforce moving forward. That obviously doesn't mean they're abandoning CRM, but from a strategic perspective, the direction they see the world going, and the direction they want to lead in, is agents, and so they're investing significant resources within Salesforce to push the company in this new direction. This involves a significant reorganization and a lot of focus on that channel versus just CRM. They even said that people who come to Dreamforce, their gigantic and extravagant conference that they hold every single year, are going to be greeted with "Welcome to Agentforce" instead of "Welcome to Dreamforce." And this conference is just around the corner; it's happening in September 2024. So it will be very interesting to see what they're going to share and release, because they've obviously been working on this for a while if it's about to launch.

Now, why do I think this is important? I've shared this with you in several past episodes, but for those of you who missed it, I believe that every single company in the world right now has to do a strategic evaluation and look at where its market is going. What kind of changes are going to happen to your industry that will change what your clients are looking for? In other words, what are they paying for right now that they will not be willing to pay for within two years, three years, ten years? It doesn't matter.
In some cases it could be 12 months. And what new opportunities are on the horizon that you might be able to serve if you make the right changes right now? Companies that don't do this may lose everything, because the entire industry may change, and the offerings they provide right now might be either less needed or not needed at all. So if you're in a leadership position in any company, what I suggest is this: take your leadership team, go to an offsite, potentially bring some external help to facilitate the brainstorming, and think very hard about how AI will impact your niche and your industry. Figure out which current products and services are going to deteriorate, and maybe disappear completely, and what other opportunities are on the horizon that, if you move first, you may be able to capitalize on faster, and hopefully bigger, than your competitors.

I told you in the beginning we were going to talk a little bit about regulation. The United States, the UK, and the European Union have signed on to the Council of Europe's high-level AI safety treaty. That's a mouthful, but it's actually a smaller mouthful than the treaty's full name: the Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law. So the short name is actually a lot shorter. Many countries have signed on to the treaty, including the ones I mentioned before, but also countries like Andorra, Georgia, Iceland, Norway, the Republic of Moldova, San Marino, and Israel. Sadly, or interestingly, absent from this initial treaty are countries from Asia, the Middle East, and Russia, which potentially means the treaty may not be as global as people are hoping. But this is the first international group that is looking to evaluate, and potentially shape, the applications and implications of AI on human society as a whole.
And they've stated that they're going to focus on three main areas: protecting human rights, including data privacy and anti-discrimination; safeguarding democracy; and upholding the rule of law through AI risk regulation. The goal, and I've talked many times before about hoping this would happen, is to have a large international cooperation that will push for limitations on and monitoring of AI systems, just like the international nuclear agency monitors nuclear weapons development. Now, timelines and the exact deliverables have not been defined yet, or at least have not been communicated yet. But one of the problems they're facing is, obviously, that across individual countries, and as we're going to talk about in a minute, even within countries, there is a complex and fragmented landscape of AI regulation.

So we've discussed in the past that the EU has passed the EU AI Act. And in the past few episodes, we talked a lot about California's SB 1047 bill, which, by the way, has passed the California legislature and now awaits Governor Gavin Newsom's decision on whether to sign it into law or veto it. He has until the end of September, so just a few weeks, to decide. That decision will have a significant ripple effect across the entire U.S. First of all, because it's a very extreme regulation that spells out the liability and the different steps that large language model developers have to take in order to legally do business in California, not just to develop these models in California, but even to use them there. But it will also be used as a benchmark for many other regulations. Now, this bill was highly controversial. It got support from several different companies, some stronger than others, but in general it got very serious pushback from the tech industry as a whole. As I mentioned, we will know within a few weeks whether it is signed into law or not.
One of the things that may prevent it from being signed into law, and we talked about this last week, is the fact that OpenAI and Anthropic, the two leading U.S. companies developing these frontier models, have agreed to share their most advanced models with the federal government before and after releasing them. This may show that federal regulation and federal cooperation can drive better results than state-by-state regulation. As I mentioned, it will be very interesting to see what happens. Now, if SB 1047 is signed into law, it will require tech companies to write safety reports about their AI models by January 1st, 2025. This may create a lot of pressure on companies to release their models before that date. So when I told you earlier that GPT-5, or GPT-Next, or Orion, or Strawberry, or Avocado, or Banana, or whatever they end up calling this thing, might be released this year, as the slide from OpenAI Japan suggested, that's another reason: to get it out before the law takes effect. This is also true, obviously, for any other large model company, so we may see a push from Meta to release another model, and from Grok, and Anthropic, and so on. Will this actually happen? Nobody knows exactly, but as I mentioned, we'll know a lot more by the end of this month, based on the actions of California's governor.

Now, in the past few weeks we've talked a lot about AI video generators, and there's a new player in town. It's another Chinese company called MiniMax, and it has released its first model. This company is backed by two tech giants, Alibaba and Tencent, both gigantic companies, so it has serious backing. Its first model, called video-01, supports the generation of high-resolution (1280 by 720) videos at 25 frames per second, for clips of up to six seconds. That is better than most models out there.
And that's the very first version they released. So on paper, this is a very capable model, but people who have tested it are saying that it's not as good as Luma's Dream Machine, and definitely not as good as Runway Gen-3, when it comes to the actual videos themselves. As I mentioned, this is version number one and they have serious backing, so I'm sure this is going to improve. So this wave, this tsunami, of video creation capabilities is just growing bigger and stronger, and all of that before Sora has been released to the public, at least not as of the recording of this episode. And the company has already mentioned they're working on version 2, which promises very serious improvements, including image-to-video capabilities that are missing right now, and longer video clips.

And staying on the same topic, Luma, which makes one of the better tools out there right now, has just released Dream Machine 1.6, and the biggest difference is that it features camera motion controls for more precise video generation. So you can literally add the word "camera" into your prompt, and a pop-up menu appears that allows you to pan left, up, or down, or orbit, or crane up, basically giving you much more detailed camera control to generate video more precisely the way you want it. Now, a similar capability already exists in Runway Gen-3, meaning you can use all these different camera motions just by describing them in the prompt, so this is not a new capability. But the user interface is pretty cool, because it gives you a drop-down menu and shows you graphically what each motion will look like, which helps if you're not an expert on camera movements. That said, this is not rocket science.
Once you understand the different movements, panning, orbiting, and so on, you can just describe them in the Runway prompt as well, but it's a nice improvement when it comes to giving people more control through a simple user interface. That's it for this week. We'll be back on Tuesday with another how-to episode that will deep-dive into a specific use case and how to do it with AI for business. My only request: if you are enjoying this podcast, please rate us on your favorite podcasting platform, whether it's Spotify or Apple Podcasts, and share it with people. So pull up your phone right now, unless you're driving, and give us a five-star review, or whatever review you think we deserve, and click the share button and share it with a few people who you think will benefit from learning about AI. And until next time, have an amazing weekend.
