Leveraging AI

119 | OpenAI’s $100 Billion Valuation, 'Strawberry' Reasoning AI Launch, Google's Gemini Gets New Gems, Oprah Getting Into AI, and More AI News for the Week of August 31

August 31, 2024 Isar Meitis Season 1 Episode 119

Is OpenAI’s latest project about to change the AI landscape forever?

This week, AI news has been nothing short of groundbreaking—from billion-dollar funding rounds to secret projects and unexpected celebrity endorsements. The race to dominate the AI frontier is heating up, with OpenAI leading the charge. But what does it all mean for the future of technology and business?

In this AI news episode of Leveraging AI, let's dive deep into the most significant developments. We’ll explore OpenAI’s rumored new model that could redefine the boundaries of artificial intelligence. Plus, discover why Apple, NVIDIA, and even Oprah Winfrey are making waves in the AI space.

You'll discover:

  • Why OpenAI's latest funding round is so critical and how it could shape the future of AI.
  • The implications of Apple and NVIDIA's involvement in AI funding.
  • How OpenAI's secretive "Strawberry" project could push the boundaries of AI reasoning.
  • Why Oprah Winfrey’s new AI-focused event signals AI’s entry into mainstream media.
  • Updates on Google’s new AI models and Meta’s rapid growth in AI users.

This episode is your one-stop shop for staying ahead of the curve in AI developments. Whether you're a tech-savvy entrepreneur or a C-Suite leader, these insights will keep you informed and prepared for what’s next in the AI world.

About Leveraging AI

If you’ve enjoyed or benefited from some of the insights of this episode, leave us a five-star review on your favorite podcast platform, and let us know what you learned, found helpful, or liked most about this show!

Hello and welcome to a weekend news episode of Leveraging AI, the podcast that shares practical, ethical ways to leverage AI to improve efficiency, grow your business, and advance your career. This is Isar Meitis, your host, and we have a jam-packed week of AI news: a new advanced model expected from OpenAI, a funding round from OpenAI, Gems from Gemini, and even Oprah Winfrey. How is she related to AI? You will find out towards the end of this episode, but now let's get started. If you remember, a few weeks ago we talked about the fact that OpenAI is bleeding money very fast. And when I say very fast, it's billions of dollars every single year, and they need a lot of money to train future models and to stay ahead in the game they're playing right now, which is the race for AGI. So there's been a lot of news this week about the next funding round that they are planning, and it seems that the new valuation in this round will place their value at over a hundred billion dollars. The round is most likely going to be led by Thrive Capital, which is one of their current investors. Other companies that are going to be involved include, obviously, Microsoft, which is the biggest investor so far, but according to Bloomberg, Nvidia and Apple might also participate in this round. This is an interesting development from two different aspects. Apple is obviously one of Microsoft's largest competitors, and having them as another investor in OpenAI, which has been a very strategic investment for Microsoft so far, is going to make the relationship between Microsoft and OpenAI even more complicated than it is right now. From NVIDIA's perspective, it's very interesting because OpenAI is spending billions of dollars annually on NVIDIA GPUs, and if some of that money comes back as an investment from NVIDIA, it's going to make that relationship interesting and complicated as well.
And since we mentioned Apple, Apple just announced the release date for the iPhone 16, and that date is September 9th, which is just around the corner (10 a.m. Pacific time, if you want to be specific). There are a few interesting aspects about this announcement. One is that it happens on a Monday, while all previous iPhone announcements happened on a Tuesday. The reason is not completely clear. Maybe they're trying to avoid competing with a presidential debate, but I don't think that's the reason. I don't know what the reason is, but everybody was expecting it to be September 10th, on a Tuesday. That being said, it is unclear at this point whether the AI features that Apple presented just a few months ago are going to be included in this release. The invitation tagline reads "It's Glowtime," which doesn't really explain what that means. Some people speculate that this refers to the AI capabilities; others speculate it's a new type of display technology. I guess we'll have to wait for September 9th and find out. Now let's go back to OpenAI. OpenAI just announced a new executive, and it's a big one. Irina Kofman is a former senior director of product management for generative AI at Meta. She spent five years at Meta leading the generative AI team, and she was instrumental in the transition from AI research to AI production. Before that she spent 12 years at Google, where she played multiple roles, one of them chief operating officer for Google AI. So she has almost two decades of AI leadership experience, and she's joining OpenAI to lead strategic initiatives. She's going to report directly to Mira Murati, and she's obviously going to play an instrumental role in the next phase of OpenAI. This makes a lot of sense at the junction OpenAI is currently in, as they're trying to raise a significant amount of money, as we just said, at a new valuation, while looking for ways to make a lot more money.
As I shared with you in previous episodes, they are bleeding billions every single year. The competition on the types of services they're providing is only intensifying as models become cheaper and cheaper to use, so the margins are becoming smaller, and leading strategic initiatives that will allow them at some point to make enough money to break even obviously makes sense for them. But the biggest news from OpenAI comes from the intensifying rumors that they're going to release Strawberry, what was previously potentially called Q*, later this fall. So let's make some sense of what Strawberry and Q* are and what exactly they're going to be releasing, or maybe going to be releasing. Q* is a secret project that was developed within OpenAI, and the rumors claim it was the trigger for Ilya Sutskever to start the process that ended with the firing of Sam Altman at the end of last year. There's a quote from Sam Altman in an interview around that time saying he had the opportunity to be in the room when they pushed the veil of ignorance back and the frontier of discovery forward. I really liked that quote, but obviously he did not elaborate on what it means. Q* was a model that had much more advanced reasoning and thinking capabilities than potentially any model to date, and definitely at the time this was a conversation, which is the end of 2023. I shared with you in the past few weeks that there are more and more rumors about Project Strawberry within OpenAI, which is potentially the new code name for Q*, and Strawberry might become a part of the next release from OpenAI that is planned, again the rumors say, for this fall. That new release is what's supposed to be GPT-5, and there's now a code name for it within OpenAI: it's called Project Orion. So I don't know if GPT-5 is going to be called GPT-5 or it's going to be called Orion.
But either way, that's the next big model that everybody has been expecting OpenAI to release. We might have seen initial components of it in the capabilities of GPT-4o and in the demo they did earlier this year, with the voice capabilities, the video capabilities, and Sora; all of these components might be segments of that. But the thing that is clear is that they're using Strawberry and its advanced reasoning capabilities as a tool to train this Orion model. So in addition to the fact that it might become a part of the model, it is currently being used as a tool to train the model in ways that presumably were not possible before. So what's the bottom line of all of this? The bottom line is that we're expecting a big release from OpenAI this fall that is supposed to be dramatically different from everything we have seen. Continuing on the topic of OpenAI and touching on safety, OpenAI is currently rejecting and trying to push back against legislation SB 1047 in California. We talked about this in last week's episode. It's legislation being pushed forward in California to increase the safety of AI models, and it is supposed to create safeguards and define boundaries for the development of larger models, based on how much money is invested in developing them, put more guardrails around their development, and provide more visibility into what they might be doing. Now, this legislation has been met with excitement on one side of the aisle and a lot of resentment and pushback on the other, mostly from big tech. One of the companies against it was OpenAI, but this past week a group of former OpenAI employees, including people who left the company while pointing out safety issues, published an open letter to the legislators in California urging them to pass the bill. They're emphasizing the need for public involvement in high-risk AI systems, and they're highlighting that the safety protocols the bill suggests are nothing short of necessary. Now the big tech companies, which as I mentioned were mostly against it, are claiming that it's going to slow down innovation, push tech outside of California, and other reasons that make absolute sense if you are a big tech company developing AI in California. But on the supporting side of the bill, two interesting voices recently sounded off. One of them is Anthropic. Anthropic worked with the legislators from the first version of this bill to try and get fixes into it, and participated in the process of, quote unquote, fixing the bill into something they can live with, something that will push security but will also not completely kill the innovation in this field. The other interesting voice that came out this week supporting this bill is Elon Musk. Elon, amongst other things, is the head of xAI, X (formerly Twitter), SpaceX, and a lot of other really successful companies, and he has been involved in AI since the beginning. He was one of the founders of OpenAI and gave it a lot of its initial investment. He had a big fight with Sam Altman about the way forward, and he left the company, including the board, and left his investments behind. He's suing OpenAI for their change of ways, from being an open source company trying to build AI that will benefit humanity to a closed source company trying to make money. So there's that in place, but Musk is now supporting the bill. People had mixed feelings about Musk's support. Some are saying they're excited to hear somebody in his position, with his experience with AI, supporting the bill. But some people were very loud against it, claiming that Musk has left, or is leaving, California with some of his companies, and is now using this legislation to hurt his competition coming from California. Who's right and who's wrong in this whole mess? I'm not 100 percent sure.
I personally think that governments, and California could be just the first one, should be involved and should put guardrails on what AI development can and cannot happen, should and shouldn't happen, and there should be some accountability for what you can develop and deploy. I also think that having the companies monitor themselves is not a good idea. So whether or not this law as-is is the right way forward, something like it needs to be put in place, and on a much larger scale than just California: most likely at the federal level, and also at an international level, with some kind of unified international group that will monitor the development and deployment of AI systems. Now, staying on the topic of AI safety, the National Institute of Standards and Technology, also known as NIST, has announced that it signed an agreement with Anthropic and OpenAI for collaboration on AI safety research. The main aspect of this is that the U.S. AI Safety Institute will get access to major new AI models before and after they are released to the public. It will allow them the opportunity to evaluate, jointly with the companies, the capabilities and risks of these models before they are released to the world. The goal of all of this is to allow the companies to continue the research and continue the innovation while keeping safety as a high priority, and, as I mentioned, not through self-monitoring. In addition to the U.S., this announcement also said that there's going to be close collaboration with the U.K. Safety Institute, and together they're going to look into new advanced models before they're released. I think this is a great step in the right direction, and I hope, as I mentioned, that more groups and more international bodies are going to do the same, teaming up with the leading companies in the world, both closed source and open source, before any of these models are released. And from OpenAI to Google.
As I mentioned, Google made some big announcements this week, and the two biggest ones are improvements to Gemini. One of them is the introduction of Gems. At their I/O event earlier this year, Google shared that they're going to deliver Gems, which are mini automations built within the Gemini environment, similar to custom GPTs on OpenAI and Projects on Anthropic. I was definitely waiting for something like that because I am a Google user. All my companies run on the Google platform, and having the ability to run automations within the Google universe, similar to the way I'm running them in OpenAI and Anthropic, is going to be great. I haven't had a chance to test it yet, so I cannot tell you how well it performs compared to Anthropic's and ChatGPT's capabilities. I assume it's going to be similar, and I'm definitely going to report on that in the next few weeks as I test it out. As part of the release, Google also released a few pre-made Gems, such as Brainstormer, Career Guide, Writing Editor, and Coding Partner. Each of them is more or less self-explanatory and geared towards specific tasks. I can tell you that I'm using multiple automations right now, both on OpenAI and on the Anthropic platform, for multiple tasks in my company. It's an amazing time saver that gives me great capabilities, and I'm looking forward to testing the Google version as well. In addition, Google integrated Imagen 3, their advanced image generator, back into Gemini. So now, within Gemini itself, you can start creating images. If you remember, there was a whole issue with creating images of people, and I'm not going to go back over all the details, but that's possible now again. You can create images of people, though with a lot of guardrails and red tape around what you can and cannot create. But the capability exists, and it creates very impressive images within the Gemini platform.
In addition, Google announced three new experimental models that they are releasing: Gemini 1.5 Flash 8B, an 8-billion-parameter model; a new version of Gemini 1.5 Pro, which is their most capable model; and an improved version of the existing Gemini 1.5 Flash. So three different models that are replacing, at least temporarily, the existing ones. You can access them for free on Google AI Studio as well as through their API platform, and they're also going to be available on the experimental side of Vertex AI, which is their platform for API development on Google Cloud. While these are experimental, the plan is to turn them into the formal production versions in the very near future. Like all the other companies, they keep coming out with new models almost on a weekly basis. It's not always clear what the differences between them are, but there are improvements across all of them. Now, while we have talked a lot about OpenAI and Google, there is interesting news coming from the side of Meta about how fast their AI usage has grown in the past year. The current statistics for Meta AI, as of August 2024, are 400 million monthly active users, 40 million daily active users, and 185 million weekly users. This last number was even shared by Zuckerberg himself. To put things in perspective, ChatGPT is reported to have 200 million weekly users, so 185 million puts Meta very close. So Meta, coming very late to the game and only releasing open source models, is closing the gap in weekly active users on the leading company, which is OpenAI. That being said, it's a very different environment, because you can access Meta's AI on Facebook, Instagram, and WhatsApp, and a lot of people don't even know they're using AI because it's built into the platform itself. So it's not a completely fair comparison, because
Meta has three billion users across their various platforms, and by giving them access to its AI capabilities from the main page of each and every one of those platforms, it almost guarantees a very wide range of usage. The information shared in this article does not clearly state how people are using Meta's AI, and I think it's going to be dramatically different from the ways people use ChatGPT or Claude. That being said, the numbers are still extremely impressive. And from Meta to Anthropic: two very interesting pieces of news from Anthropic this week. The first one is that Anthropic has published the system prompts of the latest Claude models, Claude 3 and Claude 3.5. The system prompt is basically what shapes how the large language model behaves. The raw model is very generic, but to behave the way it does, there is a system prompt that tells it how to answer different questions, what not to answer, how to approach different scenarios, and so on. This has always been a secret that all the companies kept close to their chest, without ever releasing it, because it's considered competitive information. None of the companies released it until this week. So, as I mentioned, Anthropic just released the full system prompts of the Claude models, and they're doing this to promote openness, as they have done with everything else. Anthropic's approach all along has been to be as transparent as possible with everything they're doing and to promote a safer and more open way of developing these models. I mentioned earlier their work with the legislators in California, and other similar efforts. You can go and read the entire post and see everything they mention, but there are a few things I want to highlight. One is that Claude is instructed to be completely face blind, which means it is not allowed to identify any individuals in images.
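To make the concept concrete, here is a minimal sketch of how a system prompt typically rides alongside user messages in a chat-style API. The prompt wording and helper function here are illustrative stand-ins, not Anthropic's actual published prompt or SDK:

```python
# Minimal sketch of the system-prompt pattern used by chat-style LLM APIs.
# The prompt text below is a made-up stand-in, not Anthropic's real prompt.

SYSTEM_PROMPT = (
    "You are a helpful assistant. Do not identify people in images. "
    "Avoid starting responses with 'Certainly' or 'Absolutely'."
)

def build_request(user_message: str) -> dict:
    """Assemble the payload a chat API would receive: the hidden system
    prompt is sent with every user turn, steering the model's behavior."""
    return {
        "system": SYSTEM_PROMPT,  # invisible to the end user
        "messages": [
            {"role": "user", "content": user_message},
        ],
    }

request = build_request("Who is the person in this photo?")
```

The key point is that the "raw" model and the end-user chatbot differ only in this hidden instruction layer, which is why publishing it is considered giving away competitive information.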
The chatbot is instructed to appear, and I'm quoting, "very smart and intellectually curious" and to engage in discussions on various topics. There are clear instructions on how to address controversial topics, and even specific linguistic suggestions, like avoiding starting responses with the words "certainly" or "absolutely," which makes it more conversational and sound more human than some of the other chatbots. As I mentioned, it's worth a read. I will share a link to it in the show notes so you can go and read the whole thing for yourself. Now, the other interesting news about Anthropic is that they have signed a deal with Amazon to power the next version of Alexa. There have been rumors for months that a new version of Alexa is coming. This new version has a code name, dubbed "Remarkable," and Alexa Remarkable is supposed to launch in October of this year. The price to use it is going to be $5 to $10 monthly, and the goal is to enable you to have a much more detailed conversation with Alexa, get a lot more information, and activate things in your house with natural language in a much more sophisticated way than you can right now. I've been an Alexa user more or less since the beginning. I have many of them in the house, and it's relatively limited, even though it's very useful for the things I'm using it for. The new capabilities will allow you to do a lot more: have much more complex queries and conversations with it, as well as activate multiple things in the house all at once. So think about wanting to close the blinds, change the temperature, and play specific music, and saying all of that in one sentence. You'll be able to do that, which you cannot do right now. That being said, I don't see myself paying $5 to $10 per device to get this additional functionality, because the way I'm using it right now gives me more or less everything that I need.
And if I need more advanced AI capabilities, Alexa is probably not the direction I'm going to go. I'm probably going to use an app on my phone, or if I'm next to my computer, something on my computer, and that might be Claude or Gemini or ChatGPT or Perplexity or one of those. So I don't see where this is going as far as driving revenue through the Alexa channel, but an analyst from Bank of America has estimated that there are over a hundred million active Alexa devices right now, and he predicts there might be 10 percent adoption. If that happens, that is between 500 million and a billion dollars in annual revenue coming to Amazon through this new channel, which is definitely worth a try, especially if they're going to do this through a partnership with Anthropic rather than invest in developing their own model, which was their plan. For months they were talking about developing a more advanced Alexa on their own, and now that they've announced they're doing this with Anthropic, I don't know what exactly it means. Maybe it's a temporary solution until they figure out how to do this themselves, but maybe not; maybe they dropped the whole idea of developing such a model on their own and they're just going to rely on Anthropic moving forward. Either way, it will be interesting to see how this evolves and whether people are actually willing to pay for this additional functionality on Alexa. And to continue this episode's trend of going across all the different providers, there is also some interesting news about xAI and its model Grok 2. Grok 2, as I mentioned in the past few episodes, has been released, and it's actually doing really well.
VentureBeat reports that Grok 2 and Grok 2 Mini, the two models that were released, are now slightly better and a lot faster than they were at release time, just a couple of weeks ago. The interesting thing is that just two developers at X completely rewrote some aspects of the code for how Grok runs, and now Grok 2 Mini is running twice as fast as before, while still being a little better than it was. So a huge improvement in a very short amount of time. The other interesting aspect is that Grok 2 is tied for second place with Gemini 1.5 Pro, just behind OpenAI's GPT-4o. Now, why is that so interesting? It's interesting because X was very late to the game. When the first version of Grok was released, people hated it, did not understand what it would be useful for, and everybody was joking that Elon was developing this toy just out of spite, to show the other companies that he can do it too, and that nothing good would come of it. And right now it is at the top of the game, in about half the time, or maybe even less, that everybody else has been playing this game. That being said, Elon has been in the AI game, as I mentioned earlier in this episode, since the very beginning. He has been developing AI in several of his companies, with a lot of AI development at Tesla, so he's not new to the AI game. In general, it's not a good idea to bet against Elon when he has put his mind to something, especially when there's a vendetta involved and some revenge against the company he had to leave after being one of its co-founders. As I mentioned, Grok 2 is right now ranked number two on the LMSYS Chatbot Arena, and Grok 2 Mini, its little brother, is ranked number five. Still very good. So you have two models coming from xAI ranking in the top five models in the world right now, and that is before xAI's new mega supercomputer, which they have built and which is supposed to go online later this year to train Grok 3.
So where is this all going? It's very obvious: all these companies are investing billions of dollars training better and better, faster and cheaper models that we get to use. Now, how does that work from a business model perspective? I don't really know. I shared that with you in previous episodes. The fact that these models are becoming cheaper, because of the intense competition and because of technological improvements, puts a big question mark over the huge investments that have to be made: tens of billions and hundreds of billions of dollars in computing power and hardware. How does that work as a business model? Not clear, but it will be very interesting to watch in the next few years. Now, in different news about X, this time the platform formerly known as Twitter and the AI chatbot Grok inside it, there has been some development related to the election. I've shared with you my serious concern about the ability to create misinformation, disinformation, and fake news using Grok on the X platform, especially with the addition of the AI, and that has been amplified by the capability to create images in Grok 2 when it was released a few weeks ago. You can go back to the previous episodes to learn about that. But five secretaries of state have warned about Grok's capability of spreading false election information. So now, every time somebody asks Grok about elections, it refers users to Vote.gov for information instead of giving answers about the topic. That change was just recently implemented in order to prevent Grok from sharing misinformation related to the elections themselves. ChatGPT implemented something similar earlier this year: it sends people to CanIVote.org every time it gets asked about election information. Now, does that solve all of our problems with AI in the elections? Absolutely not.
There are still multiple ways to create fake news and other misinformation and spread it across multiple channels, but at least it's a step in the right direction. The next few pieces of news are around AI coding platforms. This area of the AI world is on fire. Literally every single week there's some big news about it, and this week there are several different pieces. The first one is that a new AI coding startup called Magic just secured a $320 million funding round. In addition to it being a lot of money, the investors are serious players, like ex-Google CEO Eric Schmidt, Alphabet's CapitalG, Atlassian, and other big names in the industry. This company has raised $465 million so far, and its valuation back in February, in its previous round, was $500 million, which means they're now worth even more. They're going to use a lot of that money, as part of a partnership with Google Cloud, to build two AI supercomputers for its future models. Now, the thing that blew my mind when I read this article about Magic is that they have developed an architecture they call Long-Term Memory Network, or LTM, and with this architecture they were able to create a context window of a hundred million tokens. To put things in perspective, for those of you who aren't familiar, tokens are the way all these models work: they don't really understand words, they understand segments of information, and a token is about 0.7 words. The best model available to us right now in this regard is Gemini, Google's model, which has a two-million-token context window. The next best we have is Claude, with a 200,000-token context window. So a hundred million is roughly three orders of magnitude above most models out there, which is absolutely incredible. What that means from a coding perspective is that within one chat, this tool can generate and take in roughly 10 million lines of code. That's on a completely different scale than anything else we've had so far, which explains the amount of money and the profile of the investors in this company. Now, based on financial analysts, the AI-powered coding tools market is estimated to reach over $27 billion by 2032. I actually think it's going to happen faster, because at the pace things are moving right now, these tools are going to be everywhere people write code. On the same topic, a company called Abacus AI just launched an open source model that specializes in coding. The new model is called Dracarys, and it's optimized for coding tasks. As I mentioned, it is open source and built on top of other existing open source models, like Meta's Llama, and it is available on Hugging Face and as part of Abacus AI's enterprise offering on their website. This open source release obviously comes into a more and more crowded field, with tools like GitHub Copilot, Tabnine, and Replit that have been around for a while, together with Magic, which we just mentioned, and some other companies. Now, on the same topic but from a different direction, to show you how powerful these tools are: Andrej Karpathy, who is a big name in the field, the former head of Tesla's Autopilot program and an ex-OpenAI researcher, just wrote a tweet sharing how much he loves using Cursor, an AI coding tool. He describes Cursor as a net win compared to GitHub Copilot. He's not affiliated with the tool; he simply stated that he can't imagine going back to unassisted coding. He even shared how he uses the tool and his current approach to coding. He says he focuses on what he calls half-coding, which is writing initial code and comments and then letting the AI complete the rest, and he emphasizes the iterative refinements he can do very quickly with AI-generated code.
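To get a feel for those context-window numbers, here is a quick back-of-the-envelope sketch. The 0.7-words-per-token figure is the rough average mentioned above; the tokens-per-line-of-code figure is an illustrative assumption, since the real ratio varies by language and style:

```python
# Back-of-the-envelope comparison of context windows, in tokens.
# Assumes ~0.7 words per token (rough average discussed above) and
# ~10 tokens per line of code (a hypothetical illustrative average).

context_windows = {
    "Magic LTM": 100_000_000,
    "Gemini 1.5 Pro": 2_000_000,
    "Claude": 200_000,
}

WORDS_PER_TOKEN = 0.7
TOKENS_PER_CODE_LINE = 10  # assumption, varies widely in practice

for name, tokens in context_windows.items():
    words = tokens * WORDS_PER_TOKEN
    code_lines = tokens / TOKENS_PER_CODE_LINE
    print(f"{name}: {tokens:,} tokens, about {words:,.0f} words, "
          f"about {code_lines:,.0f} lines of code")

# Relative scale: Magic's window vs. the two largest shipping windows.
print(100_000_000 // 2_000_000)  # 50x Gemini 1.5 Pro
print(100_000_000 // 200_000)    # 500x Claude
```

Under these assumptions, 100 million tokens works out to roughly 10 million lines of code in a single context, which is where the figure in the article comes from.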
Another big name in the field sharing similar sentiments about AI coding is Amazon Web Services CEO Matt Garman, who predicts that most developers may not code traditionally in the near future. Now, that near future might be six months, might be 24 months, but sometime in that time frame people will stop coding the way we have coded since we've had computers, and will rely more and more on AI-assisted, or completely AI-generated, code that is merely monitored by humans to make sure it is okay and can be checked in, compiled, and so on. And I promised you some interesting news about Oprah Winfrey. If you needed any sign that AI is not a geeky thing anymore and is becoming mainstream, Oprah Winfrey just announced that she's going to host an "AI and the Future of Us" event on ABC on September 12th at 8 p.m. As you would expect from Oprah, she has some high-profile guests, like Sam Altman, the CEO of OpenAI; Bill Gates, the co-founder of Microsoft; Christopher Wray, the FBI director; and many other big names who are going to address the various aspects of AI's impact on our future. There's going to be some basic explanation about how AI works and what it is, but also discussion of the impact of AI on science, health, education, and the job market. There are going to be demonstrations of AI capabilities, and there's going to be a discussion about the potential risks of superintelligent AI. As I mentioned, if you're looking for signs that AI is now mainstream, Oprah hosting an AI event on ABC is it. Nothing becomes more mainstream than that. I must admit, I'm really curious to see that particular event, not because I think it's going to share things I may not know, but mostly because I want to hear how these particular people share their opinions, not with their geeky audience, but on mainstream media. That's it for this week.
We will be back on Tuesday with another fascinating episode that's going to dive deep into how to do something with AI to improve your business. If you enjoy this podcast, please take your phone right now and give us a review on your favorite podcasting platform, if that's an option. And either way, share it with a few people. Click the share button. Yes, right now. Unless you're driving; don't do this if you're driving. But if you're not driving, pick up your phone, click the share button, and send this podcast to a few people you know who could benefit from this information. I would really appreciate it, and you'll be playing a role in providing AI literacy to more people on the planet, which will help us get to a better outcome from using AI. And until next time, have an amazing weekend.
