Is the battle for AI supremacy being fought over leadership instead of technology? 🤖
This week's episode recaps the drama between OpenAI's board and leadership team. We discuss the temporary firing of CEO Sam Altman, the implications for OpenAI's charter and mission, as well as product updates from Google, Meta, and Anthropic.
Topics we discussed:
💥 Sam Altman reinstated as OpenAI CEO after employee exodus threat
⚖️ Questions remain around OpenAI's non-profit charter and mission
📈 Google delays Gemini, its AI model built to compete with OpenAI
✏️ Bard can now summarize YouTube videos
🖼 Meta releases open-source text-to-video model Emu Video
✂️ Meta also launched image/video editing model Emu Edit
👁 Anthropic's Claude 2.1 has 200,000 token context window
💰 Anthropic cuts per-token pricing to compete with OpenAI
No guest for this short news update episode. Check out the Leveraging AI podcast every Tuesday morning for our usual interviews and commentary.
About Leveraging AI
If you’ve enjoyed or benefited from some of the insights of this episode, leave us a five-star review on your favorite podcast platform, and let us know what you learned, found helpful, or liked most about this show!
Hello and welcome to a short news edition of the Leveraging AI podcast. In today's show, we'll just review updates from this week. The first big news is obviously the return of Sam Altman as CEO of OpenAI, together with Greg Brockman and a few other people who had already said they were leaving to start a new AI division at Microsoft. If you somehow missed the rest of the big news about this topic: the nonprofit board of OpenAI decided to fire Sam Altman and demote Greg Brockman from board member to only his president position. He resigned shortly after, a few other people left with them, and most of the employees threatened the board that if they did not bring Sam back, they would leave as well. Microsoft immediately hired Sam and Greg Brockman to start a new AI division there, and Microsoft also made an open offer to any scientist, anybody who works at OpenAI, to come work for Microsoft for the same exact compensation.

That obviously posed a serious threat to the existence of OpenAI, which led the board, which had been very clear that it did not want Sam Altman back, to basically resign and let Sam take his position back, and Greg as well. A new board has already been chosen, and that board is going to appoint an even bigger board that will hopefully prevent this in the future. They also all agreed to let a third party investigate exactly what led to this situation, so this craziness cannot happen all over again. So as of right now, when this is being recorded, Sam Altman is the acting CEO of OpenAI again, and everything is back in its place. The only thing that is now questionable is OpenAI's charter of putting humanity's best interests ahead of commercial interests. With everything that happened, I don't even know if that's true anymore. Only time will tell.
The interesting thing everybody wants to know, beyond what exactly happened, is whether the main assumption right now is true: that OpenAI made some kind of quantum leap forward in its research, from a technological perspective, that scared the board enough to say this is beyond the line where we're willing to go in releasing this to the world. It will be very interesting to learn what that thing is, but nobody is sharing that information, at least as of yet. So we have to be patient and see how that evolves.

And from OpenAI to one of their biggest, at least potential, competitors: Google. Google has been working on their next generative AI model, called Gemini. Gemini was supposed to be released already, and now it has presumably been delayed to Q1 of 2024. Gemini is supposed to be a multimodal model that has been developed from the ground up to compete with OpenAI and its partner Microsoft. My assumption is that after Bard was, to say the least, not impressive, Google cannot afford to release anything short of spectacular. So I think they're working very, very hard until they get to the point where they can release a model that is at least as good as GPT-4, and preferably better in at least a few things, and I think that is the main reason for the delay. It's obviously not a simple task to develop a model that competes with the best model in the world right now. But if there's a company that can do that, it's Google, because they have more computing power, more data, more smart scientists, and more resources and money to do these kinds of things. So I have zero doubt in my mind that Google will play a very serious role in the future of humanity and of AI in it.
So whether Gemini gets released on its new schedule in Q1 or is delayed again, I think once it comes out it's going to provide serious competition to OpenAI and the best, most advanced generative AI models out there.

More news from Google: they have released the capability for Bard to review a YouTube video and provide you with the highlights as a text summary. You can take a link from YouTube, drop it into Bard, and ask it what the video is all about, and it will give you a quick, short, to-the-point summary, including defining what genre it is, what type of video it is, and potential follow-up questions you can dive into for deeper understanding of the topics the video covered. Going back to the previous news from Google, I think what we'll see is more and more integrated capabilities across all its different tools and software platforms, which I think will be hugely beneficial to us as users. Just think about being able to use AI across everything: G Suite, YouTube, Google Search, etc. When it knows what happened in the previous steps across the other platforms, that could be amazingly powerful, and obviously a benefit over what OpenAI has, which just provides the large language model. But it will be somewhat similar to what Microsoft is doing with OpenAI's capabilities, where they're integrating them into everything Microsoft, including Bing search and the 365 suite, as well as their cloud services.

Moving from Google to another big player in the field: as you've heard me say several times before, Meta has been playing a very different game than OpenAI, Google, Anthropic, and so on. They have mostly been doing stuff for themselves, as well as developing and releasing open-source models, and they've just released a very interesting pair of models for video generation and image editing, which they call Emu. One is called Emu Video and the other is called Emu Edit.
Emu Video is a text-to-video generator that is open source and available to everyone. Apparently, from what people are saying, and per Meta's own human evaluations, it's performing better than any other text-to-video model that exists today. I haven't checked it myself, but it sounds very promising. You've heard me say this before: I think 2024 is going to be the year of text-to-video. We've already seen some great attempts; the leader right now, I think, is Runway, but there are a bunch of other companies and a bunch of open-source capabilities to do that, and now Emu Video from Meta adds some more capabilities to the mix. Emu Edit allows you to edit images, create stickers from images, and so on, very easily, just using text.

And from Meta to Anthropic: Anthropic made two big pieces of news this past week. The first is that they have released Claude 2.1, the next version of Claude 2, their large language model. I really like Claude personally; I use it a lot. The headline announcement is that Claude 2.1 now has a 200,000-token context window. That means they're back again as number one with the longest context window. They used to hold a very big lead over OpenAI, but OpenAI, in their release of GPT-4 Turbo, announced a larger context window of 128,000 tokens, over the 100,000 that Claude used to have. So now Claude has a 200,000-token context window. The gap between Claude 2 and GPT-4 used to be really, really big and made a huge difference. The difference now is still big: 128,000 tokens for OpenAI versus 200,000 for Anthropic's Claude. But I'm not sure how many people are actually going to use it, and the other thing I'm not sure about is the accuracy of the responses you'll get if you really upload something that big. From a technical perspective, though, they are back in the lead.
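To put those context-window numbers in perspective, here's a rough back-of-the-envelope sketch. It assumes the common heuristic of about 0.75 English words per token and about 500 words per single-spaced page; both are assumptions for illustration, not tokenizer or formatting specs:

```python
# Rough capacity estimate for a large LLM context window.
# WORDS_PER_TOKEN and WORDS_PER_PAGE are heuristic assumptions,
# not exact tokenizer measurements.

WORDS_PER_TOKEN = 0.75   # ~4 characters per English token
WORDS_PER_PAGE = 500     # typical single-spaced page

def approx_capacity(context_tokens: int) -> tuple[int, int]:
    """Return (approx. words, approx. pages) that fit in the window."""
    words = int(context_tokens * WORDS_PER_TOKEN)
    pages = words // WORDS_PER_PAGE
    return words, pages

# Claude 2.1's 200,000-token window vs. GPT-4 Turbo's 128,000:
print(approx_capacity(200_000))  # -> (150000, 300)
print(approx_capacity(128_000))  # -> (96000, 192)
```

In other words, under these assumptions a 200,000-token window holds on the order of a 300-page book in a single prompt, which is why the accuracy question over such long inputs matters so much.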
Another thing about this model: they're saying it reduced hallucinations by 50% compared to previous models. That sounds like a huge improvement, but it's hard to evaluate, because it's very hard for us to know how many hallucinations there were before, how many there are now, and whether it's better or worse than other models like OpenAI's and Google's. Claude 2.1 is available both on claude.ai, the chat you can use online for free or in a paid version, and through their API, which for some reason is really hard to get access to, compared with the OpenAI API, which literally anybody can get.

The second big piece of news from Anthropic with regard to Claude 2, or Claude 2.1 I should say now, is that they have slashed their per-token pricing dramatically. This is obviously done to compete with OpenAI, which reduced its pricing when it released GPT-4 Turbo, but also with the growing number of highly capable open-source models, which are becoming more available and more stable. That means large companies and enterprises now have a real choice when they decide which models to use: they can go with open-source models, which are obviously going to cost very little to run. So companies like OpenAI and Anthropic will have to find ways to make their services significantly more attractive financially.

This is it for the news this week. We'll come back with a regular episode with an interview on Tuesday morning, as always. Until then, explore AI, test new things, and share what you find with the world and with me on LinkedIn. If you like this podcast, please give us a five-star review and share it with people you know; that helps us reach more people and helps us get you better guests. And that's it. I'll see you on Tuesday.