Leveraging AI

67 | Elon Musk sued OpenAI, Groq is supercharging AI performance with LPU, and Sundar Pichai Google's CEO position is at risk, and many more important AI news for the week ending on March 2nd,

March 02, 2024 Isar Meitis Season 1 Episode 67

Are you ready to uncover the future of AI that's transforming our world right now?

Give yourself and your business the highest chances of success in the AI era with the AI Business Transformation Course.

In this episode of Leveraging AI, Isar Meitis dives into groundbreaking developments with industry giants and emerging players. Discover how new technologies are reshaping the AI landscape, from Groq's innovative processors to Stability AI's latest release and Elon Musk's legal battle against OpenAI.

Topics We Discussed

  • Groq's revolutionary LPU technology.
  • Stability AI's Stable Diffusion 3 launch.
  • Elon Musk's lawsuit against OpenAI.
  • Google's challenges with Gemini image generation.
  • Microsoft's new ventures in AI integration.
  • The rise of AI-driven coding tools.
  • The evolving landscape of AI in customer service.
  • Meta's upcoming Llama 3 release.
  • AI's increasing role in mobile technology.

About Leveraging AI

If you’ve enjoyed or benefited from some of the insights of this episode, leave us a five-star review on your favorite podcast platform, and let us know what you learned, found helpful, or liked most about this show!

Isar Meitis:

Hello and welcome to a News Weekend episode of the Leveraging AI podcast. This is Isar Meitis, your host, and we have a packed episode with really big and exciting news. Usually we start with the bigger players, OpenAI and Microsoft and Google and so on, but this week some of the biggest news does not come from the three or four giants, but from other companies. This episode is brought to you by the AI Business Transformation Course. It's a course we've been running successfully since last year. We're running two courses in parallel every single month, and we're fully booked through the end of March, but the next cohort opens on April 1st and you can sign up on our website. There's going to be a link in the show notes so that you don't forget. If you're looking to advance your understanding of AI, to advance your career, or to drive growth in your business or your department, it's an incredible course that has been taken by hundreds of people at this point, all of them leaders in different businesses. So check out the link right now. To this week's news. We're going to start with a company that has taken this week's news by storm, if you've been following the field of AI, and that company is Groq. That's not Grok by X, the large language model pushed out by the company formerly known as Twitter, but Groq with a Q at the end, G-R-O-Q. It's a company that developed a new kind of processor: instead of the GPUs that everybody's now using from NVIDIA, which drove NVIDIA's value through the roof, they've developed what they call an LPU, a language processing unit, which is built specifically to run large language models in a very efficient way. In all the demonstrations that have been circulating around the internet right now, it runs up to 10 times faster than running ChatGPT on GPUs. It's absolutely incredible.
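To get a feel for what that 10x claim means in practice, here is a quick back-of-envelope calculation. The throughput numbers are illustrative assumptions based on public demos (Groq demos have shown on the order of 500 tokens per second, versus a few dozen for typical GPU-served chat models), not official benchmarks:

```python
# Back-of-envelope comparison of demoed LPU throughput versus a typical
# GPU-served chat model. Both rates are illustrative assumptions.

GROQ_TOKENS_PER_SEC = 500   # rough order of magnitude from public Groq demos
GPU_TOKENS_PER_SEC = 50     # a typical rate for GPU-served chat models

speedup = GROQ_TOKENS_PER_SEC / GPU_TOKENS_PER_SEC

# A ~300-word answer is roughly 400 tokens (~1.33 tokens per English word).
answer_tokens = 400
groq_seconds = answer_tokens / GROQ_TOKENS_PER_SEC
gpu_seconds = answer_tokens / GPU_TOKENS_PER_SEC

print(f"speedup: {speedup:.0f}x")                             # 10x
print(f"Groq: {groq_seconds:.1f}s, GPU: {gpu_seconds:.1f}s")  # 0.8s vs 8.0s
```

Under those assumed rates, a full multi-hundred-word answer lands in under a second instead of several seconds, which matches what the demos show.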
The company was founded and is led by Jonathan Ross, an ex-Google engineer, and they've been working on this for a while. So while we are just hearing about it, this company has been around for several years developing this new technology. What they're saying is that they can scale the processing linearly, without the traditional bottlenecks that hold back performance, and it can generate hundreds of words of answers in under one second. If you just search for these demos online, you will be blown away if you're used to the speeds at which you're getting answers from GPT-4 or Claude or Gemini or any of the other big models. There are two great pieces of news about this. One is that it's an independent company outside big tech that is actually pushing a completely new technology. The other is that it will allow us to run these models much faster with significantly less computing power, which is currently doing huge damage to the environment with the amount of energy and cooling it requires. So where is this going to go? I don't know. My gut feeling tells me a big player will probably buy them out and will own that technology. But the fact that it is possible is really promising and a positive note. Another huge piece of news that I'm personally very excited about is that, as we discussed last week, Stability AI is launching Stable Diffusion 3. It is in a pre-release that is available to some people, who have started sharing images they created with it, and it's absolutely mind-blowing. It is the first time that I've seen a tool that outperforms Midjourney, and that outperforms DALL-E and Google, by a very big spread. And it's open source, meaning you can use it without paying licenses to anybody. And it has a huge community behind it, which has been around the Stable Diffusion world developing plugins and extensions that I assume will also work on Stable Diffusion 3. So, very exciting news.
What they are saying is that it will also allow them to expand into video and 3D capabilities in the future based on this new technology. So if you are in the world of creating images, and we all are, regardless of what you're doing in business, this is very good news. The third piece of news doesn't come from any big player but from a very big, well-known person: Elon Musk. Elon Musk has just filed a lawsuit against OpenAI over the approach that they've taken. The suit is filed against Sam Altman and Greg Brockman and other people at OpenAI, saying that they've betrayed the foundational mission, which was to benefit humanity in an open source way. For those of you who don't know the story, I'll do a very quick short version of it. Elon Musk was one of the founding members of OpenAI, together with Sam Altman and Greg and a few others, and they built it as a nonprofit organization with the goal of developing the most powerful AI to defend and provide benefits to humanity. Then, in 2019, they took the approach of taking a lot of money from Microsoft, and right now they're more or less a subsidiary of Microsoft, with Microsoft benefiting from the huge fallout of capabilities that is now integrated into everything Microsoft. It is not open AI anymore, because these models are now behind closed doors and only available to OpenAI, potentially with some visibility to Microsoft. So because this company was founded as an open source effort with very well-defined goals that they're now betraying, that's the source of the lawsuit from Elon Musk, who left in 2018 after a disagreement with the board, and specifically with Sam Altman, over the move to involve Microsoft in the process. Where is this going to go? I don't know. I seriously doubt he's going to win this battle. I'm not sure exactly what the court can do or what the potential outcomes could be.
But if there's a person that has deep enough pockets to run this battle, it's Elon. Later in this episode, we're going to talk about other moves that Microsoft is making in parallel to have options other than OpenAI in its ecosystem, but we'll get to that later. Another huge piece of news is evolving out of something we discussed last week. Last week, I shared with you that Google's Gemini image generation model has been producing woke or over-diversified versions of historical figures, refusing to create accurate images of actual people from history or current times, such as the Pope and the Founding Fathers and so on, and making sure that all these images show a variety of people from different races and different ages and so on. The controversy has been growing dramatically, and it's now at the point where the job of Google's CEO, Sundar Pichai, is at risk. There are more and more voices saying this is just one symptom: it shows that Google is not at the same level it was before and cannot compete with the other AI models out there right now. And if you think about the broader picture, Google's main business, which is its search ads income, is at risk with this whole AI thing. If they cannot figure out how to deploy AI models successfully, which so far they've proven they cannot do, they may take a serious hit to their major business, which is the ads on their search. And if that happens, that's going to be one of the biggest falls of one of the biggest tech empires ever. So there are more and more voices calling for the removal of Sundar Pichai from the leadership of Google, not necessarily from the leadership of Alphabet. But this is a highly interesting and impactful story that is evolving.
And I'll keep on reporting to you what happens next, which may happen one hour after we release this podcast, but next week I will let you know what's happening, or if you're following me on LinkedIn, I will obviously share the news if anything evolves before the end of next week. Speaking of Google: Google keeps releasing more and more Gemini integrations into different components that Google controls, anything from its Nest home screens to, now, the announcement that Gemini can create calendar events while you speak to it over your phone or chat with it. So the Gemini interface gets involved in more and more things Google, similar to what we've seen Microsoft doing with Copilot. Initially, this capability to create calendar events through the phone is only rolled out in the U.S., but it's expected to roll out to the rest of the world shortly after. In addition, Google added the Gemini chatbot to the Google Messages app. It's rolled out as a beta, but it means you can now chat, create images, and so on with Gemini through Google's native messaging interface. This is similar to what we've seen with Meta releasing similar capabilities into WhatsApp. It means we will continually see more and more AI capabilities integrated into more and more day-to-day tools that we are currently using across phones, computer devices, the web, et cetera, because that's what the big players are going to continue pushing. Before we move to somebody other than Google, Google made a very interesting announcement on the research side. They have announced what they call the LLM Comparator evaluation tool, which allows the developers of these tools to get a glimpse into how the large language model actually works and what drives it to generate its outputs. One of the biggest problems with fixing or improving large language models is that they are a black box.
Even the people who develop them have very little visibility into what a model is actually doing once you ask it to do something, and from there, how it creates the answers. What they know how to do is train these models and fine-tune these models, but the actual operation is not visible to anyone. This tool gives Google's engineers some visibility and analytics into the processes the large language model is running, which allows them to improve and make the models better and better, faster than they could before. And from Google to an interesting collaboration of three big giants in the AI world. The first one is NVIDIA. They're combining forces with Hugging Face and ServiceNow, and they've introduced StarCoder 2, which is a family of generative AI coding models that is available in several different sizes: a 3-billion-parameter model, a 7-billion-parameter model, and a 15-billion-parameter model. It was trained on over 600 programming languages, which, to be fair, I had no clue there were so many programming languages. People who have been testing it are saying that it is extremely capable, and it's optimized for high efficiency alongside high performance in the code generated by these tools. In an interview at a summit in Dubai, Jensen Huang, the CEO of NVIDIA, said that learning to code is an outdated priority, because he thinks that these AI models will be able to completely replace people who write code in the near future. Now, first of all, Jensen Huang is obviously a really smart person, and when he's saying something like that, we'd better pay attention. That being said, the biggest problem I see with that approach is that developing software is not just about writing code. You need people with experience to become higher-level engineers and system engineers to actually build the broader architecture of how software works, and these tools are definitely not doing that at this point. So what's the problem?
The problem is, if you don't have junior developers that then become senior developers, who then become system engineers, how are you going to get system engineers to do the engineering of the broader systems? I don't think anybody has an answer to this right now. I think for the near future we'll still need engineers, but the direction is very clear: we will be able to develop a lot more code with people who are a lot less experienced. And if you believe some of the companies moving in that direction, or Jensen Huang, anybody, including you and me, will be able to use natural language to develop any software we want, and it will run in a very efficient way. I think that's still further out into the future, but it definitely puts an interesting spotlight on maybe one of the most coveted skills that people are rushing to learn today, which is computer engineering and learning how to write code, and which may become less and less relevant. I'm sure there will be other variations of this to keep on creating software; definitely, a huge demand in the next few years is going to be for data scientists and data engineers of different levels. On the same topic, GitHub Copilot, which is another coding tool, just released a Copilot Enterprise version that is available for $39 a month through a subscription model. What it allows companies to do is give developers access to the company's internal private code repositories as part of Copilot. So Copilot looks at your entire existing codebase in order to write better new code that is a better fit for what you have built before. It also has a chat interface that allows you to research, query, and look for things in your existing code. This is a huge improvement, allowing you to use your proprietary local data to produce better code in the future. It also has access, through Bing, to the internet in order to get you outside information.
So, another improvement in code-writing capabilities that is aligned with the predictions from Jensen Huang. We shared with you that a few companies have struck licensing deals with large language model owners. This trend continues, and this time the news comes from Automattic. That's the company that owns Tumblr and WordPress. WordPress is obviously the largest host of blogs and websites in the world, and Tumblr is another blogging platform. The rumors are saying that they are in talks with Midjourney and OpenAI for licensing deals that will allow them to scrape the data from these platforms in order to train their models, in return for revenue. These are still rumors; nobody has confirmed them, but this continues the trend that we saw last week, and that I'm sure we will continue to see, where these large language model companies, instead of dealing with lawsuits, will prefer to cut licensing deals. The question still remains: who owns the data? While Tumblr decides to make money from its blogging platform, the people who wrote the blogs, uploaded the images, or created the creations do not get compensated at this point. So I still see a big gap, and I still see that this is just moving the lawsuits down the pipeline, where now people may go out and sue WordPress or Tumblr, if this actually moves forward, over who owns the data on the platform and why they can make money by selling it to somebody else when they don't necessarily own it. So this is definitely not the final step, but it's a trend that we will see more of in 2024 and probably beyond. Another very interesting piece of news comes from Klarna. It's a company that got a lot of bad press in 2022 when it laid off 700-plus of its employees.
Back then it was attributed to economic uncertainty, when a lot of big tech companies let a lot of people go, but this week they came out with a very interesting piece of news: they said that their new chatbot handles two out of three customer service requests, amounting to 2.3 million conversations in a very short amount of time. They're saying that it's doing the work of 700 employees, which somehow aligns perfectly with the number of people they let go two years ago. They're claiming that there's no relation between the two, and I believe them; it was almost two years ago when they let all those people go. But I think the news is very interesting, and it's aligned with what we're seeing at a lot of other big tech companies. If people think that AI is not going to take jobs, they are hallucinating more than ChatGPT. It is very clear that AI capabilities will be able to do a lot more with a lot less, and if you can't grow your business at the same scale as your efficiencies, you will have to let people go, because otherwise you're becoming less and less efficient. One of the industries at very high risk is the contact center industry. If you have not seen demos of these service chatbots and voice-generated agents that can actually talk and have phone calls, both service calls and sales calls, you have not seen what the future looks like. These systems are perfect. They provide better results than the human agents, and it's very obvious why: they run 24/7, they are always available, they are always kind, they never get frustrated, they speak and write in every language, and they can connect automatically to your CRM, ERP, processes, procedures, history, et cetera, et cetera. They will become better and better because of that over time, and they can do it at a fraction of the cost of human contact center employees. So I see the whole contact center industry crumbling within the next three to five years.
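The Klarna numbers are easy to sanity-check with quick arithmetic. The 30-day window below is an assumption based on press coverage describing the chatbot's first month; the other figures are the ones reported in the news:

```python
# Sanity-checking Klarna's reported chatbot figures.
# The 30-day reporting window is an assumption, not a reported fact.

conversations = 2_300_000   # conversations the chatbot reportedly handled
agent_equivalent = 700      # "the work of 700 employees"
days = 30                   # assumed reporting window

per_agent = conversations / agent_equivalent   # conversations per agent-equivalent
per_agent_per_day = per_agent / days

print(round(per_agent), round(per_agent_per_day, 1))  # 3286 109.5
```

Roughly 3,300 conversations per agent-equivalent, or about 110 per day over a month, is a plausible workload for a human agent, so the "work of 700 employees" framing is at least internally consistent.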
And if you check out some of the leaders in that industry and their stock, you will see that the direction is very clear. We still haven't talked about Microsoft this week, and obviously that's not possible given the craze they're in to integrate AI into different things. In Windows 11, Copilot is starting to get different plugins that do more and more things external to the Microsoft ecosystem. New plugins connected to OpenTable and Shopify and Kayak and so on are becoming part of Copilot, allowing Copilot users to do more and more things with it. And this is where the future is going: a cross-platform, cross-app capability that is probably going to be built into every future AI tool, which will eventually allow us to do everything we're doing today just by chatting or talking to a personal assistant that can connect to everything we are connected to. Microsoft also announced AI capability upgrades to its Photos app and its video editor. And Microsoft announced that the Microsoft 365 suite will have a Copilot for finance employees. It aims to speed up processes such as reporting, collections, and auditing via natural language interactions. So, a very powerful tool for anybody on the finance side of a business. It can analyze sales variances and find insights in your existing financial data, it can pull financial information into Excel without any manual data entry, and so on and so forth. It's not the first specialized Copilot that Microsoft has released; they released several before. But think about being an accounts receivable person who can now find missing invoices or unpaid items in seconds, just by asking. It's really magical for anybody in this field. As I mentioned earlier in this episode, this is the direction we're going to see.
We're going to see more and more of these AI capabilities embedded into more and more processes in our businesses and in our personal lives. This is not available yet; it's supposed to be available for anybody with a 365 license later this year. I promised you at the beginning of the episode that there's a piece of news relevant to the fact that OpenAI is now under fire, though it was actually released before the Elon Musk lawsuit against OpenAI. This news is that Microsoft has signed a multi-year agreement with Mistral for their large model. We've talked about Mistral many times on this podcast. Mistral is a French company that has been releasing the most powerful open source models out there, and they've just released their largest model. This model is going to be available not only on the Mistral platform, but also on Azure, the only cloud platform it is going to be available on. Now, this partnership is obviously very interesting, because Microsoft owns a 49 percent stake in OpenAI, and so far OpenAI's has been the only large model they've been offering. But they've been facing a lot of scrutiny, especially in Europe, for offering OpenAI models that have regulatory issues with the EU AI regulations. So this makes sense from Microsoft's perspective: obviously diversifying their offering and not relying completely on OpenAI, regardless of where OpenAI's issue with Elon Musk or any other lawsuit they have is going to go. The financial terms have not been clearly disclosed yet, but it's very clear that Mistral is going less and less open source, and they're starting to sell their soul to the devil, in this particular case the big players. But from a Microsoft perspective, as I mentioned, it makes absolute sense. Look at their competitors, Amazon with AWS and Google with Google Cloud.
They've been offering a multitude of models that you can run on top of your data, and it makes perfect sense for Microsoft to have additional alternatives and to diversify away from being completely reliant on OpenAI. Now, to talk specifically about Mistral's model: the people who have been using it say it rivals GPT-4 and Claude 2 for reasoning, and it's currently 20 percent cheaper than running GPT-4 Turbo. It supports multiple languages, and the only limitation, if you compare it to the other big models, is that it only has a 32,000-token context window. If you compare that to GPT-4 Turbo, which has 128,000, or Claude, which has 200,000, that's a relatively short context window. But it's still enough for many of the tasks we're doing, especially if you integrate it correctly into your processes. In addition, Mistral launched Le Chat, which is French, because they are a French company, but it's basically a ChatGPT competitor. It's a free beta release that you can get access to at chat.mistral.ai, and it gives you access to their best models through a chat interface. Compared to ChatGPT, it doesn't have web connectivity yet, or some of the other features we're used to from GPT-4, but from a large-language-model perspective it provides very good results. I test it regularly, and on some things it actually does a better job than ChatGPT. The thing I like most about it is that every time I asked it to find me references online, it actually did, and it provided specific links to specific sources for the things I was asking about. In ChatGPT, that's hit or miss, and in many cases you're not going to get that.
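To make those context window numbers concrete, here is a small sketch of how you might check whether a document fits in each model's window. The window sizes come from the episode; the ~1.33 tokens-per-word conversion is a common rough heuristic for English text, not an exact tokenizer:

```python
# Rough check of whether a document fits in a model's context window.
# Token estimate uses the ~4 tokens per 3 English words heuristic.

CONTEXT_WINDOWS = {
    "mistral-large": 32_000,
    "gpt-4-turbo": 128_000,
    "claude": 200_000,
}

def estimated_tokens(word_count: int) -> int:
    """Rough token estimate: about 4 tokens for every 3 words."""
    return int(word_count * 4 / 3)

def fits(model: str, word_count: int) -> bool:
    """True if the estimated token count fits in the model's window."""
    return estimated_tokens(word_count) <= CONTEXT_WINDOWS[model]

# A 20,000-word report (~26,666 tokens) fits all three windows;
# a 60,000-word book draft (~80,000 tokens) overflows Mistral's 32K.
print(fits("mistral-large", 20_000))  # True
print(fits("mistral-large", 60_000))  # False
print(fits("gpt-4-turbo", 60_000))    # True
```

So the 32K window is plenty for most business documents, and only becomes a constraint for book-length inputs or large pasted codebases, which matches the point that it's "still enough for many tasks."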
From a political perspective, this move is very interesting, because when the EU worked on its AI regulations, the French government pushed very hard to exclude open source from some of the limitations, in order to support Mistral and its growth, so that it would not be shackled with the same limitations as closed source companies and big tech, and so that it could become a baseline in Europe. And now they strike a deal with Microsoft. It makes you wonder exactly what happened, because it's very obvious that a deal like that was not signed in the last two weeks; it was being worked on while they were fighting to get preferred terms under the EU regulations. So I'm sure there's going to be a probe into that, and I'm sure the EU is going to look into this whole relationship with Microsoft through a very critical lens, and time will tell how this is going to go. But as of right now, this is what's happening. And from Microsoft and Mistral to Meta. Meta has announced that they're releasing Llama 3, probably in July. The goal for it is to be a competitor to GPT-4, but more than that, they're saying that the focus is to allow it to be more open and less restrictive than it is right now. There's a lot of scrutiny around Llama 2 because it refuses to engage with and answer a pretty wide range of questions, and one of the goals is to allow Llama 3 to be more conversational and less restricted, while still having solid boundaries on the things it will not do. Meta has an extremely capable AI team, led by one of the brightest brains in the AI world, and they're developing a whole new way to train models, more like humans learn. So there's zero doubt in my mind that they're going to continue being a very important player in this crazy race towards AGI and other AI capabilities. And now, two very interesting pieces of news that have to do with the mobile world.
One is that OpenAI just released a home screen widget for phones, meaning you can do everything you can do with ChatGPT from your home screen without opening the app: anything from voice to video to text interactions, straight from your phone's home screen. A brilliant move by OpenAI that will make its chatbot even more used across multiple devices. It will be very interesting to see whether Apple allows this to continue in the long run once they start launching their own AI stuff, which everybody expects to happen at their developer conference later this year. The other smartphone-related piece of news is from a company called Brain.ai. Brain.ai is building an operating system that will replace the current phone operating systems in a very dramatic way. Their approach is that an AI chat/voice interface will do everything on your phone: no external apps, no clicking on anything, basically just talking to your phone to activate any service and anything you want. There's already a phone coming out with that new operating system, a budget T-Mobile REVVL Plus phone that is going to have it built in. I see two big disadvantages, and overall not a very positive future for this. One big disadvantage is the fact that they don't have third-party apps. I think it doesn't matter how good your company is; third-party apps will always be able to provide more niche, more accurate capabilities across a variety of topics than one company and one OS will ever be able to provide. The other is that the phone obviously doesn't have enough processing power to do this AI magic on-device, meaning it's completely reliant on connectivity, which is not always available, meaning you might run into situations where most of the phone's functions just don't work because you don't have great reception.
So both of these problems are not very promising, but the biggest thing is that I have very little doubt that the big players, meaning Google with Android and Apple with its iPhones, will release something like this, probably in the next release of their phones, if not that, then the one after, which means you will have an operating system on your phone that is fully integrated with an AI capability you'll be able to talk to and so on, but that keeps the existing benefits of these phones, including the apps and the ecosystem around them. And so while this is a cool attempt in the right direction, I don't think it has a future, because the big players will have something similar with additional benefits. But for now, this is an interesting development. There are probably 10 or more other really big pieces of news from this week that we're not going to share this time, but if you want access to them, you can sign up for our newsletter. It is released every week with a lot more stuff than we release on this podcast. On Tuesday, we have another fascinating episode coming out with an expert, and it's going to be absolutely amazing, so don't miss that. If you have been enjoying this podcast, please rate it on Apple Podcasts and on Spotify, and please share it with other people who can benefit from it. That's our best way to grow, and that's your own way to help other people be more educated about what's happening in AI. Do that, and we will thank you very much. Until then, have an amazing weekend, and we'll be back on Tuesday.