Leveraging AI

65 | Google releases Gemini for Workspace, Microsoft releases Copilot for Windows 11, NVIDIA passes Google and Amazon in valuation, and more fascinating AI news from the week ending on Feb 23rd

February 24, 2024 Isar Meitis Season 1 Episode 65

AI is taking over everything we know from Google and Microsoft!

This episode is brought to you by The AI Business Transformation Course. If you want to learn how to implement AI across a business, or if you are looking to advance your career with AI knowledge, this is the course for you!

This jam-packed news episode covers some major AI developments from Google that have people wondering if they're poised to dominate global AI. 😮 We break down the key announcements and discuss the implications.

Topics we discussed:

👉 Google integrates new AI into Gmail, Docs, Sheets
👉 Backlash over bias in Google's image generator
👉 Massive $60M data deal between Google and Reddit
👉 OpenAI hits $80B valuation after employee share sales
👉 ChatGPT has a software glitch and starts spouting gibberish
👉 NVIDIA passes Google and Amazon in market cap
👉 Microsoft adds Copilot capabilities directly into Windows 11
👉 Open-source models from Mistral and Stability AI get big updates

About Leveraging AI

If you've enjoyed or benefited from some of the insights of this episode, leave us a five-star review on your favorite podcast platform, and let us know what you learned, found helpful, or liked most about this show!

Isar Meitis:

Hello and welcome to a short weekend news edition of the Leveraging AI podcast. This is Isar Meitis, your host, and this news edition is not going to be that short, because there is a lot of really big news that happened this week. This episode is brought to you by the AI Business Transformation Course. It's a course we've been running successfully since last year; we're running two courses in parallel every single month, and we're fully booked through the end of March, but the next cohort opens on April 1st and you can sign up on our website. There's going to be a link in the show notes so that you don't forget. If you're looking to advance your understanding of AI, to advance your career, or to drive growth in your business or your department, it's an incredible course that has been taken by hundreds of people at this point, all of them leaders in different businesses. So check out the link right now. Now, to this week's news. We'll start with Google, because they had so much news this week we could have done an entire episode just on them, so we'll try to run through their announcements relatively quickly. It's obvious that Google is all in on AI, and there's news coming out of them every single day, sometimes more than once a day. The first piece of news for this week is that Google has announced that Gemini is now available for Gmail, Docs, and Google Sheets as part of their Google One plan. This means you can use the new Gemini models within your G Suite / Workspace tech stack, and you can use it to help you write emails, documents, spreadsheets, slide presentations, and so on, without copying stuff back and forth and switching between different apps to get your AI assistance. This is obviously a very similar approach to Microsoft's Copilot, and I'm sure we'll see deeper and deeper integrations of these tools into the two different environments. This plan costs $20 a month, and it's available to all users in 150 countries. In addition, Google announced that you can now get similar access through your business plan. There are two tiers on the business plan: one is called Gemini Business and the other is called Gemini Enterprise, where the enterprise version gives you a few more capabilities beyond the business plan, and you can add either to your Workspace business environment. The business plan costs $20 a month, just like the plan that is available to people who are not in a business or any other Google Workspace users, and the enterprise plan costs $30 a month. Both are replacing the not-so-successful Duet AI implementation, which they had before. Beyond the obvious office suite, Google has been implementing Gemini across other Google products. They are adding Gemini models to Performance Max, which is their ad platform, so you can now use Gemini to create text and images straight into your ad campaigns. Their data shows that advertisers who improve their Performance Max ad strength to an "Excellent" level, which is their ranking, see a 6 percent higher conversion rate on average, and obviously more money as a result. What they're saying is that creating higher variation across your ads creates higher engagement and gets you more results, and using Gemini as part of the process to create new text as well as new images that are personalized to specific audiences obviously gets you higher conversion and gets you more money. In addition, they've announced an integration with Canva to create more capabilities beyond what's available within their ad platform.
Last week, we shared with you that Google released a new Gemini Pro version called Gemini 1.5 Pro, and that it has a token limit of 1 million tokens. There are already amazing examples on the internet showing how powerful that thing is. One that I watched this week, which really blew my mind, is a person who loaded over 100,000 lines of code in one swoop in order to have Gemini analyze it, and it did a really good job. In addition, he loaded images that were related to that code, for a total of over 816,000 tokens, and he's saying he got very good results. So this is very promising for any company looking to use a huge amount of data within a large language model and still get fair results. I want to remind you that the limit for Claude is 200,000 tokens and for ChatGPT 128,000 tokens, and when you load this number of tokens into these two platforms, you see a serious degradation in performance compared to a much smaller context window. So the breakthrough that Google was able to make in running significantly larger context windows is very interesting. In addition to all of this, Google this week released an open source model that they call Gemma (it's spelled G-E-M-M-A; I'm not sure exactly how to pronounce it), which has two versions, a two-billion-parameter and a seven-billion-parameter model. These two open source models are available on Google Cloud, on Kaggle, on Hugging Face, et cetera, with very generous free access credits; there's a short sketch of loading one of them right after this segment. The goal is obviously to let the global community help them develop their models even further, but it's not common to see a company that has a very powerful closed model also release an open source model, so kudos to Google for playing both sides of that game. Gemma is obviously not as powerful as Gemini, but it's still a very capable model that I'm sure will find its place in the open source world. When Google released Gemini, they also released a mobile app, and now you can have that mobile app replace your Google Assistant. People who are using it on Android have had mixed feelings about how it performs. On one hand, the ability to ask it questions and get answers on a much broader range of topics than you could with Assistant is very impressive; but on the very basic things that Assistant was very good at, such as telling you what the time is, what the weather is, or what your next meeting is, Gemini struggles and doesn't do a very good job. So right now, the recommendation is probably to keep both applications, using Assistant for the assistant things and Gemini when you're looking for data and information, until Google figures out how to get the best of both worlds in a single app.
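Since Gemma came up above as being downloadable from Hugging Face, here is a minimal sketch of loading the 2-billion-parameter variant with the Hugging Face transformers library. The model ID google/gemma-2b, the prompt, and the generation settings are assumptions for illustration; check the model card, accept the license, and authenticate with a Hugging Face access token before running it.

```python
# Minimal sketch (not an official example): load Google's open Gemma 2B model
# from Hugging Face and generate a short completion.
# Assumes the "google/gemma-2b" model ID, an accepted model license, and a
# Hugging Face access token configured locally (e.g. via `huggingface-cli login`).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-2b"  # the 7B variant would presumably be "google/gemma-7b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Summarize in one sentence why open-source language models matter:"
inputs = tokenizer(prompt, return_tensors="pt")

# Generate a short continuation; raise max_new_tokens for longer answers.
outputs = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The same two lines of from_pretrained calls are how you would pull the model down on Google Cloud or Kaggle notebooks as well, which is part of why releasing on these hubs makes the model so easy for the community to experiment with.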

Isar:

Still on the Google topic, they had some negative news this week related to Gemini's image generator, after the system was refusing to produce accurate images requested by users and instead produced diverse versions of the people requested. When asked to create images of Vikings, they were all Black; when asked to create an image of the Pope, it was a female Pope; and when asked to share an image of the Founding Fathers, it was a diverse version of the Founding Fathers. A lot of other examples like these spread like wildfire over the internet, especially on X, and it got to the point where Elon Musk took a jab at Google for preferring diversity and woke ideas over historical accuracy. This led to Google pausing their image generation of people and admitting they might have gone too far with the way their diversity ideas were implemented and reflected in their model. This is obviously a very big problem. On one hand, we know that these models are biased, but on the other hand, forcing a correction on that bias through what seems to be a hardcore woke philosophy isn't better, and in this particular case it is actually trying to change history as it actually happened. So it's a big problem that we're going to continue seeing as we move forward with these models, and hopefully there's going to be some regulation that will enforce accuracy over the political opinions of the people who control these models.

Isar Meitis:

The last piece of news from Google is that they have signed a deal with Reddit. In this deal, Google will get access to all of Reddit's data and will use it for training their AI models, which is obviously a huge amount of data across a really wide span of subjects and formats. Reddit, in return, in addition to a chunk of change, will get access to Google's Vertex AI services in order to enhance its on-site search capabilities. Now, the amount wasn't disclosed, but the rumors report that Google is going to pay Reddit $60 million per year in order to use their data. This comes at a very interesting time, when OpenAI was actually able to push back on some of the lawsuits filed against it in California over data infringement, but I think it's very clear to everybody that licensing deals are going to become more and more common. At the same time, Reddit is gaining from this because they're about to go public, and obviously a unique arrangement that pays them $60 million a year from Google looks very good as part of your IPO. Now, in a similar move, OpenAI has signed a deal with Match Group. Match Group is the company behind Tinder, Match, OkCupid, and Hinge. The deal includes over 1,000 licenses of OpenAI's business offering for Match to use ChatGPT across a wide range of things that they're doing. Match is saying they're going to use ChatGPT for coding, design analysis, building templates, and other daily tasks, including emails and other communications. There are two interesting parts to the announcement. One is that the actual agreement was written by ChatGPT itself, which is pretty cool because I assume it's a pretty complex legal document. The other, and this is actually very important, is that Match is only going to give access to employees who go through rigorous training and testing to make sure that they really understand how to use these tools. The reason I'm saying this is so important is because every organization should do that right now. You should train your employees on the pros and cons, the advantages and the risks, involved with using different AI tools before you allow them to use them, but then absolutely go and encourage them to use these tools, because you are going to get business efficiencies if you use them correctly. Since we mentioned OpenAI, let's continue talking about them. Reports are saying that OpenAI is now worth just over $80 billion, when the previous valuation at the end of 2022 was around $29 billion. That's almost 3x in a single year, which is obviously huge, explosive growth for a company that is not that big. Now, as part of this new valuation, they allowed employees to sell shares in order to capitalize on this huge growth in the company, and this comes on top of their reported $1.6 billion in revenue in 2023, plus the $10 billion investment they got from Microsoft. And now, from good news about OpenAI to some interesting, less exciting news about OpenAI. We told you last week that they had a big outage; well, this week they had an interesting bug where a lot of users reported that they were seeing ChatGPT start speaking gibberish, really spitting out stuff that makes absolutely no sense and even connecting half-words together that shouldn't go together. OpenAI acknowledged that it was a real issue at around 10:40 p.m.
on Tuesday night. Shortly after, they said they fixed the problem and that it had to do with an issue in the way ChatGPT predicts the next word; that issue was solved. The reason I think that's important, and something we need to think about, is that we are going to become more and more dependent on these large language models and AI capabilities, and they're going to become more and more entrenched as part of our day-to-day processes. We talked in the very first piece of news of this episode about the fact that AI is becoming a part of our office suite, whether Microsoft or Google, and the fact that these things can have downtime, or start doing really stupid things and speaking gibberish, will become a bigger and bigger problem as more and more companies and more and more processes depend on these systems working properly. It means that as we implement them, we need to start thinking about fail-safe mechanisms and workaround mechanisms for when these things happen. I talked about this in the last several episodes: I think the easiest way to do that is to have licenses with more than one company, so when one company is down, you still have a backup, either open source or from another closed-source provider (there's a short sketch of that pattern at the end of this segment). In an interesting interview with Sam Altman this week, he was asked what keeps him up at night, and he said "all of the sci-fi stuff." That's a quote. He clarified that he isn't talking about killer robots taking over humanity, but about what he called "very subtle social misalignments." He didn't describe exactly what he means, but he clarified that he thinks AI systems that are widely used in society, even without evil intent, can "just go horribly wrong." And again, that's another quote. So while we are getting more and more dependent on these systems and using them regularly, we need to understand that there are horrific potential implications. A simple one is not being able to tell what's true anymore on any digital platform, but it could probably go way beyond that if it's keeping Sam Altman up at night. Still with OpenAI, there are a couple of new interesting updates on the GPT store. As we mentioned in several previous episodes, the GPT store is supposed to eventually turn into an actual store where people will be able to make money from the GPTs that they create. Two steps toward that happened this week. One is an updated and upgraded About section that includes more information about each GPT, but more importantly, there are now ratings for each and every one of the GPTs, as well as the option to write private feedback to the builder of each GPT. So, a few steps forward from just a shared resource to an actual store where GPT builders can start making money. From OpenAI to NVIDIA: NVIDIA released another amazing quarter of results, which pushed their market cap to $2 trillion, which means they've passed Google and Amazon in valuation and are now the fourth most valuable company in the world. But if you think NVIDIA is going to just say, "oh, this is awesome," and stop there, then you've got it all wrong. NVIDIA is now pushing the boundaries into additional markets, and they're planning to enter the $30 billion market of custom silicon design, a field that has so far been controlled almost exclusively by Broadcom and Marvell. So they're going to expand beyond just their GPUs into additional chips that are widely used across data centers, which will allow them to grow in additional verticals.
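To make the fail-safe point from earlier in this segment concrete, here is a minimal sketch of a wrapper that tries a primary LLM provider and falls back to a backup provider if the first call fails. The provider functions, names, and error handling are illustrative assumptions, not any vendor's recommended pattern; in practice you would wire in the real SDK calls for the services you actually license.

```python
# Minimal sketch, under assumed/hypothetical provider functions: call a primary
# LLM provider and fall back to a backup if the primary call raises an error.
from typing import Callable, List


def call_primary(prompt: str) -> str:
    # A real OpenAI (or other vendor) chat completion call would go here.
    raise RuntimeError("primary provider is down")  # simulate an outage


def call_backup(prompt: str) -> str:
    # A self-hosted open source model or a second vendor would go here.
    return f"[backup model] response to: {prompt}"


def generate_with_fallback(prompt: str, providers: List[Callable[[str], str]]) -> str:
    """Try each provider in order and return the first successful response."""
    last_error = None
    for provider in providers:
        try:
            return provider(prompt)
        except Exception as err:  # in production, catch provider-specific errors
            last_error = err
    raise RuntimeError(f"all providers failed: {last_error}")


print(generate_with_fallback("Draft a status email to my team.", [call_primary, call_backup]))
```

The design choice here is simply ordered redundancy: the business process keeps working during an outage, at the cost of possibly getting answers from a weaker backup model until the primary recovers.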
At the same time, other players are getting into AI chips, which will probably eventually chip away at NVIDIA's ability to continue growing along that path. Meanwhile, NVIDIA just released the GH200, an AI chip that is now available to pre-order, and it's the fastest, most capable chip they have ever released. So even on the development side they're obviously not resting; they're running forward, opening the gap in their capabilities even more than it was before. But since this is obviously a huge opportunity, SoftBank's CEO said that they're going to invest $100 billion in a new chip company that will compete with NVIDIA on these chips. SoftBank's biggest bet so far has been on Arm, so the Arm processor architecture might be a part of their plan; they didn't really specify that. A hundred billion dollars is a huge part of SoftBank's free cash flow, which means they're basically going all in in that direction in order to compete in this booming market. And since we spoke about all the other giants, or most of them, let's talk about Microsoft. Microsoft added Copilot capabilities into Windows 11 itself. It allows you to control various aspects of the actual operating system through chat commands. It will let you access features such as Narrator, magnification, text size changes, network status, battery info, and storage cleaning, and even launch native apps and toggle system settings, all through a chat with Copilot. In addition, they announced what they call Power Automate scenarios, which include capabilities within Excel, PDFs, and so on. Once you have Power Automate installed, you can activate that plugin across different aspects of Copilot, allowing you to ask it to do tasks such as writing an email to your team wishing everybody a happy weekend, listing the top five highest mountains in the world in an Excel file, renaming all PDF files in a folder, et cetera. These are things that involve different applications as well as activating and making changes within the operating system and the file system itself. By the way, in a similar kind of move, Google shared this week that they're enabling the Help Me Write feature, which previously only existed within Google Docs and Gmail, across all of Chrome. Basically, if you activate that feature (you can turn it on in the Chrome settings), you can ask Gemini to help you write something on any page, anywhere in Google Chrome. Which goes to show you that both companies are driving their AI features into more and more aspects of the universes that each of them controls. Continuing with Microsoft, they also announced that Microsoft Teams is going to get a new AI-powered Planner capability for planning tasks and managing projects. This extremely powerful capability will allow project managers to find specific tasks, get help with scheduling, and assign resources through specific prompts, which will obviously make the process significantly faster and more efficient. It aims to boost productivity and streamline workflows across everything project management. This is a very obvious and yet highly needed feature. Anybody who has ever planned specific projects, or even short sprints, and tried to allocate people and other resources across various tasks knows it's a very long and tedious process, and having an AI assistant for that could dramatically change that aspect of running a business. This tool is supposed to start rolling out in March, replacing the existing Planner and To Do apps within Microsoft Teams.
Like everything else with Microsoft Copilot, only premium-licensed subscribers will have access to that functionality. But again, if you have somebody who is managing your planning on a weekly or monthly basis, getting one of those licenses to help them with this task will be a no-brainer. And from the big giants to two interesting pieces of news about open source models. We've talked about Mistral several times on this podcast; they are a French company that has been releasing extremely powerful and probably the world's most advanced and capable open source models. Their next model, labeled Mistral Next, is now available in the Chatbot Arena sandbox. For those of you who don't know what Chatbot Arena is (we talked about it in previous episodes), it's a platform where you can enter your prompts and get two results from two different, unlabeled large language models, meaning you don't know which models they are, and you can rate the results. Only afterwards does it tell you which result came from which model, which makes it probably the most relevant ranking mechanism in the world today for large language models. So their new model is now available over there, and based on leaked information as well as current results, it seems this new model has a chance to be as good as GPT-4, which no open source model has achieved so far. Staying in the open source world, Stability AI is releasing Stable Diffusion 3, which has a completely new architecture that they're calling a diffusion transformer, a mix between diffusion models and transformer models. They're saying it provides much better and faster results, and they're planning various models in different sizes, from 800 million parameters to 8 billion parameters, that will let you either run faster and more efficiently or get better results, depending on your immediate needs. They're saying that this model will provide significantly better image quality, multi-subject accuracy, and significantly enhanced typography, which as we know is a big issue in most image generators. I love Stable Diffusion models; there are a lot of tools using them today, and you can use them natively or through various applications like Leonardo. Overall, they're a very good tool, so I'm looking forward to seeing how version three does compared to other models people are using, such as Midjourney and DALL-E. As a reminder, this episode is brought to you by the AI Business Transformation Course. It's an incredible course if you want to know more about AI in the business world and how it can be implemented across an actual business, with actual hands-on exercises in a cohort of other business leaders. So check out the link in the show notes. And if you are enjoying this podcast, I would really appreciate it if you share it with other people who can benefit from it. Do it right now: pull up your phone, click the share button, and share it with people who can benefit from it. And while you have your phone out, it would be really cool if you could give us a review on Apple Podcasts or Spotify. On Tuesday, we'll be back with another deep-dive episode on a specific topic, and this Tuesday we have a really amazing episode talking about how to use AI to evaluate and build a very detailed marketing strategy for businesses. It's an amazing episode, and we're going to go step by step through exactly how you can do that, so you don't want to miss that episode if you want to change the trajectory of your business. Until then, have an amazing weekend.