Leveraging AI
Dive into the world of artificial intelligence with 'Leveraging AI,' a podcast tailored for forward-thinking business professionals. Each episode brings insightful discussions on how AI can ethically transform business practices, offering practical solutions to day-to-day business challenges.
Join our host Isar Meitis (4-time CEO) and expert guests as they turn AI's complexities into actionable insights and explore its ethical implications in the business world. Whether you are an AI novice or a seasoned professional, 'Leveraging AI' equips you with the knowledge and tools to harness AI's power responsibly and effectively. Tune in weekly for inspiring conversations and real-world applications. Subscribe now and unlock the potential of AI in your business.
109 | AI models extravaganza - New powerful and fast models from OpenAI, Meta, Mistral, and Google all in one week, plus additional AI news from the week ending on July 26th
In this episode of Leveraging AI, Isar dives into a week filled with groundbreaking AI developments, featuring major releases from industry giants like Meta, OpenAI, and Google. As companies race to outdo each other, what does this mean for the future of AI and its application in business?
In this session, you'll discover:
- Meta's Llama 3.1: How this new model is outperforming GPT-4o in key areas and what it means for developers and businesses.
- OpenAI's Strategic Shift: The implications of releasing GPT-4o Mini's model weights and how it opens new doors for customization.
- The Speed Race in AI: Understanding the impact of new hardware technologies like Groq's LPU and their role in accelerating AI capabilities.
- The Future of AI Search: Insights into OpenAI’s upcoming Search GPT and how it could redefine how we access information.
- Open vs. Closed Source Debate: The evolving battle between open-source and closed-source AI models and what this means for the industry.
Join the waitlist for SearchGPT here: https://openai.com/index/searchgpt-prototype/
For more structured learning, check out Multiplai AI's self-paced course here:
https://multiplai.ai/self-paced-online-course/
About Leveraging AI
- The Ultimate AI Course for Business People: https://multiplai.ai/ai-course/
- YouTube Full Episodes: https://www.youtube.com/@Multiplai_AI/
- Connect with Isar Meitis: https://www.linkedin.com/in/isarmeitis/
- Free AI Consultation: https://multiplai.ai/book-a-call/
- Join our Live Sessions, AI Hangouts and newsletter: https://services.multiplai.ai/events
If you’ve enjoyed or benefited from some of the insights of this episode, leave us a five-star review on your favorite podcast platform, and let us know what you learned, found helpful, or liked most about this show!
Hello and welcome to a News Weekend episode of Leveraging AI, the podcast that shares practical, ethical ways to leverage AI to improve efficiency, grow your business, and advance your career. This is Isar Meitis, your host, and I want to start with a quick apology for not recording and releasing the news episode last week. I was on vacation with my family in Hawaii, and the conditions there did not enable me to record the news, so I truly apologize for that. But now let's dive into this week's news. This past week, or week and a half, had more large model releases than any other week in history. As the saying goes, when it rains, it pours, and I guess that also applies to releasing large language models. All the big companies made announcements and/or released models, including Meta, Mistral, OpenAI, and Google. Literally almost everyone other than Anthropic made big announcements, including some interesting news about a search tool from OpenAI and a very interesting article about agents from McKinsey. So we have a lot to talk about and digest this week. Let's get started with Meta. Meta, the company behind Facebook, which has been releasing very powerful open source models, has done it again. They just released Llama 3.1, which Meta claims outperforms GPT-4o and Anthropic's Claude 3.5 Sonnet on several major benchmarks. Meta released Llama 3 just a couple of months ago, and now Llama 3.1 is significantly bigger and more capable, yet faster and cheaper than the previous model, similar to what we saw with Claude 3.5 Sonnet being released shortly after Claude 3.
Now, the largest version has 405 billion parameters, but they've also released smaller versions with 70 billion and 8 billion parameters that will be cheaper and faster for developers to run. Since these are open source, I definitely see all the different models, including the smaller ones, being adopted by a lot of developers around the world. To put costs in perspective, Meta claims that Llama 3.1 costs roughly half as much to run as OpenAI's GPT-4o. As with all its previous models, Meta is also releasing the model weights, which will enable companies to train and customize Llama 3.1 to their needs and their liking, making it a lot more flexible than closed source tools. In third party testing, Llama 3.1 actually lives up to Meta's announcement, beating GPT-4o and other advanced models on several benchmarks. And that is before being fine-tuned by different companies for specific tasks, so we have to assume it will do even better once people use the weights to create fine-tuned versions of these models. Now, again, to be fair, GPT-4o is based on GPT-4, a model that's been around for over a year and a half now, almost two years actually, and we're all expecting GPT-5 to be something completely different. We just don't know exactly when it will be released, but the assumption is that within the next few months we'll get the next model from OpenAI, which will be dramatically better. Then again, I don't know what the other players are going to release either. Now, to make things even more interesting with this news from Meta: as I've shared with you in the past, they have a partnership with Groq, with a Q. Groq is a hardware company that makes chips that allow inference to run significantly faster than any other platform on the planet right now. Inference is the generation of output from these AI models.
They have a chip called the LPU, or Language Processing Unit, different from the GPUs that NVIDIA is selling. NVIDIA chips are probably still the best in the world when it comes to training models, but Groq definitely gives them a very serious run for their money when it comes to using the models themselves. Groq just announced that they are releasing fine-tuned versions of Llama 3.1 in all three sizes — 405 billion, 70 billion, and 8 billion parameters — and they're all going to be available to people in the Groq community who use their platform, at speeds that are absolutely insane. For those of you who haven't watched any of their demos, I highly recommend doing that. They're currently stating that Llama 3.1 generates at 1,200 tokens per second or more. That's about 900 words per second. It means that instead of what we're used to seeing, where the AI thinks about stuff and then words start showing up on the screen as if somebody's typing them, pages and pages show up almost instantly. It's amazing. Absolutely magical. And the ability to use AI at that speed makes a very big difference. So you might ask, okay, what's the big deal? Why do I need it to work that fast? First, it's just a much better experience, but second, there are use cases where it makes a very big difference. Let's say you want to use AI to analyze stocks and give you buying and selling recommendations. You want the responses to happen very quickly. You don't want to wait 35 seconds, or a minute, or two minutes to get answers to complex questions; you want the answer right now, so you can make the right decision based on live stock data. That's just one example out of many you can think of. So the combination of running these open source platforms on extremely fast hardware is the future a lot of things are going to move toward.
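To put those speed claims in perspective, here is a quick back-of-the-envelope calculation. The 0.75 words-per-token ratio is a common rough rule of thumb for English text, and the 50 tokens-per-second baseline for a typical GPU-served chat model is my assumption for comparison, not a measured figure:

```python
# Rough arithmetic for the generation speeds quoted above.
WORDS_PER_TOKEN = 0.75  # common rough estimate for English, not exact

def words_per_second(tokens_per_second: float) -> float:
    """Convert a tokens/sec rate to an approximate words/sec rate."""
    return tokens_per_second * WORDS_PER_TOKEN

def seconds_for_words(word_count: int, tokens_per_second: float) -> float:
    """Estimate how long generating a response of a given length takes."""
    return word_count / words_per_second(tokens_per_second)

fast = words_per_second(1200)   # Groq's quoted Llama 3.1 speed
typical = words_per_second(50)  # assumed typical chat-model speed

print(f"{fast:.0f} words/sec vs {typical:.1f} words/sec")
print(f"500-word answer: {seconds_for_words(500, 1200):.2f}s "
      f"vs {seconds_for_words(500, 50):.1f}s")
```

At 1,200 tokens per second, a 500-word answer arrives in roughly half a second instead of several seconds, which is why the difference feels qualitative rather than incremental.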
And again, think about large companies and corporations that have huge amounts of data and want to get many answers based on that data very fast, and to do it within a safe and secure environment, because it's an open source model they can run on their own servers. That is very appealing. As I mentioned, I see a very clear future where many large and small companies, once this becomes easier to set up, will start using these platforms instead of the closed source platforms. By the way, if you're looking for more structured training and learning, we have released our self-paced course, which multiple people have already taken, and we're getting amazing feedback from them. It's eight hours of video lessons that I taught in live sessions, broken down into a self-paced course you can take in your spare time to really understand how AI tools can be used for business processes. The course focuses on implementing AI within a business context, based on the work I've been doing over the last year and a half with multiple companies around the world, and it's broken down into a simple, easy-to-digest, step-by-step learning course that you can now take online on your own time. I'll share the link in the show notes as well. Now, as I mentioned, the other companies did not stand still. OpenAI announced late last week that they're releasing GPT-4o Mini, which is a smaller, faster, cheaper version of GPT-4o, which they released not too long ago — a similar approach to what Anthropic did with Claude 3.5 Sonnet. Again, not to confuse anybody who's not into all this terminology: Anthropic a while back released Claude 3 in three different sizes, and the middle model was called Sonnet. Just recently they released Claude 3.5 Sonnet, which is actually better, faster, and cheaper than the largest model they released only two months ago.
So OpenAI did something similar: they released GPT-4o, their Omni model, in a smaller version that runs faster and cheaper. But this week, after Meta's announcement of the Llama 3.1 release, OpenAI did something they have never done before: they are releasing the weights of GPT-4o Mini to developers, similar to an open source model, which will enable companies to train and fine-tune GPT-4o Mini for their needs. Again, that's something OpenAI has never done before, and as far as I know, none of the closed source providers have ever done it before. But I think they're starting to feel very serious pressure from the open source models — from what can be done with them and the quality they're providing — and that's leaving them with very little choice but to enable companies to do similar things with their models. Now, staying on the topic of releasing new powerful models: Mistral, the French company that has been releasing open source models, also just released their latest model. They call it Large 2, and they claim it is as good as, or in some cases better than, cutting-edge models from both OpenAI and Meta in code generation, mathematics, and reasoning. They released that model just one day after Meta released Llama 3.1, and it's supposed to be as good and, as I mentioned, in some cases better. One of the things Mistral mentioned specifically is that during training they worked very hard to reduce their models' hallucinations, basically trying to train the models to say they don't know something instead of making stuff up when asked. That obviously provides a lot of value, because these models still tend to hallucinate quite a lot and make stuff up, which is very problematic if you're trying to run a business operation around them.
So this approach of reducing hallucinations, which, as I've already shared with you, is also being pursued by the other companies, is critical for the safe and accurate usage of these tools. Now, something interesting about both Mistral Large 2 and Meta's Llama 3.1, both released this week as I mentioned, is that neither has multimodality capabilities. The most exciting thing about OpenAI right now is their multimodal approach, which allows you to do voice and video and audio and so on in a single model, and that's something we're still not seeing from the open source companies. I assume over time we're going to see more of that, but right now OpenAI has a very clear lead, at least in that aspect. Now, where is this whole open source versus closed source battle going? I don't really know. I think it's very obvious that we'll get to a point where all these models are going to be good enough for most of our usage, whether personal or business, which means the cost of using these models will go down to zero — or down to the cost of hosting and compute — versus paying for the closed models. What does that mean for the business model of companies like OpenAI, Anthropic, or Google, the closed source leaders of the AI world? I don't really know. I don't think anybody knows. But as I mentioned, it's very obvious that these open source tools are not going to lag behind in the future, and if they do, it will be by very little. And as I mentioned, it's very obvious that in the near future they will be good enough for most of what we need. So the good news is that, as users, I think the price of using these models for day-to-day stuff will go down to zero or very close to it, which is a benefit for all of us. The bad news is I don't know what this means for the large companies whose livelihood depends on that income, but I assume they will find other ways to monetize their capabilities.
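Because open models ship their weights, companies can serve them on their own infrastructure, and many self-hosted inference servers expose an OpenAI-compatible chat endpoint. As a minimal sketch of what that looks like in practice — the endpoint URL and model name below are illustrative assumptions, not real deployments — here is how a request body for such a server might be built:

```python
import json

# Hypothetical values: a locally hosted, OpenAI-compatible inference server.
ENDPOINT = "http://localhost:8000/v1/chat/completions"  # assumed URL
MODEL = "llama-3.1-70b"  # assumed model identifier

def build_chat_request(system_prompt: str, user_message: str,
                       temperature: float = 0.2) -> str:
    """Serialize the JSON body for an OpenAI-style chat completion call."""
    payload = {
        "model": MODEL,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ],
        "temperature": temperature,
    }
    return json.dumps(payload)

# Example: an internal analytics question that never leaves your servers.
body = build_chat_request(
    "You are an internal analyst. Answer only from the provided company data.",
    "Summarize last quarter's support tickets.",
)
print(body)
```

In a real deployment you would POST this body to the endpoint with an HTTP client. The point of the sketch is that, for many teams, moving from a closed API to a self-hosted open model can be as small a change as swapping the URL and model name.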
Now, we already mentioned OpenAI, so let's go back to some other exciting news from OpenAI. In their demo of GPT-4o, OpenAI showed a few capabilities they have not released yet. One of those is advanced voice capabilities, which allow you to have an actual conversation with the chatbot. Now, you can do this right now, meaning you can talk to it via voice and get answers, but it's very clunky, because it's actually going through several different models, translating the information back and forth. When you're talking to OpenAI's voice input right now, it's translating your speech into text — basically transcribing what you're saying — taking that as input, evaluating it, and then giving you a response, which takes a few seconds. That means there's a delay in the response; it's not a natural conversation. You cannot stop it in the middle of a sentence, interrupt what it's about to say, and ask it to do something else. The new capability allows you to have a conversation with the AI model as if it were a person, and if you haven't watched the demos of GPT-4o, go check them out. It's absolutely amazing. Magic. Now, this capability, as I mentioned, was not released, and they did not announce when they're actually going to release it, but they've now said they're going to start releasing it to a small testing group by the end of this month — which means this week — so several people are going to get access to these new capabilities, and OpenAI can start evaluating them in the wild. The reason they haven't released it so far, they say, is that they wanted to put the appropriate guardrails in place to prevent potentially harmful information and usage, which they weren't sure they could do when they ran the demo. Now, this is not new.
OpenAI's strategy has always been iterative deployment: they continuously release more and more capabilities one by one, rather than developing something in the basement and then delivering AGI to us a couple of years from now, which would surprise everybody. That has been their approach since day one — you've heard Sam Altman talk about this many times — and this is just another step in that process. I think it will be very exciting to see what use cases people share once they start working with this new capability. As for all of us getting access, they're saying it will be sometime this fall, which is a pretty broad term, so sometime later this year we are all expected to get access to this new functionality. But as I mentioned, we should expect to start seeing interesting use cases in the next few weeks. Maybe the most exciting and anticipated news from OpenAI this week, though, was the announcement of SearchGPT. Rumors about this started brewing earlier this year, around April, but nobody confirmed or denied that OpenAI was working on something like this. Well, as you know, Google and Microsoft have been working very, very hard on generative AI search results that give you answers instead of search results you have to dig through to get answers. This past week, Microsoft announced that they are releasing a new version of Bing that will provide, and I'm quoting, a "generative search experience that produces a bespoke and dynamic response to a user query." Google is already doing that, not amazingly well, and as we know, Perplexity has been doing this for a while, pretty successfully.
This week, OpenAI finally announced SearchGPT. SearchGPT is, and I'm quoting, "designed to give you an answer," and it will quickly and directly respond to your questions with up-to-date information from the web, while giving you clear links to relevant sources. You'll be able to ask follow-up questions, like you would in a conversation with a person, with the shared context building with each query. So what does this tell us? It tells us that the age of current search is dying very quickly, as more and more platforms allow us to get answers to what we're asking by summarizing information from the web and trying to give us the best answer possible, while still giving us links to the data sources — so we can go inspect and check that the information is accurate — and while allowing us to continue the conversation with follow-up questions, either based on the links or on additional information we want to ask about. Now, SearchGPT has not been released to the public yet; they've just opened a waitlist, and we're going to put the link to the waitlist in the show notes, so if you're interested, go sign up. They haven't shared exactly when it will be available, but in previous cases when they've done this, it usually took a few weeks before people started getting access to the new functionality. Now, this is very exciting if you are a user just looking for answers. As I mentioned, these tools are not perfect yet, but they're getting better and better over time, and the fact that we now have another big player — maybe the leading player when it comes to AI technology — adding to the mix brings us another step closer to getting answers instead of search results when we're searching for information. That being said, if you're a company that has built a lot of content and invested a lot of effort and money in SEO, then I don't know what this means for you. It definitely means that SEO is going to be dramatically different than it is right now.
And companies should expect search traffic to their websites to be reduced dramatically. So if your company's livelihood depends heavily on organic search traffic, you need to start diversifying, and you need to do it quickly, because you will get less and less traffic as more and more people and companies use other tools to get answers that do not lead to your website. Let's stay on OpenAI before we continue with the additional model releases and new capabilities announced this week by large companies. As I mentioned, OpenAI just reassigned the executive in charge of ensuring artificial intelligence won't harm society. Aleksander Madry, an MIT professor, joined OpenAI last year to lead the Preparedness team and evaluate AI models for catastrophic risks before making them available to the public. This week, OpenAI reassigned Madry to, and I'm quoting, a "bigger role within the research organization." Now, he's not the first person in charge of safety who either left or was reassigned, and it doesn't give a warm and fuzzy feeling about the level of priority safety gets in OpenAI's efforts. On that topic, by the way, this Monday five senators sent a letter to Sam Altman, asking him to share more details about his plan to develop AI in a safe way that will not cause harm to the world, or society, or the U.S. — add whatever you want at the end of that sentence. It seems that OpenAI is trying to run as fast as possible and throwing safety out the window, based on many events we've seen in the past few months. Now, on the flip side, this week brought some interesting news when it comes to AI security. The Coalition for Secure AI was just formed.
It's a new industry body, announced this week at the Aspen Security Forum, and the founding and premier sponsors of this group include IBM, Intel, Microsoft, NVIDIA, PayPal, Amazon, Anthropic, Cisco, Chainguard, GenLab, OpenAI, and Wiz. So a lot of sponsors, and a lot of very important players, are part of this new initiative. The goal is to use open source style initiatives to build standardized frameworks and tools and empower developers to create secure AI systems. They're planning to focus on all aspects of that — development, integration, deployment, and operation of AI systems — with the goal of mitigating risks such as model theft, data poisoning, prompt injection, and other bad things people can do while you are trying to use AI models. So while this initiative is not built to protect us from the AI itself, it is there to protect us from other people using AI to cause damage to us as individuals or to companies while we're using these systems. Now, going back to interesting releases: there were two interesting announcements from Google this week. One of them came from DeepMind, the group leading AI research probably in the world, and definitely within Google. They just released systems they call AlphaProof and AlphaGeometry 2, models that can work through mathematical equations and solve complex mathematical problems in a way that wasn't possible before with any AI platform. These tools were able to solve four of the six problems from this year's International Mathematical Olympiad — maybe the most prestigious mathematics contest in the world for high school students — and achieved the equivalent of a silver medal.
Now, this may not sound exciting to you, but it means these tools can do things AI was not able to do before, including some very advanced reasoning and step-by-step work beyond the level available right now. So while this may sound relevant only to math, it has profound implications for the use of AI in more complex problem solving. Now, in the same trend I shared with you before with the releases of Claude 3.5 Sonnet and GPT-4o Mini, Google has released Gemini 1.5 Flash — the same exact concept as the others: faster and cheaper to run while still providing very high capabilities. In addition to releasing it, Google announced this week that Gemini 1.5 Flash is going to be the engine behind the free version of Gemini. So if you're using Gemini without paying for it, you are going to be using Gemini 1.5 Flash. It provides improvements across the board over the previous free model, including lower latency and improvements in reasoning and image understanding. It also has a context window four times bigger than the previous free Gemini, which means it's now going to have a 32,000-token context window versus the 8,000 that existed before. In addition, it's now going to display links to related content when you ask for specific factual information — basically merging the large language model with search capabilities, similar to what we're seeing across the board from everybody in this field, including the news we just mentioned from OpenAI. And going from large language models to the visual world, there is some interesting news in that aspect as well. Luma AI, the San Francisco-based AI video generation company, just released a very cool feature called Loops in their Dream Machine video generation platform, and as the name suggests, that's what it does.
It creates a looped video, which means there's no beginning, no end, and no cut-off transition — or at least it's not very obvious in the examples they've released. So if you're trying to create videos for any purpose and you just need something that will loop again and again, you can now do this very easily and very effectively in the Dream Machine platform. The two examples they shared on X are a spaceship flying through hyperspace and a capybara riding a bicycle in the park. Both look very cool, and as I mentioned, they can be used in multiple use cases, specifically for marketing, in a very effective way. This functionality is now available in Dream Machine, and you can go test it yourself. It's part of the incredible jump forward we've seen in video generation in the past month and a half from multiple companies; the only one that hasn't responded yet is OpenAI with Sora, and it will be interesting to see when it comes out — it's expected sometime this year. Now, the other company that made a big announcement this week when it comes to AI is Adobe. Adobe released Adobe Firefly AI capabilities a while back as a standalone product and started integrating them into the Adobe suite in the past few months. But now they have finally made a wide release of their AI functionality in Adobe Photoshop and Illustrator. So now you can create images from text in Adobe Illustrator and Adobe Photoshop without going to third-party tools. Previously, the only such functionality available in these tools was Generative Fill, which allows you to generate additional sections of an image or replace specific sections of an existing image. Now you can generate an entire image from scratch using text prompts, just like other tools such as DALL·E and Midjourney.
In addition, they've released tools like the Firefly Vector AI model, which allows you to create vector graphics, including text patterns — vector graphics that can be used as backgrounds, such as patterns behind an image or backdrops, all with vector data that takes very little space. They've also introduced Style Reference, which allows you to generate outputs that mirror existing styles — very important for graphic designers trying to create a cohesive style for their brand. That's it for this week. This was a jam-packed episode with new capabilities and new models. It's very obvious that this train is not slowing down; if anything, it keeps accelerating, and I'm glad to be here to try to help you stay up to date. We'll be back on Tuesday with another fascinating interview that will dive into a specific how-to with another expert. Until then, have an amazing weekend.