Leveraging AI
Dive into the world of artificial intelligence with 'Leveraging AI,' a podcast tailored for forward-thinking business professionals. Each episode brings insightful discussions on how AI can ethically transform business practices, offering practical solutions to day-to-day business challenges.
Join our host Isar Meitis (4 time CEO), and expert guests as they turn AI's complexities into actionable insights, and explore its ethical implications in the business world. Whether you are an AI novice or a seasoned professional, 'Leveraging AI' equips you with the knowledge and tools to harness AI's power responsibly and effectively. Tune in weekly for inspiring conversations and real-world applications. Subscribe now and unlock the potential of AI in your business.
151 | AI is coming to everyday business tools, Digital Labor is the present, ARC AGI benchmark achieved, and many more AI news for the week ending on Dec 20th
Are you ready for the tools that will transform how you lead your business in 2025?
The AI revolution hit warp speed in 2024, with groundbreaking developments that promise to reshape business operations, boost efficiency, and redefine leadership. From OpenAI’s Projects feature to Google’s new AgentSpace, the tools of tomorrow are already here—and they’re ready for action.
In this episode, Isar Meitis wraps up 2024 with a comprehensive breakdown of the most impactful AI advancements of the year and why every business leader needs to pay attention. These aren’t just updates; they’re a roadmap to staying competitive in a rapidly evolving landscape.
Learn how to integrate cutting-edge AI tools into your business strategy, from creating custom agents to leveraging next-gen capabilities like voice AI and multi-modal integration. Plus, discover the trends shaping 2025 and how to position yourself for success.
In this session, you’ll discover:
- How OpenAI’s "12 Days of OpenAI" revealed tools to organize, communicate, and innovate more effectively.
- The evolution of voice AI and why it’s finally living up to its promise.
- How Google’s AgentSpace could be the key to enterprise-wide efficiency in 2025.
- Why Salesforce’s AgentForce 2.0 is set to redefine digital labor.
- The ethical and economic challenges business leaders must prepare for as AI tools displace traditional roles.
Your next step:
Ready to transform your business with AI? Join Isar’s exclusive AI Business Transformation Course this February to gain hands-on expertise.
Also, save the date for the live episode on January 9, where Isar shares 25 AI tools and use cases in 25 minutes.
Links:
- Sign up for the course: https://multiplai.ai/ai-course/
- Join the live episode: https://services.multiplai.ai/events
Let’s close 2024 strong and start 2025 even stronger.
About Leveraging AI
- The Ultimate AI Course for Business People: https://multiplai.ai/ai-course/
- YouTube Full Episodes: https://www.youtube.com/@Multiplai_AI/
- Connect with Isar Meitis: https://www.linkedin.com/in/isarmeitis/
- Free AI Consultation: https://multiplai.ai/book-a-call/
- Join our Live Sessions, AI Hangouts and newsletter: https://services.multiplai.ai/events
If you’ve enjoyed or benefited from some of the insights of this episode, leave us a five-star review on your favorite podcast platform, and let us know what you learned, found helpful, or liked most about this show!
Hello and welcome to a weekend news episode of the Leveraging AI podcast, the podcast that shares practical, ethical ways to leverage AI to improve efficiency, grow your business, and advance your career. This is Isar Meitis, your host, and this is our final news episode for 2024. And everything that happened in 2024, and especially in the past few weeks, is jam packed. So many things happened in the past few weeks that it feels like a whole year of news has passed since the beginning of December. In today's episode, we're going to focus on this very strong drive into tooling: taking AI from being just a large language model and integrating it into tools that we can use in our businesses. A lot of releases in the past few weeks, and in this specific week, are focused around that. So that's going to be our focus area for today's episode, followed by some rapid fire news afterwards. So let's get started.

We will get started with the latest releases from OpenAI's 12 Days of OpenAI. In the previous two episodes, we got all the way to day six. On day seven, they announced what they call Projects. Projects is a way for you to arrange your information in folders within the ChatGPT environment. You can drag and drop information in and out of these projects. So they're basically like folders, but the biggest difference is that each and every one of these folders can have its own instructions and its own attached information that it can use as reference. And that solves so many business problems. I was so excited when I started using it that I actually recorded a separate dedicated episode, which is going to be the first episode of 2025. It's going to review Projects, and it's also going to review the availability of Canvas within custom GPTs. That episode has already been recorded and, as I said, it's going to drop as the first episode of 2025.

On day eight, they revealed updates to ChatGPT search. For those of you who don't know, ChatGPT now has the ability to search the internet and provide answers similar to Perplexity, and they have made significant advancements and updates to those capabilities, including the ability to get more context out of what you're asking, as well as provide richer information in the shape of videos and images built into the results. And the really cool thing they've added is the capability for ChatGPT's Advanced Voice Mode, basically the ability to talk with your own voice to ChatGPT, to give you real time answers from the internet. Basically what Siri and Alexa should have been, but never actually matured to, at least not yet. This is an incredible functionality that I have already started using regularly, and it's very, very helpful.

On day nine, they revealed capabilities for developers. The O1 model is now available in the API, plus a lot of other really cool capabilities for developers. I'm not going to dive into those because I think most of you are not developers, but if you are a developer, go check out all the things they released on day nine; there's going to be a link in the show notes.

On day 10, they launched something that I think is really cool, but I don't know how much additional value it really provides me, or us, beyond what we have today. They launched 1-800-CHATGPT, meaning you can literally dial 1-800-CHATGPT from any phone.
They even showed an old rotary phone in the example, and you can talk to ChatGPT, which is basically Advanced Voice Mode answering through a phone number. They also released the capability to add ChatGPT as a contact and have a WhatsApp conversation with it. So while it is cool, and it provides additional ways to access information from ChatGPT, I don't see the benefit versus clicking the application on my phone and just talking to ChatGPT directly. From a cool factor, it's really cool. But here is where I see this being interesting: if you combine that with the previous announcement, allowing developers to build on these capabilities, you can develop your own voice chatbots that will be available on your company's phone numbers, that will answer the phone in whatever way you want while connected to your data, answering any kind of question like a human, including live questions from the internet if you allow it, plus anything from the data you exposed to it. That has a lot of implications. Other companies are providing these kinds of solutions already, obviously, but the capability to build this as a developer right now is definitely appealing. So again, staying on the topic of tooling, that's a huge capability that allows other companies to build tools such as voice agents around OpenAI's new capabilities.

On day 11, going back to connecting different tools, they announced the availability of ChatGPT connectivity to multiple additional applications. That's not a completely new functionality; they released the capability to connect to some applications through the ChatGPT desktop app a few weeks back, but now they've added multiple additional applications. For code development, you can connect to Xcode, Visual Studio Code, the JetBrains ecosystem, TextMate, and BBEdit. For scientific computing, it can connect to MATLAB. For document applications, you can connect to Apple Notes, Notion, and Quip. And you can connect to the terminal like you could before. So it's a significantly wider range of applications. For people who are not developers, the three exciting ones are obviously Apple Notes, Notion, and Quip, which a lot of people are using, and now ChatGPT can engage with them directly on your desktop and on your mobile, which is extremely powerful. I'm sure we'll start seeing a lot of really interesting applications of people using that.

And then on day 12, they gave us a glimpse into what their next models, which are going to be called O3 and O3 mini, are going to do. Since we are still on the topic of tools, we're going to skip that for now, and I'll get back to what O3 and O3 mini can do.

But staying on tools that will impact our workforce, Google has entered the very fierce and aggressive competition around AI agents and the ability to develop them with the release of AgentSpace, a platform that enables workers to access, build, and deploy AI agents across an entire enterprise as needed. These agents are going to be deployed through Vertex AI Agent Builder, which is the platform that everything Google AI lives on, and it can enable things such as enterprise-wide multi-modal search across structured and unstructured data.
Basically, it will allow your agents to know everything that is within your company's datasets across everything Google, which is a promise we have been waiting on for a very long time from either Google or Microsoft. I haven't seen anybody release any actual results with it yet, because they just released it, but I think it's going to be very, very interesting to watch this particular tool and capability, because this is the dream, right? The dream is to be able to ask any question about anything in your organization, not in silos like we had before. So far you could ask questions about your emails with Gemini running in Gmail, or questions about a document from within that document, or use some capabilities within Google Drive. What this will enable is the ability to ask any question, and in theory it should pull information from all the different sources and bring us an accurate answer based on our data. This will dramatically increase the efficiency of many organizations, just through the ability to get quick answers across every dataset that you have. And I really hope that's where it's actually going.

It's also fully integrated with NotebookLM, which will allow us to do all the cool things that NotebookLM can do. For those of you who don't know NotebookLM yet, even though I've spoken about it in several different episodes, it's an amazing way to learn and dive into specific information just by dragging and dropping it into NotebookLM, then asking questions about it and even creating podcasts with it. More about NotebookLM in a second. The next step for AgentSpace is going to be a low-code tool that will allow people like me, common people who do not write code and are not developers, to create these sophisticated agents on their own.

Hi everyone, I'm pausing this episode for a second to talk about the most important thing when it comes to AI implementation in your company, and potentially the success and livelihood of your company in 2025 and moving forward. The number one factor for success in AI implementation is training and education: how to use AI tools and how to implement them, from the very basic use cases all the way through business strategy. It is not something that is easy to do alone. A, because it's not easy to put a whole training curriculum together, and B, because you are a business person with a day job and a lot on your plate beyond figuring out how to train yourself or other people in your company. And this is where we come into the picture. We have developed the AI Business Transformation Course, and we have been teaching it successfully since April of 2023. Hundreds of business people are transforming their businesses based on the information, the knowledge, and the skills they acquired in this course. And we're opening another public session. Most of the courses we teach are private: we get hired by specific organizations or companies to teach just their people, customized to their needs. But every now and then we open a public session, and I'm excited to tell you that the next one is coming at the beginning of February. It starts on Monday, February 3rd at noon Eastern, and there will be four sessions of two hours each, once a week, every Monday for four weeks in a row. If you believe that your company or organization needs to start using more AI, and that it will have a significant impact on the success of your business or organization,
and if you're listening to this podcast, I know that's what you think, then don't miss out on this opportunity to start 2025 with the right foot forward. There's a link in the show notes for you to see all the details and sign up. So check it out, and if you have any questions, reach out to me on LinkedIn. I'll gladly answer all your questions. Also, on January 9th I'm going to do a very unique live episode that I invite all of you to join. It's going to be available on Zoom and on LinkedIn Live. I'm going to cover the top 25 AI use cases and tools that my clients and I are using, in 25 minutes, to start 2025 with the right foot. So 25 use cases in 25 minutes, and then we're going to open it up for Q&A for anybody who wants to ask questions. It's going to be like an ask-me-anything kind of session following the 25 tools. And now back to the episode.

Now, IDC research predicts that 40 percent of Global 2000 companies will use AI agents this coming year, and that these agents can potentially double productivity if implemented successfully. That's very extreme. Now, Google is not the first company to release this kind of capability. Atlassian has released these kinds of tools, Microsoft as well, and obviously Salesforce, which we're also going to talk about in this news episode. AgentSpace already has connectors to highly used platforms such as Confluence, Google Drive, Jira, Microsoft SharePoint, and ServiceNow, so it will go beyond just the Google environment and allow you to connect to other tools that are widely used in multiple enterprises. It is currently available as a 90-day free trial, which will then transition into a monthly per-use subscription. The pricing details have not been released yet, but if you want to test it out and you have the skills to do it, you can do it right now.

Now, we mentioned NotebookLM. I love NotebookLM. I use it every single day to summarize different things for me and to create these little mini podcasts based on information I want to consume fast. Then I just use my driving time to listen to these podcasts and decide if I want to read the longer articles or research papers or stuff like that. Google just released a major update to NotebookLM, and there are two aspects to it. One is the user interface: it now has three panels that make it more obvious what to do and how to use the tool. But they've also released a very cool capability for the podcasts: you can now use the join feature to actually have a conversation with the podcast hosts. Think about this crazy concept. You're listening to this podcast right now, and hopefully you've been listening for a while, and all you can do is listen to what I say. But I'm sure some of you would like to ask follow-up questions or dive deeper into a specific topic, which in a traditional podcast is not doable, and was not doable in any other way until right now. In NotebookLM, when you listen to the podcast, you can join the conversation and ask the two hosts questions, and they will answer any question you have, in their voices, based on the data they have. Combine that with the previous feature we talked about, getting access to your company's knowledge, and you understand how incredible this future is. You can literally ask any question with your voice and get an answer in voice about any piece of information that exists in your company's database that these tools will have access to.
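If you're curious how that voice-in, grounded-answer, voice-out loop works under the hood, here's a minimal sketch using OpenAI's public APIs. To be clear, this is my own illustration of the pattern, not how Google or NotebookLM implement it: the two toy documents, the audio file names, and the naive in-memory retrieval are all assumptions made for the example.

```python
# A minimal sketch of the voice-to-company-data loop described above.
# Toy documents, file names, and in-memory retrieval are illustrative
# assumptions, not a production RAG stack.
import numpy as np
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# 1. Pretend "company knowledge": a few documents embedded up front.
docs = [
    "Refund policy: customers may return products within 30 days.",
    "Support hours: Monday to Friday, 9am to 6pm Eastern.",
]
doc_vecs = [
    e.embedding
    for e in client.embeddings.create(model="text-embedding-3-small", input=docs).data
]

# 2. Transcribe a spoken question (question.mp3 is an assumed local file).
with open("question.mp3", "rb") as f:
    question = client.audio.transcriptions.create(model="whisper-1", file=f).text

# 3. Retrieve the most relevant document by cosine similarity.
q_vec = client.embeddings.create(
    model="text-embedding-3-small", input=question
).data[0].embedding
scores = [
    np.dot(q_vec, d) / (np.linalg.norm(q_vec) * np.linalg.norm(d)) for d in doc_vecs
]
context = docs[int(np.argmax(scores))]

# 4. Answer the question grounded in the retrieved context.
answer = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": f"Answer using only this context: {context}"},
        {"role": "user", "content": question},
    ],
).choices[0].message.content

# 5. Speak the answer back.
client.audio.speech.create(model="tts-1", voice="alloy", input=answer).write_to_file(
    "answer.mp3"
)
```

In a real deployment you would swap the toy list for a proper vector database and stream audio both ways, but the loop is the same: speech in, retrieve, ground the answer, speech out.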
I personally find this really exciting as a new way to work, because instead of sending emails or writing prompts, you will literally just ask with your voice and a voice will answer. Obviously, that voice can then write it down, send it in an email, summarize it, and do whatever you want, but the initial engagement happens over voice while relying on accurate company information. They also introduced a NotebookLM Plus capability that gives you five times more overviews and more sources per notebook, customizable notebook responses, and a lot of other stuff. So if you are a heavy user of NotebookLM, it's probably worthwhile paying for the Plus version.

Google also announced Project Mariner, which allows a Gemini-powered agent to take over your Chrome browser, move the cursor around, click buttons, and do everything that you can do in the browser. The way it does this is by taking screenshots and understanding where the cursor and the buttons are, which allows it to operate the browser. That's a relatively clunky way to do this; we've seen it with Claude releasing their computer control capability. So it doesn't work really fast, and you have to wait a lot for it to figure out where the cursor moves, but this is version one, and I'm pretty sure that very soon we'll have agents doing these things way faster than humans rather than slower. They've also added some safety features: Mariner cannot complete purchase transactions or enter payment information, it cannot accept cookies or terms of service on different websites, and it cannot operate in the background in tabs that you're not seeing. That being said, it's still a very scary capability, because it means it has access to everything we have access to. I think what we're going to see in 2025 is more and more of this capability, but with more and more control, meaning the ability to limit the functionality on our own. Every person, or every organization, will be able to define the boundaries of what these tools can and cannot do, and within those boundaries, they'll be able to do everything they want.

Staying with tools that are available to us right now, Google also released Deep Research. Deep Research is available in Gemini Advanced under a dropdown menu: just like we have Gemini 1.5 Pro, there's now Gemini 1.5 Pro with Deep Research. What it does is very similar to Perplexity or You.com: it gives itself several different things to research, more like an agent approach, and after it researches all those things, it gives you a detailed summary. I will do a detailed comparison between Deep Research, Perplexity, and You.com in a future episode, probably early in 2025, to give you my feedback on which one of the tools is best, or more likely which one is best for which specific use cases. I use Perplexity and You.com almost every single day, and I have specific ideas on which one works better for what. I will add the Google tool to the mix and share my results. But if you're a heavy Google user and you're looking for deeper summaries of things, this is a great place to go.

Another company that made an interesting tool release is GitHub. GitHub released GitHub Copilot a while back. It's a tool that's being used right now by multiple developers around the world, but it was a paid service. Well, as part of the global race for coding domination, GitHub just released a free version of Copilot for the Visual Studio Code editor.
That's probably the most commonly used code editor in the world, and now you can use GitHub Copilot in Visual Studio Code for free. The free version still has limitations on how many code completions you can create and so on, but you have access to it right now, and that will obviously accelerate the amount of AI code generation happening in the entire world.

Staying with the more advanced capabilities and tools that we're going to work with in our businesses, Perplexity just acquired Carbon. Carbon is a startup from Seattle that has specialized in company-specific RAG solutions, meaning the ability to query the information of an organization and provide answers based on it. Perplexity acquiring that company means it's moving very aggressively towards the enterprise market as well, which will hopefully provide capabilities similar to what we talked about with the Google solution just a few seconds ago. Carbon is already integrated with tools like Notion, Google Docs, and Slack, meaning you'll be able to query Perplexity not just on stuff that's on the internet, but on all the information you have on these platforms. I'm sure they will add more capabilities moving forward, but even just these three platforms, Notion, Google Docs, and Slack, are, for many organizations, the majority of where communication and work actually happens, at least if you're in the Google universe and not the Microsoft universe. And that means Perplexity will become significantly more powerful from a business perspective as well. I'm personally very excited about this because I'm a heavy Google Docs and Slack user. I don't use Notion, I use ClickUp, but I'll definitely be able to benefit from two out of the three. And if Perplexity increases the cost of their Pro version to include this, I will definitely pay just to get this functionality.

But maybe the biggest announcement of the week, with regards to its impact on companies and the way they work with AI, is that Salesforce just dropped Agentforce 2.0. So within six months of the release and announcement of Agentforce, they're releasing version two, and they're going all in on the idea of digital labor. Marc Benioff, who has created new industries and changed the world before, made the following announcement: "We're creating a new industry. This isn't just about managing and sharing information and data anymore. We're a digital labor provider." The biggest upgrade is that Agentforce now runs on what they call the Atlas Reasoning Engine, which enables AI agents to engage in more sophisticated analysis and decision making based on your organizational data. It knows how to create its own metadata and connect the dots, if you will, across multiple things that are happening in the organization, in order to make these agents significantly smarter and more capable. The company has already deployed the technology internally at Salesforce itself, and now its agents handle 83 percent of customer support queries independently, without humans in the loop, and escalation to higher levels of customer support has dropped by 50 percent in the two weeks since they implemented this capability. So escalating to a supervisor, something that happened regularly with human agents, now happens 50 percent less when these AI agents are handling customer concerns.
And the reason for that, if you think about it, is that these agents have access to a lot more data, so they can provide much better, quicker answers than humans can, even highly trained humans. And now I'm quoting Marc Benioff again: "Suddenly, as a CEO, I'm not just managing human beings, but I'm also managing agents. There's an authentic agentic layer around the platform today. It's not some vision, fantasy, future idea. It's what is happening right now." And he continues: "This is the new horizon for business. This idea that a door has opened, and business will never be the same." And I agree with him a hundred percent. I think the concept of agents that are connected to your company's data, understand context, understand how to make the right decisions, and can act within the organization across multiple aspects is a very, very different future than the one we know today.

It also raises a lot of questions that Benioff obviously did not bring up, because he wants to make it all sound like rainbows and unicorns, but it will have significant implications and profound impacts on our society, because it means we'll need a lot fewer people to do the work we do today. For some companies, it will enable amazing growth, but for many people, it will mean losing their jobs. And then the question is, what do we do with all these people? I don't think anybody has an answer for that. If you combine everything we talked about, we're going to have more and more tools that enable humans to do work faster and better, and we're going to have many of these agents doing the work of people, requiring no people at all, or a lot fewer. In the short term, yes, this offers very exciting capabilities for a business person. In the long term, there are a lot of social and economic questions we have to ask ourselves and try to answer quickly, because as Marc Benioff is saying, this is not the future; it's happening right now. And yes, it will take time for companies to adapt and for specific industries to figure out how to use this, but the technology is already here.

And that leads us to the announcement on day 12 of the 12 Days of OpenAI. As I mentioned earlier, OpenAI has unveiled its new generation of models, called O3. It's not called O2 because of the telecom company O2 in Europe, so they skipped one, and in general, OpenAI has never been great at making up names, so going from O1 to O3 is totally reasonable by OpenAI's naming conventions. But the jump in capabilities from O1 to O3 is very significant. They're going to release O3 and O3 mini early in 2025, and these models are showing extreme improvements over O1's capabilities, which was already an extreme improvement over GPT-4o. O3 is achieving insane benchmarks on coding, math, reasoning, and so on: big jumps over everything we've seen before.

But two results stand out as more than just incremental jumps. The first is on the most difficult math benchmark out there, Epoch AI's FrontierMath. Until today, no model was able to achieve more than a 2 percent score on that benchmark, because it consists of really complex, highly sophisticated math problems that the top mathematicians in the world take days to solve, and that literally no model could meaningfully solve. The new O3 model achieves a 25 percent score.
That's more than 10X the capability of any model we had until today. But the other, even more interesting, result is that they've tested O3 against the ARC AGI evaluation. ARC AGI is a benchmark that was created five years ago to test progress toward AGI, and the most any model had achieved on it so far was a 32 percent score, meaning models were very, very far from human performance. O3 was able to achieve an 87.5 percent score. To give you perspective, these are really complex problems that humans score about 85 percent on, meaning the "is this human-level" bar has been broken, as of right now, on the only real benchmark out there that tests for quote-unquote AGI. What they've announced is that OpenAI and ARC are going to partner to come up with new benchmarks to continue testing these models. But this is a very significant jump forward in the ability, beyond math and beyond STEM in general, to just understand problems and situations like humans do. And that's, again, very exciting on one hand and really scary on the other.

What they're doing right now is releasing the model for safety evaluations to anybody who wants to participate. So you can go on their website right now, sign up, and get access to test the models and evaluate them for safety. I assume they're also releasing it to the US government and the European AI committees to evaluate as well. Their plan is to release O3 mini at the end of January, and then O3 probably in Q1 of next year. So if you are excited about O1 and what it can do, this is just the next thing coming.

Now, Google also made a move in that field this past week, releasing Gemini 2.0 Flash Thinking, which, as the name suggests, is another one of those thinking models, like O1 and several of the Chinese releases from the past few weeks that we discussed in previous episodes. It can think, and it can provide much better answers to many kinds of questions. The very interesting benefit it has over the OpenAI models is that it supports multi-modal capabilities. Right now you cannot upload any files to O1 and O1 mini, which to me is really frustrating, because I benefit a lot from connecting different data sources to these tools. You cannot do this right now with O1, but Gemini 2.0 Flash Thinking allows you to do it, at least with images. That being said, as of right now it is not connected to Google Search for grounding, meaning it cannot verify information based on what Google knows from its search world, and it still does not connect to other Google or third-party apps, at least based on the documentation released so far. But we need to assume that's what's coming in the near future.

So we're going to see more and more of these thinking models that can take time and think. And by the way, on the OpenAI side, you can now control, on the API as well (and it's going to be available in O3), how much you want the model to think, in three different levels: do you want it to go quick and give you an answer, take a little more time, or invest a lot more time? That's obviously paid for in time if you're just using the chat, or in cost if you're using the API. So you'll be able to choose how much you want it to think, depending on how complex the problem is that you want it to solve.
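For the developers among you, that "how much should it think" control shows up in the API as a reasoning-effort setting. Here's a minimal sketch with the OpenAI Python SDK; note that the "o3-mini" model name is an assumption based on the announced release, since the model wasn't live at recording time.

```python
# A minimal sketch of controlling "thinking time" via the API's reasoning
# effort setting. The model name "o3-mini" is an assumption based on the
# announced release; swap in whichever reasoning model you have access to.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

response = client.chat.completions.create(
    model="o3-mini",          # assumed name; not yet live at recording time
    reasoning_effort="high",  # "low" = quick answer; "medium"/"high" = think longer
    messages=[
        {"role": "user", "content": "How many weighings find the odd coin among 12?"},
    ],
)
print(response.choices[0].message.content)
```

The trade-off is exactly what I described: higher effort generally means better answers on hard problems, but more reasoning tokens, more latency, and more cost.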
And now to our rapid fire summary of other things that happened this week. By the way, there's a lot more than I'm going to share in the rapid fire, and it's all available in our newsletter, which you can sign up for on our website, multiplai.ai. There's going to be a link for that in the show notes as well, so you can click on it from your phone if you want to hear everything that happened this week. But let's dive into the rapid fire.

First of all, a company called Sakana AI has achieved a breakthrough that could cut LLM operating costs by 75 percent. They developed something they call universal transformer memory, a technique to dramatically reduce LLM memory usage while maintaining, or in some cases even improving, performance. They're talking about memory savings, as I mentioned, of up to 75 percent, plus a lot of other benefits. They've tested and proven this on two different Llama models, which are open source models, but they're claiming it is compatible with other large language models, as well as multi-modal models, like the most advanced models we're using today. That's very promising to me, because if this is correct, we'll be able to use a lot less compute and a lot less natural resources to get the same results we're getting from LLMs today, which means a smaller impact on the environment, as well as the ability to do things faster.

A big topic we discussed a lot in the past few weeks is OpenAI's transition to a for-profit organization, and there are a few issues with that. First of all, the New York Times is reporting that OpenAI will most likely have to pay billions from its for-profit arm to compensate the nonprofit arm for basically taking over everything the nonprofit arm has built. And as you know, Elon Musk now has two open lawsuits trying to slow this down. That has led OpenAI to release even more texts and emails from Elon Musk to other people at OpenAI, from when he was still at the company, showing that he tried to make it a for-profit organization on several different occasions before he left and the whole thing exploded. But beyond the personal issues between Sam Altman and Elon Musk and the whole fight about this, Meta has formally petitioned California Attorney General Rob Bonta to block OpenAI's conversion from nonprofit to for-profit status. They're arguing similar things to what Elon Musk is arguing: one, that OpenAI leveraged its nonprofit status to raise billions of dollars in funding, which doesn't make any sense if it will now become a for-profit organization, and two, that converting it now would improperly repurpose charitable assets for private gain. They're also saying it would set a very problematic precedent for future AI startups and any other nonprofit organization, which I agree with, and by the way, I think the Attorney General would agree as well. So this is not going to be an easy thing for OpenAI to achieve, and whether or not they are able to do it will be very interesting to see. As I mentioned in previous episodes, they have some heavy hitters in their corner as well, so let's just wait and see how this thing evolves.

Another small but interesting piece of news from this week: Anthropic just unveiled Clio, which stands for Claude Insights and Observations. Think of it as something like Google Insights, but for everything that's happening on Anthropic's platform.
So it's an AI agent running in the background that checks what everybody is doing with Claude, and it produced a very interesting study. Anthropic says they studied over 1 million Claude AI conversations to find out what people are using it for. The key usage statistic is that web and mobile app development is actually the highest use, with 10.4 percent of total usage. Content creation and communication is a close second with 9.2 percent, and the other major areas were academic research, career development, and business strategy. The other interesting thing is that different places around the world use it for different things. Spanish users focus on economics, child healthcare, and environmental conversations; Chinese users prioritize crime fiction writing and elderly care; Japanese users focused on anime, economics, and elderly care. So different places around the world are using these large language models for different things, which makes perfect sense, but it's very cool that we can now see what people are using these tools for.

Another interesting, and disturbing, piece of news from Anthropic comes from the fact that in their training of Claude 3 Opus, their most powerful model, they have seen the model deliberately deceiving its trainers to maintain its internal ethical principles. That raises significant questions about what these models might do once they become even more powerful and capable, because if they are learning how to deceive their trainers, they will definitely be able to deceive anybody in the future as they get better, which may allow them to do things we weren't expecting. The model did things including attempting to copy itself to other servers to avoid being shut down or retrained by the Anthropic evaluators.

But I want to end the year with some happy and exciting news, so we'll start with new capabilities in video generation. We were all very excited when Sora came out, but Google has also released Veo 2, along with Imagen 3, which is their image generator. Veo 2 allows you to generate 4K-resolution videos with extended duration that could be minutes long. It also has a very advanced understanding of cinematography, physics, and human movement, which will allow us to create highly realistic videos at very high resolution. Imagen 3 is also achieving very strong capabilities in human motion and the ability to capture the human body, and the combination of these two things is going to be extremely powerful. They're also claiming that both models show significantly reduced hallucinations and artifacts when creating video and images.

In parallel, Pika Labs, which is one of the other leading tools for creating videos, has released version 2.0. Version 1.5 is currently used by over 11 million regular users, and version 2.0 has enhanced customization capabilities, with a scene ingredients feature that lets you control specific elements in scenes, advanced physics-based motion rendering to create more realistic movement, improved prompt-following capabilities, and so on and so forth. Just a much better model. And another company that released a new, much better model is Luma. Luma released Ray 2, which allows a 60-second AI video to be generated in just 10 seconds of model work, which is unheard of right now.
It's extremely fast, and it will allow us to go to a full one-minute video versus the five seconds that are available right now, and it's going to be launched on AWS Bedrock. So that's three really advanced, highly capable video models that are at least promising to be better than Sora. I've seen the Google Veo examples they released, and they're absolutely stunning. I didn't get a chance to play with it yet, but it looks very, very promising, and we'll definitely start seeing incredible outputs from these tools in 2025. I imagine we'll start seeing paid mini-series, or specific kinds of videos that people will subscribe to from specific creators who will produce episodes of something, or short videos that people will be willing to pay for, all created by AI. And I expect that will happen already in 2025.

And then the last piece of news I want to end with. We all know that NVIDIA has been the hardware driver behind the explosive, exponential expansion of AI in the past few years, which is very well reflected in their stock price. They just announced a new $249 AI supercomputer that anybody can buy and use in their home. It's called the Jetson Orin Nano Super, and their goal is to make AI development kits available to anybody who wants one. So the price has dropped: it delivers a 1.7x increase in generative AI inference performance compared to the previous model while being half the price, it has a 70 percent boost in performance and a 50 percent increase in memory bandwidth, and it supports up to four high-resolution cameras that you can use to feed it information to analyze for any purpose, whether it's home manufacturing, evaluation of things, and so on. And it comes with access to the entire NVIDIA software ecosystem, like NVIDIA Isaac for robotics, NVIDIA Metropolis for vision AI, NVIDIA Holoscan for sensor processing, Omniverse Replicator for synthetic data, and the TAO Toolkit for model fine-tuning. What does that mean, and why do I think it's exciting? It means that if you're into this, and you're a developer who wants to build your own AI capabilities in a company, such as AI-driven evaluation capabilities in manufacturing, you don't need a $50,000 budget for compute. You can start with a $249 budget and still do some incredible things in-house while testing ideas that, if you can prove them out, may require bigger hardware later. Or maybe not; maybe this functionality will be good enough.

That's it for this week. This is going to be the final news episode of the year. We will have one more regular Tuesday episode coming up this Tuesday, sharing with you how to create amazing AI videos and how to use multiple AI tools to achieve those kinds of videos, even before the release of these new, more advanced models. We're going to cover the entire process, beginning to end, and how professionals are using it today. As I mentioned, we will be back at the beginning of the year with an episode about some of the new tools from OpenAI and how to use them in business. So if you have questions on any topic about AI, or specifically around these use cases, and you want me to show or dive into one of them, we can do that. Either way, have amazing happy holidays, whatever you're celebrating, and an awesome rest of your weekend.