Leveraging AI
Dive into the world of artificial intelligence with 'Leveraging AI,' a podcast tailored for forward-thinking business professionals. Each episode brings insightful discussions on how AI can ethically transform business practices, offering practical solutions to day-to-day business challenges.
Join our host Isar Meitis (a 4-time CEO) and expert guests as they turn AI's complexities into actionable insights and explore its ethical implications in the business world. Whether you are an AI novice or a seasoned professional, 'Leveraging AI' equips you with the knowledge and tools to harness AI's power responsibly and effectively. Tune in weekly for inspiring conversations and real-world applications. Subscribe now and unlock the potential of AI in your business.
Leveraging AI
133 | OpenAI and Microsoft's complex frenemies relationship, Nuclear power for data centers, Adobe's AI features explosion, and many more important AI news for the week ending on October 18
Is your business ready for AI’s power surge—and the complex frenemy dynamics driving the industry forward?
This week on Leveraging AI, host Isar Meitis dives into the most important developments you need to know to stay ahead of the curve. From OpenAI’s uneasy collaboration with Microsoft to the energy crisis AI is fueling, the episode breaks down how the AI landscape is evolving and what it means for businesses at the top.
You’ll also hear about groundbreaking announcements from Adobe, the rise of AI agents, and the shrinking price of intelligence, all leading to a major shift in how businesses can leverage these technologies.
This episode highlights:
- The energy challenges of powering AI: What Google and Amazon’s nuclear bet means for the industry.
- OpenAI vs. Microsoft: Why “frenemies” is the perfect term for their complicated relationship.
- Key insights from the latest State of AI report: OpenAI’s lead is shrinking—here’s what you need to know.
- Adobe’s new AI tools: From video magic to distraction-free photography, how creative professionals will benefit.
- The rise of AI agents: Why 2025 will be their breakout year—and what businesses need to prepare.
If this episode resonates, don't miss out on Isar Meitis’s AI Business Transformation Course—open to the public starting October 28th! Get a four-week, live deep dive into how to leverage AI in your business, with interactive sessions and hands-on guidance. Sign up here: https://multiplai.ai/ai-course/
About Leveraging AI
- The Ultimate AI Course for Business People: https://multiplai.ai/ai-course/
- YouTube Full Episodes: https://www.youtube.com/@Multiplai_AI/
- Connect with Isar Meitis: https://www.linkedin.com/in/isarmeitis/
- Free AI Consultation: https://multiplai.ai/book-a-call/
- Join our Live Sessions, AI Hangouts and newsletter: https://services.multiplai.ai/events
If you’ve enjoyed or benefited from some of the insights of this episode, leave us a five-star review on your favorite podcast platform, and let us know what you learned, found helpful, or liked most about this show!
Hello and welcome to a weekend news episode of the Leveraging AI podcast, the podcast that shares practical, ethical ways to leverage AI to improve efficiency, grow your business, and advance your career. This is Isar Meitis, your host. We have a week without any single huge announcement, but with a lot of really interesting ones. We're going to start with the global energy needs to support AI initiatives, continue with a lot of news about OpenAI, and end with really exciting news about new and amazing announcements from Adobe. In between, we have a lot of other good stuff to share with you. So let's get started.

As you probably know, because we have discussed this in the past, the AI world is consuming a huge amount of energy. And when I say a huge amount of energy: the International Energy Agency forecasts that data centers alone will consume more than 1,000 terawatt-hours of electricity in 2026, roughly double what they consumed in 2022, so a doubling in four years. But okay, what is 1,000 terawatt-hours? One terawatt-hour can power about 70,000 homes for an entire year. That means that in 2026, data centers alone will require the energy that could power 70,000 homes for a thousand years. So there's a huge and increasing need, and with the number of data centers being built to support new AI initiatives, this is just the first step of this whole situation.

So in this past week and a half, both Google and Amazon signed several different agreements to develop SMRs, small modular reactors, which can be built faster and with a smaller footprint than traditional nuclear reactors. These reactors from both companies are supposed to go online around 2030 and start generating more and more power through the 2030s. Now, that's not going to cover all the energy that's required, and as I mentioned, it's going to take five to six years for them to even come online, meaning that between now and then these data centers will most likely consume more and more carbon-emitting energy sources, which is not good for the environment or for anyone. That being said, the fact that this is the direction they're taking, and that it's getting support from the US Energy Secretary and others, is a good sign. Both Amazon and Google have commitments to using renewable energy and getting to net-zero emissions, Google aiming for 2030 and Amazon for 2040, and both are moving in that direction by investing billions of dollars in developing these small modular reactors to power their data centers.

An interesting report came out this week, the annual State of AI report. It's generated every year by Air Street Capital, an investment company that specializes in AI. What they are claiming is that OpenAI's lead, which was very significant a year ago, has largely eroded over this past year. Rival models from Anthropic, xAI, and Google, as well as open-source models from Meta, have closed most of the gap, and the lead ChatGPT has right now is significantly smaller than it was a year ago. That being said, we have to remember that GPT-4 and GPT-4o are still based on older technology, and the report was probably written before the o1 level of models started being released. So we need to see what OpenAI comes up with as it keeps improving on o1, which is a completely new type of model.
And once they can start connecting it to all their existing capabilities, like web search and video generation and all the other things they haven't formally released, they may take a big lead. Again, we need to remember they just raised $6.6 billion, which enables them to do a lot of interesting things.
We have been talking a lot on this podcast about the importance of AI education and literacy for people in businesses. It is literally the number one factor separating success from failure when implementing AI in a business. It's actually not the tech; it's the ability to train people and get them to the level of knowledge they need in order to use AI successfully in specific use cases, hence generating positive ROI. The biggest question is: how do you train yourself, if you're the business person, or the people on your team and in your company, in the most effective way? I have two pieces of very exciting news for you. Number one is that I have been teaching the AI Business Transformation course since April of last year. I have been teaching it twice a month, every month, since the beginning of this year, and once a month all of last year. Hundreds of business people and businesses are transforming the way they do business based on the information they've learned in this course. I mostly teach this course privately, meaning organizations and companies hire me to teach just their people, and about once a quarter we run a publicly available course. Well, that once-a-quarter is happening again. On October 28th, we are opening another course to the public, where anyone can join. The course is four sessions online, two hours each. So four weeks, two hours every single week with me live as an instructor, plus an additional hour each week for you to come and ask questions based on the homework, things you learned, or things you didn't understand. It's a very detailed, comprehensive course. It will take you from wherever you are in your journey right now to a level where you understand what this technology can do for your business across multiple aspects and departments, including a detailed blueprint for how to move forward and implement this from a company-wide perspective. So if you are looking to dramatically impact the way you, your department, or your company is using AI, this is an amazing opportunity to accelerate your knowledge and start implementing AI in everything you're doing in your business. You can find the link in the show notes. Just open your phone right now, find the link to the course, click on it, and you can sign up. And now back to the episode.
Another really important data point from the report, which we have talked about many times in the past on this podcast, is that the cost of inference is dropping, and has dropped dramatically. In this past year, just in ChatGPT alone, GPT-4o is now 100,000 times cheaper per token than GPT-4 was when it came out in March of 2023. So in a year and a half, the price of inference on ChatGPT alone has dropped by more than 99%. And if you compare that to Gemini 1.5 Pro, it costs 76 percent less per token than the same model cost when it came out in February of 2024. So not in a year and a half, but in eight months, the same model's cost dropped by three quarters. Add to that the increasing competition from open-source models, and you understand that for us, the people who are going to use this technology, this is fantastic, because we are getting closer and closer to the point where we get endless intelligence available to us for free or almost for free. The question I've been asking on this podcast for a long time is: how does that align with the business model of these companies generating money from the billions and billions of dollars of investment they're putting into this? And I must admit, I don't have a good answer for that. If you're Amazon or if you're Microsoft, yes, I get it, because it's going to drive more revenue through their existing services. But companies that are just developing the models, like OpenAI, will have to find ways to transition into developing products that people will pay for, versus just the intelligence models themselves.

Now, speaking of OpenAI and the future of intelligence, OpenAI just released Swarm, an experimental framework that allows any developer to build networks for orchestrating AI agents. We've been talking more and more about AI agents on the show, because there's a huge drive in the AI world to develop these new platforms. To remind everybody who doesn't remember, AI agents are different from the large language models we know in several aspects. Aspect number one: they can do significantly more complex tasks. They can take a complex task, break it into smaller and smaller tasks, and then execute each and every one of the steps, monitoring how well they did, iterating on that, and eventually completing the task. And these agents know how to work together in groups in order to achieve more complex tasks faster. Just like humans run projects, where there's a project manager and a lot of people underneath who perform different aspects of the work, it's the same kind of concept. So OpenAI just released a platform that allows developers to build these kinds of networks that let agents collaborate in a large-scale multi-agent environment. In the past few weeks, we shared that Salesforce has done a similar thing, Amazon has done a similar thing, and NVIDIA has done a similar thing. So all the big players are releasing agent-development tools and ready-to-use agents to the world. Now, they're not all fully released yet, and they're not all completely autonomous agents; some of them follow human-defined flowcharts and flows. But the direction is very clear: 2025 is going to be the year of AI agent deployment, and it will probably take through the end of that year for them to be reliable and widespread, but that's the direction everything is going, and that raises multiple concerns.
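Before digging into those concerns, here is a rough idea of what orchestrating agents with a framework like Swarm looks like in practice. This is a minimal sketch based on the usage pattern OpenAI published with the experimental Swarm repository; the framework is explicitly experimental, so the exact API may change, the agent names, instructions, and handoff function below are invented for illustration, and an OpenAI API key is assumed to be configured in your environment.

```python
# Minimal sketch: two cooperating agents with OpenAI's experimental Swarm framework.
# Install (at time of release): pip install git+https://github.com/openai/swarm.git
# Agent names, instructions, and the handoff function are illustrative assumptions.
from swarm import Swarm, Agent

client = Swarm()  # uses your OPENAI_API_KEY under the hood

def transfer_to_researcher():
    """Handoff: returning another Agent passes the conversation to it."""
    return researcher

triage_agent = Agent(
    name="Triage Agent",
    instructions="Break the user's request into steps and hand research work off.",
    functions=[transfer_to_researcher],
)

researcher = Agent(
    name="Research Agent",
    instructions="Answer the research question you receive, briefly and factually.",
)

response = client.run(
    agent=triage_agent,
    messages=[{"role": "user", "content": "What did the State of AI report say about OpenAI's lead?"}],
)
print(response.messages[-1]["content"])
```

The handoff-by-returning-an-agent pattern is how Swarm routes work between agents, which is the same project-manager-and-team idea described above. With that picture in mind, back to the concerns this direction raises.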
The first concern is obviously security, because you need to trust these tools, which now have access to multiple systems inside and outside your organization, to act, make decisions, and take actions on your behalf. That trust will obviously take time to build, and people in the security world are seriously concerned about how to put safeguards on this across multiple aspects of security. It also raises serious concerns about job displacement, because these tools will be able to do significantly more than we can do with just large language models today. The other concerns are really amplified versions of concerns that already exist, about bias and fairness, as well as the growing social divide between the companies and people who have access to this and those who don't. So not everything is unicorns and butterflies when it comes to this initiative. Now, we're going to talk a lot about unicorns and butterflies in the AI world once we move to Anthropic, but for now, since we started talking about OpenAI, let's continue with them.

This past week has shown a lot of interesting aspects of the unique frenemy relationship between OpenAI and Microsoft. On one hand, OpenAI leadership, as reported by The Information, has expressed concerns about Microsoft's pace of supplying them new servers and compute, which they need in order to grow even faster. Sam Altman himself and Sarah Friar told employees that the company would take a greater role in securing data centers from sources other than Microsoft. Now, if you remember, Microsoft has pledged $13 billion of investment in OpenAI, most of it in the form of compute, but right now OpenAI feels it's not enough, and they're going to go and secure data centers through other providers as well. That actually makes sense, because their biggest competitors, such as Anthropic, are using several different providers for their data centers, which gives them more options and different solutions. We shared with you in the past that OpenAI is already in discussions with Oracle to build what might become the world's most powerful AI data center in Abilene, Texas, potentially reaching two gigawatts of power and running 200,000 GPUs. But while this is happening, if you remember, we spoke in depth about the fact that OpenAI raised over $6 billion, and that raise depends on them converting to a for-profit organization. In that transition, Microsoft might receive a very significant share of their equity. Right now they have none, because OpenAI is a nonprofit organization and the structure of the agreement was built on future revenue sharing, but I'm sure that as part of this restructuring, Microsoft is going to get a nice equity cut. So again, competitors on one hand, partners, and potentially real business partners, on the other. On the same note, there have been reports that OpenAI has just opened a new office in Bellevue, Washington, right next to Microsoft's offices. Based on LinkedIn, they already have 80 employees in the Seattle area, and now they're going to have an office to work from that is very close to Microsoft, with whom they have a very close relationship. By the way, when they raised this money, they shared that they're going to open additional offices in New York, Paris, Brussels, and Singapore. So a global footprint for this AI giant.
Now, while they are getting closer, and Microsoft may get significant equity, and they've opened offices next door, at the same time OpenAI poached Microsoft's VP of Generative AI Research. According to a report in The Information, Sébastien Bubeck, who was Microsoft's VP of Generative AI Research, just left the company and is going to join OpenAI. His role at Microsoft was focused on the Phi models, the series of smaller-scale models that are planned to run on edge devices, basically on your phone, your watch, your laptop, and so on, instead of in the cloud like large language models. That suggests OpenAI is going to push in that direction and do more than just develop large language models; they probably want to go more toward edge solutions, which are going to be needed for security as well as speed when using AI on the go, in combination with going to a cloud connection whenever that's required, similar to the concept of the solutions Apple introduced earlier this year. By the way, Apple announced they're not releasing their AI models in the near future. So all the big campaigns and all the noise they made before the iPhone release were basically fake, or maybe they had a serious technical issue just before the release. As of right now, it's been pushed further out, and it's unclear when the Apple Intelligence models are going to be available on their edge devices.

And in another move by OpenAI that seems counterintuitive to the relationship with Microsoft, and seems to compete with Windows, they have just released ChatGPT for Windows, a desktop application running on the Windows platform. They released the macOS version a while back, and I've been using it regularly, but now they've released a Windows-native application. It is currently available to anybody who has ChatGPT Plus, Enterprise, Team, or Edu, and it's accessible through a quick keyboard shortcut: press Alt + Space and it will open right away. The goal over time is obviously to integrate it with more and more Windows capabilities, meaning it is going to compete with Copilot directly for users in the Microsoft environment. If they can achieve that, it will become part of people's day-to-day as they run their systems. And with the direction things seem to be going, where AI will be a lot more than just a chat, this may slowly reduce the need to use Microsoft tools like Microsoft Office, at least in some instances, again increasing the competition on PC-based devices between Microsoft and OpenAI.

And while this is happening, Microsoft and OpenAI have potentially achieved something that was considered somewhat of a holy grail: the ability to train and run AI models distributed across multiple data centers. According to researcher Dylan Patel, Microsoft and OpenAI have cracked multi-data-center distribution for training their large language models. The limitation of running in a single data center is twofold. One, there's a limit to how many GPUs you can run just from an infrastructure perspective. And yes, they've been growing; Elon Musk now has a hundred thousand GPUs running in a single location, but there's still a limit to doing that.
If you can run training across multiple data centers, you solve that problem. The other problem it solves, which we discussed at the beginning of this episode, is the huge power shortage: you cannot build these gigantic data centers without building something to supply the huge amount of power they require, and being able to run across multiple data centers at the same time lets you do things that weren't possible without purchasing or building new energy sources. Microsoft reportedly signed deals worth over $10 billion to run fast fiber between several of their data centers to achieve this goal. I find this very interesting; it's just another way to solve the current energy and compute problems some of these companies have. Now, connecting this back to energy and the need for compute, the massive scale of this project will connect a gigawatt's worth of power into one unified training environment, which in theory will allow them to train models that are significantly larger than the models we have right now, which is something we know OpenAI and all the leading developers want to achieve. So, as I mentioned, a frenemy relationship all over the place. They're competing on some things, they're collaborating on others, and we'll keep monitoring how this moves forward. As the relationship stands right now, they both depend on each other, so I don't see them breaking up anytime soon, but this is definitely not your standard business relationship.

And speaking of OpenAI, or actually an ex-OpenAI executive: Mira Murati, who was the CTO and left just as they were closing their recent raise, is reportedly raising money for her own new startup, which is said to be developing AI products based on proprietary models. That's still very vague, but we talked about this when she and the other executives left: these people can now build whatever product they want, raise as much money as they want, and they will be able to do it very quickly. So here's an example that is already happening.

Now, we don't talk a lot about models from other places around the world, but every now and then we quickly mention powerful models, mostly coming out of India and China. And now we have one from Tokyo. A Tokyo-based company called Rhymes AI just released a very capable open-source multimodal model called Aria. It can process text, code, images, and video in a single multimodal architecture, and it integrates a mixture-of-experts framework together with that multimodality, which makes it the first open-source model to combine these two very powerful capabilities. On several of the leading benchmarks, the model outperforms leading open-source models like Pixtral 12B and Llama 3.2, and again, that's not across the board, but it's winning on several different benchmarks, meaning it's one of the most capable open-source models in the world. The things it's best at, based on the report, are financial analysis, data visualization, video processing, and coding. So lots of very useful things with this open-source model.
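To make the self-hosting point concrete before we continue: below is a minimal sketch of what running an open-weights multimodal model like this on a GPU you control can look like with the Hugging Face transformers library. The repository id rhymes-ai/Aria, the prompt format, and the processor method names are assumptions for illustration; check the official model card for the exact, supported usage, and expect to need a large GPU since it is a mixture-of-experts multimodal model.

```python
# Minimal sketch: self-hosting an open-weights multimodal model with Hugging Face
# transformers. The model id, prompt format, and processor details below are
# assumptions -- consult the model card for the exact, supported usage.
import torch
from PIL import Image
from transformers import AutoModelForCausalLM, AutoProcessor

model_id = "rhymes-ai/Aria"  # assumed repository id

processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # large MoE model: a big GPU (or multi-GPU) is assumed
    device_map="auto",
    trust_remote_code=True,
)

image = Image.open("quarterly_revenue_chart.png")             # e.g. a financial chart
prompt = "<image>\nSummarize the trend shown in this chart."  # prompt format is model-specific

inputs = processor(text=prompt, images=image, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=256)
print(processor.batch_decode(output_ids, skip_special_tokens=True)[0])
```

The same script runs unchanged on a GPU instance you rent from Azure, AWS, or any other provider, so the data never has to leave an environment you control.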
As I mentioned, the fact that these models are becoming more and more powerful, more and more available from multiple different sources, and excellent at different tasks will make them mainstream in more and more places against the closed-source, more expensive models, because this is a model you can host on your own servers, or on your own hosting platform on Azure or AWS, et cetera, keeping the data secure while getting very advanced capabilities practically for free, just for the cost of hosting.

Now, I told you before that we're going to talk about unicorns and butterflies once we move to Anthropic, and that time has now arrived. Dario Amodei, the CEO of Anthropic, just published a roughly 15,000-word essay on his personal blog that describes a very bright future for humanity with AI. He's talking about what the world can achieve when we have what he calls powerful AI. He says he doesn't like the term AGI for a lot of reasons, but the clearest one is that there's no clear definition of what it is, and it's really not a one-milestone thing, right? It will be very hard to say, okay, now we've achieved AGI, and the road there is just as important, because every few weeks there's a better and better capability that eliminates some of the things humans need to do. So he defines powerful AI as an AI that will be smarter than Nobel Prize winners across most relevant fields, which by itself sounds insane. It has all the interfaces available to a human working virtually, so keyboard, mouse, cameras, everything we use to access the digital world will be accessible to this AI. It can be given tasks that are not just simple and short, like writing an essay, but things that would take hours, days, or weeks to complete, and the AI will go ahead, figure out the task, figure out the details, and run it autonomously. He envisions that there are going to be millions of instances of these powerful AI systems that can either run independently, each doing its own thing, or collaborate as needed, with as many of them as required, in order to achieve more complex tasks as a team. So basically just like humans work, only at a much, much larger scale. And he also thinks they will be able to observe and generate information and actions 10 to 100 times faster than humans. So that's how he defines the incredible capabilities he calls powerful AI.

Now, when does he expect us to have these world-changing capabilities? He thinks it might be possible by 2026, which is literally around the corner. Even if we assume this is the end of 2026, that's two years. Even if he's off by 50%, that's three years. And he knows; he's at the forefront of one of the most advanced companies out there developing these models. So when he's making these predictions, it's not because he's guessing; it's because he's working on developing these platforms and these technologies, and he's seeing stuff that we probably won't see for a year. So when he's talking about two years out, we should probably listen and pay attention. Now, the implications of this are obviously huge. Massive. Nobody is ready, not from a society perspective, not from a workforce perspective, not from a political perspective. None of us is ready for that, and most people are not even aware that this is coming.
And we're definitely not taking enough action, from a government perspective, from a society perspective, from a workforce perspective, to plan for that. That being said, while I'm terrified by this prediction, Dario is very optimistic. He envisions a world where we can solve world hunger and climate change, and he predicts dramatic economic growth in developing countries all around the world. He proposes that AI could reduce bias in the legal system and undermine repressive governments, and a lot of other things that sound absolutely amazing. Only he spends very little time on how we're going to overcome the dangers to society and civilization, if you will. He recognizes the need to discuss the economic impacts this will have, and job displacement, but he doesn't spend a lot of time explaining how we're going to solve all these issues. Obviously he's trying to describe the best-case scenario. I think he's dramatically underestimating how terribly wrong this whole thing may turn out, and he doesn't provide any answers on how we avoid all the risks and pitfalls that could happen. This is also coming at the same time that they're looking to raise a lot of money, so it aligns with the need to show why people should invest in them. I think they'll get that investment, because just as people invested over $6 billion in OpenAI, there's a similar reason to invest similar amounts of money, or at least in the same ballpark, in Anthropic; they're doing the same kind of things and have the same kind of chances of success. As I mentioned, I'm personally terrified by this projection, and we'll just have to wait and see how it plays out. But if you know people who haven't heard about it, share it with them so we can have a broader conversation about what's possibly coming.

Now we're going to move to talk about search and the impact of AI on different aspects of search as we know it today. First of all, in a recent report, SearchGPT, the experimental tool that OpenAI released to a small group, is growing at 150 percent month over month. That compares to 22 percent growth for Perplexity and Claude over the same period. So it's growing very fast. That being said, these are still very small numbers, so I think comparing growth percentages, as this report is trying to do, is not a totally fair comparison. But the other interesting fact is that it's generating four times the amount of referral traffic to brands compared to Perplexity. So if you use Perplexity today, there are links, but most people stay on Perplexity, while SearchGPT, which most of us don't have access to yet, is driving a lot more traffic to companies through the way they have implemented their search solution. Does that mean it's the right solution, or that it's going to win over approaches that don't send people traffic? Nobody really knows, but that's where it is right now. SearchGPT is still tiny, as I mentioned; it's not even 1 percent of the global search market. So right now it's not a risk to Google, or even to Perplexity. But remember, they haven't released the tool to the world. It's a prototype they released in July of 2024, and we're expecting them to release it to the public before the end of this year, so sometime within the next two to three months. Once that happens, I think we'll know a lot more about what the tool is, how it works, and what its impact might be on global search.
But it's very obvious that if you're dependent on search traffic, on organic traffic, you need to start preparing for something new, because otherwise you're going to lose a very significant part of your company's wellbeing.

Another interesting thing that's related to search, and definitely related to AI and what's going on, is that there have been some serious leadership changes at Google. The number one company in search, by a very big margin, is making dramatic changes in personnel and even bigger org-chart changes. Prabhakar Raghavan, who was the head of search and advertising products, is moving to a new role: he is going to be Chief Technologist at Google, reporting directly to Sundar Pichai and helping with direction and leadership. And Nick Fox, who has been a longtime executive, is going to take his role leading Search. That's change number one. Change number two is that the Gemini app team is going to move from reporting to the Platforms and Devices team to reporting directly to Demis Hassabis at DeepMind. So far, the research side was reporting to Demis while the product side, the Gemini tools and apps, reported to a different group in the company. Now they're both going to merge under the DeepMind team, which shows you how much power the AI team now holds over multiple aspects of Google. Google obviously understands the threat it is facing right now and is making very dramatic changes to try to address it. And they need to move very quickly, because my fear for them is that if they take a 5 percent hit to their dominance in search, they're going to take a 30, 40, or 50 percent hit on their stock price, which would be the biggest loss of valuation ever in history. So they have to figure things out very fast, and they're making dramatic changes to, I assume, do exactly that.

Now, staying in search, Perplexity just released a new capability that allows users to generate charts straight in the Perplexity tool using search query data. You can now create line charts and bar charts based on information from a search. So if you're looking for financial information from the stock market, or other types of numerical information, you can prompt it to generate graphs for you. This has been possible for a while in ChatGPT. I've never actually tried it with search data in ChatGPT, but I've tried it many, many times with data I upload to it, and the capability is amazing. Now this functionality is becoming available straight within the search interface of Perplexity. I use Perplexity every single day, so I'm very excited about this. They have advised users to check the data in the charts before sharing them with the world, and I must admit that while I use Perplexity every single day, I find it to be the platform with the highest number of hallucinations. Sometimes it's absolutely insane how bad the data is in some of the answers it gives you. So while I really like the tool, and I like the interface and the way it works, and I definitely like that they're adding more functionality like generating graphs, I warn you to check the data it provides every single time, because in many cases it's going to be completely, one hundred percent made up, and it will look completely realistic. It will even give you links to references that do not exist; I've seen that many times from them.
So they will obviously have to solve that, but I still think it's a very good tool, and as I mentioned, I use it regularly; I just check to make sure the sources are actually real.

And from search, let's switch to NVIDIA. NVIDIA's CEO Jensen Huang just gave a really in-depth interview on the BG2 podcast. It's definitely worth listening to. It's more than an hour long and he talks about more or less everything; literally any question they had, he gave them the time to ask, and he took the time to answer in depth. I must admit, I'm impressed every time I see Jensen speak, and especially in this particular case. He knows so much in-depth information about everything that's happening in his company and his industry, way beyond what you expect from a CEO running one of the most successful companies in history right now. So again, I'm very impressed with that combination of in-depth knowledge and the easygoing, friendly demeanor he shows every time he speaks on any platform. I want to mention several very important things from that interview. One is that he talked about the fact that they are developing the next versions of AI assistants, and that they have already been using AI assistants in more or less everything at NVIDIA. He's talking about anything from designing new chips to creating software to other aspects; they're already using the capabilities they're developing to grow their business faster. He also talked about the fact that he sees the company doubling its number of employees in the future, from the 23,000 to 25,000 employees they have right now to 50,000 employees, while adding 100 million AI assistants to work and collaborate with them. And when we're talking about collaboration, he's talking about these tools working in conjunction with humans in a seamless way, so having AI assistants in Slack talking to humans just like humans communicate. So very clearly, very quickly, the border between who's human and who's not in the regular work environment is going to disappear, or maybe is already disappearing, at least at NVIDIA. His logic behind it is basically that these AI agents will allow them to grow a lot faster, which will require more humans, which means more humans will work in the company. Does he really believe that? I'm not a hundred percent sure, given the direction things are going right now, but that's at least the vision he's sharing about his company. He also talked about the fact that their moat is significantly bigger than just the company's hardware, and I tend to agree. I think the ecosystem they've built around their products and services, combined with their switches and GPUs, is way ahead of everybody else, and the fact that they're generating so much cash right now helps them keep that gap and potentially broaden it.

Now, speaking about NVIDIA's ecosystem: as I've mentioned many times before, NVIDIA is way more than a GPU manufacturer. They do a lot in infrastructure for robots, they do a lot in developing software, and they do a lot in developing large language models, some of them their own. In this particular case, they improved a model that came out of Meta: they took the 70B version of Llama and fine-tuned a new version of it to just be better. It's called Llama 3.1 Nemotron 70B Instruct, which is definitely a mouthful, but they just released it this week, and it's performing very well compared to second-tier models.
So it outperforms GPT-4 and Claude 3, not GPT-4o and not Claude 3.5 Sonnet, but the previous versions of the top models in the world. It is currently ranked number 38 on the LMSYS Chatbot Arena. As I mentioned, that's not very high, but it's higher than models that were top of the line less than a year ago. So you can now get an open-source model that performs at that level, and have access to it for absolutely free, other than having to host it somewhere. By the way, speaking of the Chatbot Arena, what I found very interesting is that Claude 3.5 Sonnet is only ranked number nine, and the top few places go to GPT-4o, then o1, and then different variants of Gemini. So Claude, the tool I use every single day (I use the others as well, by the way), I would have assumed would rank higher right now, but at least based on the LMSYS Chatbot Arena, it's ranked number nine, after other models from OpenAI and Gemini. But the interesting thing about this new model from NVIDIA is that, as I mentioned, it has 70 billion parameters and it's outperforming GPT-4. Now, we don't know how many parameters GPT-4 has, but the rumors talk about somewhere between 500 billion and a trillion parameters. So having an open-source model with significantly fewer parameters that can outperform it across multiple aspects is very promising when it comes to what I said earlier in this show: getting free, or almost free, high intelligence to anybody who wants access to it.

Now, staying in the open-source world, Mistral, the French company we've talked about many times in the past, is making moves. They haven't fully released it yet, but people were able to get a sneak peek into what's coming, and there are three different new things on the way. One is web search, which is going to be integrated into the platform through third-party capabilities. The second is image generation. And the third is Slack integration. The first two are aimed directly at the consumer market, being able to create images and do search. The third is aimed directly at the business market, with potential integration into Slack, where you'll be able to query your Slack channels using a language model that is open source, can live within your environment, and doesn't risk giving away your data. So very interesting moves from Mistral. As I mentioned, the open-source market is not slowing down; if anything, it's the other way around.

And I promised you lots of news from Adobe, so here we go. Adobe was raining AI news this week. Some of it is stuff they shared in the past but hadn't released yet, and now it's finally being released. They just released a new AI video tool that is integrated as an extension into Premiere Pro. You can now extend videos, meaning you can take a video you've already shot, a real live video, and extend it by up to two seconds. You can create videos from text, and you can create videos from an image plus a prompt, similar to other tools, but all built into Premiere Pro. So it runs within your existing video editing platform, which is obviously a lot more powerful than just a standalone generation tool. Now, the videos are still limited to 1080p at lower frame rates of 24 frames per second.
It also supports 720p, and it can create videos of up to five seconds. But they also released a lot of other cool capabilities, like camera controls, which let you control the camera probably much better than any other video generation tool out there. Generation takes about 90 seconds in their turbo mode, which, as I mentioned, they haven't released yet but are going to release soon. So very quickly, basically in 90 seconds, a minute and a half, you can generate five-second videos in your existing video environment in Premiere. One of the biggest benefits compared to other third-party tools is that it's presumably commercially safe, because they trained it on licensed data they already have, so you're at less risk of getting sued for the things you generate, compared with tools trained on third-party data.

They also unveiled new capabilities in Photoshop. The biggest announcement in Photoshop is distraction removal: if there's anything blocking the view of the main subject, like wires or people or things stuck in the way of what you're actually trying to shoot, Photoshop will know how to remove it with one click, which I find really cool. They are also now running Firefly Image 3, their latest model, integrated into all their tools, so capabilities like Generative Fill, Generative Expand, Generate Similar, and Generate Background are all now powered by the latest Firefly Image 3 model within Photoshop. It creates much better photorealistic quality and better complex-prompt understanding in more dramatic scenarios, so you can do a lot more than you could with the previous version. They've also released many other updates, like a new color workflow, Adobe Camera Raw enhancements, and Content Credentials tracking, marking that these assets were created using AI.

They also introduced several new projects, like Project Turntable, which allows you to take 2D vector images and rotate them around as if they were 3D. If you haven't seen the demo of that, go check it out; it's absolutely crazy. Think about simple 2D vector graphics, now being able to rotate them to whatever angle you want, and then save the result as a 2D image you can use everywhere you use vector graphics. Very cool. They also have Project Remix, which allows you to take rough sketches and turn them into detailed digital designs. They have Project Hi-Fi, which allows Photoshop to generate high-resolution images. And they have Project Super Sonic, which generates sound effects from either prompts or clicking on something in the video. So think about having a car driving around, say a sports car, and you can click on it and it will generate the sound effect for it if you didn't capture the right sound, or any other sound effect you want, either by prompting, by clicking on the right thing in the video, or a combination of the two. As I mentioned, these are extremely powerful AI capabilities integrated into the place where many, many people around the world are currently creating visual outputs. And the next thing they shared, which is not being released yet but which they've demoed in the past and showed again now, is the ability to remove distractions from video. So if there's a fence blocking your view of the basketball game where your son is playing, you can remove the fence and see just the basketball game: in one click, it will pick up the fence and remove it from all the frames.
These tools are extremely cool. All of those come as part of Adobe's Sneaks program, and it's now accessible to people who have Adobe licenses. That's it for this week, jam-packed with a lot of small but very important news. On Tuesday, we are releasing our AI Readiness for 2025 webinar, which we held this past week. We dove into everything you need to know about the current state of AI in business and how you need to prepare your business to leverage AI in 2025, including the actions you can start taking right now. This is becoming a critical aspect of every business, so I highly recommend you listen to it. If you find this and every episode we put out valuable, I would really appreciate it if you open your phone right now and click on the rating, whether you're on Apple Podcasts or Spotify, and rank this podcast. And while you have the phone in your hand, click the share button, think about a few people you know who maybe don't know about this podcast and could benefit from learning more about AI, and share it with them. It helps spread AI literacy, which helps all of us as humanity and as a species get a better outcome from all of this, and it also helps me, and I would really appreciate that. Until next time, have an awesome weekend.