Leveraging AI
Dive into the world of artificial intelligence with 'Leveraging AI,' a podcast tailored for forward-thinking business professionals. Each episode brings insightful discussions on how AI can ethically transform business practices, offering practical solutions to day-to-day business challenges.
Join our host Isar Meitis (4-time CEO) and expert guests as they turn AI's complexities into actionable insights and explore its ethical implications in the business world. Whether you are an AI novice or a seasoned professional, 'Leveraging AI' equips you with the knowledge and tools to harness AI's power responsibly and effectively. Tune in weekly for inspiring conversations and real-world applications. Subscribe now and unlock the potential of AI in your business.
135 | Will the global workforce survive in an era of AI agents with computer access? Advanced new image and video generation capabilities and many more news from the week ending on October 25th
What happens when AI agents get smarter than your workforce?
The world of business is on the brink of transformation as AI agents evolve beyond chatbots. From autonomous code generation to real-time business process automation, agents are reshaping how companies operate—and putting traditional roles at risk. Are you ready to adapt, or will the next competitor with a tenth of your overhead costs take your market share?
This weekend’s episode takes you through a detailed breakdown of the latest releases from Claude, Crew AI, and other major players. We dive into the real-world implications of tools that can manage your computer, automate key processes, and even perform creative tasks—completely autonomously.
In this AI news episode, you’ll discover:
- The latest breakthroughs from Claude 3.5 and their game-changing "computer use" capability.
- How Crew AI's funding and partnerships hint at the growing role of agents across Fortune 500 companies.
- Why SaaS applications and administrative tasks could soon become obsolete.
- How AI-generated code and business automation are reshaping industries, and which roles are most at risk.
- The looming question: Can businesses survive without integrating AI at every level?
About Leveraging AI
- The Ultimate AI Course for Business People: https://multiplai.ai/ai-course/
- YouTube Full Episodes: https://www.youtube.com/@Multiplai_AI/
- Connect with Isar Meitis: https://www.linkedin.com/in/isarmeitis/
- Free AI Consultation: https://multiplai.ai/book-a-call/
- Join our Live Sessions, AI Hangouts and newsletter: https://services.multiplai.ai/events
If you’ve enjoyed or benefited from some of the insights of this episode, leave us a five-star review on your favorite podcast platform, and let us know what you learned, found helpful, or liked most about this show!
Hello and welcome to a weekend news episode of the Leveraging AI podcast, the podcast that shares practical, ethical ways to leverage AI to improve efficiency, grow your business and advance your career. This is Isar Meitis, your host, and we've got a lot of exciting news this week. We're going to talk a lot about agents and the potential impact of the AI future on the workforce and our jobs. We're going to talk a lot about new models that are coming out, some of them very exciting, including new capabilities from Claude. And we're going to talk a lot about image generation and video generation, where that is going, and new releases in that world as well. So like I said, a lot to talk about. Let's get started. We will start today by talking about Claude and their latest releases. Claude has been on fire recently, releasing more and more capabilities, and this week they have released three different things. All are exciting; one of them is very exciting and very controversial and disturbing all at the same time. The first thing: they've launched a new version of Sonnet 3.5. Just to recap, earlier this year Anthropic released Claude 3, and it had three different models. The smallest one was called Haiku, which was kind of the faster, more specialized model. Then the middle one was Sonnet, and the biggest one was Opus. Shortly after that, they released Claude 3.5 Sonnet, which was as good as Opus but cheaper and faster, with some additional capabilities. And they hadn't released any other 3.5 models until now. So now they released an upgraded version of Claude 3.5 Sonnet, and they also released Haiku 3.5, the smaller, faster model. Before we talk about the last thing they released, let's talk a little bit about what that gives us. They're claiming that the new version of Sonnet 3.5 outperforms GPT-4o and Google Gemini 1.5 Pro in graduate-level reasoning, in coding tasks, and in visual analysis.
The interesting thing about coding tasks is they were ahead in coding tasks before, and now, per them, they have extended their lead in coding capabilities. The other interesting thing is Haiku, the smaller, faster model that is three times faster than the small models from the other companies, also outperforms GPT-4o mini and Gemini 1.5 Flash, and it maintains the coding performance of Sonnet 3.5. So it's a very capable model that runs faster and is specialized in coding tasks. But the most interesting thing that they released is a feature, or actually a separate capability, that they're calling computer use. What computer use allows you to do, as the name hints, is to use your computer. You can install it locally on your computer, and the way it works, it actually looks at your screen, takes screenshots of what's happening, can understand what software you're working on, and can operate your computer. Basically, it has access to the cursor, so it can place, quote unquote, the mouse anywhere it wants and type anything, which basically means it can do anything you can do on your computer. It can browse the web. It can open and close apps. It can copy things from one place to the other, and so on and so forth. Basically, it has access to, as I mentioned, any digital thing that we as humans have access to. If you think about the evolution of agents that we've been talking about, we talked about the fact that one of the things agents will need is access to tools. This provides access to any tool that we have access to, because it has access to our computer. Anthropic themselves have released several different examples of use cases. The first one was collecting information from multiple sources to put it into a form. So think about any data entry task that exists in a company. You collect information from an email, from a PDF, from other sources, from your CRM, from multiple sources.
Then you need to fill a form to build an order or anything like that. That was the first thing that Anthropic demoed, where it goes to look for information in one source, it doesn't find it in that source, it opens the CRM on its own, finds the relevant company, finds the relevant information, and copies it into the form accurately. If I'm a company like UiPath, which is the largest company in the world for RPA today, a company that had the second largest IPO in European history, a huge company, billions of dollars in revenue, a huge amount of employees worldwide, that has developed the capability to do, well, something like that, I would be really scared right now. Because if this is coming as part of a model that costs 20 bucks a month, then they have a serious issue with their entire business model. But this is where it is going, and that has been demoed. It's not perfect yet. It's not running smoothly. It has issues, but these kinks will be solved once they get out of beta, which is where it is right now. The second example that they gave, which is really cool, is planning a trip. So this girl is going to San Francisco. She wants to do the hike across the Golden Gate Bridge, and she's basically asking it to help her build this out. And it's looking for the route, where her hotel reservation is, based on the reservation she has in her email, and what time of day it would be. And then eventually it creates a meeting on her calendar with all the details of exactly what buses she needs to take, where she needs to go, and so on, the full plan of that trip. So that's another great example for day-to-day needs. And then the last one was software related, which really blew my mind, because it has, first of all, a completely surreal setup. Think about it: this tool runs on the computer, so it's supposed to write code and then run it on a server. So the first thing it does, it opens the internet and opens Claude, the website.
So Claude, the tool on the desktop, opens Claude, the website, to create the code for it. Then it copies the code into a computer application where it can paste the code and edit the code. Then it goes to the terminal and creates a server for it, so it can execute the code, and so on. So literally the life cycle of this really small piece of software, the whole thing, runs autonomously without anybody touching it, starting with an idea spoken in simple English and ending up with a running application that can run on your computer and do specific things. I talked about this many times in the past on this podcast, that I see the future of SaaS as very murky, especially smaller applications. I don't see an operating system being replaced that quickly. I don't see Salesforce being replaced that quickly, even though we've heard Klarna is doing exactly that, but they're doing it with a huge investment and a lot of developers. But smaller SaaS that does specific tasks, that we today pay a lot of money for because it does 150 things out of which we use four, is going to go away. Because literally any person, any company, any department will be able to ask for the specific functionality it needs, or he or she needs, and the AI will write that application more or less on the fly, doing everything you need it to do. And over time, you will ask for updates and changes based on the way you work, and it will create the code and execute the code and deploy the code for you. So this example that I've just seen from Anthropic just confirms my assumption that this is the direction we're going, and it's going to become more and more common, and we're going to see more and more people doing this. I started doing this, and I told you that in previous shows as well. I've never written code in my life, but I started doing this with ChatGPT and with Gemini and with Claude, deploying it through different tools, with the AI explaining to me how to use it.
Now, it doesn't even need to explain it to me anymore. It will do all the deployment and all the setup for me, because it can run my computer. Now think about what that means for the future of business. It means that anything a human can do in front of a screen and keyboard, this machine can do. So AI is now going from, okay, I've got this chatbot or these image generation capabilities and so on, to this can do anything that we can do in the digital world. As I mentioned, right now it's not there yet, and there are several different issues, right? So the biggest issue is obviously trust. How do you know that this thing is actually going to execute the task in the way you want it to execute it? And that it's not going to do anything that you don't expect, that could harm your family, harm you, harm your company, be dangerous? Or it will decide that you are a threat to the computer and will decide to change your password and lock you out. The same thing can happen on a company level, where it takes over your entire IT system. So there's a lot of questions, and in some of the demos that people are already putting out there, you can see that the system doesn't fully follow your instructions. The funniest one that I've seen: it was given instructions, it went to look for specific images of something, and it just went down this rabbit hole of watching videos and looking at images, just like we would. Just like people lose their attention and start doing something else, it will venture off to do something else. Now, I think, again, these things are going to get resolved over time. The risks are basically endless, and so I think until we can put serious, real guardrails in place, and I'm going to talk more about this after we talk more about agents, this thing is a big problem. But it's already here. The technology in its raw form is already here. It's already available.
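To make the mechanics concrete: under the hood, a computer-use tool runs a simple observe-decide-act loop, capture the screen, ask the model what to do next, execute that action, repeat. Below is a minimal, hypothetical sketch of that loop in Python. Everything here is invented for illustration, not Anthropic's actual implementation: the `Desktop` class is a toy stand-in for real screen and input control, and `scripted_model` replaces the LLM with a hard-coded plan mimicking the data-entry demo described above.

```python
from dataclasses import dataclass, field

@dataclass
class Desktop:
    """Toy stand-in for a real screen/keyboard/mouse controller."""
    clipboard: str = ""
    open_app: str = "desktop"
    typed: list = field(default_factory=list)

    def screenshot(self) -> str:
        # A real agent captures pixels; here we return a text summary.
        return f"app={self.open_app} clipboard={self.clipboard!r}"

def scripted_model(observation: str, step: int):
    """Stand-in for the LLM: maps what it 'sees' to the next action."""
    plan = [
        ("open", "crm"),        # open the CRM to find the missing field
        ("copy", "Acme Corp"),  # copy the relevant company name
        ("open", "order_form"), # switch to the form
        ("type", None),         # paste the clipboard into the form
        ("done", None),
    ]
    return plan[step]

def run_agent(desktop: Desktop, max_steps: int = 10) -> Desktop:
    """Observe-decide-act loop: screenshot -> model -> execute, until done."""
    for step in range(max_steps):
        obs = desktop.screenshot()
        action, arg = scripted_model(obs, step)
        if action == "done":
            break
        elif action == "open":
            desktop.open_app = arg
        elif action == "copy":
            desktop.clipboard = arg
        elif action == "type":
            desktop.typed.append(desktop.clipboard)
    return desktop
```

In the real product, the model call goes to Claude with screenshots attached and the actions are real mouse and keyboard events, but the control flow looks much like this loop.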
There were several companies releasing these kinds of tools before, but now it's available through Anthropic, which is one of the largest, most used AI tools, and it costs 20 bucks a month to get access to it. This may cost more later on when they get out of beta, but the options, both good and bad, are basically limitless. Now, staying on the same topic of agents and what they can do, a company called Crew AI, which is a relatively new startup that is making really big waves in the agent world, just secured $18 million in funding. I know that doesn't sound impressive in an AI world where people are raising billions, but it's still their initial funding, and it's very impressive. And Crew AI, per them, is now being used by nearly half of Fortune 500 companies to create and execute over 10 million agents every single month. That's a very impressive number for a relatively young company, which shows you how big the agent opportunity really is. The agent market is supposed to be $5 billion in 2024 and $50 billion by 2030. I actually think these numbers are low, and I think they're going to be higher than that, based on agents being able to replace basically any person in front of a computer. Now, recent data and research shows that only 10 percent of large enterprises are currently using AI agents, but 82 percent are planning to adopt agent technology within the next two to three years. So that kind of tells you where this is going. Agents are going to be everywhere, involved in every process in every large company, and very quickly, probably in small companies as well. What Crew AI enables you to do is build your own agents using a simple cloud platform. It enables you to deploy the agents with customized access controls to the different pieces of software you want them to access and those you don't. It enables you as a company to track the ROI through
what it is replacing, how quickly it is working, and how much it actually costs. You can actually see your cost savings as you're using it. And it has, as I mentioned, an endless number of real-world applications, like internal process automation, lead generation, lead nurturing, code writing, code updates, testing, content creation, legal analysis, literally anything we can do on a computer, these tools will be able to do. Staying on the same topic, a company called Celonis has launched what they call AgentC, which is a suite of tools that combines AI agents with process intelligence. The goal here is obviously not just to be able to build the agents, but to actually analyze specific processes, understand how they work, understand what are the pros and cons of the way they're done right now and how they can be improved from an efficiency perspective, and then autonomously create an agent that will perform that task. Now, process mining software that can capture specific business processes and analyze them grew 40 percent year over year in 2023, and 90 percent of corporates are saying that they're planning to increase their process intelligence investments in the next few years. Again, combine that with the capability to use that process analysis to create agents that will replace the process, either partially or completely, and that tells you how powerful this is. AgentC integrates with multiple existing tools that companies are using, so it can integrate with Microsoft Copilot Studio and IBM Watson and Amazon Bedrock and Crew AI and many other platforms. So you can take this intelligent process analysis tool and connect it to other agent generation tools to build agents more or less automatically that will replace people doing the existing processes. Two examples the article gives of what companies are already using this for: a company called Cosentino achieved 5x more credit order processing per day using this technology.
A global car manufacturer has automated their entire supplier inquiry responses. These are large-scale operations that are being replaced or dramatically enhanced with this technology. And this week, an article came out called "AI agents enter workforce as major tech companies launch enterprise solutions," giving many examples of large corporations and enterprises and how they're already implementing this technology. They're talking about early adoption use cases such as Replit using Anthropic's AI to automate code review, Salesforce projecting a 30 percent reduction in call center staffing needs within the next five years, ServiceNow AI-based agents now being tested across multiple of their customers, et cetera, et cetera. Multiple use cases that are taking over more and more aspects of every department of businesses. Now, they're also mentioning what I mentioned before, that these tools have some focus issues and will drift off into something else, and that's obviously a serious trust and control concern when deploying these systems. But it's moving forward, and it's moving forward a lot faster than anybody believes. Now, as I told you, I want to expand on a few aspects of this. The first one is the need for serious guardrails, and the second is the need to verify those guardrails. We must find ways, as a society, as a workforce, as companies, to know for sure that these systems are limited in what they can do once you give them access to a computer with a keyboard and a mouse and access to everything. Because if this tool can do everything on my computer, it can go to my password platform, like LastPass, and get my passwords to everything. And then it can access my bank account and every software that I'm using and my phone and everything else. Now, you guys know me by now as a geek that likes AI, that has been running tech companies for many years. But I was a geek when I was a kid too.
And I really loved reading Isaac Asimov's books. He wrote multiple books and multiple series, but one of the series is called the Robot series, and in there he's talking about the three laws of robotics. He's obviously talking about the future, but we are in this future right now, so his future is our present. And he's talking about three rules that were the basis for the expansion of robotics in the world. The first law was: a robot may not injure a human being or, through inaction, allow a human being to come to harm. That was law number one. Law number two was: a robot must obey the orders given to it by a human being, except where such orders would conflict with the first law. Law number three was: a robot must protect its own existence as long as such protection does not conflict with the first or second law. And then, in the later books, he added a fourth law: a robot may not harm humanity or, through inaction, allow humanity to come to harm. So basically an expansion of the first law, but rather than one human being, all of humanity, with actions that can impact our planet, as an example. I think we're getting to the point that this is what we need. We need some kind of prime directives that these AI systems cannot break, in order to protect us, in order to protect the job force, in order to protect society, in order to protect our planet. There's a lot of ways that this can go wrong. And we're going to talk a little more about this when we talk about the interview with Demis Hassabis that happened this week, on where this is going and how badly this can go wrong before we pay attention. But we're going to get to that later on. For this particular thing, I think the ability to generate amazing benefits from these agents is pretty obvious. I think it puts a huge risk on job displacement, because literally they're able to do everything that we can do. I see that as almost inevitable at this point.
And the reason I'm saying it's almost inevitable at this point is because, let's say you are a great CEO and you love your people and you want to save every single employee and you don't want to let people go. If you do that, somebody else will go into your industry, into your niche, with a company that is one-tenth of your size, with an overhead that is one-tenth of your overhead, that will have a cost structure that is significantly more competitive than yours. And they will take your customers, because they'll be able to serve them at half the price that you're serving them and still make more money than you, which means your company will go out of business, which means you're not going to save your employees; they're all going to be doomed instead of just some of them. Now, I know that sounds doom and gloom, but I don't see this going in any other direction, because we will not have a choice. Somebody else will start a company that will do what your company does and will be able to do it significantly faster, better, cheaper than you by using AI, which will force everybody to do the same thing. What does that mean for the job force? What does that mean for our economy? What does that mean for society? Nobody has a clue. And yet this thing is here now. Agents are being built as we are speaking, the technology is getting better and better, and controlling a computer is now available for testing through a beta on Claude. You don't have to go to a large corporation and figure out the deployment of this. It's available right now on anyone's computer, if you want access to it. This is a Pandora's box that is filled with a lot of other, smaller Pandora's boxes that nobody understands what's in them, and we're opening them one by one. This, to me, is maybe the biggest risk that nobody's talking about: what is going to happen to our workforce in the next few years, and what does that mean for society?
And we need groups, bigger groups, governments, international bodies, corporations that are going to participate in this, and everybody that can jump in on a large global scale, to figure out what that means for society. I don't have solutions, whether it's universal basic income or something else; whatever the solution is, we will need a solution. And we need to start working on this very, very fast, because developing these kinds of solutions takes a few years, and we don't have those years to spare. Even if there are no new advancements in these models, the capabilities that exist right now put us at a very serious risk, in my personal belief. Now, continuing on the same topic, the Brookings Institution, which is a nonprofit organization from D.C. whose mission is to conduct, and now I'm quoting, in-depth, nonpartisan research to improve policy and government at local, national and global levels. So they are a large, respectable organization that looks into policies and so on. And they just released a report stating that more than 30 percent of U.S. workers could see at least 50 percent of their job tasks disrupted by current generative AI capabilities, while 85 percent of people could see 10 percent of their job tasks impacted. They're saying that the top exposed industries are computer and mathematical roles with 75 percent task exposure, office and administrative at 60 percent, business and financial operations at 52 percent, et cetera, et cetera. So there's a bunch of others around 50 percent. The other interesting thing that they're saying is that in the immediate future there's a big gender impact, because in the immediate future they're seeing a lot more administrative tasks being replaced, and most of those are held by female workers. So they're saying that right now, from a gender perspective, 36 percent of female workers and only 25 percent of male workers are at high risk.
I think as this moves forward, it becomes gender agnostic, because it's going to replace almost every task we do in front of a computer. Now, while these numbers are scary, because they're talking about, you know, 70 percent, 60 percent and so on for specific categories, and in some of them 85 percent, the really scary thing is that this research is based on a paper released by OpenAI called "GPTs are GPTs," on the labor market impact potential of LLMs, which is based on GPT-4 technology, which is a year old. It does not take into account GPT-4o, Gemini 1.5 Pro, not to mention the Strawberry project, or o1, which now can think and do more complex stuff, not to mention controlling computers, not to mention agents. This report, which is already doom and gloom and scary, is not updated to what's available right now, and definitely not updated to what's going to be available two or three years from now, when we're talking about AGI capabilities that will really be able to do everything that we do, better than us, and will have access to any tool we're going to give it access to. Now, I'm not telling you this to scare you. I'm just telling you this so you're more knowledgeable about what I think is happening. And I assume this is happening because I'm looking at what all the big companies are doing to develop these kinds of capabilities. Billions and billions of dollars are going towards developing what I just said, and nobody is really doing anything about this other than competing and trying to get us there faster. It's just like a train that's going faster and faster towards a gorge, where maybe there's a bridge, but maybe there isn't. That's kind of how I feel about the situation. And the reason I'm sharing this with you is I want you to share this with everybody, so maybe this gets to the right people, and some bigger people start taking action in order to secure our future, or at least make it safer than it seems right now.
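Before moving on, one small, concrete example of what the serious, verifiable guardrails discussed earlier could look like at the lowest level: a deny-by-default policy layer that every proposed agent action must pass before it is executed, with every decision logged so it can be audited afterwards. This is purely an illustrative sketch; the action names and protected categories below are invented, and no real product's safety layer is being described.

```python
# Hypothetical deny-by-default guardrail: an agent action runs only if it
# is on an explicit allowlist AND its target is not a protected resource.
# All names here are invented for illustration.

ALLOWED_ACTIONS = {"read_screen", "open_app", "type_text", "browse"}
PROTECTED_TARGETS = {"password_manager", "bank_account", "system_settings"}

def check_action(action: str, target: str) -> tuple[bool, str]:
    """Return (allowed, reason). Anything not explicitly allowed is denied."""
    if action not in ALLOWED_ACTIONS:
        return False, f"action '{action}' is not on the allowlist"
    if target in PROTECTED_TARGETS:
        return False, f"target '{target}' is a protected resource"
    return True, "ok"

def guarded_execute(action: str, target: str, audit_log: list) -> bool:
    """Run the policy check, record the decision for later verification,
    and only report success when the policy allows the action."""
    allowed, reason = check_action(action, target)
    audit_log.append((action, target, allowed, reason))
    return allowed
```

The audit log is what stands in for the second half of the point above, verifying the guardrails: every decision is recorded so a human, or another system, can check that the policy was actually enforced rather than trusting the agent's own account.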
Now, after all of this big news, let's start talking about news specific to individual companies. A week cannot go by, obviously, without us talking about OpenAI. So there's a very fierce battle between OpenAI and Anthropic as far as code generation. As I mentioned in the beginning, that's a big focus for Anthropic in their latest release, and it's also a big focus for OpenAI. A report from The Information is saying that OpenAI's coding-focused ChatGPT subscriptions, so the companies who use ChatGPT to create code, are at a run rate of $3 billion. That's a huge amount of money, and it's a big chunk of the entire income of OpenAI and ChatGPT. Anthropic is projecting around $1 billion in 2024, and a lot of it goes to the same tasks, right? Developing code and creating code for different companies. GitHub Copilot is tracking towards $300 million annually. So it's a very big market. And The Information is sharing that OpenAI is working on several new products. One is an advanced coding tool for complex software engineering tasks, so basically elevating the level of what AI can do today with coding. The others are integration with popular code editors like Visual Studio, et cetera; an internal research assistant tool for AI researchers, so people who don't just write code but specialize in AI development, and there are obviously going to be more and more of those across multiple companies; and computer-using agent capabilities, similar to what Claude just released as beta. Now, The Information has been very consistent in releasing accurate information from inside sources, so usually when they're saying something, it's based on real facts and it's not just a rumor. So it's very obvious, and it makes sense, that OpenAI is working on these things. Once they release those, and they may not be released all at once, but could be released one by one or even in different stages, they're obviously fighting other players than Anthropic.
They're also going after GitHub Copilot, as I mentioned, and Cursor, which is one of the favorite new development platforms that a lot of people have moved to in this AI development era. The interesting thing is Cursor got an investment from OpenAI, and now OpenAI is going to release a tool that's going to compete with it, but we've seen that across this industry time and time again. Another interesting rumor about OpenAI is that, according to The Verge, OpenAI's next generation model, codenamed Orion, which is supposed to be a hundred times more powerful than GPT-4, is supposed to be launched in December of 2024. If this happens, this is going to put OpenAI and ChatGPT in the lead again compared to their competition from the other large frontier model companies, like Anthropic and Google and so on. The other part of the rumor is that, unlike previous releases, this initial release is going to go first and foremost to large companies for product development, and not to the general public. That being said, Sam Altman tweeted that this is fake news out of control. So I don't know if that's true or not. Again, this is The Verge; I haven't been following news from The Verge closely enough to say how accurate they've been. I've been following The Information very closely, and everything they've released has always been accurate. So time will tell. They're obviously working on this; they haven't been hiding that they've been working on it, but maybe the release date of December is just not accurate. Either way, everybody's expecting the next model release from OpenAI to be very significant. Now, in a research paper released by two researchers from OpenAI, they're sharing that they developed a new technique that accelerates the generation of images with diffusion models 50 times over what it is today.
They've developed several different sizes of models, and they're now able to create images with diffusion models that are almost at the top level of the other models today. They're claiming a 10 percent gap in quality, with a significant reduction in compute and time. So far they've done this for relatively small images, 512 by 512 pixels, but once they figure out the capability, they can scale it up and add more functionality to it. And they're claiming that the same concepts will be able to be applied to images, audio, and video, which tells us that these things can be in the backend of the next generation of DALL-E 4, or whatever they're going to call it, and Sora, or whatever they end up calling their video model. We've talked about this many times in the past on the show, that the capabilities are becoming faster and cheaper for two different reasons. One, compute is becoming cheaper, but also new algorithms and new processes are being developed that allow the companies to run on the same compute significantly faster, in this particular case 50x faster. So that's a huge improvement, and I'm personally excited, as somebody who generates lots of images and some videos. Another OpenAI-related news item comes actually from Morgan Stanley. So Morgan Stanley, one of the largest investment banks in the world, has announced that they are expanding their partnership with OpenAI. They're claiming that right now nearly half of Morgan Stanley's 80,000 employees are using OpenAI-powered tools daily. They're claiming game-changing efficiency, with salesforce responses to client inquiries 10 times faster than before. The tool is processing insights from 70,000-plus annual research reports, and these new sets of tools from OpenAI are being used three times more than the previous traditional analysis tools they've been using. So obviously they're seeing huge benefits from this.
And they, as I mentioned, announced that they're expanding this partnership to do more with it. The biggest use cases are summarizing research on stocks, commodities, and industry trends. The tool processes complex queries about specific companies to allow employees to make better decisions, it handles industry-specific jargon, and it creates data visualizations for employees, covering multiple tasks around research and investment preparation. And as you already know, there have been multiple companies around the world doing algo trading for a very long time, so this is just going to expand on that, to the point that the stock market is going to run mostly on algorithms, even for private investors and not just large firms. There have been more and more rumors, by the way, about OpenAI and their tension with Microsoft, and the relationship with Mustafa Suleyman, who took the helm of AI at Microsoft. As I shared with you last week, this frenemies relationship doesn't look to be on stable ground all the time, because Mustafa Suleyman has a history with some of the people at OpenAI, and also because he came from a competitor and is now running this. There are a lot of discussions suggesting that they're not sharing the information they're supposed to share, and that some information that's not supposed to be shared is being transferred over, not necessarily through the right channels. So it doesn't seem that there's a lot of trust between these two organizations right now. And as we know, OpenAI is developing things that compete with Microsoft; we talked a lot about this last week. And Microsoft, under the leadership of Mustafa Suleyman, is also working to develop its own models that could potentially compete with OpenAI. So it will be interesting to continue following this partnership.
Right now, it seems that Microsoft has a lot more leverage in this relationship, because they have a very solid agreement with OpenAI as far as how much of the revenue goes to Microsoft and access to their models, they're the ones that control the compute, and they're the ones investing a lot of the money. But there's this interesting little clause that we talked about when this relationship started in 2023: it allows OpenAI not to deliver the next technology to Microsoft if they define it as AGI, and the authority to determine what is and isn't AGI lies with OpenAI's board. So basically, all OpenAI needs to do in order to unplug Microsoft from the next generation of models is for OpenAI's board to say, okay, this is now AGI. So GPT-5, Orion, or whatever it ends up being called, is now AGI, and hence Microsoft does not get access to it. That's OpenAI's leverage in this entire relationship. As I mentioned, it will be interesting to continue following the relationship between these two companies and see how it evolves. Now, it hasn't been long, maybe a month, since the last time a senior person left OpenAI while voicing concerns about how OpenAI is approaching safety. The current person is Miles Brundage, and I hope I don't butcher his last name, but Miles is leaving his role as Senior Advisor for AGI Readiness. So again, a very senior person on the safety side of OpenAI, and he's making similar claims to previous people who left. Miles is saying that he's planning to pursue AI-related policy research in the nonprofit sector, and he cites his desire for, and I'm quoting, more ability to publish freely, which basically lets you understand that some of the things he wanted to say at OpenAI, he was not allowed to say. As we know, the AGI Readiness team has been wound down and distributed across other segments of the company.
So he urges other employees at OpenAI to speak their minds more freely and discuss the current developments, as well as safety, or the lack of it. And as I mentioned, he's not the first person leaving. We had CTO Mira Murati, Chief Research Officer Bob McGrew, Research VP Barret Zoph, research scientist Andrej Karpathy, co-founder Ilya Sutskever, co-founder John Schulman, Greg Brockman, who is on quote-unquote extended leave, et cetera, et cetera. A lot of senior people have left, and many of them sounded the alarm about the push toward competitiveness without enough attention to safety. So here's just another person on this very long list of people departing OpenAI and ringing the bells on safety. That doesn't bode well, especially when you go back to the topic we talked about at the beginning: how much safety we actually need, given where and how these systems are being deployed. Now, since we're speaking about the relationship between Microsoft and OpenAI, they just launched a $10 million media initiative to fight copyright battles. The idea is to have two funds of $5 million each, with $2.5 million in cash plus $2.5 million in software and enterprise credits to use the models, and they already have initial recipients lined up for these funds. So consider this a fund that will help news organizations, A, adopt AI and, B, get paid for their content. Some of the organizations that were mentioned were Newsday, Chicago Public Media, the Seattle Times, and a few others. The goal is a two-year fellowship program that provides access to Microsoft Azure and OpenAI credits, focusing on implementation in the newsroom, and in return it will give Microsoft and OpenAI licensed access to the news content itself, rather than scraping the information like they have so far.
On the same topic, Meta just announced a partnership with Reuters. It's the first time Meta is cutting such a deal: a multi-year agreement for real news content. The goal is to provide Reuters content in real time through all of Meta's AI chat capabilities, which are already integrated into Facebook, Instagram, WhatsApp, and Messenger. So you'll be able to get coverage and know what's happening in the news just by asking the bot on each and every one of these platforms. This is rolling out right now, so it's an immediate thing that will be available to all of us in the next few days. Now, before we dive into some news from Google, I want to talk specifically about an interview with Google DeepMind's CEO, Demis Hassabis. He talked a lot about AI agents and AGI and so on. It's a very interesting interview, and not a very long one, and I will share the link so you can find it and listen to it. I want to mention a few things from it. A lot of what he talked about we kind of already knew, but even though I've been following Demis for a while now, I learned more about him through this interview, about the way he thinks about AGI and so on. First of all, he thinks we'll get to AGI only in about 10 years. So if you're following his competitors or other people in the industry who are talking about two, three, maybe five years, he looks at it way further out. He's talking about the capabilities that we need to achieve and master to get to full AGI, capabilities that are also required for all the agents we're starting to see right now: planning and thinking ahead, taking action in the real world (so what we talked about before, having access to a keyboard and mouse and so on), reasoning through problems, meaning being able to analyze them, understand the limitations, and work through the different steps of a solution, and improved memory retention.
That's a huge one, because if you think about what we talked about before, being able to solve more complex tasks that may take not five seconds or 20 seconds but maybe a full day, a few days, or a couple of weeks will require the system to hold in its memory everything that is happening across the company, its connections to other components, and everything it has executed through that period of time. So that's obviously a very big one. Enhanced personalization is something else he talks about: the ability to understand the needs of a specific individual or a specific organization in order to serve them in the most optimal way. And finally, the ability to use various tools, software and otherwise. So these are the things he's talking about that everybody's working on right now, the things that are going to push us forward in this more agentic world on the road to AGI. But he also talks about the risks, and he is in the middle of the spectrum, right? He talked in this interview about the doomers, but also about the tech people who are pushing forward very fast, and he's in the middle. The reason he's in the middle, he says, is that he's optimistic and thinks we'll get the best out of it. He really believes it will be able to solve diseases, address global warming, and find amazing energy solutions. So he's talking about this future of abundance, but he's saying there are risks. And I'm going to quote something he said that I find fascinating. For those of you who don't know, Demis is a world-class chess player. Now I'm quoting from the interview: These are new systems with new technologies. They are incredibly powerful. I've seen this in the microcosm of games, something I understand well, like playing chess, where you start with a system, AlphaZero, that's random in the morning; by morning coffee break it's better than it was before and can beat me; and then by lunchtime it's better than the world champion.
And then by the afternoon, sort of within eight hours, it's better than the best hard-coded chess computers. And when Demis lost to a chess computer in the past, those were hard-coded machines; it was really not the AI we're talking about today, with no real machine learning. So in eight hours, the system goes from not knowing how to play chess, literally making random moves, to beating a computer that can beat the best chess players in the world. What he's basically saying is that the ability of these systems to learn really complex things much faster than we understand, once they have access to the right information and the right compute, is beyond our control and beyond our level of understanding. Basically, this thing can take a turn in the road and run in that direction before we understand what's happening, and by then it might be too late. This goes back to my whole point that we have to figure out how, if it's even possible, to put real guardrails on these systems and have fail-safe mechanisms, like a kill switch we can actually flip to turn the thing off. I don't know if that's possible, I don't know if it's even realistic, but I think that's what we need. And since I told you we're going to talk a little bit about Google: Gemini just introduced more applications for Workspace. There are now Gemini AI capabilities in Google Calendar, Google Keep, and Google Tasks that can all be activated from Google's admin panel. So you can activate it for specific employees in your organization, and it can create calendar events, generate notes, manage tasks, access and move data in those apps, and it integrates with the existing Gemini panel in the Workspace apps. Now, I told you I also have a lot of news about image generation and that world. We shared a lot about Adobe's new releases last week, where Adobe released all these amazing capabilities.
And this week, Midjourney launched an AI image editor that basically allows you to take real images, actual photos that you took, and edit them with Midjourney. Look at the demos online; it's absolutely mind-blowing. You can do really cool things. It allows users to modify existing images and transform their style. You can highlight specific segments in the picture and then change just those segments to different colors, different patterns, different textures, and so on. You can even take doodle art, something you scribbled, and turn it into an actual photorealistic image, or any other kind of image you want, like something cartoonish, whatever it is you want. So very powerful capabilities, including a retexture feature that lets you convert things from one style to another, either for the full image or for part of it. And it's becoming very, very good at following the prompts and the user's guidance on exactly what it needs to do. Now, the problem right now is that it's only accessible to users who have generated 10,000 images or more, have an annual paid membership, and have been paying subscribers for at least a year. So it's not accessible to most, but just like with previous releases, this will change in a few weeks, or, as it was last time, a few months, and eventually we're all going to get access to it. The reason I find this interesting, and here I'm going to combine the news from Adobe last week with this Midjourney news, and we're going to talk about Canva in a minute as well, is that the lines between AI image generation and image editing are going to keep blurring. I think the tools that traditionally let you create an AI image, like Midjourney, Flux.1, Ideogram, and so on, are going to move more and more toward tools that allow you to control, change, and edit images, whether they were created by AI or not.
And we're going to see tools like Adobe and Canva and so on move more and more into AI image generation capabilities, and the lines are going to keep blurring more and more. Now, I don't see any larger organizations that are using Photoshop right now ditching Photoshop to start using Midjourney, but I definitely see smaller companies, beginners, and people like me, who don't really know how to use Photoshop and never had the time to learn, starting to use these kinds of tools, because they're just a lot easier to use and they'll give you good-enough results across multiple things. And I think two years down the road, it will be very hard to distinguish between the two kinds of tools. Midjourney is also continuing with their previously announced plans to release 3D capabilities as well as video capabilities sometime in the future. Staying on the same topic of image generation combined with creation tools: a few weeks ago I shared with you that Leonardo.ai was acquired by Canva, and this week Canva released what they call Dream Lab, an entire new suite of AI capabilities built into Canva on top of Leonardo's Phoenix model. It allows for really advanced graphic generation with specific styles, improved multi-subject image creation, and a lot of other capabilities built straight into Canva. You will also be able to generate text in images in a much better way; AI-generated text has been a problem until recently, so I've been using Ideogram every time I needed to create text, but apparently now you can do this right inside Canva. So more and more AI capabilities, creation, editing, and so on, are coming into Canva itself, which, as a Canva user, I'm very excited about. But it goes back to the point I made before: the lines are going to be very blurry between content creation tools, image editing tools, image generation tools, and video generation tools, which are now available in Canva using Kling in the backend.
So all of these tools are built into Canva, and you can create a lot more without ever leaving the platform. Next is Stability AI, which just launched Stable Diffusion 3.5, a major improvement over Stable Diffusion 3. They released three different models: a Large one with 8 billion parameters, a Large Turbo, also with 8 billion parameters, that runs faster, and a Medium one with 2.6 billion parameters. The model achieves enhanced realism, superior prompt adherence, improved text generation, and a lot of other better capabilities. As you probably know, Stable Diffusion is an open model, which means it can be implemented across everything you want without paying licensing fees, as long as you're a small enough company. So you can use Stable Diffusion today for free for non-commercial use, or if you're making less than $1 million in revenue, and I assume we're going to see a lot more companies using it. The Flux model, which was released just a few months ago, has spread like wildfire as far as being integrated into many different tools, and maybe now, with the release of Stable Diffusion 3.5, we're going to start seeing it integrated into more and more places. They're claiming, by the way, that its capabilities match or exceed Flux 1.1 Pro, which is right now one of the most advanced image models out there, and I can say, as somebody who uses Flux regularly, that it's as good as Midjourney on most things. And again, it's free to use. Staying in the open source world, a company called Genmo has released Mochi 1, which they claim is the best open source AI video generation tool. They're claiming they can compete on some capabilities with Runway Gen-3 and with Luma's Dream Machine.
Even if that's not completely true, and even if they're still behind, just like I mentioned a few seconds ago, these tools will grow in their capabilities, and these open source tools will be able to do good-enough work. So even if Mochi 1 is not there yet, they're claiming it will be. Right now it produces lower-resolution, shorter videos, but their goal is to be as good as or even better than Runway, Luma, Kling, and all of those competitors. They're focusing right now on photorealistic content only, so not the cartoonish stuff a lot of people are doing, but as I mentioned, over time I'm sure they will develop more and more of that, and it will be open source, free, and accessible to integrate with everything else. That's the direction this thing is going, which connects back to the point I've mentioned several times before, and which I'll talk more about later when I get to Andreessen Horowitz: where is the financial model for the paid models right now? I'm not a hundred percent sure about that. But staying on the topic of video generation: Runway just launched what they're calling Act-One, which is an AI facial motion capture tool. It allows you to record yourself on your phone making whatever facial expressions you want, without any special equipment, and then bring that into Runway and use it to change the expressions in videos generated within Gen-3 Alpha. So all you need is to create an image of something with a third-party tool; this could be a cartoon, this could be a photorealistic person. You upload the image to Runway, you upload the video of the expressions to use as a reference, and it creates a new video in which the image you created, which could be a cartoon, has the same exact facial expressions you made when recording on your phone. That saves a huge amount of work that studios have had to do somewhat manually so far.
And again, it's going to be available on a platform that costs a few dozen dollars a month, depending on how many credits you want to use. This is absolutely magical, and it's going to dramatically simplify and democratize the generation of high-quality video, including people and their facial expressions. And it doesn't even have to be people, because you can take those facial expressions and apply them to a cat, or to a legendary magical creature whose image you created using a tool like Midjourney. And from image generation, let's talk a little bit about Perplexity. We've talked about Perplexity many times before; they have been the best AI search engine that is, quote-unquote, competing with Google. They're obviously significantly smaller than Google, but they've been growing very fast. Aravind Srinivas, their CEO, announced this week that they're now handling a hundred million queries each week. That's about 400 million queries a month, up from 250 million a month in July, so more than one and a half times bigger in just a few months, which tells you how fast they're growing. But they're also adding more and more capabilities that are turning them into a lot more than just an old-school search engine. They've now added real-time stock quotes, including access to historical earnings data, visualizations, and industry peer comparisons covering stocks and company financial analysis, all powered by Financial Modeling Prep (FMP) data. They're bringing in more and more specialized data, not just general internet data, in order to let people do a lot more. They've also announced new partnerships with Crunchbase, which will provide private company data, and FactSet, which provides structured and unstructured financial data. They're making this available for now to Enterprise Pro users, but I assume it will come to everybody in the near future.
Now, it's still not as competitive, solid, and robust as existing financial research platforms, but the direction is very clear, and this is another place where the lines are going to get blurry, between tools like Bloomberg and this new age of AI-driven tools that will have access to similar data and will be able to do things that previously only companies like Bloomberg and Morningstar could do, things that will now hopefully be accessible to all of us through tools like Perplexity. Now, speaking of Perplexity, Dow Jones and the New York Post are suing Perplexity for a massive illegal copying of publishers' content, and that's a quote from the lawsuit they filed on October 21st. This is not the first time an AI company is being sued by people who publish content; we've seen similar lawsuits, from the New York Times suing OpenAI and Microsoft to a lot of other news outlets suing other platforms. So this is not new, and as we've seen before, it leads to more and more licensing deals, like the ones we talked about earlier. Now I want to jump to a slightly different topic. I've said several times before on this podcast, actually many times, that I don't see the future business model of these models. The reason I said that is that models are becoming cheaper and cheaper with every cycle, and there are better and better open source models that may not be the best in the world but might be good enough for most of the tasks we need, and they will definitely be good enough sometime in the next two to three years. So where does that leave the money that needs to come in to pay for the development of these models? Well, this week Marc Andreessen from Andreessen Horowitz, one of the biggest and most respected VCs in the world, said the same thing. He's basically saying that AI models are becoming commoditized, and he compared it to selling rice.
He's claiming this is a race to the bottom, which is exactly what I've been saying for a very long time: it's becoming a commodity, and there's very limited product differentiation between the providers. And if you can take even an open source model that is good enough and wrap it with the right capabilities, building applications that are as relevant as possible to the people you want to serve, you may provide more value than you would by just competing with the other leading frontier models. Now, it's very interesting that he's saying this, because Andreessen Horowitz was one of the original investors in OpenAI; they participated in a $300 million round in OpenAI in 2023. So when he says that while holding a significant investment in one of those companies, it tells you this is a very serious risk to the business model of these companies. That's it for this week. We talked about many, many different things. We're going to be back on Tuesday with another fascinating deep dive into a how-to with AI, something you can start implementing in your business immediately in order to get higher efficiencies and higher ROI. If you're enjoying this podcast, please share it with other people who can benefit from it. Please also open your phone right now and rate us on Spotify or Apple Podcasts, whatever tool you're using. It really helps us get the message out there, so more people can understand AI, which will hopefully help all of us end up with a better outcome from this amazing and yet scary revolution we're living through. And until next time, have an awesome rest of your day.