Leveraging AI
Dive into the world of artificial intelligence with 'Leveraging AI,' a podcast tailored for forward-thinking business professionals. Each episode brings insightful discussions on how AI can ethically transform business practices, offering practical solutions to day-to-day business challenges.
Join our host Isar Meitis (4-time CEO) and expert guests as they turn AI's complexities into actionable insights and explore its ethical implications in the business world. Whether you are an AI novice or a seasoned professional, 'Leveraging AI' equips you with the knowledge and tools to harness AI's power responsibly and effectively. Tune in weekly for inspiring conversations and real-world applications. Subscribe now and unlock the potential of AI in your business.
Leveraging AI
127 | AI and human customer service agents are now indistinguishable (Salesforce CEO), Sam Altman's takeover of OpenAI is complete, the entertainment industry's relationship with AI is evolving, and more important AI news from the week ending on Sep 27, 2024
What’s Really Happening Inside OpenAI? The Shocking Shake-ups You Need to Know!
Are major leadership changes at OpenAI the final death knell for its nonprofit mission? Or are we witnessing the next stage of AI evolution, where profits trump principles? The dramatic departure of top executives is more than just a headline – it’s a signal that things are shifting, fast.
In this episode of Leveraging AI, we dive into the latest news surrounding OpenAI, from the exodus of key leadership to their ongoing push toward a for-profit future. But that's not all – we’ll also explore AI’s influence on Hollywood, the explosive growth of voice-mode AI, and some jaw-dropping predictions that could shake up entire industries.
Here's what you'll discover in this week’s episode:
- Why the latest OpenAI leadership exits could spell the end of its nonprofit roots.
- How advanced voice-mode AI could change how we communicate with machines (and why it's so good, you might forget you’re talking to a machine).
- The mind-blowing implications of AI in Hollywood – are we on the brink of AI-generated actors and films?
- How big tech is racing to build the AI infrastructure of tomorrow, and why Sam Altman thinks failing to invest could lead to future conflicts.
- The ethical questions and privacy concerns that come with wearable AI devices, and why resistance may be futile.
- The impact of AI regulation in the EU and why businesses there could be at a disadvantage (or are they protected from something darker?).
Join Us for AI Education & Planning
If you're looking to transform your business with AI, don’t miss the upcoming AI Business Transformation Course on October 28th. It's a deep-dive, live online course designed to take you from beginner to AI expert in four weeks.
Register here: https://multiplai.ai/ai-course/
Plus, get ready for our 2025 AI Planning Webinar on October 17th, where we’ll cover all you need to know to prepare for AI’s next big leap.
Sign up here: https://services.multiplai.ai/ai-webinar
About Leveraging AI
- The Ultimate AI Course for Business People: https://multiplai.ai/ai-course/
- YouTube Full Episodes: https://www.youtube.com/@Multiplai_AI/
- Connect with Isar Meitis: https://www.linkedin.com/in/isarmeitis/
- Free AI Consultation: https://multiplai.ai/book-a-call/
- Join our Live Sessions, AI Hangouts and newsletter: https://services.multiplai.ai/events
If you’ve enjoyed or benefited from some of the insights of this episode, leave us a five-star review on your favorite podcast platform, and let us know what you learned, found helpful, or liked most about this show!
Hello and welcome to a weekend news edition of the Leveraging AI podcast, the podcast that shares practical, ethical ways to leverage AI to improve efficiency, grow your business, and advance your career. This is Isar Meitis, your host. While last week was the week of the AI agents explosion, this week we have multiple topics, all of them fascinating. One of them is major changes in OpenAI's leadership. That is not the first time, but it seems to be the final nail in the coffin of the nonprofit version of OpenAI. We're going to talk in depth about that, but also cover lots of fascinating news from other leaders in the industry, as well as impacts on creativity with amazing new video capabilities and their effect on Hollywood, plus some big, bold predictions from leadership in the industry. So we have a lot to talk about. By the way, I'll be speaking at the AI Realized conference in San Francisco on Wednesday, October 2nd. So if you are there and you're a listener of this podcast, I would love to meet you. Come and say hi. And now let's get the news started. If you've been a regular listener of this podcast, you know there have been major departures of senior leaders from OpenAI ever since the big event of the ousting of Sam Altman as CEO, his return, and all the turmoil that came after that. One after the other, more and more leaders have left OpenAI, and this week there was another big earthquake, when Mira Murati, OpenAI's chief technology officer for the past six years, announced that she's leaving the company to, quote unquote, pursue personal exploration. Immediately after that, chief research officer Bob McGrew and VP of research Barret Zoph also announced that they're leaving the company. These are top leaders on the technical side of OpenAI.
That's in addition to Greg Brockman, who is on personal leave until the end of the year. It's not very clear what that means, but he announced it at the same time they announced the launch of o1. What does that tell us? All of this is happening as OpenAI is in the midst of raising an amount that is not final yet, but will probably be around $6 to $7 billion at a valuation of around $150 billion. Some big names are going to be part of that investment, including some of the biggest companies in the world: Apple, potentially Google, and obviously Microsoft, who invested in the first round, plus a few other big investment firms. None of these companies, I think, will agree to the current structure of the company, and hence there have been discussions and rumors for a very long time that OpenAI is probably going to separate itself from its origin as a nonprofit organization built to benefit humanity with AGI. As I mentioned in the opening, this might be the last nail in that coffin. Only three of the original 13 founders and leaders of OpenAI are still in the company; everybody else has left. Now, that's not the first time a company gets founded and then people leave. But I think most of these departures have to do with the fact that OpenAI has left the direction it initially took and that people signed up for, both in terms of the purpose of the organization and the level of importance of safety in that environment. To benefit humanity, you need to be very cautious with what you're doing, and I think that went out the window a long time ago; we've seen multiple examples of that in the past year. So the departure of such important figures in the company is obviously a big deal.
And I think it hints that the takeover, if you want to call it that, of Sam Altman, to lead the company in the direction he feels is right, is complete. I think there's going to be very little opposition to Sam moving forward, and I think we will see OpenAI changing into a for-profit company sometime in the near future, and then raising this initial amount of money, which, again, I don't think is a lot of money in this context. We're going to talk a lot about money in this particular episode, including about OpenAI: they're going to raise at least $6 to $7 billion in this round, and I think they're going to raise a lot more than that in the coming year. The other big, interesting piece of news from OpenAI this week is that they finally released advanced voice mode. OpenAI demoed advanced voice mode a day before Google made their announcements earlier this year, but they did not share it with us until this week. There were a lot of rumors why: that it wasn't ready, that it's dangerous, and that there were issues with voices that sounded too much like celebrities, which people didn't like. But it's now available and released, and you can start using it. I must say I've been using OpenAI with voice for a very long time now, mostly for voice input: instead of typing, I just talk to it and tell it what I want it to do. But I never really tried to have a conversation with it; I tried once or twice before, and it wasn't great. This week I found myself driving home late at night, wanting somebody to talk to without bothering anybody, and it was the day this was released. I thought, okay, this is going to be interesting. I'm going to be speaking at a conference next week, I have different ideas on what exactly I want to change and focus my presentation on, and I need somebody to brainstorm this with.
So I had a 20-minute conversation with the new advanced voice mode, and I must admit it's incredibly good. In the beginning, the conversation was really awkward, because it's weird having a conversation with a machine. But about five to ten minutes in, you totally forget it's a machine and you're just having a great brainstorming session with somebody who's an expert on any topic you want. So it was really helpful as I was preparing for my presentation. I think in general this is an incredible capability that is going to take over the way we communicate with machines and computers moving forward, because it just makes so much more sense to communicate the way we always have, with our voice, versus having to type, which is cumbersome and slow and only made sense because it was our only way to interact with computers until now. So I think this has profound implications for how people are going to talk to machines in the near future, and I also think you should try it out, because it's a very interesting experience. For me, one of the weirdest things was this: I was telling it about the conference and my plans, and I asked for its opinion, and it started talking for a few minutes about what it would do and what it would change. It wasn't what I wanted, but I felt uncomfortable stopping it in the middle of a sentence, which is weird, because it's a machine and it's not going to be offended if I interrupt it. I think it's something we'll get used to very quickly, and we'll find the right way to work with this, just like we learned prompt engineering and other skills to make the most out of these AI capabilities.
Now, this advanced voice mode was released in the U.S., but not in the EU, because restrictions in the EU AI Act prevent it from being released there. Specifically, the AI Act has clauses saying that AI systems must not infer the emotions of actual people, and this tool has the ability to detect and convey emotion. So it's currently not allowed in the EU. I don't think the EU had this particular use case in mind when it put the law in place. There are obviously pros and cons here. On one hand, it puts EU people and businesses at a disadvantage compared to everyone else on the planet who can use this functionality. On the other hand, it protects them from these tools potentially manipulating their emotions. I'm not sure where I stand on all of this; I'm sure we're walking into a very weird future right now. This is not science fiction, not five years down the road with time to think about it. This capability is available right now. So what's the right approach? I'm not a hundred percent sure. I definitely think we need very clear regulations on what these tools are and are not allowed to do, and I think they need to be enforced very aggressively, with severe consequences for whoever doesn't follow the rules. By the way, on the flip side of these rules, and since I already mentioned the EU: LinkedIn has started scraping user data to train AI models. They initially claimed it's just to train their own models, and then they said the data might go to Microsoft as well.
So it's not very clear what the training is for, but they did this without changing their terms and conditions, which means they were practically breaking the law, just like anybody else who scrapes data to train AI models, only they did it on their own platform, to their own users, who have an agreement with them, which is obviously wrong. They corrected that as soon as they got caught and the story went public. But if you want to switch it off, you can do it in your settings: there's a toggle for what they call using your data for training content-creation AI models. Find it in your settings and switch it off, and then they presumably won't train on your data. That being said, a lot of other companies scrape LinkedIn to train on that data, so all you're doing is preventing LinkedIn itself from training on your data; you're not really preventing anybody else. The reason this connects to the previous news item is that they did not do this to people in the EU. So obviously these regulations are working: they are protecting the individuals who own the data, the companies who own the data, and the people sharing information online from having their data used for other purposes. So I do think regulation works. Clear guidelines on what companies are not allowed to do, with very severe consequences, are the only way we can protect ourselves from the negative aspects of this AI future. That being said, there are issues with open-source models, and we'll talk about that later today. But back to OpenAI: Sam Altman, as we know, has been pushing for a while for very significant investment in AI-related infrastructure. There was the whole conversation earlier this year about raising $7 trillion from multiple bodies around the world, including Saudi Arabia.
But now Sam Altman has released a blog post, actually on his own personal website and not through OpenAI, advocating for how critical it is that the U.S. specifically invest significantly in infrastructure that will drive AI growth. He went as far as saying that not doing so may lead to a lack of infrastructure and resources, the kind of situation that in the past has led to significant wars. So in Sam Altman's eyes, and I never underestimate his vision, AI is the power that controls the future, and governments will have to invest to secure it, or go to war over those resources. That's obviously not something anybody wants, but we might get there. In parallel, Microsoft and BlackRock, one of the largest investment companies in the world, have launched a $30 billion fund for AI competitiveness and energy infrastructure, all tied into the AI infrastructure of the future. The White House has held a round table with AI leaders to discuss this. So it's definitely a focus, but it's also a very big problem, because there's no clear path to profitability. Putting large data centers in specific cities doesn't necessarily provide economic benefits to the region, because it's mostly data; it doesn't generate a lot of jobs. We've talked in the past about several reports, including from Goldman Sachs, saying there's no clear path to ROI for any of the big investments being made right now. The negative environmental impact of these data centers is also significant: they require huge amounts of water to cool down, and huge amounts of electricity that currently comes largely from fossil-fuel-dependent energy. There are obviously a lot of discussions about building nuclear reactors to power them, and so on. Either way, there are negative implications to pushing this forward.
And there are the other negative impacts we talk about a lot on the show: weaponizing these systems, which there's zero doubt in my mind is already happening, building AI-driven or quasi-AI-driven weapons, deepfakes in the news, AI-generated misinformation and disinformation, and job displacement. So there are a lot of negatives, but the fact that there's a huge push to provide the resources for all of this is undeniable, and it's happening right now. You just need to be aware of it and try to influence it as much as you can, mostly through awareness and education about the negative implications, to hopefully prevent them, or at least minimize them as much as possible. Also in his blog post, which, by the way, was called "The Intelligence Age" and was released this past week, he talks about the fact that AGI is imminent and actually coming pretty fast. He used the phrase "a few thousand days." I don't know exactly how much a few thousand days is, but if we take "a few thousand" to mean two to three thousand days and divide that by 365 days a year, you're in roughly a five-to-eight-year timeframe for superintelligence or AGI, within the previous five-to-ten-year assessments. The truth is, as I've mentioned on this podcast several times before, it doesn't really matter when AGI is achieved, because every single milestone along the way already has very significant impacts. o1 is already better than PhDs in many different topics at answering questions, doing research, and solving complex problems. So we don't need to wait for AGI to get all the implications of what this means for the workforce, society, education, healthcare, and all the other things that make us what we are as a society.
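As a quick sanity check on that timeline arithmetic, here is a minimal sketch. The 2,000-3,000 day range is my own reading of "a few thousand days"; Altman gave no exact figure.

```python
# Convert Sam Altman's "a few thousand days" into rough year estimates.
# The 2,000-3,000 day range is an assumed interpretation, not a quote.
DAYS_PER_YEAR = 365

def days_to_years(days: int) -> float:
    """Convert a day count into fractional years."""
    return days / DAYS_PER_YEAR

low, high = days_to_years(2000), days_to_years(3000)
print(f"2,000 days ~= {low:.1f} years, 3,000 days ~= {high:.1f} years")
```

So "a few thousand days" lands somewhere around five and a half to eight years out, not two or three.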
And speaking about AI education, OpenAI has announced the launch of the OpenAI Academy, an initiative aiming to boost AI skills and careers in developing countries, meaning low- and middle-income countries. This is obviously fantastic, right? It's a great thing, and I really hope more companies go in that direction; we're going to talk in a minute about Google doing something similar. But the funny thing, or the sad thing, if you want to be realistic, is that they've assigned $1 million in API credits to that fund. That's not a lot of money; it's actually a negligible amount in the big scheme of things. In addition, they're going to do other things in the program: host incubators and contests, provide access to AI experts and developers in these developing countries, and build a global network of developers to foster collaboration and knowledge sharing. So, fantastic. It's a good idea, and an initiative that has to move forward. But the amounts are just sad, because they're so small compared to all the other numbers you hear. We just talked about the fact that they're planning to raise around $7 billion, and out of that they're investing $1 million, or maybe a little more with all the infrastructure around it, let's say $5 or $10 million, in providing AI education across the world. That's not enough. In parallel, Google has announced a $120 million Global AI Opportunity Fund. Sundar Pichai announced it at the UN Summit in New York this week, and it aims to expand AI education and training around the world, focusing on local languages: using Google Translate, they can deliver that content and knowledge across the world very effectively.
And they're going to do this through partnerships with different nonprofit organizations around the world. So again, great news: $120 million is a much bigger amount than OpenAI is investing, which makes sense. Google is a much bigger company with an insane amount of positive revenue, versus OpenAI, which is losing billions every single year. But to put things in perspective, there was another article this week about the return of Noam Shazeer to Google. A quick recap, since we talked about this a few weeks ago: he was a leading Google AI researcher who left due to disagreements with Google and founded Character.AI, and recently Google, in a weird licensing deal, did not really buy Character.AI but essentially absorbed it. This particular piece of news says somebody has done the math on how much money was involved in the overall process of getting Noam Shazeer and his team back, and the number is $2.7 billion, with a B. So Google invests $2.7 billion in getting a few leading researchers back into Google, but it's going to invest $120 million in a Global AI Opportunity Fund. You see where the funds are being invested right now: the real money, or the vast majority of it, goes into driving these models faster and better in a fierce, insane competition whose implications nobody knows, while some of it goes to saying, hey, look, we're going to go to the UN and announce this global fund that sounds amazing. In the big scheme of things, this is cents on the dollar, or not even that, compared to the amounts these companies are investing in actually driving the models forward. We have been talking a lot on this podcast about the importance of AI education and literacy for people in businesses. It is literally the number one factor separating success from failure when implementing AI in a business.
It's actually not the tech; it's the ability to train people and get them to the level of knowledge they need in order to use AI in specific use cases successfully, hence generating positive ROI. The biggest question is: how do you train yourself, if you're the business person, or the people on your team and in your company, in the most effective way? I have two pieces of very exciting news for you. Number one: I have been teaching the AI Business Transformation Course since April of last year. I have been teaching it twice a month, every month, since the beginning of this year, and once a month all of last year. Hundreds of business people and businesses are transforming the way they do business based on what they've learned in this course. I mostly teach this course privately, meaning organizations and companies hire me to teach just their people, and about once a quarter we do a publicly available course. Well, that once-a-quarter course is happening again. On October 28th of this month, we are opening another course to the public, where anyone can join. The course is four sessions online, two hours each: four weeks, two hours every single week with me live as an instructor, plus one additional hour a week for you to come and ask questions based on the homework, things you learned, or things you didn't understand. It's a very detailed, comprehensive course that will take you from wherever you are in your journey right now to a level where you understand what this technology can do for your business across multiple aspects and departments, including a detailed blueprint for how to move forward and implement this from a company-wide perspective. So if you are looking to dramatically change the way you, your company, or your department is using AI, this is an amazing opportunity to accelerate your knowledge and start implementing AI.
In everything you're doing in your business. You can find the link in the show notes: just open your phone right now, find the link to the course, click on it, and you can sign up. The other piece of news is that many companies are already planning for 2025, and we are doing a special webinar on October 17th at noon Eastern. That's a Thursday: October 17th at noon Eastern, a 2025 AI planning webinar, where we are going to cover everything you need to take into consideration when you're planning: HR, budgets, technology, anything you need as far as AI implementation planning for 2025. We're going to cover the things you can do right now, still in Q4 of 2024, to start 2025 with the right foot forward, as well as the things you need to prepare for in 2025. If that's interesting to you, find the other link in the show notes; it will take you to registration for the webinar. The webinar is absolutely free, so you're all welcome to join us. And now back to the episode. Another really interesting piece of news that came out this week related to OpenAI concerns Jony Ive, the legendary Apple product designer behind some of the most notable products in tech history. He left Apple a while back to start his own company, called LoveFrom, and there were rumors that he's working together with Sam Altman from OpenAI on an AI product, a physical product. This week he confirmed those rumors. The goal of what they're working on, and I'm quoting, is to create "a product that uses AI to create a computing experience that is less socially disruptive than the iPhone." That's it. They didn't mention what it's going to be or exactly what they're working on, but it's going to be some kind of device, probably a wearable that you don't have to hold in your hand. This could be glasses, a pin, an earbud.
This could be many different things; we don't know the shape and form it's going to take. But we know that two of the most advanced minds, one in creating usable, addictive consumer products and the other in AI, are working together on something like this. There are devices like this today, and we're going to talk about a few new capabilities along those lines. But Rabbit came out with a device that was horrible, and there were various pins and necklaces, and none of them had much success: (a) because they just weren't good enough and didn't deliver on what they were promising, and (b), a problem that will continue to exist, because there are a lot of ethical questions around privacy. When somebody is wearing sunglasses and you don't know if they're recording and analyzing you while they're talking to you, there's a big problem there. The thing I see is that resistance is futile in this particular scenario, because this is coming. How exactly we deal with it, I don't know, but we will have to, because I don't see a near future where this doesn't exist, where everybody isn't wearing a device connected to AI that can help them analyze everything they're seeing and take actions in the digital world connected to what's happening in the real world. Some of it could be really cool: you see someone wearing something you like, you ask what it is, and you order it online with free shipping in two seconds, without actually clicking or opening anything. That is cool. And there's analyzing interesting rocks you find, identifying animals or plants, navigating a city you've never been to, or talking to people in a language you don't understand. There are a lot of beneficial things to it, but there are also a lot of really bad things that can be done with it.
And we have to find a way, again as a society, to deal with that. But whether we're ready or not, these products are coming. To continue that thought: Meta just unveiled its Orion augmented-reality glasses prototype at its Meta Connect event this past week. Meta had a huge event, and we're going to talk about several of the announcements they made, but the coolest, and again the most exciting and troubling, thing they announced is this new pair of glasses. Meta has a partnership with Ray-Ban and has already released consumer-grade glasses that have a camera and a microphone and can do different things. But this is a whole different kind of animal, and it's worth going and watching the demos of Orion from the Meta announcement. Very cool. It does eye tracking, hand tracking, and voice control, and there's this cool wrist-based user interface where you can type on your wrist to do things. It comes with its own set of AI features and relatively small batteries. Right now, this thing costs a fortune: Meta's Reality Labs, the division developing it, lost $16 billion in 2023. That's obviously not the only thing they've developed, but it tells you how much money goes into development like this. I think one of the challenges is going to be how to make this really nice thing, which is amazing in some of its capabilities, at a cost, or a price, that people will be willing to pay. But the other questions are all the ones I asked earlier. What does it mean for society? How do we live in a world where everybody is recording everything all the time and analyzing it with AI? As I mentioned earlier, now is when we need to start thinking about it. If you have any ideas, raise your voice and let's try to figure it out together.
Now, the other thing Meta announced this week, surprisingly at roughly the same time OpenAI announced their advanced voice mode, is that you'll be able to communicate with Meta AI using your voice and get answers in different voices. Some of these are AI voices, but some are voices they licensed from specific celebrities, such as Judi Dench, Kristen Bell, John Cena, Awkwafina, and Keegan-Michael Key. And again, the timing is interesting. I don't know if OpenAI chose to finally release their voice feature because of this announcement from Meta and didn't want to be left behind; they demoed their capability six months ago and sat on it to make it, quote unquote, better, safer, whatever it is they were doing, and now they released it the same week as Meta. So maybe one has to do with the other. This feature will be available immediately across all of Meta's family of apps, including Facebook, Instagram, and WhatsApp, so you can start using the voice functionality right now. They're probably rolling it out gradually, so if you don't have it on your phone today, you will have it sometime in the next few days. Meta also announced the launch of Llama 3.2, their latest and greatest open-source model. The biggest difference from previous models is that Llama 3.2 is multimodal: it knows how to understand images. All the Meta models before were text-only, and now you can insert images into the conversation and it can understand them, relate to them, and work with you on what it's actually seeing. That obviously connects very well to having glasses that can understand what's happening in the world around them. All of this technology is being developed in tandem with a much, much bigger vision, but right now what they're giving us is the capability to work with images in the models.
They released two different models, one with 11 billion parameters, the other with 90 billion parameters. And they also released two new versions of text-only models, with 1 billion and 3 billion parameters, that are faster and cheaper to use than the bigger ones and still better than the previous text-only models. The Llama 3.2 models come with a 128,000-token context window, which is much bigger than we had from them before, and is aligned with what you get from ChatGPT today. So a very capable model that is running open source, and they're claiming that it's very good at understanding images, including charts and graphs and captions on images, and it can identify objects and create descriptions of them and so on. Now I want to connect this to something that you may not be thinking of. Meta has the largest inventory of images in the world, by a very big spread. And that spread is growing every day because of the amount of images people are uploading to Facebook and sending on WhatsApp. They have had the ability to analyze those images for a very long time. The only thing they're doing right now is releasing it to us in a way that we can make sense of, and not just running algorithms in the background. So all they're doing is taking something they had been researching, working on, and deploying for a decade, and giving us access to use it on top of the existing infrastructure and platforms that we already use. Now, they're also claiming that Llama 3.2 is competitive with Anthropic's Claude 3 Haiku, which is their smaller Claude 3 model, and with GPT-4o mini, which again is the smaller model from OpenAI, and that it's better than other open source models like Gemma from Google. So it's a very capable new model that is completely open source, and you can get access to it in all the normal places where you get access to open source models: either Meta's websites, Hugging Face, or llama.com. But they're also integrating all these capabilities into their ad generation and into their enterprise solution, so enterprises can now build agents to do customer interactions and help with purchases on the Meta platforms, which connects to all the agent craze that we talked about in the past few weeks, and specifically last week. They are saying that 1 million advertisers are already using Meta's generative AI tools to create images and descriptions and so on. And they're claiming that there's an 11 percent higher click-through rate and a 7.6 percent higher conversion rate from AI-assisted ad campaigns, which connects to my concerns, and the EU's concerns, and so on, that these models are very good and very convincing at getting humans to take actions or think in a specific direction, which on one hand is great for advertisers, and on the other hand is scary to anybody else. And the final piece about Meta, and part of their announcement, is that their Imagine feature, which is their ability to create images, is rolling out with new functionality to all their platforms, fully integrated with the existing user interface. So users across Facebook, Instagram, and WhatsApp will be able to create images as part of their stories, as part of their profile pictures, and as part of everything else that has to do with communicating on and using these platforms. Some of this existed before, but they've added more and more places where you can use the functionality natively within the existing apps. Now I'm going to quote Mark Zuckerberg in order to explain to you exactly what the vision is. He's saying: unlimited access to those models for free, integrated easily into our different products and apps. So the move is very clear. They have billions of people using their products every single day, and making the AI functionality available within the native apps, in the way people have used the apps before, makes for very fast adoption.
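For anyone who wants to experiment with the multimodal capability described above, here's a minimal sketch of what a vision request to a Llama 3.2 model might look like through an OpenAI-compatible chat endpoint. The model name and the exact message schema here are assumptions; check your provider's documentation for the precise format.

```python
import base64
import json

# Hypothetical sketch: packaging an image plus a text question into a
# chat-completion payload for a Llama 3.2 vision model. The model name
# and content schema vary by hosting provider.

def build_vision_request(image_bytes: bytes, question: str,
                         model: str = "llama-3.2-11b-vision") -> dict:
    """Return a chat payload combining an inline image and a question."""
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return {
        "model": model,
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "image_url",
                     "image_url": {"url": f"data:image/png;base64,{b64}"}},
                    {"type": "text", "text": question},
                ],
            }
        ],
    }

# Build (but don't send) a request asking the model to describe a chart.
payload = build_vision_request(b"\x89PNG...", "What does this chart show?")
print(json.dumps(payload, indent=2)[:120])
```

The payload would then be POSTed to whatever chat-completions endpoint serves the model; nothing here is sent over the network.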
So they're currently claiming that they have 500 million monthly active users of the Meta AI capabilities, and they predict that Meta AI will be the most used AI assistant in the world by the end of this year. And again, they're saying that because it's just going to be a part of apps that people are using anyway. So they have a huge amount of distribution and a very loyal customer base that is just going to use this seamlessly within the existing applications. Now, let's move from Meta to Anthropic. We talked about OpenAI raising money. So Anthropic is also searching for its next raise, and the next raise is supposedly going to be at a 30 to 40 billion dollar valuation, doubling their valuation from their previous round earlier this year. To put things in perspective, the assessment right now is that Anthropic will end this year with 800 million dollars in revenue, which, by the way, is significantly less than OpenAI. OpenAI is projected to end the year with around 4 billion dollars in revenue. But Anthropic is burning, again projected, about 2.7 billion, so they're roughly 2 billion short in 2024. So these companies are burning through insane amounts of cash, both on talent, which we talked about before, how much Google spent on bringing back one person with his team, so a huge amount of money on talent, but also a lot of money on training and inference for all the models that they're running. An interesting new announcement from Anthropic this week: they have released what they call contextual retrieval, which is a new methodology that allows you to get significantly better results when doing RAG, which is getting AI to respond based on information that you provided, so your documents, your databases, and so on. And they're claiming that it reduces the retrieval error rate by 67 percent, so significantly fewer hallucinations by using this.
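Anthropic's cookbook has the full recipe, but the core idea can be sketched in a few lines: before indexing each chunk for RAG, you prepend a short blurb that situates the chunk within its source document, so the embedding carries document-level context. In Anthropic's version that blurb is generated by prompting Claude; the `generate_context` function below is a hypothetical stand-in for that step.

```python
# Minimal sketch of the idea behind contextual retrieval: each chunk gets
# document-level context prepended before it is embedded or BM25-indexed.

def generate_context(document_title: str, chunk: str) -> str:
    # Placeholder: in the real technique, an LLM is prompted to write a
    # short passage situating this chunk within the overall document.
    return f"From '{document_title}': "

def contextualize_chunks(document_title: str, chunks: list[str]) -> list[str]:
    """Return chunks with situating context prepended, ready for indexing."""
    return [generate_context(document_title, c) + c for c in chunks]

chunks = ["Revenue grew 3% over Q2.", "Headcount stayed flat."]
indexed = contextualize_chunks("ACME Q3 2023 earnings report", chunks)
print(indexed[0])
```

A bare chunk like "Revenue grew 3% over Q2." is ambiguous on its own; with the company and quarter attached, the retriever is far less likely to surface it for the wrong query, which is what drives the reported drop in retrieval failures.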
I'm not going to go into the details of exactly how to do this, but they're saying that they've tested it across various domains, such as writing code, writing fiction, scientific paper evaluations, financial document analysis, and so on. And they have released what they call a cookbook, so basically how you can implement this to make the most out of your data. So if this is something your organization is trying to do, it's worth researching what Anthropic just released. Now, speaking of model releases and new functionality, Google has announced two new models on their platform, Gemini 1.5 Pro 002 and Gemini 1.5 Flash 002, which are just new variations of the previous models that they released. The biggest differences are that they're 15 percent faster and have increased rate limits, so you can do more requests per second through the API. They're also claiming higher benchmark performance, like a 7 percent improvement on MMLU-Pro and a 20 percent improvement on math-heavy benchmarks. So better models, for cheaper, that work faster. And that has been the trend all along, and that's going to continue being the trend, where we're going to get better and better models that work faster and cost us less, which, connecting to everything that we said before, has a lot of beneficial aspects, but also a lot of negative aspects as well. Now, since we mentioned a little bit about fundraising: Black Forest Labs, the company that gave us Flux, the image generation capability that has been integrated into Grok, which is the AI model that runs within X for its paid members, is looking to raise additional money. They just came out of stealth a couple of months ago, sharing that they had raised 31 million dollars so far. And they're currently looking to raise a hundred million dollars at a one billion dollar valuation, only a few months after releasing their product. As somebody who's been using their product, I can tell you, it's absolutely amazing.
It's the only image generation model right now that, in my eyes, competes with Midjourney, so it's obviously developed by people who are very capable. And the thing they're going after, in addition to improving their image generation capability, is that they're going to use the money to develop a state-of-the-art text-to-video tool, which I'm going to be very excited about. And now we're going to talk a lot about video generation and what's happening in that field, because a lot is happening. So we've talked a lot about that in the past few months, and I told you even before that 2024 is the year of AI video generation. I said that, by the way, in 2023, but it's turning out to be highly accurate. Runway, the company that maybe has the most capable model right now that is available to us, has announced that they're releasing an API for their platform. It's currently only available for limited access and on a wait list, and it offers Gen-3 Alpha Turbo, which is their faster, slightly smaller model than their flagship Gen-3 Alpha without the Turbo in it. And the pricing for it is going to be one cent per credit, and you need five credits per second. So basically five cents per second, which means that to produce a total of one hour of video, you're going to pay 180 dollars. Now, that sounds like a lot for AI users who are used to paying 20 dollars a month for everything, or getting stuff for free. But if you compare 180 dollars for an hour's worth of video production, it's negligible. It's lower by basically three zeros compared to historical, traditional ways of producing video with lighting and cameras and actors and audio and editing and so on. So the number of 180 dollars for an hour's worth of video produced is basically free compared to traditional ways. And it's very powerful. And now it's going to be available through an API, which obviously has profound implications for the whole creative and video generation industry. A study done by the Animation Guild this year is projecting significant impact on this industry.
And they estimate that a hundred thousand jobs in the entertainment industry are going to be affected by AI in the next two years. Now, at the same time that Runway made their announcement, Luma, which is their biggest competitor in the Western hemisphere, has announced that they are releasing an API for their platform called Dream Machine. So the Dream Machine API is now available, not just on a wait list; you can actually get access to it right now. And it's in roughly the same price range as the Runway API: it's going to cost you about 252 dollars for a full hour of video production. Again, you cannot produce a full hour of video in one go, you produce very short videos, but if you need to produce a total of a full hour, just to compare the two, it's going to cost you 252 dollars versus 180 dollars on the other model. The whole AI-generated video world has been booming, and we talked a couple of weeks ago about Adobe releasing their enterprise-safe Firefly video creation. We talked about Alibaba's models, which are amazing. We talked about some other Chinese models, which are providing amazing capabilities. So this is moving very, very fast. And that's before OpenAI has released Sora. So Sora, which was announced and demoed very early this year, I think February, which I think sent this whole craze into hyperdrive, has not been released yet, other than, two weeks ago, to a few large groups from the industry, like Hollywood studios and so on, to figure out how to release it safely and in a better way. It still produces better videos, longer videos, than any other platform that is available to us today, but it was not released to the public. OpenAI has announced that they have a dev day at the beginning of October. Maybe we will learn there when this is coming out, or maybe we'll even get access to Sora right there and then. I assume they're going to release it soon, because there's a lot of competition that is gaining a lot of traction, and I don't think they would want to stay behind.
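To double-check the per-hour numbers quoted above, the arithmetic is straightforward:

```python
# Runway Gen-3 Alpha Turbo API pricing as quoted: 1 cent per credit,
# 5 credits per second of generated video.
runway_per_second = 0.01 * 5                 # $0.05 per second
runway_per_hour = runway_per_second * 3600   # 3600 seconds in an hour

# Luma's Dream Machine API works out to about $252 for an hour of output.
luma_per_hour = 252
luma_per_second = luma_per_hour / 3600       # roughly $0.07 per second

print(f"Runway: ${runway_per_hour:.0f}/hour, Luma: ${luma_per_hour:.0f}/hour")
```

So Luma comes out about 40 percent more expensive per generated second, though both are a rounding error next to traditional production costs.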
Now, speaking about Hollywood and the impact of AI on it, Lionsgate, which is a smaller production company, and when I say small, they're a huge company, but they're smaller compared to other studios, has announced a partnership with Runway to allow Runway to train on Lionsgate's existing content, both its film and TV portfolio. That is a huge amount of content that Runway will be allowed to train on, with some big-name movies and TV shows. And the goal is to create an AI generation capability for Lionsgate, to be able to produce new content based on their existing content, or, as they put it, to augment their work with AI. Now I want to take you back about a year, to the summer of last year. There was a big strike by the writers and the actors, who were trying to protect their future against the usage of AI and how it's going to impact their jobs. And there was a very long strike, and eventually they won and got the agreement they wanted. And when they did, I said that there's no way this agreement will help them in the future, because once it comes to the point where your studio is either going to go out of business or will have to use AI, the studios will have to use AI. Now, what does that mean for actors? What does that mean for writers? I don't exactly know, but it doesn't look as bright as it does today. Now, I know what some of you are thinking: there's still going to be a place for human actors. Absolutely. And there's still going to be a place for celebrities playing in big feature movies. But that being said, if you go to the east side of the world right now, to China, Japan, and South Korea, there are already AI celebrities that are not real, that are generated by AI, that are followed by tens of millions of people who are buying products from them, who are watching their TV shows, who are watching the concerts that they're putting together. And they're not real; they're all AI-based.
So if you don't think there are going to be Western hemisphere, Hollywood-based fictional AI characters that are as popular as Tom Cruise and Taylor Swift, think again, because it's coming. So again, the implications for the entire entertainment industry are profound, like for many other industries, but maybe even more so. And it's just a matter of time until we're going to see more and more AI-generated content. On the beneficial side of this particular agreement, I see very interesting business models that are actually beneficial to everyone with this new kind of training. So let's say you have an actor, a real human actor, who has participated in a series of movies or in a TV show, and he's highly popular, and now you can create ads with that person that connect back to scenes and themes and actions that happened in that movie, without having to bring the actor over, without having to film. So you can generate these ads at a much cheaper rate, saving everybody money, still selling the same product at the same level of success, still compensating everybody along the way, including the actor, including the studio, while making money for the people who want to sell a product with significantly less overhead. So there are definitely benefits in that process: the ability to spin out new shows, new videos, new movies that we would like to watch, much faster and much cheaper, which will allow us to get more creative content, because a lot of the money is not going to be spent on production; it's going to be spent on creating better content for us, because you'll have more money to spend on that, since the production is going to be probably two orders of magnitude cheaper. Now I'm going to shift to some interesting announcements that are not necessarily industry specific, but are very important for us to understand about the future that we're going into. So Marc Benioff, the CEO of Salesforce, has made a very interesting statement this week.
And he basically said that AI and human customer service agents are now indistinguishable. Basically, what that means is that when you are either chatting or talking with a customer service agent, there's absolutely no way for anyone to know whether that customer service agent is human or AI-based. We talked a lot about this topic on this podcast in the past year, from the early announcement this year by Klarna that they were able to do the work of 700 full-time agents with AI assistants that are getting better customer service scores than the actual human agents, to Octopus Energy, which is reporting 80 percent customer satisfaction on email assistance versus 65 percent satisfaction when it's provided by humans. So again, a 15 percent spread, or better results, from the AI. The flip side is that there are companies that obviously tried stuff like that, like McDonald's, who were trying to get automated order takers, which failed. They tried it in a hundred restaurants and didn't do it properly, but I think it just wasn't done well enough, because it's very obvious to me that when it's done well, and as Marc Benioff just confirmed, there's no way to know. And these agents run 24/7, 365 days a year. They never get tired. They never fight with their spouse. They're never in a bad mood. They know how to speak any language on the planet. And they're connected to all the databases, so they can get much faster answers and resolve issues faster than human agents. So I think the whole customer service industry is going to go through a very dramatic change in the next couple of years. Now, what does that mean for jobs in this huge industry? I think you understand very well what it means for jobs in that industry. I don't think we're going to have human customer service agents at all, other than maybe supervisors and higher-level people to deal with very specific, unique cases, within five years from now, potentially sooner. And to add one last thing to that.
These models, which can now speak and understand what we say, understand emotion, and simulate emotion as if they have emotions, which they obviously don't, have the capability to be very persuasive. So Yoshua Bengio, who is considered one of the godfathers of AI, one of the leading researchers in the field for decades, and a Turing Award-winning computer scientist, says that there are serious risks with the new o1 model that just came out from OpenAI, in its ability to lie compared to previous OpenAI models and compared to other models that exist today. And he's talking about the fact that it will do anything, including lying, including deceiving people, in order to achieve the goals that were set for it. And this was proven in several different experiments by third-party companies, including Apollo Research. So what does that mean? It means that, again, we have, as a society, to figure out ways, together with governments, together with these companies, together with, hopefully, international partnerships, to put limits on what these tools are and are not allowed to do, and to put very significant consequences on anybody who doesn't follow these regulations, so we can benefit from all the amazing capabilities that these tools bring to the world while minimizing the potential negative impacts of that. That's it for this week. There's a lot of other news; there are probably 15 different things that did not make it into this episode. If you want to know what they are, sign up for our newsletter, which has all of that. It has announcements of all our events and training that we're doing, which are deep dives into very specific topics. We have hundreds of people who participate in those training sessions every single week. So if you want to know how to do that, just open your app right now; in the show notes, there's a link to sign up for the newsletter.
And in the newsletter, you can find all of that information, including all the news we don't put into this episode. If you are enjoying this podcast, which I really hope you are, and based on the growth in listenership, I know a lot of people are, then first of all, thank you for listening to the podcast and continuing to consume this. But if you're enjoying this, if you're learning from this, please subscribe. Consider opening your app right now, whether it's on Spotify or Apple Podcasts, and reviewing this podcast. Pull up your phone and do this right now. And also, while you're at it and you have the phone open, share this podcast with a few people who you think can benefit from it, whether friends or family or colleagues or other people that you know. I would really appreciate it: you help the world by letting more people become more educated about the goods and bads of AI and how they can prepare and learn more about it, and you're helping me to grow this podcast, which is your way to participate in the process of driving AI literacy. We'll be back on Tuesday with a fascinating how-to episode, where we're going to dive into a specific AI use case that can benefit your business. And until then, have an awesome weekend.