Leveraging AI

115 | The Truth is Dead! Long Live Grok-2: AI usage, investments and ROI reports, Gemini Live release, and more AI news from the week ending on August 16

Isar Meitis Season 1 Episode 115

Can We Trust What We See? The Growing Threat of AI-Generated Misinformation

With the rapid advancement of AI, how do we distinguish reality from illusion? In this episode of Leveraging AI, Isar Meitis dives deep into the unsettling developments around AI-generated content, focusing on the explosive rise of Elon Musk's Grok 2 and its uncensored image generation capabilities.

AI isn't just a tool for efficiency anymore—it's becoming a force that can shape public perception, manipulate truths, and even redefine our reality. With Grok 2's ability to generate highly realistic images, including controversial political depictions, we're entering an era where the line between truth and fiction is blurrier than ever.

In this episode, Isar discusses:
- The significant improvements in Grok 2 compared to its predecessors.
- The ethical implications of AI models that can create uncensored, hyper-realistic images.
- Real-world examples of how these images are already stirring political and social controversies.
- The broader societal risks of not being able to trust digital content.
- Practical steps and strategies to protect your business and society from AI-driven misinformation.
- How Grok 2 is challenging the AI landscape with its new features.
- Why Elon Musk's AI ambitions are more serious than ever—and why you shouldn't bet against him.
- The potential dangers of uncensored AI-generated content on social media.
- How businesses can navigate the complexities of AI advancements while maintaining ethical standards.
- The role of regulation and education in mitigating AI's risks.

Check out episode 13 | The Truth is Dead! How AI is Putting At Risk The Trust That is The Fabric Of Our Society - https://multiplai.ai/episodes/13/

About Leveraging AI

If you’ve enjoyed or benefited from some of the insights of this episode, leave us a five-star review on your favorite podcast platform, and let us know what you learned, found helpful, or liked most about this show!

Hello and welcome to a news weekend episode of Leveraging AI, the podcast that shares practical, ethical ways to improve efficiency, grow your business and advance your career. This is Isar Meitis, your host. And as you can hear, I'm fighting some serious allergies, so I apologize in advance for my stuffy voice. But we have a very interesting week, and unlike most of the news episodes I've done, we're going to do a deep dive into something interesting that happened this week. We're also going to talk a lot about implementation and statistics of what's actually happening with AI usage in the world right now, in specific companies, specific states, specific organizations. There's a lot of that this week, and I'm actually excited about it, because it will allow us to talk about broader things and not just simple, straightforward items like new releases, though there's a lot of that as well. So let's get started. The biggest news from my perspective this week is that xAI, the company through which Elon Musk is developing his AI models, just released Grok 2 and Grok 2 mini, which show a huge jump forward from Grok 1 and Grok 1.5, which were released before. Grok 1 was a complete joke, and Grok 1.5 was still nothing serious that anybody really paid attention to beyond "oh, this is a cool, fun toy to play with inside X," previously Twitter. But the reality is Grok 2 is a serious contender to some of the top models. So, like many of the other companies have done, they released a test version of it into the LMSYS Chatbot Arena. I talk about this arena a lot, but if you're new to the show, it's a platform that lets people test their prompts against two unidentified chatbots and then score which one did better. The arena aggregates all that information and can tell, based on real-life use cases, which chatbots are doing better.
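That pairwise-vote aggregation can be illustrated with a minimal Elo-style rating update. This is only a hedged sketch: the actual LMSYS leaderboard fits a Bradley-Terry-style model over all votes, and the starting rating and K-factor below are arbitrary assumptions for illustration.

```python
# Minimal Elo-style update for pairwise chatbot votes (illustrative only;
# LMSYS's real methodology differs -- it fits a Bradley-Terry model).

def expected_score(r_a: float, r_b: float) -> float:
    """Probability that model A beats model B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def update(r_a: float, r_b: float, a_won: bool, k: float = 32.0):
    """Return both models' new ratings after one head-to-head vote."""
    e_a = expected_score(r_a, r_b)
    s_a = 1.0 if a_won else 0.0
    return r_a + k * (s_a - e_a), r_b + k * ((1 - s_a) - (1 - e_a))

# Two models start equal; one user votes that model A answered better.
ra, rb = update(1000.0, 1000.0, a_won=True)
# ra rises and rb falls by the same amount; over many votes the
# ratings converge toward each model's real-world win rate.
```

The key property, which is what makes the arena useful, is that rankings emerge purely from blind human preferences rather than from curated benchmarks.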
They released a test version of Grok 2 into that arena, and it got results comparable to Claude 3.5 Sonnet and GPT-4 Turbo. So it's a very advanced model; it's not a joke anymore. Now, when the earlier versions were released and people started saying this is not serious, this is not going to go anywhere, I said what I've learned to be true most of the time: you never want to bet against Elon Musk, especially when he has a grudge driving him. And in this case there's a lot of grudge and a lot of bad blood between him and OpenAI, so him wanting to claim the top of the leaderboard in the AI world doesn't surprise me at all. And that's before they have even started training Grok 3, which is going to be trained on a hundred thousand new NVIDIA GPUs, in what is considered to be the most powerful AI training machine ever built. That next model is expected to come at the end of this year. But Grok 2 does very well on key, important topics and multiple benchmarks, per xAI themselves. Take that with a grain of salt: this initial feedback on its benchmark performance comes from X itself, and it will be a little while before we get third-party evaluations. But based on that, it's doing very well on graduate-level science knowledge, general knowledge, and math competition problems, a lot of the things these models need to know in order to be better at solving more complex tasks. Both Grok 2 and Grok 2 mini are now available to X Premium and Premium Plus users, and an enterprise-level API is coming later this month. So they're definitely developing this as a serious tool and not just a toy for the X social media platform. But with all this progress, the biggest splash of this release was actually the image generation capabilities.
The previous model was text only, and this new model now enables you to generate images: actually stunning, highly realistic images. But what really made a crazy storm in the last couple of days is the fact that the image generation capability is almost unlimited, without any safeguards or guardrails on what images you can create. If you remember, last week I told you there's a new contender for the best image generation model in the world, the first one that can actually compete with Midjourney so far. That model is called FLUX.1, and it comes from a company called Black Forest Labs in Germany that released it recently. This is the model that was integrated into Grok 2, enabling it to create really amazing images. But it has almost no safeguards. And when I say almost: it will not create nude images, but other than that, more or less anything goes. So within hours, X was filled with crazy, insane photos of anything you can imagine, a huge number of them with political aspects, showing mostly Trump in really weird situations. Some of them aggressive, some of them just strange, but also other political figures, as well as Elon Musk himself. And I'm talking about hundreds of thousands of images that were created very quickly. With some of them it's very easy to understand that it's satire or a joke, but many of them, if you didn't know they were created with an AI, could be sold to you and to me as a true story, and we would probably believe it, because they look highly realistic. Now, this obviously raises many big social concerns. So I went to my favorite social outlet, which is LinkedIn, where I'm very active, and on Friday morning I wrote a post about it that caught fire. I want to read the post to you, and then I want to talk a little bit about the responses, because within a few hours there were dozens of them.
And now, a day later, it's probably over 80 responses. But more than the fact that there were so many comments, the comments were deep and long. I've been posting on LinkedIn for years, almost daily, and I have my fair share of successful posts that got dozens or hundreds of comments. I've never had so many people write full pages of responses with deep thinking, sharing their emotions or ideas or concepts at such length. That's why I think it's important to share this, because it goes way beyond just the ability to create images that look realistic. So, the post itself. And you can go look me up on LinkedIn; if you haven't so far, you probably should. I share AI content there every single day, usually practical stuff. In this particular post, I attached three images: one showing Donald Trump kissing a Mexican cartel boss; another showing Donald Trump in bed in a very laid-back and intimate situation with Putin, both smoking cigarettes and wearing pink pajamas; and a third showing Kamala Harris holding two babies, one obviously Donald Trump and the other, I'm not a hundred percent sure, but I think it's Kim Jong Un from North Korea. I'm going to read the post to you, and then talk a little bit about what happened. The post started with this statement: Donald Trump was caught on camera kissing a Mexican cartel boss. Or maybe not. Grok (xAI's model) just released Grok 2, and it introduces access with no filtering to the new FLUX image generator. It is released just three months before the elections. What could possibly go wrong? In May of 2023, I released an episode on the Leveraging AI podcast called The Truth is Dead: How AI is Putting at Risk the Trust that is the Fabric of Our Society. When people ask me what my biggest fear of AI is, this is it. We have no ability to know what is true and what is not in any digital communication.
Even real-time video conversations with people we know. And 90 percent of our communication is digital. We all must be aware of this, as it will have a significant impact on our society. I do not have any great ideas on how to change this trajectory, and the only thing I can think of is at least to raise awareness of this new and strange world we are walking into. And I'm not even talking about humanoid robots and brain-machine interface chips. What are your thoughts? How can our society overcome the obstacle of not being able to know what is true and what is not? As I mentioned, there are now over 80 comments of people going in depth into this topic, and I want to share with you some of my thoughts on it. First of all, go and listen to episode 13 of this podcast. If you're listening to this, you can just scroll back, find episode 13, and listen to it; I go into this in depth there. But my real biggest fear in AI is what I just mentioned: we have no ability, none whatsoever right now, to know what is true and what is not in any digital communication. Deepfakes today are good enough to run in real time. Meaning you could be speaking to your spouse, your kid, your boss, people you know very well, on any video platform, whether on social media or on a Zoom call, and it may not be them. There have already been cases of alleged kidnappings that forced people to pay ransom for kids who were never kidnapped, but it was very easy to make the parents believe they were. There was one incident involving a financial controller at a large international corporation, in I believe South Korea, maybe Singapore, who paid 20 million to another company in a fraudulent process. It involved him being on a Zoom call with the CFO of his company, whom he knows well, his boss, and another senior director from his company, who explained why the payment was urgent and important, which reduced his suspicion about the email he had gotten earlier, and he made the payment.
So these things are possible right now. Now add to that the complete democratization of news, and the fact that the younger generation consumes most of its news not from news outlets but from social media, and you have the perfect storm. Literally anyone can create any piece of news they can think of, good or bad, aligned with whatever agenda, and share it across the universe, and we have no clue whether it's true or not. And I know some of you are thinking, yeah, I can still tell the difference. You cannot tell the difference. Add to that the new video models and the ability to create videos that are highly realistic, and think about a scenario where a few people, or presumably a few people, share video shots from their phones of a specific event, in a specific city, in a specific location, from several different directions, all telling the same story. It will look extremely realistic, there will be no way to tell the difference, it will spread like wildfire, and none of it has to have actually happened in real life. This is what we're walking into. Now, I'm not an AI doomer. I'm a huge believer in the technology. I think it can completely transform our society in very positive ways, but there are risks we need to be aware of, and we have to find ways to address them. Otherwise, as I mentioned, this puts our society at risk, because if you can't know what's true and what's not, that raises a lot of other questions about many mechanisms we take for granted in how society operates. So I'm saying all of this to give you food for thought. If you want to comment on this, please go to the post. As I mentioned, I just posted it yesterday, so it's easy to find: just go to my profile on LinkedIn and add your comments over there, or send me a direct message on LinkedIn. I would love to hear your thoughts.
And if you have any ideas on how we can address this, other than educating people on what's coming, please share them with me as well. By the way, on the same topic, I hold AI Friday Hangouts, which is an open-to-the-public group that joins every Friday at 1 p.m. Eastern, and we talk about AI. In 95 percent of the cases we talk about specific use cases: people from my community come and raise questions about stuff they're trying to do, or they share tips and successes from the week, and we all learn together how to implement AI successfully and ethically in our businesses. It's really a lot of fun, it's completely informal, and it's just a great way to learn how to use AI. But yesterday the entire conversation was about this topic. Obviously a lot of people have their own opinions and fears, and it's important to share that information, because sharing it will potentially allow us to find ways to reduce the negative impacts of AI and help us benefit more from the positive ones. By the way, if you want to join those Friday Hangouts, as I mentioned, they're open to the public. All you have to do is find me on LinkedIn and send me a DM saying you want to join the Friday Hangouts, and I will gladly send you an invite. There's no commitment, there's no cost to participate, and it's just a great way to learn with other people about how they're implementing AI successfully in their businesses. And now back to the news. Staying on the topic of image generation: a federal judge just allowed several important claims to move forward in a lawsuit filed against some big image and video generation companies, including Stability AI, Midjourney, and others.
Some of the claims the judge allowed to move forward include direct copyright infringement claims against Stability AI, new copyright and trademark infringement claims against Midjourney, copyright claims against DeviantArt and Runway, and so on. Among the specific allegations: Midjourney, as an example, has a style list of over 4,700 artists, where you can say "create an image in the style of" one of them. These artists are claiming false endorsement, basically that their names are being used to suggest it's actually them, quote unquote, creating the output, which is obviously not the case. They're also claiming that these tools were trained on their copyrighted work without any permission, and that some of these services also allow users to reproduce copyrighted work. The interesting thing here is that this is obviously not the first lawsuit against these companies. This is happening left and right against all of the big engines that generate text or images or videos, because they all literally stole everything they could in order to train their models. They're claiming fair use, and I'm not a judge; I don't know whether the outcome will be that it is fair use or not. But that part is not new. What's new here is that the judge is allowing this to move forward to trial, meaning there's going to be a discovery phase in which all these companies will have to reveal exactly what data they used and how they used it to train their models. That's information we have not had access to so far, other than rumors and anecdotal evidence from specific companies, never a broad "show us exactly how you did it," which is going to be very interesting if it actually gets to that. Now, what might the outcome of that trial be? As I mentioned, I don't have a clue; I'm not a legal person. I've
said that time and time again on this podcast. I assume it will end up with some companies writing some very big checks to compensate companies and individuals for whatever damages were presumably caused to them. It has also led, as we know, to a lot of the big players now signing multiple data licensing deals with multiple publishers and companies. And the problem I see with what I just said, by the way, is that it puts the smaller companies doing this at a huge disadvantage compared to the big ones. OpenAI, Microsoft, and Google have a lot more money to deal with these lawsuits and pay whatever fines get placed on them than, let's say, Stability AI or Midjourney, which are significantly smaller players. If these kinds of lawsuits move forward, it's going to put them at a very serious business risk that may force them to shut down. Whether I agree or disagree with the claims, and whether I think they're right or wrong, I am a very big promoter of competition, and I think having smaller players that can do innovative things, like Midjourney, like Stability AI, like Flux, is very important to the overall success of any industry, including the AI industry. Hence, I would love to see a better solution come forward, rather than "pay a huge check that you may or may not be able to pay," because that may leave even more power in the hands of the companies that are already the biggest in the world today. Now, speaking of image and video generation: Runway is the company that gave us Runway Gen-3, which is the most advanced video generation tool we have access to right now. We've seen Sora and we've seen the tool from Google, but none of them is available right now, and among the tools that are out there and actually pretty good, Runway Gen-3 is amazing. They just released Gen-3 Alpha Turbo, which improves the speed at which you can generate videos.
It's seven times faster and 50 percent cheaper than Gen-3 Alpha, which itself was just recently released. It's available right now; you can start using it today across all their subscription plans, including their free trial. And it's absolutely stunning. It's almost near-real-time video production. Or, as their CEO said, it's actually faster to generate the video than to type the prompt that generates it. The amazing thing is that it keeps the same level of quality and performance as the regular model. To put "50 percent cheaper" in specific context: it costs five credits to generate one second of video, and each credit is 1 cent, so every second of video is going to cost you 5 cents, generated at crazy high speed. If you haven't seen demos of this, just go and check it out. And this is the direction everything is going, right? It's going to get better and better, cheaper and faster, across all the different aspects of AI, to the point, as I've mentioned in several different episodes, that I'm not sure how the business model is going to work. If it's going to cost almost nothing, basically free, to get really good models, and yes, it's going to be more expensive to get better models, you may not need the better models, because the free or almost-free models are going to be good enough for 95 percent of the things that we do. I'm not sure what the business case will be for all of that, but it will be interesting to keep watching it evolve. And since we're talking about competitiveness in the AI market: OpenAI released two new models in the past week, again without really any PR or telling the world exactly what they're doing. But they have made two interesting releases. Both of them are variations of GPT-4o. One of them is called 2024-08-08; that's the date it was released.
That model actually took over the top spot of the LMSYS Chatbot Arena leaderboard, meaning that per actual users with actual use cases, it is right now the best model available out there for real-world usage. It knocked Gemini 1.5 Pro Exp-0801, which was just released a week ago, down to the second spot. Again, there were no real release notes or anything from OpenAI stating exactly what this model does. But users are claiming they're seeing big improvements in technical domains, especially writing code, and better performance in instruction following on complex prompts and tasks. Users are also reporting that it works significantly faster, with several claiming they can now develop complete iOS apps within one hour of writing code with this new tool. So what does that tell us on a broader scale? There's a concept in the software world called CI/CD, continuous integration / continuous deployment, which basically means that every time you have a new feature that has been tested, you release it, without a formal big release of the software. That is not new; I did this in my software companies several years ago. What's new here is the fact that it's done without any documentation. Usually when you release new features of a software product, and a lot of the software you're using today probably does this, you get an email, and you get a note in the app or the software itself telling you what has changed, how to use the new thing, what functionality there is, and so on. In the AI world, that just doesn't exist. They just release new stuff, and they do this for several different purposes. One is that they can learn how the thing works out in the real world versus in their lab with their QA teams. So that's a huge benefit to the companies. And we, the people using it, get the benefits of more advanced features almost every single day, but definitely every single week. So we gain
the benefit of all of that. The disadvantage is obviously complete chaos: not knowing which models you're actually using and which ones you should use. If you want to do a serious implementation at an enterprise level, you cannot keep changing models every single week or every single day, especially without any documentation that explains exactly what's changing. So from a deployment and software configuration management perspective, it's complete chaos. And the third question is: if they are releasing these models so quickly, every single week, how much are they really paying attention to safety and to how secure these models are? Sadly, I think we know the answer. That's where we are; I don't see that changing, and I don't see it going away. I think the competition is so fierce that this is what these companies are going to do, and that's their way to make their path to the next level, meaning GPT-5. Components of it are probably already out there and available to us through various pieces that OpenAI has already released. And the same thing with Gemini 2, and the same thing with Claude 4, et cetera. They're going to keep releasing small capabilities for us to test, so we basically become the QA lab for these companies out in the real world. This is working so far, but it may have a very serious backlash sometime in the future. Now, as I mentioned, OpenAI launched another model two days earlier, called GPT-4o 2024-08-06. So it's from two days before, but this model has a very specific purpose: it was developed specifically for software developers. The goal of this model is to be very good at creating structured outputs that follow a JSON schema. So if you're writing code and you need the model to generate output matching a very specific JSON schema, very specific structured outputs, this is the model for you.
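To make "structured outputs" concrete: the developer attaches a JSON Schema to the request, and the model's reply is constrained to match that shape exactly. Here's a minimal sketch of what such a request body looks like; the field names follow OpenAI's documented `response_format` for structured outputs at the time, but treat the specific schema and the sample reply as illustrative, not a real API call.

```python
import json

# Sketch of a structured-output request body for GPT-4o 2024-08-06
# (shape per OpenAI's structured-outputs announcement; values illustrative).
request_body = {
    "model": "gpt-4o-2024-08-06",
    "messages": [{"role": "user", "content": "Extract the event details."}],
    "response_format": {
        "type": "json_schema",
        "json_schema": {
            "name": "event",
            "strict": True,  # strict mode: output must match the schema exactly
            "schema": {
                "type": "object",
                "properties": {
                    "name": {"type": "string"},
                    "date": {"type": "string"},
                },
                "required": ["name", "date"],
                "additionalProperties": False,
            },
        },
    },
}

# The model is then guaranteed to return parseable JSON in that shape, e.g.:
reply = '{"name": "Launch day", "date": "2024-08-06"}'
parsed = json.loads(reply)  # no brittle regex extraction needed
```

The practical win is that downstream code can parse the reply directly instead of coaxing the model with prompt tricks and hoping the JSON is valid.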
Now, as I mentioned last week, there was serious turmoil in OpenAI's leadership, with John Schulman, one of the co-founders, leaving for Anthropic, and Greg Brockman taking a long leave of absence, on top of a lot of other people who left before, including Ilya Sutskever and Andrej Karpathy and some other big names from the AI world. But it seems it has had no impact on the way they work, the speed at which they're working, or, so far, the quality of the stuff they're releasing to the world. Now, going back and combining all of this, the fast releases and adoption and so on: as I mentioned, we're going to talk a lot about adoption and use cases of AI in the world. There was a very interesting piece of research released by two PhD candidates at MIT and Harvard who did a deep dive into how AI is actually used in the world. I'm going to start by quoting them and then dive into the details of what they found. "We're seeing a giant, real-world experiment unfold, and it's still uncertain what impact these AI companions will have, either on us individually or on society as a whole," argues Robert Mahari, a joint JD-PhD candidate at the MIT Media Lab and Harvard Law School, together with Pat Pataranutaporn, a researcher at the MIT Media Lab. They say that we need to prepare for "addictive intelligence," AI companions that have dark patterns built into them to get us hooked, and they look at how smart regulation can help us prevent some of the risks associated with AI chatbots that get deep inside our heads. End of quote. So what did they find in the research? They found that people are forming actual relationships with AI systems, using them as companions, mentors, and more. They also found, not surprisingly, that sexual role-playing and creative tasks are among the most popular uses of chatbots. If you've ever looked at the statistics on how much of the internet as a whole is dedicated to porn, this will not surprise you.
What I mean by that is that the latest statistics, and there are multiple sources and I don't know how many of them are accurate, but they all land on roughly the same number, suggest that 30 percent of the total data transferred across the internet as a whole is porn and sex oriented. So the fact that this is one of the main usages of AI is not surprising. They're also saying something we already sense right now: the productivity gains promised by AI have still largely not materialized, even though I'm going to share with you another article that may claim otherwise. As for what they found people actually use it for successfully: AI chatbots for brainstorming, planning, and general information queries. I say this a lot when I talk to people, when I speak on stages, and when I teach my courses: it is the most amazing brainstorming and ideation tool we have ever had, and it's far more powerful at that than at generating content, which is the first thing people jump into. Using AI for data analysis, strategy, brainstorming ideas, and figuring out processes has significantly higher ROI for your business than creating content, so that's probably the first place you should go, even though most people go to content creation. But the point they're making is that we need some serious thinking about the kind of regulation and training we need to put in place in order to, again, avoid the negative aspects. In this case they're talking about literal mind manipulation and social manipulation, either by mistake or by bad actors who just want to use these capabilities to their benefit.
Now, on the flip side, the recently formed AI-Enabled ICT Workforce Consortium, which was established by some major tech companies, including Cisco, Google, IBM, Microsoft, and some other really big players, has released its report on the transformational opportunity of AI in IT jobs. Their findings are aligned with what I expected; just the numbers are way higher. They're claiming that, based on the research, 92 percent of IT jobs will undergo high or moderate transformation due to AI advancement. Mid-level and entry-level positions are going to be impacted the most, with 40 percent of mid-level and 37 percent of entry-level positions seeing the biggest changes. And they're saying that traditional skills like basic programming and data management are going to become less relevant. So if this is something you're doing today, or something you're considering going to school for, you probably need to rethink your future career. Now, what this consortium was built to do is empower workers through reskilling and upskilling initiatives within these companies. Together they are committing huge amounts of money and initiatives to train 95 million people over the next 10 years in order to make them AI-ready, and to allow them not to lose their jobs but actually do something else once AI does the things they're doing right now. So per them, and I'm quoting, there is a critical need for IT professionals to develop AI literacy, data analysis, and prompt engineering skills. I would say, first of all, I completely agree with them, but the reality is this goes way beyond IT. The reason they're talking about IT is that that's what this consortium is: a combination of IT companies. But the same thing will happen in most industries.
Meaning: if, as a person, you are not looking at how this will impact your career and the path you chose for yourself in life, and if, as a company or a consortium or an organization, you're not looking at how this will impact your entire industry and taking steps accordingly, you might find yourself in a really bad situation as this thing evolves. AI literacy is really critical. I can tell you that I work regularly with multiple companies. I teach AI courses twice a month; these courses have dozens of people in each and every one of them, and most of them are private courses for different organizations, companies, and groups. The impact on the people taking the courses and the companies driving them is incredible; the change can be noticed within weeks in these organizations. Some of them become my consulting clients, so I actually have a view into what they're doing after the course, and many of them stay in touch with me on LinkedIn or join the AI Friday Hangouts and discuss what they're doing. I can tell you that the impact is profound. So if you're not running any AI education initiatives in your company, and for yourself as an individual, you need to start looking into that. If you want to see what I'm doing, just go to my website, multiplai.ai; there's a link in the show notes, so it's not hard to find. But it doesn't have to be me; you just have to do it. Staying on the same topic of AI adoption: Ramp, a large corporation that provides a financial platform for many different companies, and because of that has a view into how large corporations are spending, has released its 2024 business spending benchmark report. This report focuses on Q2 spending, and AI plays a very big role in it. First of all, mean spending on AI across the companies they serve has grown 375 percent year over year, from Q2 of 2023 to Q2 of this year.
Which tells us that companies are making bigger and bigger bets on AI, and more and more companies are doing that. Now, customer retention for AI vendors also improved dramatically. 70 percent of businesses that started using these vendors in 2023 are still using them in 2024. That number was only 41 percent going from 2022 into 2023. So in addition to the fact that companies are spending more, and more companies are spending, companies that started spending continue spending on the topic a lot more consistently than they did before. That makes perfect sense. In 2022 it was the Wild West. There were no clear guidelines, no clear training, and the tools were not ready for real business usage. But that has changed a lot in the last 12 months. Now you have places where you can train your people, you can develop your own training capabilities, and the tools themselves are much better tailored for enterprises and business usage. The list of tools companies are spending money on also offers a very interesting insight. The top two companies, as far as spend, by a very big margin, are OpenAI and Anthropic. That tells you that companies are spending money directly with the large language model companies, meaning that companies are spending most of their money on implementing foundation AI models into their ecosystems. But in addition, a lot of small tools are making a very big splash. Tools like Grammarly and Midjourney and ElevenLabs are actually ranking high among the AI vendors that get a lot of attention, which tells you that companies, in addition to the infrastructure side of things, are also addressing the actual daily use and needs across multiple aspects of the business, which is the right way to go.
So one of the things that I do when I work with my clients is looking at the strategic side, like what kind of licenses you need and what kind of changes you need to make in your business strategy because of AI, but also looking for the small quick wins and use cases that you can benefit from. This report doesn't prove that's right, but it shows that a lot of people are taking the same approach. Now, speaking of AI tools that are growing in traction and usage, a recent report just revealed the growth that Perplexity has experienced in this past year. In this past month, Perplexity hit 250 million queries, which is the same amount of queries they had in all of 2023. That kind of tells you how popular this tool has become. I've been using Perplexity for many months now. I use it every single day. I use it more than I use Google, because it really changes the mindset, going from a search engine to an answer engine, or results engine. I don't want to spend time clicking on links to try to find the information I'm looking for; just give me the information. It just makes a lot more sense. And so this huge growth in Perplexity usage does not surprise me at all. Now, the reality is, while Perplexity is doing great, I don't have a very good, warm and fuzzy feeling about them. The reason is that OpenAI just started releasing SearchGPT, and obviously Google has stuff up their sleeve as well that they're currently not releasing, which I'm going to talk about in a minute. Why? OpenAI has significantly bigger distribution than Perplexity, and once SearchGPT is widely available, assuming it's going to do at least as good of a job as Perplexity is doing, which I don't see a reason why it wouldn't, most people who are already OpenAI users and are not using Perplexity are just going to use SearchGPT, and a lot of people who use Perplexity right now will do at least some of their searches with SearchGPT.
So I think that Perplexity, which has done an amazing job so far, is going to take a serious hit in the amount of traffic they're getting, but I obviously might be wrong. The one thing that I will tell you about these tools, Perplexity specifically, is that it has crazy hallucinations. So if you're using Perplexity, you still need to go and verify that the answers it's giving you are accurate and based on real information. Now, I'm not saying it does it all the time, but I encounter these situations at least once a day. And again, I'm a heavy user; I use it a lot. But I now have my own little process of clicking on the links that it gives me to do a very quick verification of the answer. It's still faster for me than going through 20 different links to look for the information. The information is already there; I just need to quickly verify that it is based on real information from credible sources. You can test that out with the free Perplexity version; it's good enough. I'm currently using the paid version because it allows me to do more advanced stuff, and with different models in the background. So if you need it for heavier usage, I highly recommend trying, at least for one month, the advanced capabilities and the ability to change the models in the background, because it makes a very big difference in the results. Now, I told you I think Google will play a very big role in this as well, and they're not playing the game yet. They're not playing the game yet because they have the most to lose. Google's business model depends on the fact that you're clicking on links, mostly paid links. If that goes away, a huge amount of revenue goes away. And so even if Google has a technological solution that solves a lot of the problems they have with their AI search right now, they have no real incentive to release it. It's actually the other way around.
They have a very big incentive not to release it until they figure out how to monetize AI-based search. That being said, because they have the most to lose, and they have incredibly deep pockets, great technology, and more data than anybody else, I assume they will figure this out. So again, while I love Perplexity and what they've done, I don't see a very bright future for them as a company in the long run. But since we mentioned Google, let's talk about Google a little bit. Google had a big event this week: the launch of the Pixel 9. Guess what they talked about in the first 25 minutes of the Pixel 9 launch event? Not the Pixel line. They talked about AI capabilities, and that makes it very obvious what the focus of Google as a whole is. I assume we'll see a similar thing at the launch of the next iPhone. These companies are completely focused on AI capabilities and integrating them into everything they do. They showed some really advanced AI capabilities that are going to run on-device. Some of them are amazing and are going to be used by everybody. Some of them are a little creepy, going back to how we started today's episode. But it's going to be integrated into everything Google and everything Pixel, including the phones, the earbuds, and the Pixel Watch. So all these AI capabilities are coming to everything Google, many of them embedded on-device to increase privacy and data security, but not all of them. Let me give you a few examples. The most exciting one is Gemini Live. Gemini Live is a voice communication capability, similar to what OpenAI showed.
OpenAI introduced a similar capability a day before the previous Google event to show that they're doing it too. Well, Google beat them to the punch on the actual release, maybe getting the last laugh in the whole exchange. They're releasing it so you'll be able to communicate with Gemini just like you communicate with a regular person, interrupting it in the middle of sentences and having an actual conversation with it. But the cool thing is, they're going to integrate that with everything on the phone. So with different apps and capabilities across everything in the Google universe, you'll be able to have what virtual assistants were always meant to be. We're going to start seeing this finally materialize, meaning you'll be able to ask questions about anything that your phone has access to, which is basically anything, and get very detailed and accurate answers based on actual information that is personalized to you. Another cool and a little creepy capability that they showed is the ability to add yourself and/or other people into pictures that they're not showing up in. So think about it: the whole concept of selfies may go away. We take selfies because we want to be in the picture with something else. Now, it's always a little awkward, because it's a weird angle and we have a big face while everything or everybody else in the background is normal size. You don't need to do that anymore. You can take the picture of the view or the people or the situation that you're trying to capture, and then add yourself very naturally into that image. Now, going back to what we talked about with Grok in the beginning, that raises a lot of questions and concerns, but that functionality is available on the new Pixel phones. Another somewhat creepy and yet very useful functionality that they added is the ability to record, transcribe, and summarize every phone call that you're having.
So I've been using these kinds of tools on online video calls, such as Zoom, Teams, and Google Meet, for months, and they're extremely powerful. And I always wanted to be able to do this on the phone calls I'm having for business purposes, which were the only thing I didn't have documented and summarized. Now that's going to be available inside the phone itself. Again, that raises a lot of privacy concerns, because some people probably do not want to be recorded and summarized, and so a lot of ethical questions have to be raised. I think we've got to get used to a world where everything we do is transcribed and documented somehow, because I just don't see a different path forward. What does that do to human communication as a whole, especially in the digital universe? I don't know. Maybe we'll just get used to it, and that's it, and we won't think about it anymore. But I do think it raises concerns and things to think about. I really hope that all these things, what we talked about in the beginning with Grok and the inability to know what's true or not, and these things that I'm talking to you about right now with the Google deployment, will lead to stronger real-life human communication, just one-on-one, face-to-face meetings with people, because that's going to be the only way we can trust that the things said to us are actually said by the specific people we're seeing. So since we are talking about Google, let's continue with Google and connect it to the topic of how people are using AI and what the ROI of that is. Google Cloud just hired the National Research Group to do research for them on the usage of AI across large corporations. They surveyed 2,500 C-suite leaders, including CISOs and COOs, so basically anybody with a C at the beginning of their title, from global enterprises that make more than 10 million dollars in annual revenue.
And their findings are so positive that I feel it has something to do with the fact that this whole research was commissioned by Google Cloud. But this is still serious research, so I want to share the results with you. 61 percent of the surveyed executives shared that their organizations are actively using generative AI in production, in the daily things they're doing in their businesses. 86 percent of those using generative AI see increased revenue, which on average adds up to 6 percent growth. On the productivity aspect, 45 percent of the people interviewed report that employee productivity has at least doubled by using AI. 56 percent said that generative AI improved security in their organizations, which I found completely unreasonable, and 75 percent cite improved leads and customer acquisition processes, while 85 percent report increased user engagement. Now, the interesting thing is, of those reporting significant growth in revenue of 6 percent or more, 91 percent said that there's strong C-suite-level support for AI initiatives, and 54 percent of those said that they have invested in dedicated generative AI teams. So let's go through this information for a second and see what it means. First of all, the numbers are insanely high. As I mentioned, I work with these kinds of organizations regularly. Some of these organizations I don't know the specifics of, but for the large corporations that are my clients, I know what they're doing, and these numbers make absolutely no sense to me. I think they're way too high. And again, I have a feeling that has to do with the fact that the research was commissioned by Google Cloud. I also have a feeling that the results might be distorted just because of the people they surveyed.
So if they surveyed companies that are currently doing AI implementation on Google Cloud, and that wasn't mentioned specifically in the report that I read, then obviously we're looking at the segment of companies that are actively doing these things versus the entire universe of companies. That being said, there are some very important things to learn here, which have to do with what I mentioned at the end. To see success in this field of AI implementation as a company-wide initiative, and I'm not talking about Jill or Joe or Sandra finding a really cool tool that helps them on the day-to-day, I'm talking about a company-wide implementation, there are three things that must happen. One is C-suite support. You need actual buy-in from the leadership team to actively push this forward. And I see a huge difference between the companies I work with that do this and the companies that don't, just in the speed, momentum, and success it generates for the organization. The second is training and education. We talked about this before: companies that invest in training and education actually see much better results. You can go back to episode 86 of this podcast, called Four Ways to Train Your Team on AI. It is a great episode where I dive into four ways you can train your team to use AI more effectively in a business context. And the third component is a dedicated AI team. If you want to see success with implementing AI across your business, you need to start an AI committee. That AI committee needs to have people from the different departments of your company, in order to provide the inputs, but also so you can have champions in each of the departments for the implementation of AI. I have other episodes where I talk in depth about exactly how to build a committee that will actually be successful in helping you implement AI in your business.
But as I mentioned, I think these numbers are way higher than the average in the world. Still, the report highlights important components that you can start working on today in order to see that kind of success, maybe, in the future of your business. I want to pause the news just for one second to share something exciting from my company, Multiplai. We have been teaching the AI Business Transformation course to companies since April of last year. We've been teaching two courses every single month, most of them private courses, and we've had hundreds of companies take the course and completely transform their businesses with AI based on what they've learned. But this course is instructor-led by me, and it requires you to spend two hours a week, four weeks in a row, at the same time of day, working with me and my team, which may or may not work with your schedule and other commitments. So I'm really excited to share that we now have an offline, self-paced version of the same course. You can log into our platform, and there's a link in the show notes, so you don't have to look for it or try to remember what I say. Literally just open the app you're listening on right now, and there's a link in the show notes that will take you to the course. The course is eight hours of video of me explaining multiple aspects of how to implement AI in your business, going from an initial introduction to AI, if you know nothing, all the way to hands-on experimentation and exercises across multiple aspects of the business, from data analysis to decision making, to business strategy, to content creation. Literally every aspect where you can use AI in your business today, including explanations, tools, and use cases for you to test, as well as a full checklist at the end of how to implement AI successfully in your business.
So if this is something that's interesting to you and you don't have the time to spend with me in an instructor-led environment, you can now do this on your own and drive significant efficiencies in your business using the step-by-step course that we now have available. And now, back to the news. Staying on the topic of the importance of AI education, a very interesting thing happened in California this past week. Governor Gavin Newsom has signed an executive order that is going to drive a partnership with NVIDIA to train a hundred thousand California residents on how to use AI. The goal is to focus on students, educators, and workers, in order to support job creation and innovation based on AI understanding. And it's a very interesting partnership between government and the private sector. In this case, it's bringing NVIDIA's resources, including its curriculum, certification software, and all the capabilities that NVIDIA brings with it, together with government agency support for education across students, educators, and workers, in order to increase the statewide level of AI literacy. I think this is absolutely amazing. In addition, the state of California will support early-stage AI startups to create innovation zones and AI-focused job hubs. So first of all, I'm not surprised it's happening in California. California has always been the hub of tech in the world, and probably the hub of AI, at least in the Western Hemisphere. But I think this is a very important initiative, and I hope to see more and more of those. And the cool thing is, it doesn't have to be at a state level. The same thing can happen at a county level, or even at a school level, where you can partner with industry in order to do this.
I'm now in the process of partnering with several local universities in order to provide education to their students, regardless of what they are currently studying at the university. You can do the same thing in your company, right? You can partner with me, or with other people who provide similar services, in order to build a structure that will allow you to reskill and upskill your employees, so you get better results from AI in the future, which are not guaranteed if you're just going to give people the tools and hope that they'll figure it out themselves. A similar example is happening right now in India. Wipro, which is a gigantic Indian tech services and consulting company, is increasing its partnership with Google Cloud, which we talked about a few weeks ago. They have launched their Wipro ai360 initiative, which, in addition to providing their customers and themselves access to everything Google Cloud and Gemini offer, is developing innovation hubs for training and collaboration with industry, in order to create best practices, tools, and frameworks that businesses can use to successfully leverage the infrastructure they're going to be providing. I hope you see the theme in this episode. The technology itself is not going to get you far, and in many cases it's going to waste a lot of your money and time. It has to come together with education and with developing tools, frameworks, and processes that are going to be ever-changing, because this technology is changing all the time. That means you need a group, a committee, that will drive this change and continue to steer it in the right direction, so that all your employees and your organization as a whole can enjoy the benefits of generative AI. There are a few other really interesting pieces of news that I'm going to go through very quickly, and they're going to appear in our newsletter.
So if you want more details, you can go and find them there. There's a new coding tool that is taking the top position as far as results: it's called Genie, it comes from a company called Cosine, and it is awesome, performing much better on all the coding benchmarks. It is trained on over 50 languages, and it does a lot more than just writing code. It knows how to fix bugs, build features, do code refactoring, and even deployments. So it has very advanced capabilities, similar to Devin; it just does much better than Devin at these tasks. Another interesting piece of news is the launch of PromptPoet, a tool developed by Character.ai, whose founders were recently hired by Google, that helps build content-rich, highly detailed and complex prompts for people with no prompting skills. And DuckDuckGo, the browser that focuses on personal data security, is now offering chatbots, allowing you to pick different models, such as GPT-3.5, GPT-4o mini, Llama 3, and other open source models, and run them incognito, meaning you don't have to be logged in, none of your data gets stored, you can delete all the chats, and so on. So if you want to focus on privacy and still have access to high-end models, you can now use them within DuckDuckGo. There are other pieces of news, as I mentioned; you can sign up for our newsletter and get all of them. But that's it for this week. If you found this episode helpful, and this podcast in general, I would really appreciate it if you do two things. One is rate it on Apple Podcasts, Google, Spotify, or wherever you're listening to this podcast. That really helps us reach more people, and it helps you play a role in driving better AI literacy in the world, which is what we're trying to do, and hopefully you are a big supporter of that. And the other thing is just share it with a few people you think can benefit from it. You have your phone with you right now, obviously, because you're listening to this podcast.
Open the app, click on the share button, think of three to five people you know who can benefit from this podcast, and just send them a message saying, I've been listening to this podcast, I find it very helpful, and you can benefit from it as well. I will really appreciate that, too. We'll be back on Tuesday with another expert episode, diving into how to do something impactful in your business with AI. Until then, have an awesome weekend.
