Leveraging AI

61 | $25M AI fraud that can happen to any company, Google releases their most powerful AI model, Open source models are making big waves, and many more important AI news from the week ending on Feb 10th, 2024

February 10, 2024 Isar Meitis Season 1 Episode 61

🤯 Will AI enable a single founder company to hit $1B valuation without hiring anyone?

This jam-packed weekend episode covers the latest AI developments from tech giants like Google, Microsoft, Amazon, and more.

🔥 Topics we discussed:

👁️‍🗨️ Google finally releases "Gemini", their most advanced AI chatbot

💬 Gemini beats GPT-4 on benchmark leaderboard

🤖 OpenAI working on real "Agent" capabilities

🛒 Amazon launches "Rufus" - an AI shopping assistant

🪑 IKEA tries using AI to recommend furniture

📑 Microsoft integrates Copilot into OneDrive

🚨 $25 million fraud using fake AI video call

👔 Altman speculates on the potential for a single founder, AI-powered unicorn


Learn the details and listen to Isar's thoughts and analysis on key topics in this fascinating episode.

About Leveraging AI

If you’ve enjoyed or benefited from some of the insights of this episode, leave us a five-star review on your favorite podcast platform, and let us know what you learned, found helpful, or liked most about this show!

Isar Meitis:

Hello and welcome to a not so short news weekend episode of the Leveraging AI podcast. Why not so short? Because a lot of really big things happened, and a lot of interesting and troubling things happened, all in the last week or so, and there's a lot to share and also a lot of thoughts that I have on some of these topics. At the end of the episode, we will discuss two very interesting topics. One that you have to be aware of, because it includes a $25 million fraud using AI that can happen to any company today if you're not aware of it. And the other is some very interesting thoughts from Sam Altman on where the future of AI may lead in the world of business. So it may not be as short as some of the other episodes that I do on weekends, but I promise you it's packed with really valuable AI information. The biggest news of this week comes from Google. Google finally released their most advanced generative AI model. So just as a quick recap, Google announced Gemini late last year, and they released two levels of their models: the smallest model, Nano, which is intended for mobile, and what they call Pro, which was intended for regular use and was powering Bard. They said that they're going to release the most advanced model, which they were calling Ultra, in Q1. Well, they finally have done this this week. It's still called Ultra, but the actual formal name of the product powered by Ultra is Gemini Advanced. They also rebranded Bard as Gemini. So now Bard is Gemini, and there is no confusion between the model and the actual product. And it's an extremely powerful model. It's multimodal to an extent, meaning it knows how to read images and write code and generate text and images, and it also has some side products that were released with it that can generate music as well. So they're definitely building additional capabilities.
They have released an app for it as well, which was initially released on the Android operating system, but I believe it's already released on iPhone, and if not, it probably will be in the immediate future. It will be integrated slowly with everything you know from Google. It will be integrated into Google Workspace applications, such as Google Sheets and Google Docs, and it will be integrated into Google Cloud, meaning it will replace Duet in those platforms, which, I must admit, was really underwhelming so far, so I'm actually excited to see what it is going to do. Now, to use the advanced model of Gemini, you need to pay a subscription to Google for what they call the Google One AI Premium plan, which costs, not surprisingly, 20 bucks a month, which is the same amount that you would pay for the premium models from the other competitors such as Claude and ChatGPT. But you do get two full months of free trial to test this thing out, which I highly recommend doing, because I can guarantee you, you will find things Gemini does better than ChatGPT-4 Turbo, and you'll find things that ChatGPT-4 Turbo does better than Gemini. And you'll have to decide which one you want to keep, or whether you want to keep both. So playing with them regularly over the next two months for free makes a lot of sense to me. With the very little experimentation I had time to do since the release till now, I'm impressed with its ability to reason, understand context, and create content. I haven't done any more advanced things with it, such as analyzing a lot of data and so on, which I have been doing with ChatGPT for a while, but this is definitely planned for the next couple of weeks and definitely for the next couple of months, until I run out of the free trial and then I'll need to decide what I'm gonna do. Most likely I will keep both and continue trying both in parallel. I was not impressed with its image generation capabilities. I still think that both Midjourney and ChatGPT generate better images.
I also found it reluctant to generate a lot of the images I requested, for different reasons. In some cases I was able to convince it to go ahead and create the images, and in some cases I was not. So far, not great results on image generation, really good results on text generation, and I have not yet tested data analysis and stuff like that, which I will, and I will keep you posted. Another, by the way, very interesting piece of news that came just before the release is that Gemini Pro, which was powering Bard before they released their top version, has overtaken GPT-4 to become the second highest ranked chatbot on the LMSys board. Now, for those of you who don't know the LMSys board, it's probably the only leaderboard of generative AI tools that actually matters. The way it works is a blind test: you go to the platform and input different kinds of prompts or things you want it to do, and it will generate two results from two different bots whose identity you don't know, and then you give it feedback on which one you prefer. Tens of thousands of people do this every single month, and that's what drives the ranking. So it's not a specific benchmark that the model can be trained on. This is actual people using actual use cases, ranking blindly which model they prefer. So right now the top dog is still GPT-4 Turbo, and number two was Bard, but this was before they upgraded it to Gemini Ultra, which should be more powerful and may or may not overtake GPT-4 Turbo as the top dog in the generative AI world based on actual real world usage. The one thing I will add that you need to pay attention to, before you dive all in and give it all the data that you give to your OpenAI ChatGPT, Claude, et cetera, is that I suggest you read their data privacy terms and disclaimers. It is providing Google a lot more access than the other platforms are providing.
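Before moving on, a quick technical aside on how the blind pairwise votes on a board like LMSys can be turned into a ranking. The usual approach is an Elo-style rating update after each head-to-head preference. The sketch below is purely illustrative: the starting rating of 1000 and the K-factor of 32 are arbitrary assumptions, not LMSys's actual parameters, and the real leaderboard uses more sophisticated statistical aggregation.

```python
def expected_score(ra, rb):
    """Probability the first player wins under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((rb - ra) / 400.0))

def update(ratings, winner, loser, k=32.0):
    """Shift rating points from the loser to the winner after one blind vote."""
    ea = expected_score(ratings[winner], ratings[loser])
    ratings[winner] += k * (1.0 - ea)
    ratings[loser] -= k * (1.0 - ea)

# Two hypothetical models; each vote is a blind user preference.
ratings = {"model_a": 1000.0, "model_b": 1000.0}
votes = ["model_a", "model_a", "model_b", "model_a"]
for w in votes:
    l = "model_b" if w == "model_a" else "model_a"
    update(ratings, w, l)
```

After these four votes, model_a ends up ranked above model_b, and because points only move between the pair being compared, the total rating mass stays constant.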
Now, if you're like me and you are in the Google world anyway, I don't have much to hide, because my documents are in Google, my emails are in Google, I browse on Chrome, and I use Android. So they basically know everything about me, and I have very little to hide from them anyway, other than, obviously, clients' data, which I'm not going to put into the system, at least in the current setup. On ChatGPT, I've opted out of having the model trained on my data, so over there I have no problem. Claude does not train on my data to begin with, and so the only problematic issue I have right now is with Google Gemini. I assume that will change sometime in the near future; otherwise, a lot of people like me will not share the really important data, which means it's gonna become not as impactful for businesses, and they're gonna desert the model. So I think they will change that, or at least they'll allow you to opt out over time. Several pieces of news from the Google Research side. Those of you who haven't seen Google's new Lumiere AI video demo, go check it out. It's spelled L-U-M-I-E-R-E, Lumiere. It's a new kind of video diffusion model, which, instead of rendering frame by frame, renders the entire video at the same time, which creates much more coherent motion between the different frames, much better than all the other models. It's really impressive, and I'll be really surprised if that thing doesn't, at a certain point this year, join the Gemini tools as a way to create videos straight within Gemini. By the way, right now there is no way to access it, not even as an open source trial, et cetera. It's just a demo that they've done for research purposes. Another interesting announcement from Google Research came in the announcement of what they call Mobile Diffusion, which is a new approach for text-to-image generation on a mobile phone. All the existing models
require a lot of computing power and take a while to render images, while Mobile Diffusion can render relatively low resolution images on a phone in less than half a second, generating pretty impressive results. So again, if you wanna see the results, go and check out the research paper. It's called Mobile Diffusion, it's on the Google Research website, and you can go and see what it's doing. But to generate fair quality images in less than half a second on a mobile device is very impressive. And that, I think, is the future of a lot of these, because we will want to generate these things on the fly for multiple usages, and not just when we're in front of the computer. So this is another angle where we're going to see these models evolve, allowing us to run very specific capabilities locally on mobile devices, running much faster and with less compute demand. Now, from Google to the second biggest piece of news that came out this week, which is really exciting and really scary at the same time: information was leaked from OpenAI that they're working on developing real agent capabilities, meaning an AI software that will be able to control and run other software, whether on your computer or your mobile device, and complete very complex tasks based on the user's needs. As an example, things like gathering entire company data and connecting it to your CRM, or creating trip itineraries, including booking flights and booking a car and finding the hotels and so on, or doing automated outreach to specific people based on their LinkedIn profiles and company data online, etc. On one hand, this is really exciting and really interesting, and the idea is obviously not new. There have been discussions about agent capabilities for a while. The reason I'm saying it's scary is that to do that, you will have to give those platforms access to multiple pieces of software. And as we know, those AI models sometimes do weird things that are unexpected.
In addition to doing things that are unexpected, these systems hallucinate, meaning they make stuff up and do things that are not exactly what you meant and do not exactly align with the feedback or the data that they have. And so, how will we be able to trust these systems enough to give them access to critical things that we're doing without any supervision? Now, I know there are multiple companies out there today that are developing tools on top of generative models that dramatically reduce the chances of hallucination and increase the accuracy of the data that they're using, and I assume that's gonna be part of the solution. There has to be some kind of fail-safe mechanism as well, to be able to stop it from taking over your computer. Or, if you want to take it to a really big extreme, think about different malicious code components that could be injected into your computer that would just wait for that model to be able to control something, and then the virus or the malicious code will take control over that model, and now it's controlling your computer, your phone, the software that you gave it access to. So I see a lot of issues, both from a data security perspective and from a system going rogue and doing not exactly what you meant. But that being said, I don't see that stopping it. Like a lot of other things in this AI world that are moving faster than we can actually understand and control, I see this moving forward. I see this, if it's working, being probably the biggest game changer as far as productivity in human history, because it will be able to create subtasks for itself to complete a greater task while understanding the context, the data, the connections, and so on. And so I see this thing moving forward.
I think it will be a very interesting thing to follow, to see what safety mechanisms are put on top of it, both for accuracy of execution as well as from a security perspective, to prevent this thing, either on its own or combined with some kind of computer virus, from taking over your computer or your phone or your company servers and doing stuff that is completely unintended. From these two giants to two other giants that we don't talk about a lot in this context, though one of them for sure has been doing a lot of AI things for a very long time, and that's Amazon. The other is IKEA, and we're gonna talk about both of these use cases in a second because they're related. Amazon just announced Rufus. Rufus is an AI shopping assistant. It's going to run within the Amazon app, and it will power a shopping advisor that is trained on the entire Amazon catalog, including user reviews and all the information that they have. And it will allow us, the users, the Amazon shoppers, to get answers to product questions, to get recommendations, to do comparisons, to facilitate discovery, to compare different products with one another, and to get purchase suggestions based on an occasion or a need or specific requirements that we have. Really being a shopping assistant that is based on all the data that Amazon has. Now, Amazon is not new, obviously, to large data and AI. Everything we do in Amazon, from their pricing model, to what things you're seeing, to the ads they're running, to what they recommend we purchase in addition to what we're looking at right now or similar products, all of this is an AI engine that's running behind the scenes. They obviously also run AWS, and they have a lot of AI models running over there as well, so this is a very logical step in the right direction. It just shows us where this world is going as far as implementation and application. This is a very
useful tool, assuming it's gonna work well. Right now it's launched only as a beta to a small group of US users, but they're planning to roll this out to all the US mobile app customers in the next few weeks. So if you're an Amazon shopper like me, you'll start seeing Rufus, and you'll be able to interact with it and see whether it's doing well or not. Knowing what I know about Amazon, I will bet that it's gonna be a very useful tool for those who will be willing to start using it, and over time, I assume most of us will start using it. This might become, over time, our primary way to engage with the Amazon app, and more likely, moving forward, with more and more software and applications we'll just want to talk to, instead of typing and searching and doing all the stuff that we're doing today, 'cause it just makes a lot more sense. Now, a different company that has taken somewhat of a similar approach, which doesn't seem to be as successful so far, is IKEA. IKEA just launched a GPT on the OpenAI GPT store that allows you to search and get advice on what to shop for at IKEA based on images from your house. So the idea is you can take an image of a section of your house, and an IKEA bot will recommend specific furniture for different sections of your house. That sounds like a brilliant idea. From the feedback that I've seen online so far, it's not working that great, and it's mostly sending you back to the IKEA website to research on your own. But I think the direction is the right direction, meaning it will be more than just a search tool for the products. It will help you understand what can work well in the scenario you have right now. In the IKEA scenario, this might be a coffee table next to the couch you already have, or the right cabinets to fit the size of your room, etc.
But in other cases, like shopping for clothing, it could offer you specific matching clothing for your style, based on the clothing you have right now in your closet or on what you're wearing right now, etc. I definitely see assisting AI helping us in different actions that we're doing today in ways that we don't even anticipate yet, but it will definitely help us in every shopping experience, and most likely in a lot of other aspects of our business and personal lives. We spoke about Google and their rollout of Gemini across everything Google, so Google Workspace, Google Cloud, et cetera. Microsoft has been in this for a while in their partnership with OpenAI, and Microsoft Copilot, which is their brand name for everything AI in their platforms, has also announced some big updates. Maybe the biggest one this week is that Copilot is going to be integrated into OneDrive, and it will allow you to talk to all the data you have on OneDrive without actually searching and opening any documents. It will support a wide range of formats, such as Microsoft Office files, PDFs, text files, and so on. It will come with a separate kind of license, a paid Copilot for Microsoft 365 subscription for another 30 bucks a month, which probably is gonna be on top of what you need to pay for Copilot Pro, which is 20 bucks a month. But that being said, if you have a lot of documents on OneDrive and this is gonna save you even an hour a day, that's a no-brainer. And if you're running an organization where you have a lot of people using OneDrive, then it's definitely a no-brainer to help people save time just by asking questions. This is also going to solve a lot of questions about what infrastructure to use, how to integrate it, and a lot of IT and security questions for the huge number of companies which already use Microsoft OneDrive for all their documentation and business data.
And so this will become a more or less seamless thing to do, just allowing you to use Copilot across everything that you're doing, which will be extremely powerful if you are already using the Microsoft ecosystem. This new feature of Microsoft Copilot is supposed to roll out around May of 2024, but might move either forward or backward on the timeline based on when it's exactly ready. Microsoft also made some really nice updates to the user interface of Copilot across both web and mobile. It looks cleaner, it suggests different prompts for you, it designs images better, and you have more control over what it designs, such as colorizing, blurring, and styling different images. Overall, ongoing updates to the Copilot capabilities, which Microsoft is pushing with everything they can. Since we spoke about Google's research, let's talk a little about Microsoft Research. So Microsoft has been progressing with a concept they call LASER, which stands for LAyer-SElective Rank reduction, and the goal there is to boost LLM accuracy. It selectively replaces weight matrices within the LLM with lower-rank approximations, effectively reducing the amount of information stored in the weights. And surprisingly, by reducing the weights this way, they're raising the accuracy of the model. They've tested it with several open source models, and they saw accuracy growing by 20-30% for some specific tasks, so not across the board, but definitely a huge improvement in some of them. In some cases, it went up from about 70% accuracy to 97% accuracy, which makes a very, very big difference: it's the difference between knowing the answer you're getting is wrong one out of three times and knowing it's probably true. Going back to what we said as far as agents, these kinds of capabilities, knowing that the answers are most likely accurate versus, eh, they might be accurate, make a very, very big difference for every usage, but definitely for broad implementation across everything that we do in the business world.
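To make the LASER idea above a bit more concrete, the core operation is replacing a weight matrix with its best low-rank approximation via singular value decomposition. The NumPy sketch below is illustrative only: the matrix size and rank are arbitrary assumptions, and the actual method also involves choosing which layers to reduce and by how much, which is omitted here.

```python
import numpy as np

def low_rank_approximation(W, rank):
    """Best rank-`rank` approximation of W (least-squares sense) via truncated SVD."""
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    # Keep only the `rank` largest singular values and their vectors.
    return (U[:, :rank] * S[:rank]) @ Vt[:rank, :]

# Hypothetical stand-in for one weight matrix of a transformer layer.
rng = np.random.default_rng(0)
W = rng.standard_normal((64, 64))
W_reduced = low_rank_approximation(W, rank=8)
# W_reduced has the same shape, but only rank-8 worth of information survives:
# it can be stored as 64*8 + 8 + 8*64 numbers instead of 64*64.
```

The counterintuitive finding the transcript describes is that, for some layers and tasks, throwing away the small singular-value components this way acts like denoising and actually improves accuracy.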
Another player that I really like that announced something interesting this week is Perplexity. Perplexity, if you don't know what it is, and we've talked about it several times on the show, is what you'd get if Google Search and ChatGPT had a baby. It's basically an AI-based search engine that I absolutely love, and I find myself doing more and more of my searches on it versus on Google, which tells you that it's actually doing a very good job, because it literally gives you the answers instead of you having to go through X number of results to figure out the answer on your own. They've just announced Copilot mode, a toggle button in the interface, and if you toggle it on, Perplexity will try to understand exactly what you're trying to figure out. It will ask you some follow-up questions through a simple menu that you can choose from. Those questions clarify further for Perplexity exactly what you're looking for, and will hence give you even better, more accurate results. Going back to the concept of assistants across the different things we're doing: this is assisting us, the users, to have a better, more accurate search than we started with, based on Perplexity's understanding of the data and its context of what we're actually searching for. And by asking us clarifying questions, it allows it to get better answers the first time around versus having to do additional queries. I find this absolutely brilliant. I really like it. If you're on the free plan, you only get a few of these a day; if you're on the paid version, you can use as many of those as you want using Copilot mode. We talked a lot about the big companies and the closed models.
But a lot has been happening this past week in the open source world as well. While the GPT store is gaining a lot of traction, with more and more people creating GPTs and using their own GPTs and other people's GPTs, Hugging Face announced that they will allow users to create custom chatbots very quickly, in what they say, quote unquote, two clicks, using a new tool called Assistants. You'll be able to create these and make them publicly available on Hugging Face. For those of you who don't know what Hugging Face is, Hugging Face is the largest open source repository for AI models. It's available to anyone, and you can find literally anything you can imagine over there. For all of them, you can get access to the code, and some of them are available for testing, hosted either by Hugging Face themselves or by a private person using Hugging Face to let you try different models. Now, it still does not have some of the functionality that GPTs have, such as the ability to upload data to it or the ability to connect to different APIs through schemas, which GPTs have, but for the basic usage of creating a complete chatbot flow, you can do that, and you can pick from a variety of different open source models, such as Llama 2 or Mixtral, both very capable open source models. Speaking of Hugging Face and open source models, Hugging Face has its own leaderboard that ranks open source models based on six common leading AI benchmarks, and as of this week, there is a new top dog that has passed all the existing open source models. It's called Smaug-72B, Smaug like the dragon from Tolkien's The Hobbit. It was released by a startup named Abacus AI. It was built on top of Alibaba's Qwen-72B model. It has passed top models like OpenAI's GPT-3.5 on multiple benchmarks. It's the first open source model to ever exceed an 80 point average score across these six different evaluations.
It actually scored just over 80, while the second-ranked open source model is at seventy-nine point something. So, a big jump forward in open source capabilities. Abacus is saying that they focused their tuning of this model on reasoning skills, and hence it has better math and logic performance than most models out there. Another interesting piece of open source news comes from the Allen Institute for AI, also known as AI2. They've unveiled a new model they call OLMo. The interesting thing about this model is that they've released it including all of its components: the code, the weights, the training data, the checkpoints. Basically everything they've used in order to create this model was released with the model itself. That obviously allows more flexibility and more capability to anybody who wants to take this open source model and build more stuff with it. A lot of companies who release open source models do not release a hundred percent of what they're doing, and this is the most open that open source can be. So, combining all these open source news items together, you understand that a lot is happening in the open source world. The models are getting bigger, they're getting faster, and they're getting shared with more people. That drives more innovation, and it's going to be a very close battle between that and the closed models from the big tech companies. The beauty of this for people like us is that we'll have a lot more to choose from, whether for personal usage, or for company usage, or education usage, whatever the case may be. We have more and more really powerful models to choose from. The scary side is, obviously, that these really powerful models are open source, meaning anybody can take these models and train them for specific tasks that could be problematic or even completely illegal or dangerous to humankind.
And since there are no guardrails, and you get the code, and in some cases, as I said, all the different components needed to change the model, it allows people to potentially do harmful things with it. That being said, this is the world we live in. There's no going back from where we are, especially since there are some really big players releasing these open source models, including Meta and, this past week, even Apple. Apple is a company we don't hear about a lot in the generative AI world, even though they're a huge player in the industry as far as the AI capabilities on their phones. There are a lot of rumors about what they're going to release at their next developer day, which is probably gonna be packed with AI news, and definitely in the next iPhone release. But for now, they have released a research paper that describes a new way to train on synthetic data that allows models to train three times faster while getting 10% better accuracy. For those of you who don't know what synthetic data is, it means data that is generated digitally instead of real data collected from the outside world. A lot of models do some of their training with synthetic data, because it's easier and faster to generate and obtain for some of the model training. They're suggesting that this new method will dramatically improve efficiency and performance over the current existing methods. This means that companies using it will be able to train models faster and more accurately with less effort, which is something all these companies want. Now, the other thing Apple released this week is what they call MGIE, spelled M-G-I-E, an open source model that allows you to manipulate existing images by giving it text prompts describing what you wanna change in the image. This engine understands what's in the current image.
It understands your prompt, and it tries to do the best that it can to change the image in order to make the changes that you wanted. You can either manipulate the entire image, such as adjusting lighting or clarity or applying different filters, or you can use it to make specific desired edits in specific sections of the image. I've played with it quite a lot in the past few days, and I must say that the overall image updates are pretty good. You can take an image and change the lighting, change the direction of the lighting, change the time of day from day to night to evening, you can apply different filters, and it actually provides pretty good results. On the flip side, changing things within an image, changing a specific section, yielded mediocre results, at least in the tests that I've done. But this is now available as open source on GitHub, for you to get the code, or as a demo on Hugging Face, and I will share the link in the show notes so you can go and try it yourself. Now, speaking of open source and generative capabilities, the AI race in China is as fierce as it is anywhere else in the world, and some of the biggest progress there this past week has been in video generation. Tencent unveiled an updated version of a model that they've open-sourced, called the DynamiCrafter video diffusion model, which can now generate 640x1024 resolution clips from either text or images or a combination of the two, similar to the models that we know here in the Western world, such as Runway and Pika. And that's a huge jump, because their previous version could only generate videos at a resolution of 320x512. So this is four times the resolution, and that's a very big jump in a very short amount of time.
This comes as all the tech giants in China are racing to provide these video models, and this includes ByteDance and Baidu and Alibaba. They're all moving very, very fast on AI capabilities in general, and specifically on video generation. As I mentioned multiple times on the show, 2024 is gonna be the year of AI video, and it will be extremely surprising to me if, by the end of this year, we will not be able to generate videos that look completely realistic, just like the images from Midjourney are right now, at least on some of the leading platforms. The interesting thing about the China approach is that all these companies are pushing more and more open source strategies in order to get global connection and developer appeal, for people to quote unquote help them in the development of their models. More or less the opposite of what you'd expect Chinese companies to do, which was always very closed and secure. They're now sharing a lot of these models as open source and gaining traction in the global developer community. And from that to two very interesting pieces of news that happened this week. One is really disturbing, but you need to be aware of it. The other is just a unique insight into what the people who are leading the charge of AI are actually thinking about the future with it. We'll start with the scary one. A finance employee at a large international corporation transferred $25 million earlier this week to a fraudulent account after a fake video call with executives from his company, including the CFO. He had received a message requesting him to do that, which he found suspicious, and he didn't do it. But after jumping on a video call with the fake executives from the company, who convinced him that everything was legit, he made the transfer. The reason this is so important to know is, you've heard me say this time and time again: we cannot know, period.
We cannot know right now what's true and what's fake if it's coming in any kind of digital communication, whether it's written, or voice, or image, or video, and that includes live video. So you could be on an actual video call with people that you know. It could be your CEO, your CFO, your boss, your commander in the military, your spouse, your children, teachers, anyone, and it might not be them. Why is that important? It's important because you have to, A, know that, and B, put safety mechanisms in place in both your personal life and your professional life in order to prevent this. As an example: no money gets wired to accounts that you're not aware of without going through several steps, which I'm actually surprised this company didn't have in place before, but they must have it now. Another thing that I suggest for companies, specifically senior leadership, specifically when big amounts of money or HR issues are involved, as well as in your personal life: have a safety word, have a passcode that gets updated on your phones regularly, that nobody has access to other than a short list of employees, just like we do two-factor authentication. Something that will allow you to verify that the person you're speaking with is the actual person you think you're speaking with. In personal life, there have been several fake kidnappings where fraudsters were able to convince parents that they had kidnapped their kids by letting them talk to the "kid," who confirmed they'd been kidnapped, and then the parents wired the money to the quote unquote kidnappers, who were not even in the area and did not do anything to the kid. They just knew the kid was not around at that time of day. And so, having a secret safety word that only your family members know can very quickly verify whether you're talking to your child, or your spouse, or your father, or your mother, it doesn't matter, just by asking them, okay, what's our secret keyword?
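The rotating-passcode idea described above is essentially a one-time password shared between the two parties, the same mechanism behind the two-factor authentication codes in authenticator apps. As a purely illustrative sketch of how such a code works, here is the standard RFC 6238 time-based one-time password algorithm using only the Python standard library; the secret shown is just the RFC's published test value, and in practice you would use an off-the-shelf authenticator app rather than hand-rolled code.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, interval=30, digits=6, t=None):
    """RFC 6238 time-based one-time code derived from a shared secret."""
    key = base64.b32decode(secret_b32)
    counter = int((time.time() if t is None else t) // interval)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Both sides hold the same secret; a caller proves identity by reading out
# the current code, and the callee recomputes it locally and compares.
secret = "GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ"  # base32 of the RFC 6238 test key
print(totp(secret))
```

The point is that a deepfaked voice or video of your CEO, spouse, or child cannot produce the current code, because producing it requires holding the shared secret, not just looking and sounding like the person.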
And if they don't know it, they're obviously fraudsters. In a company, as I mentioned, there could be more advanced capabilities that have to be put in place to prevent these kinds of cases, because the bad people have already figured it out, as I just mentioned. And the last interesting piece of news from today comes from none other than Sam Altman, the CEO and co-founder of OpenAI, the company that gave us ChatGPT. In a conversation with Alexis Ohanian, the co-founder of Reddit, Sam Altman mentioned that he speculates a lot about when AI will enable a single person company to reach a billion dollar valuation without hiring a single employee. He even mentioned that in his group chat with his CEO friends from Silicon Valley, they have a bet on what's gonna be the year when that happens, when there's gonna be a unicorn valuation for a company of a single person. Now, will that happen or not? I don't know, but just the fact that the people who are the leaders of this industry, who are pushing this whole thing forward, think it's possible means it might be, which would've been unheard of and unthinkable before the age of AI. There are already several unicorn status companies with a very small number of employees that have developed different AI tools. So the direction is very, very clear: we'll be able to do a lot more with a lot less, and get significantly higher valuations with significantly lower overhead. This needs to be a very big red flag to larger, slower organizations, who will have to somehow adapt to much smaller companies who can generate similar or better results faster. That's it for this week. On Tuesday we will be back with another interview, this time on how to bridge the gap between personal implementation of generative AI and organization-wide implementation of generative AI. How do you create the right, healthy environment in which AI can thrive within an organization?
Definitely a topic that many, many people are struggling with right now. So check us out on Tuesday, and for now, have an amazing weekend.