Leveraging AI

59 | Can AI help develop biological weapons? Taylor Swift fake nude photos, an open source model as powerful as GPT-4 and many more AI news from the week ending on Feb 3rd

February 03, 2024 Isar Meitis Season 1 Episode 59

Do we really know what's real in the AI-powered world? 🤔

AI is advancing rapidly, bringing many benefits but also raising concerns around fake content. In this week's news roundup, we explore key AI updates and their implications.

Topics we discussed:

👥 Meta-prompting allows switching "experts" within a chat for better results

💲 Microsoft's growth driven by AI, but enterprise AI adoption still low

💻 New coding LLM delivers significant progress in AI code generation

📢 New open source model leaked - claims to be as good as GPT-4

🎥 New tools to easily create studio-quality avatar videos

🖼️ Google rolls out AI image generator integrated across products

🛡️ Software helps protect art from AI training data misuse

You should try out HeyGen or Synthesia, the AI avatar video generators mentioned in this episode, if you need to create training, customer service, or product walkthrough videos.

About Leveraging AI

If you’ve enjoyed or benefited from some of the insights of this episode, leave us a five-star review on your favorite podcast platform, and let us know what you learned, found helpful, or liked most about this show!

Isar Meitis:

Hello and welcome to a short news weekend episode of the Leveraging AI Podcast. I am Isar Meitis, your host, and as every week, I have a lot to share with you. I always start with something that ties directly to one of the things that takes a lot of my time right now, which is teaching the AI Business Transformation course. I'm not here to promote the course, but if you're interested, there's a link in the show notes. Research from Stanford University together with OpenAI has shown that breaking down long, complex tasks into smaller tasks within a single chat, what they call meta-prompting, allows you to keep the context and framework of the large language model within the same chat while switching between different quote-unquote experts across different segments of the chat. Meaning, you can use different expert personas within the LLM to perform different tasks within your one long project, and this process gives you much better results. Per the research, it achieves even better results when combined with the code interpreter across a wide range of tasks. So the suggestion here: if you have long tasks, keep them within the same chat, but keep quote-unquote switching experts across different segments of the chat to achieve the best results.
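To make that concrete, here is a minimal sketch of what the expert-switching pattern can look like in code, assuming the OpenAI Python SDK. The model name, personas, and subtasks are placeholders for illustration; this is not the exact implementation from the research.

```python
# A minimal sketch of meta-prompting: one chat, one shared history,
# but a different quote-unquote expert persona for each subtask.
# Model name, personas, and subtasks are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

messages = [{
    "role": "system",
    "content": "You coordinate expert personas to solve one large task step by step.",
}]

subtasks = [
    ("a market research expert", "List the three biggest risks for our product launch."),
    ("a financial analyst", "Estimate the budget impact of each risk listed above."),
    ("a technical writer", "Summarize all findings above as a one-paragraph brief."),
]

for expert, task in subtasks:
    # Switch the expert while keeping the full chat history, so each
    # persona builds on what the previous ones produced.
    messages.append({"role": "user", "content": f"Acting as {expert}, {task}"})
    reply = client.chat.completions.create(model="gpt-4", messages=messages)
    answer = reply.choices[0].message.content
    messages.append({"role": "assistant", "content": answer})
    print(f"--- {expert} ---\n{answer}\n")
```

The key point is the single messages list: every expert sees the full history of the chat, which is what keeps the context and framework intact across the whole project.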
From this research to the largest company in the world right now, which is Microsoft. Microsoft just had its earnings call last week, and from it we learned that their cloud services revenue grew by 30%. This is staggering, especially since they were pretty far behind AWS previously, and it is obviously driven in large part by the AI services they provide and their huge bet on OpenAI. To prove that AI is driving this: six percentage points of the 30 came directly from customers paying for AI services. But remember, to run these AI services you also need Azure cloud, so AI is definitely playing a very big role in Microsoft's growth, in market cap as well as in actual sales and revenue.

To make that even more extreme, a survey on enterprise AI adoption done by an organization called The Machine Learning Insider found that despite the crazy hype around AI right now, enterprise adoption remains very low, at around 10%: only 10% of enterprises have successfully launched generative AI solutions within their businesses. Surprisingly, among the bigger sectors, the slightly more advanced in adoption are financial services, banking, defense, and insurance, while education, automotive, and telecom are behind. I say surprisingly because the leaders are the more regulated industries. But all of them, as I said, are on average very far behind the hype. Forty-six percent of companies cited infrastructure issues as the top barrier to adopting these models within their organizations, and eighty-four percent admit they need more generative AI skills among their employees. The reason I'm telling you this is, first, so you don't feel that you're behind, at least not behind the large enterprises, which by definition are going to move slower. The other reason is that there's a huge amount of room for additional growth at Microsoft, and I'm not here to make any investment recommendations, just letting you know what I'm finding. And the last takeaway is that investing in AI training is key to success in any organization right now.

Staying on the Microsoft topic, the CEO of Microsoft has been very vocal about the recent issue with the Taylor Swift images that circulated the web. For those of you who don't know, intimate fake images of Taylor Swift went wild online, especially on X, last week until they were removed. That spurred responses from a lot of really important people, including the Biden administration and, as I mentioned, Satya Nadella. He basically says that more guardrails have to be put in place, and the way he suggests doing that is to establish norms and laws through a partnership between lawmakers and the tech platforms in order to control and govern AI systems. While Taylor Swift is a big name, and I obviously don't want her or anybody else hurt by fake AI capabilities, this is just the tip of the iceberg. As of right now, we have no ability to know what is real and what is not in what we see online, including on personal channels like WhatsApp, because we don't know what our friends are going to send us. That obviously presents a huge problem for everything from everyday communication, to swaying elections, to getting people fired, to really hurting individuals by sharing things about them that are completely untrue. This could happen to your kids at school, it could happen to you at work, and it could happen on a much larger scale, and that's a problem that obviously needs to be addressed. I'm glad that influential people like Satya Nadella and the administration are paying attention, and hopefully they will start moving quickly, because the technology right now is way ahead of the regulation and controls around it.

On other somewhat alarming news from the AI industry, Anthropic admitted they had a leak of data to a third party on January 22nd. The data included customer names and open credit balances, but no banking or payment information. That being said, this again raises concerns about the ability of these companies to keep our data safe, and with the growing amount of usage, it's becoming a bigger and bigger concern.

From the big names like Anthropic and Microsoft, I want to move to the open source world, which is moving as fast as the big companies and closed source AI. Two interesting pieces of news came from it. One is that an open source code-generation project called AlphaCodium has released a new flow that surpasses AlphaCode, which was created by DeepMind. They took DeepMind's approach and significantly improved on it, to what they're saying is now a big step forward in the ability of these platforms to generate code better than humans. On various coding benchmarks, including human review, the AlphaCodium-generated code has yielded very good results (I'll share a toy sketch of that kind of iterative, test-driven flow in a moment). Those of you who listened to our news episode from last week know that this huge progress in code creation is already leading to big layoffs at tech companies, which can now generate more code with fewer people while still maintaining code quality. I anticipate that this will continue happening. The other thing it enables is for smaller players, small companies, to do things in coding they were not able to do before, because they simply did not have the resources to hire more people. So there are obviously pros and cons to this whole thing, but overall, this is the reality we live in: more and more tasks that previously could be done only by humans can now be done by AI, and we will see more and more of that happening.
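To give a feel for how these flows work, here is a toy version of the generate-test-repair loop they are built around. This is not AlphaCodium's actual code: the fake_llm function stands in for real model calls with hard-coded attempts, and the task (implement add) and its tests are invented for the example.

```python
# A toy generate-test-repair loop: generate a candidate, run it against
# tests, feed the failures back, and try again. fake_llm() simulates an
# LLM; a real flow would send the task plus failing-test feedback to a model.

def fake_llm(feedback: str, call_number: int) -> str:
    attempts = [
        "def add(a, b): return a - b",  # first attempt: buggy
        "def add(a, b): return a + b",  # second attempt: fixed
    ]
    return attempts[min(call_number, len(attempts) - 1)]

tests = [((2, 3), 5), ((0, 0), 0), ((-1, 1), 0)]  # (args, expected)

feedback = ""
for round_number in range(5):  # cap the number of repair iterations
    candidate = fake_llm(feedback, round_number)
    namespace = {}
    exec(candidate, namespace)  # run the generated candidate
    failures = [(args, expected, namespace["add"](*args))
                for args, expected in tests
                if namespace["add"](*args) != expected]
    if not failures:
        print(f"All tests passed on round {round_number + 1}:\n{candidate}")
        break
    feedback = f"Failed cases (args, expected, got): {failures}"
```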
Staying in the open source world, let's talk about Mistral. Mistral is an open source large language model company from France. I've talked about them several times in previous episodes; they've been releasing very powerful open source models, and there's been a leak on Hugging Face of a new model called Miqu, which has now been confirmed to be from Mistral. Mistral's CEO himself confirmed that this is an early access test version of their new model that was released by, quote-unquote, an enthusiastic customer. Now, the other thing he mentioned is that this new model may surpass the capabilities of GPT-4. This means there's going to be an open source model that is as powerful as GPT-4, which is currently the most powerful generative AI out there other than GPT-4 Turbo, which has some minor benefits. This does several different things. First of all, it means there's going to be open source access to extremely powerful models, and again, I don't see this stopping anytime soon. The other thing it's going to do is put pressure on OpenAI, Claude, and all the high-end paid models to come up with new capabilities beyond what they have today, potentially accelerating the release of GPT-5 by OpenAI. But it definitely gives options: individuals, and for sure organizations, who do not want to depend on OpenAI for their offering and want to keep their data safe can now run a model that presumably is as good as GPT-4 contained within their own cloud environments, while guaranteeing that the data stays safe.
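For those wondering what running a model contained within your own environment looks like in practice, here is a minimal sketch of loading an open-weights Mistral model locally with the Hugging Face transformers library, so prompts and data never leave your own machines. The model choice, prompt, and generation settings are illustrative; a production deployment would add a proper serving stack and likely quantization.

```python
# A minimal sketch of self-hosting an open-weights model: everything runs
# on your own hardware, so no data is sent to a third-party API.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-Instruct-v0.2"  # an open-weights Mistral model

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Build a chat-style prompt using the model's built-in chat template.
messages = [{"role": "user", "content": "Summarize our Q1 sales notes in three bullet points."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```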
Since I mentioned OpenAI, let's touch on one more interesting point from them this week. OpenAI just shared a study they did that tested whether ChatGPT could assist in creating biological weapons. One of the fears around these very powerful models is that they can be used for negative things and not just positive things. The way they ran the research is they put together a group of professors and experts in biological weapons and in biology in general, and split them into two groups. One group tried to figure out how to create biological weapons using only internet search, and the other used GPT-4 to achieve the same goal. What they found is that GPT-4 provided only what they called a mild uplift in accuracy versus the internet-only baseline, and it also produced a lot of misleading responses during the research process. What that basically means is that, as of right now, ChatGPT does not provide a significant benefit to somebody who wants to use it to create biological weapons. That doesn't mean it provides no benefit for other harmful things, and it also doesn't indicate anything about future models. These models keep getting better and better, as we just mentioned, from both open source and closed source groups, and nobody can guarantee what GPT-5, or the next Claude, or the next Cohere model, or the next model from Mistral will be capable of doing. So, good news for now; not sure if it is good news for the long run.

And from negative and scary news to some exciting news about maybe the hottest topic in the AI world right now, which is video creation. Synthesia, a company that allows you to create videos of an avatar purely based on text, released a new tool that lets you create what they call studio-quality videos directly from text files or web links. This expands on their existing capability of just using text typed into their model, which means you can now convert anything on your website, or any document you have, into an avatar video without any additional effort. The AI doesn't just read out what's on the page verbatim: it reads what's on the website or in the document, within minutes generates a script that describes what's in there, and then generates a video that shares the information from that source. The benefit is that there's proof that video gets higher engagement and retention across training, education, marketing, and so on, so the ability to take content you already have, whether for internal training or external use, and turn it into a video now becomes more or less seamless. For those of you who haven't used these tools, either Synthesia or HeyGen or one of their competitors, they are extremely capable at this kind of video generation, and if you need to create any kind of training, marketing, product description, or product walkthrough videos, I highly recommend you check them out. I'll put the links in the show notes as well.

And moving from video to another topic that's on fire, which is image generation. More and more of these tools are coming out, but this time it's from Google. Google rolled out what they're calling ImageFX, a text-to-image model available through Google's Labs experimental platform, which anybody can access just by signing up. It's also available through Bard: image generation powered by Imagen 2, the same model behind ImageFX, is now built into the Bard chatbot, which means you can now go to Bard and create images just like you can with DALL-E 3 in ChatGPT. The other interesting thing, similar to Microsoft's implementation across its offering, is that Google is infusing this model across other tools they have, like ads creation, Cloud, and Workspace, so this ability to create images is going to be available anywhere you need it across the Google offering. An interesting add-on from the Google image generator: if these images show up in search, Chrome will be able to identify that the photos were AI-generated, but only for images created with this Google tool. A step in the right direction? Yes. Does it solve the problem of knowing what's fake and what's not? No. But this obviously adds another tool to the already huge set of image-creation tools such as Midjourney, DALL-E 3, Meta's image generator, Stable Diffusion, Leonardo, and many more. So there's a lot to choose from right now if you want to create images with AI.

And another very interesting piece of news ties into the flip side of that. There's obviously a very big problem, and a lot of lawsuits, from creators who are pissed off by the fact that these large language models were trained on their data. To help fight that, researchers at the University of Chicago developed a free tool called Nightshade that alters artwork at the pixel level, preventing, or at least making it a lot harder for, AI models to train on that data. To show you how big the demand is, even though most people are not creators and hence don't need it, the app was downloaded 250,000 times in the first five days after its launch on January 18th, and it has kept up that level of downloads since. Combine that with the fact that more and more websites are blocking AI engines from crawling their sites and collecting data, and you understand that the next generation of models will need to be trained some other way than the original models we have today were.
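For reference, this kind of crawler blocking is typically done in a site's robots.txt file. Here is a minimal example using documented AI-crawler user agents (OpenAI's GPTBot, Google's Google-Extended training control, and Common Crawl's CCBot); keep in mind that honoring robots.txt is voluntary on the crawler's side.

```
# robots.txt: ask AI training crawlers to stay away,
# while leaving normal search engine bots unaffected.

# OpenAI's web crawler
User-agent: GPTBot
Disallow: /

# Google's control for AI training use (does not affect Google Search)
User-agent: Google-Extended
Disallow: /

# Common Crawl, a frequent source of AI training data
User-agent: CCBot
Disallow: /
```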
That obviously does two things. It protects data that should be protected, which is a good thing. But the downside is that it makes it harder for new competitors to reach the level of today's models, because those models had the unfair advantage of training on all that data, which will now not be available to newer models. So while I'm really happy that there might be ways to protect the things we create from being consumed, recreated, and manipulated by AI, I think there's a very big issue from a competition perspective: the big models we know today had access to that data, and newer models will find it harder to compete, because they will need to find different sources of data, through licensing and so on.

That's it for the news this week. On Tuesday, we are releasing an incredible episode that is going to take you step by step through the most advanced AI-driven sales funnel I have ever seen, and I see a lot of them, because this is what I do for a living. So if you're interested in growing sales in your business, or you want to learn more about this topic, don't miss our episode on Tuesday. And until then, have an amazing weekend.