Leveraging AI

83 | Is AI an existential threat? Moderna and OpenAI partner, Microsoft launches Phi-3, AI Index report by Stanford and more AI news for Apr 27

April 27, 2024 · Isar Meitis · Season 1, Episode 83

AI is truly transforming the workplace and business practices today.

From healthcare advancements to automation, this episode dives deep into the tangible impacts of AI innovations on various industries.

In this session, you'll discover:

  • The latest findings from Stanford’s AI Index Report and what they mean for your industry.
  • How private investments are accelerating AI research and applications, particularly in healthcare.
  • The critical discussions surrounding AI ethics, regulation, and employment impacts across sectors.
  • Strategies for reskilling and upskilling to leverage AI for career advancement.
  • Insights into the partnerships between tech giants like OpenAI and healthcare leaders like Moderna, and what they mean for the future of medical treatment.

Resource: Google's Prompting Guide 101 ebook

Don’t forget to subscribe to "Leveraging AI" on your favorite podcast platform. Share this episode with fellow leaders looking to make informed decisions about AI in their organizations, and rate us—it helps more people discover our content!

About Leveraging AI

If you’ve enjoyed or benefited from some of the insights of this episode, leave us a five-star review on your favorite podcast platform, and let us know what you learned, found helpful, or liked most about this show!

Transcript


Hello and welcome to Leveraging AI, the podcast that shares practical, ethical ways to leverage AI to improve efficiency, grow your business, and advance your career. This is Isar Meitis, your host, and this is a short weekend news edition of the Leveraging AI show. Every single week there is a lot to talk about, especially since last week I didn't really focus on the news but rather on the MIT conference I attended. This episode includes some very important things you need to know, including some really scary stuff coming from the CEO of Anthropic, the company behind Claude; that's at the end of the episode, but let's get started.

Stanford University has released its AI Index Report, which covers multiple aspects of what they're seeing in the AI world. I will give you just the headlines, but I will link to the report in the show notes so you can check it out and read all the details. It's a very interesting report; I would say there's nothing surprising in it, but there's a lot of good and useful information. The first thing they mention is a huge surge in AI research, which means we're going to keep seeing ongoing innovation and improvement in everything AI, because so many people are investing in deep research on these topics. They tie this to the rise in private investment: more and more money is being funneled into AI-related initiatives from the private sector, which fuels both the business side and the technological and research side. The study also mentions notable expansion of AI applications in healthcare, both in research and in actual applications, which hopefully will lead to better treatments and maybe cures for diseases we don't have solutions for right now; more on that later in the episode, when we talk about the partnership between OpenAI and Moderna. They also discuss the increasing dialogue about AI ethics and regulation in governments, the current limitations, and how that might affect compliance and acceptance in the future; a lot more about that later in the episode as well, across several different stories from this week. And lastly, they talk a lot about employment. They're saying AI could automate a lot of tasks, which is going to impact employment in multiple sectors. They also note that AI skills are in very high demand, and that people who reskill and improve their AI abilities will dramatically increase their chances of keeping their jobs and making a lot more money, both in the near future and beyond. Overall, again, nothing really surprising, and very much supporting the idea that if you want to secure your financial future, you had better invest in AI learning. I'm actually releasing an episode in a couple of weeks about various ways to train yourself and other employees in your company, so look for that in about two weeks' time.

Now, speaking about the potential impact on the workforce and how the things we do today are going to change: The Information, an amazing online magazine you can sign up for at thirty-something dollars a month, is worth every penny; they're releasing some of the best, most relevant reporting right now on multiple topics, including AI. They wrote an article about the coming future with agents, claiming that Microsoft, together with OpenAI and Google DeepMind, is preparing AI agents designed to automate complex tasks for both enterprise and consumer applications. The idea is that these agents, which we've talked about in the past, can be given a task and then, on their own, define the subtasks that need to be done, assign them to other agents, and carry the whole thing from research to analysis to implementation and execution. These tasks could be highly complex and span multiple aspects of the business, from marketing to sales to HR and so on. On the consumer side, these things could research a vacation and book your travel and accommodation. Basically, any task we have could be automated with these agents. The article describes a major shift in the AI industry as a whole: the leading organizations developing AI solutions are moving from a focus on RAG (retrieval-augmented generation), basically the chatbots we know, to autonomous agents that can actually take actions in our world and do tasks for us. They're all working towards that, and the first signs will probably be released either later this year or in 2025, so the very near future. The implications for the workforce and the world we know are profound, and I don't think anybody knows exactly what they will be, but that's the direction everybody is pushing towards very aggressively.
On the flip side of very advanced large language models, there is also a push to build small models that can do some things very well. One of these is a model just released by Microsoft called Phi-3 Mini. It has only 3.8 billion parameters, which is relatively small compared to the latest models that were released, but despite that, they claim it's as good as GPT-3.5, which has significantly more parameters. Nobody knows exactly how many, because OpenAI never said, but it's at least an order of magnitude bigger.

Now, the interesting thing is how they trained this model. They trained it on a unique, quote-unquote, curriculum inspired by the way children learn, using bedtime stories. What does that mean? It means they used a different large language model to write simplified versions of reality: significantly smaller vocabulary, shorter stories, shorter sentence structures, as if written for kids. They then used that data to train Phi-3 Mini, which is apparently doing very well. They're also planning to release two larger models, one to be called Small, with 7 billion parameters, and one to be called Medium, with 14 billion parameters, expanding the range of capabilities so people can choose the right model for the right task. The same thing is being done by Google, Anthropic, and Meta, which are also developing their own smaller models. So I think, in parallel to the push on the envelope of large language models and agents, as I mentioned earlier, we're also going to see more and more smaller models that provide cost-effective solutions for specific tasks.
Boston Dynamics, maybe the most advanced robotics company in the world, even though a lot more competition has emerged in the past six months, is still probably in the lead. They just released a really interesting, scary, or cool video, depending on who you ask, of their latest robot. It's their first robot that is electric rather than hydraulic, which allows it to be a lot more agile and capable than their previous Atlas models. The video is somewhat creepy: it shows the robot lying on the ground and then standing up in a really weird way, like rolling upright. It's worth watching; it's a short video, just look it up on YouTube and you'll find it. The interesting thing about this robot is that it's more agile, faster, and more capable than the previous robots, and Hyundai, the company that owns Boston Dynamics, has already announced that it plans to test these robots on its automotive production lines over the next few years. So this is turning from "oh, this is a cool toy, I wonder what we're going to do with it" into something that's actually going to change production lines. And as these things become cheaper, probably day-to-day work in our houses, hospitals, and so on. Expect to see these humanoid robots pop up in different places, probably within this decade.

Now, the company that used to own Boston Dynamics, and is probably regretting selling it, is Google. Google used to own Boston Dynamics and doesn't anymore, but they have released two very interesting resources that I recommend you all check out. One of them is an ebook called Prompting Guide 101 that can help you write better prompts. Nothing out of the ordinary or surprising: it breaks everything into a standard four-part formula. The first part is the persona: what role the AI needs to play. The second is the task: what the model needs to do. The third is the context: the information and examples needed to carry out the task. And the last is the format of the output. This is something I teach in detail in my courses, but this free resource from Google, which is about 45 pages long, can really help, because they give a lot of examples for various aspects of the business, from marketing to customer service and so on. Definitely worth checking out (see the sketch after this segment for what the four-part formula looks like in practice).

In parallel, Google just released a course they're calling AI Essentials, which you can take on Coursera for $49. The course helps business people understand how to develop ideas and content, how to make informed decisions by using and analyzing your existing data, and how to speed up your existing daily tasks, like drafting emails and summarizing documents, and make them more efficient. I haven't taken the course yet, but I'm planning to do so soon, so if you want a professional opinion on what the course covers and how much it's worth, I promise to give you a quick summary once I've finished it. As I mentioned, it's a self-paced course on Coursera. I teach courses myself; my courses are taught by me, so they're not self-paced. They're either in person for a specific company on location, or online on Zoom, and they are eight hours of live instruction. So I'm very curious to see what Google is sharing in their course, and I'm planning to take it and share all of my findings with you.
Now, since we're on the topic of Google: DeepMind, Google's AI research arm, has made big progress in researching the learning capabilities within prompts in large-context-window models. As we shared in a previous episode, Google Gemini 1.5 Pro, which does not yet run in their regular products (you can find it in their testing environment, which again is open and free to the public), has a one-million-token context window with a very high level of accuracy in retrieving information from that context; they're claiming around 97%. What they found is that the more examples you give these models, the better the results are going to be. And when they talk about lots of examples, they're talking about hundreds of thousands of tokens of examples: packing that many examples into a single prompt will get you significantly better results than giving no examples or just a few. Now, again, giving examples to these models is something I teach in my course; it dramatically improves the results of the tasks, but I've never pushed it to anything close to what they're talking about. They call this ICL, in-context learning: the ability to add more and more examples. In the professional language this is many-shot prompting, versus zero-shot prompting, when you give no examples, and few-shot prompting, when you give a few. They're taking it to the extreme, and they're saying the impact on the results is very significant, because the model can actually learn from the content within the prompt, even if it's information it did not have in its original training data.

Where does this lead us? Given that all the models will probably keep pushing the boundaries of these context windows, potentially even reaching a rolling context window that is basically endless, we will be able to "train" models on the fly, per our needs, by drafting these really long prompts. And we can probably use the models themselves to create the examples based on data we already have. So think about allowing the model to access your data in your cloud, then asking it to create a very long prompt containing these examples in order to get better results. That's probably the direction this is all going.
Another interesting release this week comes from Adobe: Firefly 3. Firefly is their AI image generation and modification tool, released a while back, and this is version number three. It is available as part of the Photoshop suite and also on the Firefly web app, which is free. They claim it has improved understanding of complex prompts and scenes, enhanced lighting and text generation capabilities, superior rendering of typography and photography, and so on; overall, a much better model. I've done some testing comparing it to Midjourney, and I must admit I still like Midjourney's results better, but it's definitely a big improvement over the previous Firefly models. The biggest benefit of Firefly versus Midjourney is that Firefly is trained on images Adobe owns the rights to, meaning it was trained on data that doesn't infringe on any copyrighted material. At least maybe: one of the types of content it's trained on, per them, is AI-generated images, and they do not say what training data was used to create those AI-generated images. To me that sounds like data laundering: instead of directly training the model on content we're not allowed to use, just like everybody else, that content is used to create AI images, which are then used to train the model. Adobe's Firefly generation capabilities keep the same pricing as before, even though the model is upgraded: $4.99, basically $5 a month, significantly cheaper than other paid models such as Midjourney. That being said, as I mentioned, I still like Midjourney's results better.

Obviously, we cannot do one of these news episodes without talking about OpenAI. The really interesting news about OpenAI this week is their newly announced partnership with Moderna. Moderna is obviously one of the most advanced medicine companies in the world; they led the development of the COVID vaccine based on the mRNA mechanism. In this partnership, they're going to deploy ChatGPT Enterprise across the entire organization, empowering every function in the business. Moderna CEO Stéphane Bancel said he believes ChatGPT and OpenAI's work will change the world, and that it pushed them to re-evaluate their entire business process and reimagine it with AI in mind. Their goal is to achieve one hundred percent adoption and proficiency of AI across the entire company, in every single department, in just six months. To do that, they've developed transformational programs combining individual, collective, and structural change-management initiatives, including training internal champions and leadership engagement. This is obviously a very aggressive step forward, but I think we will see more and more companies go down that path once they understand the power and the transformational opportunity AI represents for specific industries. Medical development is definitely one of them, but there are many others, and companies that figure it out and invest in transformational change and AI literacy for their employees will come out ahead of everybody else in their industry. To give you a quick example of what this looks like: within just two months, Moderna employees have built 750 GPTs, the mini automations you can build in the paid version of ChatGPT, to automate various processes at Moderna.

Another company we talk about a lot is Perplexity. As I mentioned several times before, Perplexity is as if Google Search and ChatGPT had a really beautiful baby: it combines the best of both worlds, a large language model and a search engine. They just raised $62.7 million at a valuation of over a billion dollars from some very well-known investors, such as Jeff Bezos, Databricks, and NVIDIA, plus a few other big names. Definitely a company to follow. I now use Perplexity for probably more than 50 percent of my searches; the rest are still on Google. It's a very powerful and capable tool. I had the opportunity to meet their CEO last week at MIT, and the future he describes is definitely compelling, and they're doing it in a very cost-effective way.
And the last, but very big and exciting, if troubling, piece of news this week comes from an interview that Ezra Klein of the New York Times held with Dario Amodei, the CEO of Anthropic, the company behind Claude. In this interview, which I highly recommend you listen to on Ezra's podcast, he talks about multiple topics. He talks about the fact that AI capabilities are improving exponentially and that major breakthroughs are expected in the very near future. He talks about the scaling laws he found very early on, when he was one of the first employees of OpenAI; he was part of the team that developed the first GPTs before he left to start Anthropic. What he's saying is that it was obvious to them very early on that the more compute and the more data you give the models, the better they get, without any clear leveling off of that scalability. To put things in perspective, he says the first models they trained cost about $10,000 to train; the model Anthropic is training right now, which is probably Claude 4, since they just released Claude 3, is going to cost them about a billion dollars; and the next model after that will probably cost three to five billion dollars to train. And he still believes this exponential growth holds: if you add more compute and more data to these models, they're going to keep getting better.
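For context on the scaling laws Amodei is referring to: the published neural scaling-law results (for example, Kaplan et al., 2020) fit language-model loss as a smooth power law in training compute, with no clear plateau over the measured range. A representative form, with the exponent taken from that paper rather than from the interview, is:

```latex
L(C) \approx \left(\frac{C_c}{C}\right)^{\alpha_C}, \qquad \alpha_C \approx 0.05
```

Here L is the model's test loss, C is the training compute, and C_c is a fitted constant. The tiny exponent is what makes each further improvement demand orders of magnitude more compute, consistent with the jump from roughly $10,000 to billions of dollars in training cost described above.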
He also talks about the research they've done on how persuasive these models can be in swaying people's opinions. Claude 3 Opus, the largest version of Claude 3, could be changing people's minds on important issues: he says the largest version of their model is almost as good at changing people's minds as a set of humans they hired for comparison, top psychologists and scientists skilled at shaping opinions. That's obviously very scary, because it means anybody with access to these tools, which is anybody, can use them to sway people's minds on multiple topics and shift the opinions of billions of people towards whatever they want.

But that's not even the scariest part of the interview. The scariest part is when Ezra asks him about safety. Dario shares that they have what they call ASL, which stands for AI Safety Level, and there are four levels: ASL-1, 2, 3, and 4. ASL-1 is models that represent no risk. ASL-2 is models that can generate some risk that can most likely be contained; he claims the current models are at ASL-2, meaning their Claude 3, probably GPT-4 (and GPT-5, which as we know is coming in the very near future, and nobody knows exactly what that entails), and the other large models, such as Gemini from Google. But he says ASL-3 and ASL-4 represent significantly higher risk. ASL-3 represents serious risk of misuse, for example in generating biological weapons or in cybersecurity, and ASL-4 could represent potential catastrophic risk to the human race, by destabilizing geopolitical situations or through other impacts on our society. He says ASL-3 is coming most likely this year or in 2025, and that ASL-4 models could potentially be deployed between 2025 and 2028. And 2025 is next year; even if it's 2028, that's only a few years from now. We might have models that represent an existential threat to our way of life, and yet they're working on them and deploying them. When Ezra pushed him on what they're doing about it, he did not have a good answer. He was really dodging the question, basically saying everybody is working on this, so it doesn't matter if we do it or not; somebody else will.

That, to me, is a really bad answer when you are one of the smartest people in the world on this topic and you know it might represent an existential threat to the way we live. I'm sad that I have to end on this not very positive note, but this should ring every alarm in every government and international body, like the UN, to start taking much more serious action to put controls on what can and cannot be released. Maybe it's okay to develop these kinds of tools in closed, very well secured labs, just like biological weapons research is done today: there are labs doing really crazy things behind closed doors that nobody has access to other than a very few individuals, under very close monitoring from different safety agencies. And the reason to develop these models actually makes more sense than that biological research, because they can also generate a lot of benefits for humanity. But I think releasing these things into the wild is nothing short of irresponsible and stupidly dangerous. I really hope that governments and, like I said, large international groups, hopefully in partnership with all the leading teams developing these things, will cap the capability of the models that are released into the wild.

That's it for this week. We'll be back on Tuesday with an amazing episode on how to create advanced and extremely capable automations within your business. If you find this episode or this entire podcast helpful, please rate us on your favorite podcasting app. Yes, pull out your phone right now, as you're listening, rate us on your favorite app, and share the show on your favorite platform so other people can benefit from it as well. That's it for today, and until next time, have an amazing weekend.