Leveraging AI
Dive into the world of artificial intelligence with 'Leveraging AI,' a podcast tailored for forward-thinking business professionals. Each episode brings insightful discussions on how AI can ethically transform business practices, offering practical solutions to day-to-day business challenges.
Join our host Isar Meitis (4-time CEO) and expert guests as they turn AI's complexities into actionable insights and explore its ethical implications in the business world. Whether you are an AI novice or a seasoned professional, 'Leveraging AI' equips you with the knowledge and tools to harness AI's power responsibly and effectively. Tune in weekly for inspiring conversations and real-world applications. Subscribe now and unlock the potential of AI in your business.
Leveraging AI
42 | Amazon and Microsoft announce new AI capabilities, The First Church for AI is established, and other exciting AI news from this past week
Who's Really in Control of AI's Future?
In this action-packed news update from December 2nd, 2023, we cover:
👨‍💻 Sam Altman reinstated as OpenAI CEO
🔎 The mysterious event that led to leadership change
🏢 Microsoft gets board observer seat
🚀 Intel launches efficient new AI model
📈 Google's multimodal model beats benchmarks
🎥 Pika AI generates realistic videos
♾ Meta researchers remove LLM context window limits with new architecture
🕺 Alibaba's Animate Anyone makes images move
☁️ Amazon launches AI assistant Amazon Q
🌆 Amazon releases its image generator - Titan
🚂 SageMaker Hyperpod for efficient LLM training
✏️ Microsoft adds DALL-E to Paint and other services
📝 Bing Copilot gets GPT-4 Turbo upgrade
📚 Australian schools get AI ethics framework
🧮 AI tutoring boosts test scores
😱 Fake AI influencer earns real money
💡 Church established to literally worship AI
About Leveraging AI
- The Ultimate AI Course for Business People: https://multiplai.ai/ai-course/
- YouTube Full Episodes: https://www.youtube.com/@Multiplai_AI/
- Connect with Isar Meitis: https://www.linkedin.com/in/isarmeitis/
- Free AI Consultation: https://multiplai.ai/book-a-call/
- Join our Live Sessions, AI Hangouts and newsletter: https://services.multiplai.ai/events
If you’ve enjoyed or benefited from some of the insights of this episode, leave us a five-star review on your favorite podcast platform, and let us know what you learned, found helpful, or liked most about this show!
Hello and welcome to a short news weekend edition of Leveraging AI. This show is going to be released on December 2nd and covers news from the past week, or maybe a little more than that. The first piece of news we have to go back to is the saga that continues around the changes at the top of OpenAI. As we shared last week, Sam Altman is back in the position of CEO, and Greg Brockman is now the president. There have been changes in the board, the nonprofit board of OpenAI, with three members off the board and one observer from Microsoft. So a Microsoft representative, who was not there before, has been added. He has no voting rights, but he can actually see what happens. In an interview Sam Altman gave this week, he shared that one of the big goals of this board is to choose a bigger board, so there's less chance of one or two people going rogue and changing the future of the company. Altman also shared in an interview that his departure, or rather his forced departure from the position of CEO, had nothing to do with AI safety concerns. Nobody knows exactly what happened yet, but he did share that they are working on a third-party investigation that will identify, and hopefully share with all of us, exactly what happened. He also mentioned that they are planning rapid AI progress despite the accusations and fears of unsafe or out-of-control developments that may or may not have led to his removal as CEO. As to the question around the current structure, a nonprofit that controls a for-profit organization, he did not exactly answer, but he left it open for us to understand that this may not be the case moving forward.
My biggest take on this is that, sadly, I think the charter of OpenAI, to develop a safe AI solution that puts the well-being and benefit of the human race ahead of the profits of its investors, is not going to exist moving forward, or is going to be dramatically limited. The next set of news comes from several different companies, and it touches on new and interesting technological developments and advancements in the world of AI. The first one comes from the open-source world: Intel just released a new large language model called Neural Chat 7B. The name hints that it has 7 billion parameters, which is considered relatively small for a large language model. Neural Chat was fine-tuned using a technique called DPO, direct preference optimization, which allows this model to handle tasks like writing, reasoning, and coding at a level of sophistication that beats many previous open-source models on multiple benchmarks. It also leverages Intel's Habana Gaudi 2 hardware for large-scale training. This comes to show a few things. One is that more and more companies and research efforts are enabling the creation of large language models on new, cheaper types of hardware. It also shows that any large company capable of bringing technology, architecture, and hardware together will be able to create bigger efficiencies than we know today in how these models run. And because this model is so efficient, it has taken the top spot on the Hugging Face leaderboard, beating the previous top model, Mistral 7B. So again: an open-source model that anybody can grab, much more efficient at a smaller scale, running on more efficient hardware. Another big announcement this week about new advancements in AI models was released by Google.
They call that model Mirasol3B, and they're claiming it is the most innovative multimodal model out there. They have developed it from the ground up on a completely new architecture and way of addressing multimodality, which enables it to handle much, much longer videos than was possible so far; actually, almost unlimited video length can now be handled with this new architecture. It is also very good at open-ended text generation, and it has very good generalization skills, which are not common, especially in smaller models. As I mentioned, it has only 3 billion parameters, and it performs much better than larger models on common benchmarks. Beyond beating those benchmarks, it is also showing a very high level of accuracy on many of the questions and tasks it performs. And it comes, again, to show that the technology of AI models is getting better, and in this particular case not just in language, but across real multimodality from the ground up. As you probably know, Google is in the process of working on their next large model, which they call Gemini, and which we already know is going to be multimodal from the ground up. It was supposed to be released already, and they've now postponed it, I think, to February. But that may not be the case; as I mentioned last week, I don't think they will release it before they know it's at least as good as GPT-4, and now GPT-4 Turbo, because so far everything they've released within Bard and their tools within G Suite has not been impressive, and they cannot repeat that again. A relatively small company made a huge splash this past week with the announcement and release of its tools to the public. The company is called Pika Labs, and they've launched a tool called Pika 1.0 after just six months in existence, so relatively fast. This is a really powerful AI video tool; it knows how to create videos from text or from images.
So, similar to things we've seen from Runway, but they've launched it very, very quickly, and they're showing very capable, high-end video generation skills. If you search for anything about Pika on any platform out there, you will see the numerous videos that already exist. The company right now has a waitlist for access to its web platform, but you can already start using it on Discord, similar to Midjourney. So if you want to play with it, you can go ahead and sign up for it on Discord. It's a very powerful and very promising video tool. We have seen multiple advancements on the video side from leaders like Runway and Stability AI over the past few months, and I think in 2024 we're going to see some huge breakthroughs, where I believe we'll be able to create full-length videos in whatever style, length, and resolution we want, at a level where it will be very hard to distinguish between those and videos shot with videographers and lighting and so on. This is obviously going to revolutionize the entire industry of video content creation. Another fascinating and really important development that has been released is research done in a collaborative effort between researchers from Meta AI, MIT, and Carnegie Mellon: they've developed what they call StreamingLLM. Large language models have what's called a context window: the amount of data you can either feed into or extract from the large language model, and it has been limited. Like everything else in the AI world, there's been an arms race over who has the largest context window. Originally, GPT-4 had a context window of 8,000 tokens; then it became 32,000. At roughly the same time, Anthropic announced that their context window would be 100K tokens. Then OpenAI came out with GPT-4 Turbo, which has 128,000 tokens, and almost immediately after, Anthropic announced a 200,000-token window.
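To put those context-window sizes in perspective, here is a minimal sketch of how a document's size stacks up against them, assuming the common rule of thumb of roughly 4 characters of English text per token (the real count varies by tokenizer and language, so treat this as an estimate only):

```python
# Rough rule of thumb: ~4 characters of English text per token.
# This is an approximation; real tokenizers vary by model and language.
CHARS_PER_TOKEN = 4

# The context-window sizes mentioned in the episode, in tokens.
CONTEXT_WINDOWS = {
    "GPT-4 (original)": 8_000,
    "GPT-4 32K": 32_000,
    "Claude 100K": 100_000,
    "GPT-4 Turbo": 128_000,
    "Claude 200K": 200_000,
}

def fits(text_chars: int, window_tokens: int) -> bool:
    """Estimate whether a document of `text_chars` characters fits the window."""
    return text_chars / CHARS_PER_TOKEN <= window_tokens

# A ~300-page book is roughly 600,000 characters, i.e. about 150,000 tokens:
book_chars = 600_000
for name, window in CONTEXT_WINDOWS.items():
    print(f"{name}: {'fits' if fits(book_chars, window) else 'too big'}")
```

Under this estimate, a single long book already overflows every window except Anthropic's 200K, which is exactly the kind of ceiling the next piece of research attacks.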
But all of these are, A, limited, and B, as you grow the context window, you start losing the accuracy of the model: some lose more at the beginning, some more toward the end, some in the middle. They have serious limitations. Well, StreamingLLM extends the large language model to be basically unlimited. With this new architecture and approach, they were able to take Llama 2, a common large language model released by Meta AI, and process up to 4 million tokens and more, compared to Anthropic's 200,000 and GPT-4 Turbo's 128,000. So as you can understand, this is an insane breakthrough that would really allow unlimited data usage with large language models. This is currently research, but the group has released an open-source Python library that makes StreamingLLM easy to apply, and it's available on Hugging Face for anybody to use. So I'll be extremely surprised if this does not become available within probably weeks, and definitely months, across at least several different open-source models. The last big piece of news I want to share on technological advancements comes from the Institute for Intelligent Computing at Alibaba. They have released what they call Animate Anyone, which is image-to-video character animation. What it does is really, really cool: you can take an image of anything (an actual person, a character from a cartoon, a doll, anything you can grab an image of), then record the movement of an actual person moving, dancing, jumping, or doing whatever action you want, and apply that movement to the image of the character you started with. So this is a very interesting approach that will allow creating highly realistic, intricate, and accurate movements of any figure you want, real or imaginary or a cartoon, and have it move exactly like a person, because it literally grabs the movement of the person and applies it to the character.
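Going back to StreamingLLM for a moment: the core trick in the paper is keeping a handful of initial "attention sink" tokens forever, plus a sliding window of the most recent tokens, so the cache never grows no matter how long the stream gets. Here is a toy sketch of that eviction policy (the class and method names are illustrative, not the actual streaming-llm library API):

```python
from collections import deque

class StreamingCache:
    """Toy sketch of the StreamingLLM eviction policy: the first few
    'attention sink' tokens are kept permanently, while everything else
    lives in a fixed-size sliding window of recent tokens."""

    def __init__(self, num_sinks=4, window=8):
        self.num_sinks = num_sinks
        self.sinks = []                      # first tokens, never evicted
        self.recent = deque(maxlen=window)   # rolling window of recent tokens

    def append(self, token):
        if len(self.sinks) < self.num_sinks:
            self.sinks.append(token)
        else:
            self.recent.append(token)        # deque evicts the oldest itself

    def visible_tokens(self):
        """Tokens the model still attends to at this step."""
        return self.sinks + list(self.recent)

cache = StreamingCache(num_sinks=4, window=8)
for t in range(1000):                        # stream in 1,000 tokens
    cache.append(t)

print(len(cache.visible_tokens()))   # always 12: 4 sinks + 8 recent
print(cache.visible_tokens()[:4])    # [0, 1, 2, 3] — the sinks never leave
```

The real implementation applies this policy to the transformer's key/value cache rather than raw token IDs, but the constant-size bookkeeping is the same, which is why memory use stays flat even over millions of tokens.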
Animate Anyone is another open-source effort that is now available on GitHub, so I expect this capability to also become available across multiple platforms, first and foremost in open-source tools. Really cool stuff; go check it out. It's definitely worth looking at. They have some really cool demos in their paper, and I will share the link in the show notes. By the way, the links to all the articles and everything new that I mention will be shared in the show notes of this episode and of every future news episode on this podcast. The next segment of the news brings several big announcements from some of the biggest leaders in the industry about new capabilities they're releasing. We'll start with Amazon. Amazon held their re:Invent event this week, and they had some huge news come out of that event across many things related to AWS, Amazon Web Services, and some of these announcements had to do with AI capabilities. The first, and maybe biggest, is that they're releasing Amazon Q. It's a new generative-AI-powered assistant that is designed to work with, and be tailored to, all the data businesses host across AWS. So companies can now solve problems, generate content, ask questions, hold conversations, and take specific actions related to all the data they hold in their AWS repositories, which for many companies is all the data they own. The idea is obviously that employees of the company can ask questions that will collect data across completely separate, siloed content repositories, summarize it, create reports, write articles, and take action, all based on company data that is protected from the outside world.
One cool feature is that when you use Q, it will also give you references and citations showing where it found the information, so you can actually trace back to the specific documents the data came from and verify that the data is accurate. I think that is incredibly powerful, at least in the beginning, when we know these systems still have hallucinations; this will help you verify that the data you're using is actually accurate. Another big announcement from Amazon is that they've released the Titan image generator as part of AWS. Titan image generator is now available in preview on the Bedrock platform, which is their AI infrastructure for everything on AWS. Like other tools such as Midjourney, DALL-E 3, and Stable Diffusion, it can create images from a text description, or start from an existing image and modify it from there. What they've added is an invisible watermark on these images, which will allow them to be tracked as AI-generated. What they did not share is exactly how they're addressing the big issue of IP rights, and how they're going to compensate the creators whose content has been used to train these models, et cetera. But they're definitely making big progress on content generation as part of their AI efforts. The last big thing they announced is something called SageMaker HyperPod, which is available as part of AWS and is a solution that will allow companies to train large language models, based on their needs and their data, significantly faster and more efficiently than ever before. So it's an infrastructure for companies to create new fine-tuned and trained models on AWS, including new hardware they announced: two new versions of their existing chips that will allow companies to train models even faster and more efficiently than before.
This is obviously most relevant to large enterprises that have the capacity, in both research and money, to run these kinds of models, but it will enable any organization with those resources to train models in a much faster and more efficient way. And this obviously connects to all the other things I mentioned before, so it will enable companies to create really high-end, tailored AI solutions for their needs in a much easier and faster way, while running them with significantly less effort. The next company we're going to talk about is Microsoft. Microsoft, as we know, is the partner of OpenAI, and they're using GPT technology in all of their solutions, making new announcements basically almost every day on how these are going to be deployed. This week they announced that they're going to start using GPT-4 Turbo within Bing Copilot. One of the biggest benefits is that Bing so far has been limited to a context window of only 5,000 tokens, which is extremely low, and as I mentioned earlier, GPT-4 Turbo has a context window of 128,000 tokens. So it has been really problematic for people who have long chats or want to use more data with the copilot, and now that problem is going to be solved if they really deploy the full GPT-4 Turbo capabilities. Time will tell; they haven't said exactly when it will come out, or whether it will be identical to GPT-4 Turbo. Another big announcement they made is that they've added DALL-E 3 image generation capabilities to their familiar Paint product, which is available to any Microsoft Windows 11 user. This will allow users of Paint to generate images by just entering a text description and creating a visualization, and that will be combined with the existing capabilities of Paint, which has been one of their longest-lasting products. So the combination of an old-style editing tool with text-to-image generation capabilities will be very powerful.
And I'm sure a lot of Windows users will be glad to use it within their operating system. They're also planning to integrate DALL-E 3 into Bing, Copilot, and their other services as well, so we will see those image generation capabilities deployed across everything Microsoft. The next segment of the news is about education, which I don't touch on a lot, but two very interesting news topics related to education came out this week. First, the Australian government came out with a new framework that aims to guide responsible, ethical use of generative AI tools in schools and universities. The idea is obviously to make sure that students, the whole learning process, and the schools themselves benefit from using AI capabilities, and that will obviously enable society to benefit as well, because people coming out of the schools will know how to use these tools in, A, an efficient, and B, an ethical way. The document outlines the oversight, procedures, usage, and different policies that will define how educational institutions can use AI and generative AI capabilities, and it even suggests how to develop curricula that incorporate generative AI. I think this is extremely important. In the US, all we've seen is the opposite: schools and universities adding regulations that prevent students, teachers, and professors from using these capabilities in school, because they fear what the implications might be. I think the right approach is exactly the one the Australian government has taken, which is to define how to use it the right way, so we can actually educate students on the tools they are going to use when they get out of school, because everybody is going to be using these tools in society and business. I really hope we'll start seeing a similar approach, whether locally, at the state level, or, even better, at the federal level in the US as well.
And to show how important this is, new research was done with a large experiment of over 1,200 participants using GPT-4 explanations to understand SAT math problems. There were several different variations of the experiment: one group was allowed to see explanations from ChatGPT for the math problems only after first trying to solve them themselves, while the other group was shown GPT explanations before they even started solving the problems. Both groups scored significantly higher on the test they took after seeing these explanations, compared to the group that received no assistance from GPT-4. What this comes to show is that large language models have a clear path to helping students learn any material faster and better, basically being a private tutor available to anyone, at any hour, on probably almost any device, to help them learn whatever material they want to learn. I think we're in the very early stages of that, but it's highly promising, and I really hope we, as humanity, will learn to use these capabilities to teach ourselves to be better, instead of to replace some of the skills we rely on today. And I will finish today's news with two somewhat geeky and weird pieces of news from the AI world. The first is that a Spanish influencer agency has developed an AI influencer. Her name is Aitana, and she's basically a fake, image-generated influencer who now has more and more followers, and she's making thousands of dollars monthly for that agency, an influencer who doesn't exist in real life. The most she's made in a single month is $11,000 for sharing different content on her feed. It's a little crazy when you think about it: a fake character, and they're not even trying to claim she's a real person. Everybody who follows her knows she's fake, and yet she has 122,000 Instagram followers who are willing to take action when she shares different things on her feed.
By the way, the agency says they created her because they're sick and tired of the ego, the issues, and the lack of professionalism they've seen from human influencers. Now, this raises a lot of questions about ethics and labor opportunities and so on and so forth, but the reality as of right now is that it's there and it's happening, and they're not the only agency doing it, so I think we'll see more and more of those. The benefit is obviously that you can create these influencers, these people, exactly to the taste of the audience you want to follow them. They can be of the right ethnicity, from the right location, of the right age, with the right features, et cetera, in order to attract the relevant followers; you don't have to find an actual person, and you can control exactly when they show up, what they do, how they do it, et cetera. As I mentioned, this is very interesting and questionable from an ethical perspective, and it's a reality we will all need to learn how to live with. The good news is that in this particular case, they're at least disclosing that she's AI-made. I have zero doubt in my mind that similar things will happen, whether in election cycles or in marketing, without telling us that these are not real humans, which raises even bigger ethical questions. And the last piece of news for this week, which is really wacky, is that Anthony Levandowski has established what he calls a church to worship artificial intelligence. He's basically saying that since in the future, and potentially the near future, AI will be so powerful that it will be significantly smarter and more capable than humans, that basically means it's some kind of a god, and hence it needs to be worshipped. His idea is that since it's inevitable that AI will be, quote unquote, in charge of what's happening in the world, it would make sense to give people a way to connect with it without fear, through the idea of a church. I find this idea completely crazy, but the really crazy thing here is that because it's a church, he can get the tax exemption churches get here in the US, and so he can develop his following for this crazy idea without paying any taxes. So that's another weird and interesting loophole that raises a lot of questions from an ethical perspective. But as I mentioned, this is the world we live in, this is reality, and we will have to figure out how to deal with it. I will be releasing another regular interview episode on Tuesday, as always, but I hope this news update helps you stay educated and up to date with what's happening in the AI world. If you're not yet subscribed to this podcast, pull out your phone right now and hit subscribe on whatever podcasting platform you're using. And if you're on Apple or Spotify, I would really appreciate it if you give us a five-star review and share the show with other people who can benefit from it. And until next time, have an amazing week.