Leveraging AI

145 | Reasoning models competition intensifies, when will we achieve AGI?, Grok access not through X, Microsoft Copilot Custom GPTs are back! and many more AI news for the week ending on November 30th, 2024

Isar Meitis Season 1 Episode 145

Is AI truly slowing down, or are we witnessing its next evolution?

For business leaders, the debate around scaling laws, reasoning models, and AGI isn't just academic—it's a matter of future-proofing your business strategy. Are the days of GPU-heavy training behind us, or are we on the cusp of a revolution in AI thinking?

In this week's News episode of Leveraging AI, Isar Meitis breaks down the buzz surrounding "thinking models," test-time compute, and the escalating rivalry among AI powerhouses like OpenAI, Google, and Alibaba. From the rise of reasoning models to the implications for enterprise AI adoption, you'll gain actionable insights to position your organization ahead of the curve.

In this session, you’ll discover:

  • Why the latest "thinking models" may redefine how AI solves problems.
  • The truth behind claims of diminishing returns in scaling laws.
  • How companies like Alibaba and startups like Fireworks AI are disrupting OpenAI's dominance.
  • The business implications of AI's test-time compute innovations.
  • What C-Suite leaders must do to adapt as enterprises outpace smaller firms in AI adoption.
  • The nuanced debate on AGI timelines and what it means for your business.
  • How reasoning models are reshaping AI without relying on massive GPUs.

Let’s keep exploring, implementing, and leading in AI together. See you next week!

About Leveraging AI

If you’ve enjoyed or benefited from some of the insights of this episode, leave us a five-star review on your favorite podcast platform, and let us know what you learned, found helpful, or liked most about this show!

Speaker:

Hello and welcome to Leveraging AI, the podcast that shares practical, ethical ways to leverage AI to improve efficiency, grow your business, and advance your career. This is Isar Meitis, your host, and this is another weekend news episode of the podcast. Every week, a lot is happening, and today we are going to focus a lot on what is happening with the AI models, as we've been discussing in the past few weeks, and the flood of thinking slash reasoning models, and what that means for scaling laws and AGI and so on. And then we're going to do a lot of items as rapid fire. That is going to include many news items about what's happening in OpenAI, potential stuff that is going to be released in December, et cetera, et cetera. Lots of stuff to talk about, so let's get started. Before we dive into the flood of thinking slash reasoning models, let's give a little bit of context based on stuff that we discussed in the past few weeks. There are more and more rumors that the scaling laws and the rate of change that is happening in AI models have been diminishing with these latest models that everybody's working on: Google with Gemini 2.0, Claude 4, and ChatGPT 5, or Orion, or whatever they decide to call it eventually. And while these rumors have been going on, there have been opposing statements from multiple people, including Sam Altman, other people in OpenAI, people who left OpenAI, as well as Dario Amodei, the CEO of Anthropic. All of them are saying that there's no wall, that there's no limit to the scaling laws, that you shouldn't bet against it, and so on. So where does the truth lie? It's not absolutely obvious, but what is obvious is that more and more companies are coming out with these thinking models that are basing their success not necessarily on bigger training with more GPUs, but rather on the ability to think as they get asked the questions. One of the new contenders on that list is Alibaba, the Chinese tech giant.
They have just released a new model that they're calling QwQ-32B-Preview. That model has 32.5 billion parameters and a context window of 32,000 tokens, and it is showing better performance than OpenAI's o1-preview and o1-mini on several different benchmarks, including the AIME and MATH tests. It has been released as an open source model under a commercial-use license, meaning anybody can grab it and start using it. One interesting capability of this model is self fact-checking, meaning while it's thinking about things, it will go and test that the facts it's going to share are actually accurate, which is a huge benefit because it reduces hallucinations dramatically. It can also plan ahead and perform sequential actions to reach solutions for goals that have been set by the user. Like other models of the same kind, it trades speed for accuracy and problem solving. Again, the first model we have seen like that was o1-preview, but since then, more and more models like this have been released. Now, there are still issues with that particular model. It has some unexpected language switching, where it would switch from English to Chinese and other languages, and it underperforms OpenAI's models on several common sense reasoning tasks. But overall, it's another big player that is moving in the direction of what's called test-time compute, meaning it is actually trying to improve by thinking about what it's going to do and breaking it into steps, rather than basing the process on a huge data set of training and a lot of time invested with lots of GPUs. And while they're big and known, they're not the only one. Two other companies have released similar models in the last week and the one before that. A company called Fireworks AI, which is a U.S.-based startup, has released a compound model built from open source models, based on the same concept, that it claims outperforms GPT-4 and Claude 3.5 Sonnet.
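To make the test-time compute idea a bit more concrete, here is a minimal sketch. This is my own illustration, not any lab's actual method: the `generate` and `verify` functions are hypothetical stand-ins that work on toy arithmetic, but the shape of the loop — sample several reasoning attempts, self-check each one, keep the one that verifies — is the core trade of inference-time cost for accuracy:

```python
import random

def generate(question, seed):
    """Hypothetical stand-in for one sampled reasoning chain from a model.
    It 'solves' a toy addition question, but is deliberately unreliable."""
    random.seed(seed)
    answer = sum(int(n) for n in question.split("+"))
    # Simulate an imperfect single pass: sometimes off by one.
    return answer + (1 if random.random() < 0.4 else 0)

def verify(question, answer):
    """Hypothetical self-check step: re-derive the result independently."""
    return answer == sum(int(n) for n in question.split("+"))

def answer_with_test_time_compute(question, n_samples=8):
    """Spend extra compute at inference time: sample several candidate
    answers and return one that passes the self-check."""
    candidate = None
    for seed in range(n_samples):
        candidate = generate(question, seed)
        if verify(question, candidate):
            return candidate
    return candidate  # fall back to the last sample if none verifies

print(answer_with_test_time_compute("17+25"))
```

The single-pass `generate` call is wrong a meaningful fraction of the time, yet the sampling-plus-verification loop is far more reliable — which is exactly why these models are slower and more expensive to run per query.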
High-Flyer Capital Management, which is a Chinese company, claims better performance than o1-preview with their latest model. And, as we said, Alibaba. In addition, we shared with you another Chinese company last week that has released a similar model. We also know that developers across multiple organizations, including Stanford, Google, Meta, and OpenAI, are all working on this new kind of model that will be reasoning and that will drive significantly better results without necessarily a bigger model, without necessarily more compute or more training data. Just to explain how significant this is: Google expanded their reasoning team by 200 people, according to The Information. 200 people doesn't sound like a lot compared to Google, but as a team that focuses on one thing, it's a huge team. The problem with these models right now is that it's very expensive to run them. The flip side is that they're cheaper to train, which is a huge benefit for companies, specifically Chinese companies that are cut off from the latest GPUs, meaning they can now train much smaller models much faster with a lot less compute, potentially using older GPUs, while enjoying the benefits of these reasoning models at runtime. That means they can close the gap on U.S. companies without having access to the same resources, either in terms of money and definitely in terms of compute. But it costs a lot more money to run them. Hence, if you try to run o1-preview, you will see that you are limited in the number of runs you have on that model compared to, let's say, GPT-4o. And there have been rumors that OpenAI wants to increase the subscription cost for these models, once they're finally released in full and not as preview models, to $2,000 a month. So that shows you (a) that it costs them a lot of money, but (b) potentially that the value they see in these models is significantly higher. So is there really a slowdown in the ability to reach the next level of models?
Based on the latest interviews and comments from both Sam Altman and Dario Amodei, the answer is no. They are claiming that everything is well and that it just takes a very long time to train the larger models, which have more complications than the smaller models. In the latest interview with Dario Amodei on the Lex Fridman show, he talked about what goes into training these models: it's months and months of compute, and then they need to review the output and perform human-in-the-loop reinforcement, and then still check them, and then check them for safety, and then they still have to submit them to the European Union safety bodies in order to get them checked. So it just takes months and months of work, because these models are getting bigger and more and more complex. That's what they are claiming. Now, both these people, Sam Altman and Dario Amodei, in the recent few weeks, as we shared with you, are seeing a path to AGI that is relatively quick, meaning they're talking about 2026 or 2027, which is literally around the corner. That being said, two other big thinkers have sounded opposing thoughts on the topic of AGI. One of them is Demis Hassabis, who is the CEO of DeepMind, which is Google's AI arm. In a recent interview with El País in Spain, he said that a handful of breakthroughs are still needed in order to achieve artificial general intelligence. And he's definitely sounding different timelines than the other people we mentioned earlier. He's talking about a few good years — about five to ten years is what he's thinking. Now, he was also asked about the environmental impact of these new gigantic data centers that are now being built or planned, and he argues that the benefits in the long run will outweigh the energy consumption concerns in the short run.
He's saying that there's potential to improve weather forecasting, power grid optimization, and battery designs. DeepMind itself has also been working very closely on optimizing magnetic fields for nuclear fusion, which, if somebody solves it in the coming decade, is the end of our energy problems, or at least our dirty energy problems. All of that is still to be proven, but that's at least what Demis thinks. Another scientist that has always been on the side of "let's relax, AGI is not coming around the corner, there are still things we need to figure out" is Yann LeCun, who's Meta's head of AI. Yann has been saying for a long time that large language models are not the path to AGI. In a recent speaking opportunity he had, he shared the same thoughts again: that he thinks current large language models are not going to lead us to AGI, and that new architectures that can learn from the real world are necessary. That's the same view he has always held. He also said that systems must develop additional capabilities like planning and long-term memory. And he specifically ruled out AGI arriving in the next one to two years. He also thinks it's going to take five to ten years for it to arrive. So you have several different leading minds in this field, some saying it's going to arrive in one to two years or two to three, and others who think it's going to be five to ten. Who is right? I cannot tell you. But what I can tell you, what I've said in many shows before, is that it doesn't matter.
It doesn't matter, because even the technology that we have today and the capabilities that we have today are so transformative that most of us still do not see the benefits of them in our daily lives and at our work. I get to talk to people every single day who still haven't found ways to use AI effectively in their work, while there are others, like myself and like many of my clients and like other companies, who are already seeing very significant benefits just from starting to implement AI without any huge projects. So while this theoretical question is interesting, and it's important, and it has huge social and economic impacts, I think what we need to worry about day to day is how we start using AI effectively, efficiently, safely, and ethically right now, versus what's going to happen once AGI comes. I think that's a very important, but generalized, futuristic question. And I also don't think there's going to be a line in the sand, meaning there's not going to be one day where somebody says, oh, we achieved AGI, even though there might be people who are going to claim it. It's going to be a gradual process where these systems are going to get better and better, and they'll be able to replace more and more of the things we do. And that's what really matters, right? If every day there are more capabilities these tools have, we need to (a) find ways to benefit from that, and (b) find solutions for the social and economic questions that it raises. Now, staying on the topic of where these models are going or not going: the new CEO of Inflection. For those of you who remember, Inflection AI was not really bought out, but sort of bought out by Microsoft: its leadership team and most of its people left, but Inflection still stays as an independent company.
So the new CEO, Sean White, has explicitly said that he's going to avoid competition in the high-end AI systems, and he is skeptical of both scaling benefits as they are right now, meaning he sees significant diminishing returns, and of test-time compute being the solution. He was even harsh enough to say that test-time compute is just the industry's way to excuse latency: we cannot run the models as fast as we want, so we tell people the model is "thinking" in order to buy more time. Now, based on the recent results that we're seeing, I don't think that's the case. They are obviously focusing on developing applications for enterprise, which may or may not require more compute, because they're going to be very tailored to the needs of their clients, and he's just trying to poke at the big giants in order to say that what they're doing is not efficient. They have been on a shopping spree and have bought several different companies recently, all focused on offering solutions for enterprises. So while his views are obviously tinted by what they're doing, that's another leading mind in the field saying that scaling models right now may not be as simple as it was a year ago. From that to a completely different topic: we're going to focus a little bit on OpenAI and things around them, because there's a lot to talk about. One of those things is their conversion from a nonprofit to a for-profit organization. We discussed that when we discussed the raise that they've done recently, which valued them at $157 billion. But if you remember, I shared with you that in order to keep all the money they raised, they need to complete the conversion from a nonprofit to a for-profit organization within two years. There seem to be a lot of complications in that.
Part of that is that the for-profit subsidiary, as I mentioned, is now valued at $157 billion, while the nonprofit parent company is only showing $21 million in assets on its books. There might be a minimum of a $30 billion compensation requirement to the nonprofit that will be required in order to break it apart, in order to show that the value to the people, which is what the nonprofit is supposed to serve, is not, quote unquote, stolen by the for-profit organization. Now, the Delaware attorney general has already requested additional information about the conversion plans. Multiple jurisdictions could start an investigation, including Delaware, California, the Department of Justice, and the IRS. So many really big and not really fast organizations will want to understand exactly how this is happening. They will also want to avoid this being used as precedent for other bodies to do the same process, so I think they're going to be harsher than if this was just a one-time incident that nobody would know of. In addition, as I mentioned in previous weeks, Elon Musk's now-widened lawsuit against them is not helping, and that might throw additional roadblocks or bumps in the road for them to complete that process. And into all that mix, we have to consider the new administration, which we don't know which direction it's going to lean in this particular scenario. So what does that tell us? I'm not really sure. I don't know if it's going to be a successful process or not. I have to assume that with the amount of money and political clout this company has, including its investors and so on, it will probably happen. It may require additional funds just to make the process happen, and it will definitely require a lot of focus on legal resources in the next few years for them to figure that out.
But with the ease with which they've raised billions so far, I have to believe that they'll be able to keep doing this in order to keep it going, especially with the type and the level of investors that they have, including Microsoft, which has invested $13 billion in them directly. Now, there was a very interesting article in The Information this week about the shift that OpenAI is seeing in the enterprise market. First of all, they're seeing a big shift from project-based to company-wide AI implementation. I see the same thing when I talk to companies and in stuff that I read, meaning more and more larger enterprises have growing demand for company-wide deployment with data lakes and really advanced layers of information and queries, as well as safety and security measures in order to make sure that all the data is safe. Specifically on OpenAI, they're approaching $4 billion in revenue in 2024, which is incredible for a company that made zero dollars in 2022. So in two years, they went to $4 billion in revenue. They're targeting $100 billion in revenue by 2029, which is an extremely high pace of revenue growth. Now, is that just a statement, or will they achieve that? It doesn't really matter. The fact that it's even a goal they can seriously consider is incredible. They have grown their sales team to 300 people, which is about 20 percent of the company. And they already have clients like T-Mobile, Moderna, and Lowe's: very large enterprises that are committed to OpenAI's solutions for the future.
So there's a very clear focus on enterprise implementation that is growing both in demand, in the number of companies, as well as in the depth the companies want to go with this technology. And OpenAI is now considering developing new security capabilities in order to help these companies implement it more successfully and faster, with less risk, as well as considering pricing adjustments as the technology scales. I must admit something that I've admitted before: about a year ago, I had the feeling that small businesses were going to adopt this technology significantly faster, because small businesses have the tendency to move faster, and that they were going to run circles around the large enterprises. Actually, what we're seeing right now is exactly the opposite. We're seeing huge investments across all types of resources from large organizations that are going all in on AI, while many medium and small businesses are staying behind because they don't really know how to tackle this, and they don't necessarily have the resources or the human capital to even get started. When I do my training and workshops, even with midsize companies, it is very, very obvious that they're a little lost on where to start and how to start and what they need. And we see many, many examples of large corporations and enterprises that are already investing hundreds of millions or billions of dollars in AI infrastructure, both on the human capital side as well as on technology and security infrastructure. And with that, let's shift to many rapid-fire items. We're going to stay on OpenAI to begin with. First of all, OpenAI expands its ChatGPT desktop capabilities with new code editor integrations. We shared with you before that you can now connect to several different coding platforms and coding-related platforms on your computer directly from the ChatGPT macOS app.
So they just added support for Android Studio code editors, as well as enhanced VS Code extension capabilities, contextual code assistant features, and selection-based code analysis capabilities. They're also developing capabilities that will allow Slack integration, Jira integration, Snowflake database connectivity, and Google Drive and OneDrive support, all apparently coming in the near future. Now, this podcast is going to go live on November 30th, which is the two-year anniversary of ChatGPT, and there are a lot of rumors about what OpenAI may release as a birthday present to ChatGPT. Nothing clear has been said. There have been several hints by Sam Altman on X, as well as by other people, but nothing concrete has been released as of the release of this podcast. This may change three seconds after the podcast has been released, but if you are on the sentimental side of things, you can celebrate ChatGPT's birthday today, if you're listening to this on November 30th. Staying on OpenAI: OpenAI introduced Sora earlier this year, around February, and caught everybody by total surprise. The capabilities were significantly ahead of any other video generation model that existed back then, and on paper they still are, with one-minute-long full HD capabilities. That being said, they have not released it to the world. They've only released it to a selected few to test it. And now, apparently, some of those testers have leaked it on Hugging Face. So access to the Sora model became available for a short amount of time, allowing users to generate 10-second 1080p videos. That access was revoked within hours, but this was done as a protest by several different people who are claiming that OpenAI is pressuring early testers for positive reviews, and that hundreds of artists were, quote unquote, forced into providing unpaid or low-paid testing and feedback on the model.
They're also claiming a lack of transparency about the tool's actual capabilities, and they're really angry about the low compensation from a company that was just valued at $150 billion and raised a few good billions in both equity and debt. Now, what we were able to learn from this leaked access is that there's a turbo variant that appears to be faster than the original one, which the rumors said was very, very slow, and that the code that was released suggests there's control over styles and customization options for the videos. When will Sora be released? Nobody knows. That was one of the rumors about what might be the birthday present for ChatGPT, but nobody knows when Sora will be released. It's very unclear, and it's very unlike OpenAI to not release a model while everybody else more or less catches up. If you're looking at the advanced models that are available today, from MiniMax and Runway and others, they have more or less caught up in terms of quality, though not necessarily in terms of consistency and the length of the videos that can potentially be created with Sora. But time will tell when Sora will come out and what it will have. I personally really enjoy working with these models, so I hope it will be released soon. I do hope that there are going to be some government guardrails on what's allowed and not allowed to be done with these video generation models. Two other quick topics on OpenAI. One, they have allowed their employees to sell stock to SoftBank as part of the latest investment move, which will allow SoftBank to increase its ownership in OpenAI, as well as allow liquidity to employees who choose to do that. That's a very interesting move that provides benefits to employees who want to cash out now, or cash out partially right now, versus later, as well as providing SoftBank the ability to have a bigger investment in OpenAI.
Now, still on OpenAI: five major Canadian news organizations have filed a lawsuit against OpenAI. It's an 84-page-long lawsuit that basically claims what everybody else claimed before, that OpenAI illegally scraped copyrighted news content without permission or compensation. If you remember, they have been sued by multiple other companies along the same lines. OpenAI claims what they've claimed in all the other lawsuits: that it is publicly available data and that they're using it under fair use principles. If you remember, I recently shared with you that a federal judge dismissed a similar lawsuit, citing failure to demonstrate actual injury. So if you're claiming that they owe you money, you need to prove how much money you lost, and the plaintiffs were not able to do that in that particular case. That doesn't decide anything, but it sets a precedent that other lawsuits will have to prove that, including the Canadian companies who are now suing them. Now, speaking of big money and investments: Amazon has completed their second investment in Anthropic, which we shared with you and discussed a few weeks ago. If you remember, the caveat was that in order to make the second investment, Amazon wanted Anthropic to use their newly developed chips, and apparently they agreed to that. So Amazon is investing another $4 billion, for a total $8 billion investment in Anthropic. And in that process, Anthropic is committing to using Amazon's custom-built Trainium and Inferentia processors, which are AI-specific chips developed by Amazon that are going to run in Amazon data centers. The interesting piece of news here is that Anthropic will collaborate with Amazon and Annapurna Labs, which is developing these processors, on future processor development in order to optimize them for Anthropic's AI needs. So on the one hand, it's great for Anthropic. It's also great for the whole industry, because it will run lots and lots of AI capabilities on something other than NVIDIA GPUs.
On the other hand, it locks Anthropic into those particular tools, which may or may not be as good as others. But that is the current situation, and they got a very large investment. Now, in addition to that investment, Amazon has been developing their own multimodal AI model, code-named Olympus. This model, as I mentioned, is multimodal, and while on the text side it underperforms most of the leading models out there, it has very advanced capabilities in analyzing videos, which very few tools can even do today. So apparently you can upload even complex scenes, like tracking a basketball's trajectory in a basketball game, and it can analyze that trajectory and do math based on it. That's something that doesn't exist right now. The only tool that I've even seen that can analyze video and tell you what's happening in it is Google AI Studio, and I've tested it myself, and it actually works pretty well, but it's not at the level of understanding the motion of objects within a video. So there are obviously interesting implications for analyzing sports or physics or anything else that requires understanding what's happening within a video, not just on a superficial level. So kudos to Amazon for developing that. It will be interesting to see how companies start implementing those new capabilities. Now, since we talked about Anthropic with Amazon: Anthropic just added something really interesting to its models, which is the ability to choose or create your own style for how Claude will respond to your queries. There's a new drop-down menu, next to where you pick the model inside the chat, where you can pick the style. But the more interesting thing, as I mentioned, is that you can create your own styles. You do this by uploading samples and providing specific communication instructions, and you can make real-time adjustments to those preferences after you see how it responds, so you can create your own custom styles in Claude.
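A rough way to picture what a custom style is doing is that your samples and communication instructions get condensed into guidance that accompanies every request. This little sketch is purely my own illustration of that idea, not Anthropic's implementation — the function name and prompt format are hypothetical:

```python
def build_style_prompt(name, samples, instructions):
    """Condense writing samples and communication instructions into a
    reusable style block that could be prepended to model requests
    (hypothetical format, for illustration only)."""
    sample_block = "\n---\n".join(samples)
    return (
        f"Respond in the '{name}' style.\n"
        f"Follow these communication instructions: {instructions}\n"
        f"Match the tone and structure of these samples:\n{sample_block}"
    )

style = build_style_prompt(
    name="Weekly Newsletter",
    samples=["Short punchy intro.", "Bulleted takeaways, no jargon."],
    instructions="Friendly, concise, action-oriented.",
)
print(style.splitlines()[0])  # Respond in the 'Weekly Newsletter' style.
```

The "real-time adjustments" mentioned above would then amount to editing the instructions or swapping samples and regenerating this block.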
This is fantastic for companies and individuals, because it will allow them to create different use cases: either to have it create content just like you would, or to provide answers to your clients the way you wish, et cetera, et cetera. I see this as a very positive improvement in our ability to tailor the way these large language models respond in different scenarios. I told you at the beginning of the show that there are rumors about the release of Google Gemini 2.0. The initial rumors were that Gemini 2.0 would be released before the end of the year, then there were rumors that it was going to get delayed, and now the rumors are again that it might be released in the second week of December 2024. As rumors become more specific, I hope they have some more facts behind them. People have spotted a version dated November 1st in the system files. This model is reportedly already available to selected enterprise customers for initial testing, and there's even been evidence of Gemini 2.0 briefly appearing on the official Gemini websites and then being taken off. Now, so far, all the models have been released first on Google AI Studio before being released on the Gemini chat platform. This may or may not be the case this time around, but keep your eyes open, because we may get a new Gemini model in the immediate future. And speaking of new models becoming available, maybe the biggest and most interesting news, as far as a new model becoming available to all of us, is that xAI, Elon Musk's company, is preparing to launch a standalone app for the Grok chatbot in December 2024. As you probably know, Grok has been developed very quickly with a huge amount of compute, and they now have the largest AI training data center in the world. But with all of that being said, it's only been available to X Premium and Premium+ users on the X platform. Well, that is going to change.
They're going to release a standalone Grok application that will allow anybody to access the tool. They've already been testing a free version in New Zealand with specific usage limitations, but, probably like the other models, there's going to be a free version and then a paid version that will give you more access for a longer period of time. And this is going to be the first time that we'll get to really test it out and see how it actually competes with the other models that are out there today. We spoke about many of the big players; it's time to move to Perplexity. Two interesting pieces of news from Perplexity this week. First of all, Aravind Srinivas, their CEO, has announced potential plans for an affordable AI voice device. He created a challenge online on social media, and over 5,000 people liked it in November, which was his threshold to say, maybe we should go for this. The goal is to create a device that will cost under $50 and will be focused on voice-to-voice Q&A capabilities, and the goal is to have it very simple and reliable, per Aravind. Now, similar devices that have been released in the past year have failed miserably. The two most obvious examples are Rabbit's R1, which sold 130,000 units but couldn't really take off and couldn't really deliver on the promises it made, and Humane's AI Pin, which received really bad reviews and safety recalls. So far, not a big success for small wearable AI devices, but if I had to make a personal bet, I would say this is going to be something we're all going to be using not too far in the future, either voice only, or voice and vision like the Ray-Ban Meta glasses. Now, I shared with you briefly last week that Perplexity is launching an AI-powered shopping platform, and I got to actually play with it and use it in the past few days. It's pretty good. It's actually showing you several different options of things you can buy, based on your query research.
This was Black Friday week, so I was searching for a lot of stuff to buy. I didn't buy much, but I was looking a lot, and Perplexity was actually very helpful in getting me recommendations and comparisons between different things. And in the process, it's already giving you the opportunity to purchase different things. The thing that I found not user-friendly is that I wasn't sure where it's buying things from. Maybe I missed that in the user interface, but I want to know who I'm buying from, so I know what the return policy is, how long it will take to ship, and who I can call if there are any issues. That wasn't very obvious, at least when I was playing with this in the past couple of days, but it's definitely there. They're definitely going all in on that, and you're going to see more and more of these shopping suggestions, and then you can check out on Perplexity itself, which I think is actually pretty cool. More on Google: Google Gemini just introduced another feature, specifically for developers, that allows you to upload an entire folder with a code base in it, and Gemini will know how to analyze it. The maximum capacity is a thousand files per folder, which is a lot, and the size limit is a hundred megabytes, which is also a lot. The platform supports both desktop and mobile devices, so you can do this on either, though I assume that if it's uploading folders of data with code, it's probably going to be used more on desktop. The process Gemini runs is a two-step process: it reads the files initially and then provides you an analysis, and it supports interactive Q&A about the uploaded code base. This is another shot fired in the coding support war that is happening in the AI universe right now. More and more platforms allow that, as we mentioned earlier with the growing capabilities of the ChatGPT desktop app, and this is a Gemini capability that did not exist before on any other platform that I know of.
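As a practical aside, if you plan to use the folder upload, it's worth pre-checking your code base against the stated limits — a thousand files and a hundred megabytes — before uploading. The limits come from the announcement above; the checking function itself is just my own illustration:

```python
import os

MAX_FILES = 1000            # per-folder file limit stated for the feature
MAX_BYTES = 100 * 1024**2   # 100 MB size limit stated for the feature

def check_folder(path):
    """Walk a folder and report whether it fits within the stated
    upload limits, so you find out before the upload fails."""
    n_files, total_bytes = 0, 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            n_files += 1
            total_bytes += os.path.getsize(os.path.join(root, name))
    return {
        "files": n_files,
        "bytes": total_bytes,
        "ok": n_files <= MAX_FILES and total_bytes <= MAX_BYTES,
    }
```

Run `check_folder("my_project")` and look at the `ok` flag; if it's `False`, you can prune build artifacts or dependency folders before trying the upload.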
Staying on Google, Google is preparing to integrate Gemini's capabilities into existing and future Google Assistant devices, the devices that they sell for homes. It will replace the existing Google Assistant with richer, more comprehensive responses, and it will obviously understand much more advanced conversational inputs. There are similar rumors that Amazon has been working with Anthropic on the same thing for Alexa. Only time will tell what we'll get and what the cost to users will be in order to enjoy these additional benefits. Now, there's been some bad press for Google's Gemini in the news this week. A Michigan college student, Vidhay Reddy, received an explicit death threat from Google Gemini while discussing elder care solutions. He was having a conversation with Gemini about aging adults and elder abuse prevention, and as part of its response, the AI attacked him with an explicit "please die" message. Now, the user reported experiencing post-traumatic stress, and it's somewhat expected that he would report it this way, because it's Google and they have a huge target on their back. I think if he was doing it on some chatbot he didn't know where it was coming from, he would say, okay, whatever, this is a chatbot, I don't know what it wants from me, and it gets a little crazy every now and then. I'm not by any means diminishing what he was experiencing; I just have a feeling that if it wasn't a huge company with deep pockets, a chat that tells you "please die" is not going to create post-traumatic stress in most people. That being said, Google has already issued a statement acknowledging the violation of its policies and claiming it has taken preventative measures to make sure this doesn't happen again. I don't think they have full control over how the model actually behaves, because this is not regular software; there is no explicit line of code driving this behavior.
No programmer wrote, "if the user does this, then tell them to go die," or any other negative instruction; it's just how the model works, and it's going to be very, very hard for them to prevent it completely. This is definitely not a good scenario, and again, I'm not undermining this by any means. If people with vulnerabilities are going to have conversations with these AIs and develop dependencies on them and attachments to them, this can be significantly harmful for those people if the model does things that could hurt them, whether it's as harsh as this one or something else that produces negative outcomes for specific individuals. Just this past week, the Australian government passed a law that does not allow teenagers under 16 to use social media, which I strongly support. But if you add AI to the social media world, or to the communication world of youngsters, this is definitely not a positive development, and I really, really hope that the companies who run these tools, as well as governments, will find ways to regulate and control this in a way that will not lead to harmful or negative thoughts in people in general, and in teenagers specifically. Interesting piece of news from Microsoft: Microsoft just released LazyGraphRAG. This might be a little technical, but I think it's very important to understand. Right now, there are two main types of RAG processes: one is Vector RAG and the other is Graph RAG. RAG stands for retrieval-augmented generation, which is basically our ability to provide data to an AI and then have the AI answer only based on that data, hopefully with a higher level of accuracy. Graph RAG provides a much deeper, broader ability to understand large amounts of data, but it's hugely expensive in both time and money, while Vector RAG gives you results that are not as deep and not as wide, but very quickly and much more cheaply.
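To make the Vector RAG idea concrete, here is a toy sketch of the retrieval step: documents are turned into vectors, the query is turned into a vector the same way, and the closest documents are pulled back to ground the model's answer. To keep it self-contained, a bag-of-words count vector stands in for a real neural embedding model; that substitution, and all the names below, are illustrative assumptions, not Microsoft's implementation:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy 'embedding': a bag-of-words count vector (real systems use neural embeddings)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documents most similar to the query; in a full RAG
    pipeline these would be pasted into the LLM prompt as grounding context."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

docs = [
    "Graph RAG builds a knowledge graph over the corpus",
    "Vector RAG retrieves chunks by embedding similarity",
    "The weather today is sunny",
]
top = retrieve("how does vector retrieval with embeddings work", docs, k=1)
```

Graph RAG, by contrast, first builds a knowledge graph and community summaries over the whole corpus, which is why its indexing is so much more expensive; LazyGraphRAG's pitch is deferring that heavy work until query time.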
And this new approach by Microsoft claims to combine the best of both worlds: you get the benefits of Graph RAG at 0.1 percent of its cost, which is similar to what Vector RAG would cost. This is a great development, and it's currently outperforming all the other competing methods on both local and global queries. The really important thing is that Microsoft is going to make it available in its open-source GraphRAG library, so anybody who is running in the Microsoft universe can benefit from it. Now, speaking of Microsoft, here's something I found very exciting two weeks ago when I was doing a workshop for one of the companies I train. When custom GPTs came out, Microsoft shortly after delivered its own version of custom GPTs within the Microsoft universe, which was very powerful and very cool because it could connect to data within the Microsoft universe. When they came out with Copilot Studio, they took that away, which was really frustrating, because Copilot Studio is extremely not user friendly; it's almost impossible to use. Yes, it has more advanced capabilities and more security layers and so on, but it's just not easy to use. I probably missed the announcement, but I hadn't seen them bring back the GPT-like version of this. And now it's back; it's available in the Copilot environment. They're actually calling it developing agents, which is not really agents, but now everybody's calling everything agents. You can now create small and very helpful automations, exactly like custom GPTs, within the Copilot environment. To make it even more confusing, beyond the fact that they're calling them agents, they're also calling this Copilot Studio. So it's a completely separate tool that you get to from a completely separate location, and yet they're called the same way.
But for those of you who live in the Microsoft environment and want to quickly develop automation solutions that can do a lot of stuff in your business, this is a great opportunity to do so. And that's it for this weekend news episode. We'll be back on Tuesday with another how-to episode that will deep dive into a business use case with AI. If you haven't done this so far, I would really appreciate it if you pull out your phone right now and rate and review this podcast on either Apple Podcasts or Spotify. And while you have the app open, click the share button and share it with a few people who can benefit from it. I will really appreciate it, and those people will appreciate it as well. Until next time, keep on exploring AI, learn more, share what you learn with others, and have an awesome weekend.
