Leveraging AI

168 | When it rains it pours, 3 leading models in one week, AI training is critical, Alexa+ release, and more AI news you need to know for the week ending on Feb 28th

Isar Meitis Season 1 Episode 168

Is AI innovation moving too fast for its own good?

This past week has been nothing short of an AI whirlwind. In just a matter of days, we’ve seen three major AI model releases—Claude 3.7, GPT-4.5, and Alibaba’s latest breakthrough—each with game-changing implications. But is OpenAI’s latest move a sign of strength or just a response to competitive pressure?

In this episode, we break down what these new models mean for businesses, why AI training for executives is more critical than ever, and the latest shakeups from OpenAI, Grok, and Google. Plus, Amazon finally launches Alexa+, and Elon Musk’s Grok AI is making waves… for better or worse.

In this session, you’ll discover:

  • Claude 3.7 vs. GPT-4.5 – Which AI model is really leading the pack?
  • Why OpenAI’s newest release might be a rushed response to the competition
  • The truth about AI training – Why most executives are still playing catch-up
  • Alexa+ is here! – What Amazon’s AI upgrade means for smart home users
  • Elon’s ‘unhinged’ AI assistant – Why Grok’s voice mode is breaking the internet
  • DeepSeek’s next move – Is China about to drop another AI bombshell?
  • AI & job security – What the latest layoff trends reveal about the future of work

About Leveraging AI

If you’ve enjoyed or benefited from some of the insights of this episode, leave us a five-star review on your favorite podcast platform, and let us know what you learned, found helpful, or liked most about this show!

Hello and welcome to a Weekend News episode of the Leveraging AI Podcast, the podcast that shares practical, ethical ways to improve efficiency, grow your business, and advance your career. This is Isar Meitis, your host, and in just one week, we got several different leading models from around the world: OpenAI with GPT-4.5, Anthropic with Claude 3.7 Sonnet, Grok, which we talked a little bit about last week, and Alibaba with a whole new set of models that are incredible. So that's gonna be our first main topic. For our second topic, we're gonna talk about AI training for executives, why it's important, and what can happen if you don't invest in this kind of training. And then we're gonna have a lot of rapid-fire items. Some of them are really exciting and interesting, so let's get started.

So last week we spoke a lot about Grok, and there's gonna be some additional news about Grok today, but we will start with launches from two other companies. We'll start with Claude 3.7 Sonnet. On February 24th, Anthropic unveiled Claude 3.7 Sonnet, which is available on the app, through the API, and online. It's the first model in the world that combines a reasoning and a non-reasoning model in one, what they're calling a hybrid reasoning AI model. So you can pick between a quick response or an extended thinking mode that gives it more time to think and provide a reasoned answer, something along the lines of what OpenAI promised us for GPT-5. Now, I'm not sure it's exactly the same, because you still need to pick whether you want the regular model or the extended thinking model, and I think the idea of GPT-5 is that it will decide on its own, but it's still a step in the right direction of one model that does both things. Now, the other interesting thing is that the pricing for the model remained unchanged compared to Claude 3.5, despite the fact that it provides significantly better capabilities.
And that's actually really good news, and it's very, very different from what's happening at OpenAI; we're gonna talk about that in a minute. Now, early testers, including people at Cursor, Cognition, Replit, and Canva, have reported significant improvements in code generation, handling complex code bases, and producing production-ready code with fewer errors. As you probably know, Claude 3.5 is the leading model for most developers in the world today, across all the development platforms I mentioned before, and this will just extend the gap that Claude has across development solutions in the world, maintaining their leadership in that field. In addition, Anthropic also introduced Claude Code, which is a command-line tool for agentic coding. It's currently only available as a research preview, but I anticipate it will be widely available sometime very, very soon. And in addition to all of that, they have improved their safety, which has always been at the core of what Anthropic is trying to do. The new model reduces unnecessary refusals by 45% compared to the previous version, while maintaining the strong security standards they have defined before. So fewer false positives, which is good for everyone. Overall, a very good model that is being highly embraced, mostly by the coding community.

The second big announcement, which we got only three days later, was highly anticipated. We knew it was coming because Sam announced it on X before OpenAI released GPT-4.5, which is their largest frontier model. This is the Orion family of new versions of products. As Sam suggested before, it's gonna be their last non-reasoning model, and as I mentioned, GPT-5 is going to be a unified solution that will bring the o-model thinking family together with the traditional GPT. But this was more of an interim solution.
If you remember, Orion was supposed to be released long ago, and there were a lot of rumors that OpenAI could not get it to produce significantly better results than GPT-4o, and that's why they didn't release it. Well, they're claiming right now that GPT-4.5 has significantly fewer hallucinations than GPT-4o and slightly fewer than o1. In addition, several different researchers are saying that GPT-4.5 is slightly better than GPT-4o across virtually most categories. But there is big disappointment with 4.5 among people who actually have access to it and started using it, and we're going to talk about that in a second. A lot of people are saying it's not much better than GPT-4o, and it's definitely not better than o1 and o3. And it is 50 times more expensive to use than GPT-4o through the API; it's 70 times more expensive than o1-mini on the input side of the API, and 15 and 30 times more expensive than these models respectively on the output side of the API. Meanwhile, many people are saying that it's definitely less capable than o1-mini on coding tasks, and some other tasks as well. Even Sam Altman himself acknowledged on X that they're having issues with this model, and now quoting: "GPT-4.5 pricing is unhinged. If this doesn't have enormous model smell, I'll be disappointed." I don't know if that's an acknowledgement of their failure or just trying to get some love for the fact that it's such a big model, but the reality is that the pricing doesn't justify the results it's providing. So the ROI is just not there; you're not getting enough benefit for the significantly increased price. Sam also said that they're running out of GPUs because of the size of this model, and that's why they're slowing the deployment.
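To make the API pricing gap concrete, here is a quick back-of-the-envelope comparison. The per-million-token prices below are the list prices as I understand them from around the GPT-4.5 launch; they change over time, so treat them as illustrative assumptions rather than current pricing:

```python
# Rough API cost comparison. Prices are per million tokens, taken from
# the list prices around the GPT-4.5 launch -- they change often, so
# treat these numbers as illustrative assumptions, not current pricing.
PRICES = {  # model: (input $/M tokens, output $/M tokens)
    "gpt-4.5": (75.00, 150.00),
    "gpt-4o": (2.50, 10.00),
    "o1-mini": (1.10, 4.40),
}

def workload_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one workload on a given model."""
    price_in, price_out = PRICES[model]
    return (input_tokens * price_in + output_tokens * price_out) / 1_000_000

# Example workload: 2M input tokens and 0.5M output tokens per month.
for model in PRICES:
    print(f"{model}: ${workload_cost(model, 2_000_000, 500_000):,.2f}")
```

Under these assumed prices, the same monthly workload costs $225.00 on GPT-4.5 versus $10.00 on GPT-4o and $4.40 on o1-mini, which is exactly why API-heavy users are balking at the new model.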
So right now the model is only available to people at the $200-a-month level, and while they're saying they're gonna release it to everybody else soon, it might have a lot of limitations because, as I mentioned, Sam is saying they're running out of GPUs to run this enormous model. My gut feeling tells me that they released 4.5 because they felt they had to. We had DeepSeek in January, then Grok last week, Sonnet 3.7, and Qwen from Alibaba, so they felt the pressure from their biggest competitors to release something right now, because GPT-5 is not ready yet. I think there are gonna be a lot of people really disappointed as they deploy more of GPT-4.5. If you're just a regular user of the chat, it probably doesn't make a big difference to you, because you're not going to see big differences when you use the chat, and the cost of the API doesn't make any difference to you. But anybody who uses it through the API probably just won't use it, and that's gonna be, I think, a failed launch from OpenAI.

Now, the good news from OpenAI this week is that they've expanded the reach of deep research. Deep research is incredibly powerful, and it's now available on multiple different platforms. It was initially released only for people with the $200-a-month license, and now it is available to any paying member with a limit of 10 deep research queries per month. So that's not a lot, but it's definitely better than zero. I've already tested it several times and it works extremely well. So if you're doing any kind of research, trying to find information across the net and summarize it in a productive way, this is gonna be a very helpful tool for you. And if you have the $200-a-month version, you will get 120 queries instead of 100 moving forward. So that's good news from OpenAI.
Now, I think they're doing this (a) because they promised to eventually release it to everybody with some limitations, but also because they're facing significant competition across the board. In addition to OpenAI, Google has the same tool in their regular pricing model, DeepSeek has it for free, and You.com has had it for free for a very long time, and theirs is probably the best free tool out there. Grok has an incredibly capable equivalent that I actually really like using, called Deep Search rather than deep research, but it does the same thing; right now it's free, though it is going to cost money whenever they decide to flip the switch. And Perplexity has a free one that also does something similar, but from my personal testing, while it's faster, it also has a lot more hallucinations than the others. So there's a lot of competition, and it makes perfect sense why OpenAI is releasing it to a broader audience.

Now, we talked a lot about the release of Grok 3 and it jumping to the first spot in the Chatbot Arena, and xAI promised to release voice mode in the next few weeks. Well, it took a few days, and they released voice mode. Live voice mode in Grok is very different from live voice mode in all the others. If you remember, OpenAI did not release their live voice mode for about six months after they first announced it, because they couldn't control exactly what it was saying and they wanted to align it first. That does not seem to be the case with xAI. They have released the Grok voice mode with multiple personality options you can select from, and they are deliberately defying the conventional AI-assistant behavior of keeping it very professional. It has an unhinged mode that can basically say or do anything you want: it can escalate to screaming, it's been insulting users all over the internet, and it's even making weird, strange, long horror-movie-type sounds. So, not very friendly, but it's a choice, right?
You can choose that mode. They also have a sexy mode, and if you go and search the internet for Grok sexy mode, you'll see that there's very little or maybe no censorship whatsoever on what sexy mode will say. They also have an unlicensed therapist mode. So all of these modes are available in this model. Now, this is not surprising when the company is run by Elon. If you remember, Elon bought X, the Twitter company, in order to open it up as the tool for free speech with very little or no censorship, and this just makes sense because it aligns with that. This by itself is not a problem, depending on what you believe. But the fact that people were able to get very detailed instructions from Grok on how to build biological and chemical weapons, including a detailed shopping list and the process for manufacturing them, that's a whole different story. So what seems to be happening is this: while on one hand Grok was able to jump to the front of the line extremely fast compared to the time it took the other labs to get to that level of model, what it seems xAI has skipped is the more thorough alignment of the model, which is what controls what it is and isn't supposed to say. If you go back to the definitions several different companies have published of what they won't release, the leading one being Anthropic's, more or less at the top of the list is a model that will allow people to learn how to make biological, chemical, and atomic weapons. And this model did that. Since then, it was presumably fixed by the xAI team, but the fact that they released a model that can do that shows you that they're not investing a lot of time, if any at all, in alignment of the model. They're just investing in running faster than anybody else, which saves them a lot of time and money in the process and allows them to invest more in the development of the model. Is that good or bad? I don't know.
What will that do to the censorship and due diligence that other labs are doing in their process? I don't know either. Will it push them to release more models faster? Well, we kind of already see that it did. It seems that that's what's going to happen, because as I mentioned earlier, I think that if OpenAI had a choice, they would not have released GPT-4.5, because it does not provide a significant benefit and it costs a lot more. Now, in this particular case it's not a security risk, but it definitely shows that the pressure to move faster is pushing these companies to release models even if they're not a hundred percent ready.

Speaking of Grok, the release of Grok 3 has driven xAI's usage through the roof. Mobile downloads pushed Grok's app to number one, with 10x the downloads of the previous week; active users in the US surged 260% week over week, and 5x in the rest of the world. Web traffic went up from 189,000 people to 900,000 people in the US, and worldwide visits went up from 627,000 to 4.5 million. So, a huge spike in Grok usage. I am personally one of those people who did not use Grok at all when it came out. I'm not a heavy X user; I'm there just to read news about AI and follow different people, but I'm not very active on X, and so it wasn't relevant to me, and the model previously was more of an X sidekick than a model on its own. But Grok 3 is actually really powerful. I started using it across multiple things that I do, and I find it to be very, very good. Right now, I get it for free. Will I decide to pay for it once it starts being a paid version?
I'm not a hundred percent sure, but while I have the chance to test it for free, I can tell you that I already moved some of the tasks I used to do on Perplexity, as well as some of the tasks I used to do on ChatGPT, over to Grok. I can tell you that right now I'm very happy with Grok's Deep Search, which is deep research in all the other platforms, and I may continue using it, since I only have 10 of those a month on the paid ChatGPT version.

Now, in addition, Alibaba dropped another bomb when it comes to their models. On February 26th, all within this same week, Alibaba made four of its advanced AI video generation models freely available to users around the world. The new model family is called Wan 2.1, and it's extremely powerful. It seems to be almost as powerful as Google Veo and better than all the other models out there right now. And it's available anywhere around the world, including on Hugging Face, so you can start testing it and playing with it immediately. In addition, they released a lot of their regular models, not the video generation ones, as completely open source, with weights and everything. So that was a very strong move by Alibaba into the open source market, as well as the release of a very powerful video model. That drove their stock up 5% this week, on top of an impressive 66% stock rise in 2025 alone, mostly pushed by their AI involvement and the level of models they're releasing in China, which has limitations on getting access to some of the models that we have here in the West.

Staying on interesting big releases: Google has released Veo 2 on two different platforms. Their most advanced video model, which from everything I see is now the best in the world, is now accessible to users in two places. You can get access to it on the Freepik platform, that's F-R-E-E-P-I-K.
Or, if you're a little more geeky and you like to use it through the API, or through a tool that allows you to use it through the API, you can use it on fal.ai as well. fal.ai just runs through the API and you're paying for tokens; on Freepik, you're just paying Freepik. Either way, you have access to the most advanced video model in the world today. They have not released it on any Google platform yet, other than for initial testing, but people at Android Authority have found sections in the code showing controls for this video tool, so we need to assume it is going to be released as part of Gemini sometime soon. That is very exciting to me, because I use Gemini every single day, across all the Google universe and as Gemini, just the chatbot. So if we get the video generation capabilities there, I will most likely jump onto that tool, because as I mentioned, right now it is the best in the world, and if it's going to be integrated into everything I'm doing, it's gonna be fantastic. I told you last week that Google now allows you to create images with Imagen 3, their most capable image generation tool, across all the Google universe. So you can generate images in Google Docs, Google Sheets, Google Slides, Google Drive, everything Google, and obviously in the Gemini app as well. If they're gonna do the same thing with video, that's gonna be extremely powerful and will push Google forward in their race for dominance in the AI universe. I expected Google to take this position about a year and a half ago when the madness started, and I said that because they have more of everything they need in order to be successful in this, whether it's compute, talent, access to data, or distribution. All the things that are required, they own on their own, and they're proving that. Yes, it took them a while, and it was a little disappointing in the beginning, but they're leading the race as far as I'm concerned right now.
Now, DeepSeek, the company that may have fired the first shot in this new wave of models, is planning to potentially release DeepSeek R2, their next version after R1, which is the model that drove the world into a frenzy because it was a very powerful thinking model from China. The initial launch was supposed to be late May, and now they're talking about potentially doing it earlier. That being said, it seems they're running into problems with capacity. If you remember, when R1 came out, we spoke about the fact that DeepSeek is financed by its parent company, a hedge fund in China. That was good enough to develop DeepSeek, but it's not enough to run it with the amount of demand they're having right now while at the same time training the next model. So for the first time, they might be searching for external investment. There are a lot of people willing to jump in, very big companies from China, including some government funds, so I don't think they'll have an issue raising the funds, but right now they're running short on cash to do the things they want to do. And to stay competitive, they're planning to release R2 before May. That's gonna be very interesting to watch, and I will keep you updated once it comes out.

So, like the saying says, when it rains, it pours. In one week we got so much new stuff to play with, and I know this is confusing as hell to people. That takes me to the next topic, which is AI training for executives. Yesterday, I spent most of the day training people in an exec ed program at the local business school. There were 27 people in the room from about 15 different companies, across multiple industries and multiple states: people from banking, manufacturing, small businesses, law firms, et cetera. You name it, there were people there in the room. And I've been doing this for a very long time. I've been delivering these workshops.
Most of them are for specific companies that bring me in to train their company, but in many cases it's things like this, different workshops that provide training to different groups. I also run the AI Business Transformation course, which we've talked about in the past on the show, and I wanna share a few things about what's happening in that room. First of all, if you're feeling that you're left behind when you're listening to these kinds of podcasts and seeing what people are doing, don't. The reason I'm saying that is that still 70 to 80% of people are in the very early stages of figuring this out, on their own and definitely as a business-wide initiative. So if you're feeling left behind, but you're listening to this podcast and making steps in the right direction, you are doing the right thing. Just keep on going. The second thing is the importance of proper AI education versus just listening to podcasts, watching YouTube videos, or following people on different platforms. Doing is very different from seeing what other people are doing, and you need to find yourself a workshop, or somebody to lead a workshop in your company, or somewhere you can attend, in order to actually be shown these different tools and experiment with them. And that shows up time and time again. Even among people who say they are advanced users when they come into the courses and workshops I'm teaching, there's always some percentage, between 5 and 15%, who find they are learning lots and lots of new things once they start experimenting based on the things we're doing in the workshop. So what I'm telling you is: get your leadership buy-in, and if you are in a leadership position, you've gotta start moving in the direction of training yourself, as well as your leadership team, and then eventually all your employees, on specific use cases that are relevant to your company and your industry.
And learn how to implement them right now, while also running a strategic process of evaluating how AI will shape your industry, your niche, and your environment, because it might be dramatically different than it is today. We had great examples in the conversations yesterday. Some of the law firms right now make money through billable hours; that's gonna shrink dramatically. A lot of it comes from paralegal work, and paralegal work will definitely shrink dramatically. If you use tools like deep research, you can do a lot of the paralegal work in one tenth of the time it took before. If your income from paralegals shrinks by 90%, can you still run your business effectively? I don't know, but that's one of the questions law firms need to ask themselves, and there are similar questions across multiple industries.

Now, since I already mentioned law firms, I wanna mention what happens when you don't give this kind of training to your people. Morgan & Morgan, one of the largest law firms in the country, had several of its lawyers sanctioned by a federal judge because nine out of 10 of the cases they cited were made up by AI. Those lawyers went to trial citing cases that don't exist, because they did their research with AI tools without checking the facts, assuming the output would be correct, not knowing that these tools can hallucinate and make stuff up. Now, this is not the first time this has happened. In my courses, I've been showing companies a case like this from early 2023, the Air Canada case. So this is not the first time this is happening, and if these lawyers had been trained properly on how to use AI, they would have avoided this really big scandal. Lucky for them, they were not disbarred, they just got a fine, but this really embarrassing incident could have been avoided very easily, just by getting proper training for every person in every company.
And while this is a legal case, similar things can happen in every single company in the world if you don't get your people properly trained on how to use AI, what to do and not do, and how to understand its limitations.

And now to rapid-fire news. Before we jump into the regular news from the leading companies, I found something really interesting in the news this week: researchers from Princeton University's engineering department, together with researchers from the Indian Institute of Technology, revealed in December of 2024 that they allowed AI to design complex wireless chips, and it was able to do it in a few hours instead of the weeks or months it would take humans to design those chips. What they're saying is that these chips have superior performance compared to the chips designed for 5G by humans, and the design is radically different from the way humans design chips. When they started examining this deeper, they found that humans usually have templates for how they design chips; there are specific processes with defined structures when we build chips. But the AI took a different approach that looks randomly shaped, in ways humans cannot really understand. And yet in simulation, they haven't actually fabricated the chips, but in simulation, it performs significantly better. What it seems to be doing is, instead of following the patterns we follow, the AI kind of reverse-engineered the process, just trying to understand what the needed inputs and outputs are and how it should be set up to be most efficient. And as I mentioned, it designed chips that look dramatically different from those designed by humans. Now, will that work in actual real life?
I don't know if anybody knows, but what it's showing us is that when you have access to huge amounts of data and you can approach a problem without going through the current limitations that we have, exploring any solution to find the best one, AI will be able to do a better job than most of us, including on really advanced, complex problems such as designing new chips. Is that the future of computing? Most likely, but I don't know that for sure. Obviously, time will tell.

Now let's jump to a lot of news from the leading labs in the world. We'll start with OpenAI. The first item is personal news: Sam Altman became a dad, and so congratulations to Sam. Being a father is a big, big blessing, and I hope he finds lots of joy in his new son. But from that to news that actually has to do with AI. The first item is somewhat alarming, and yet I'm happy that it's happening. OpenAI busted a Chinese AI surveillance tool that was using ChatGPT on the backend. OpenAI uncovered that a GPT-powered surveillance tool, built by a Chinese operation and designed to sniff out anti-Chinese posts on Western social media, had been running on their platform. But the way they found it was actually almost by accident, when one of the coders of that platform used OpenAI's tech to debug and review the surveillance system's code. So the code review that OpenAI's tools can do is what allowed them to find it. This is not the first time, by the way, that OpenAI has found Chinese campaigns using OpenAI platforms. It has happened in the past, and I'm sure it's gonna happen in the future. What I'm glad about is that OpenAI is actively looking for these kinds of cases and blocking them. In addition, in a report on February 23rd, OpenAI claimed that it's banning dozens of user accounts around the world for using ChatGPT to facilitate scams and other malicious activities.
It is obviously easier and easier to do bad things with AI because of its advanced capabilities, and because it's been proven to be more convincing than most humans. So phishing and other scams are becoming easier to pull off if you're just using these AI platforms. And again, I'm extremely happy that OpenAI is looking for these kinds of cases and blocking the accounts that are using this technology in negative ways. Can they catch everybody? Obviously not. Can these bad actors just go out and leverage open source models? Absolutely. The open source models right now are almost as powerful as the closed-source ones, and I think that's gonna be the avenue for people who wanna do bad things with these tools. Or they can just go and use Grok, which seems not to care what people are doing with its really advanced AI capabilities.

Now, The Information had an interesting report about OpenAI at the end of the month, on February 28th. They're saying that by 2030, OpenAI expects to receive approximately 75% of its data center capacity from Stargate, the new initiative they announced together with SoftBank, compared with Microsoft, which is where they're getting most of their compute right now. So while they announced Stargate only a few weeks ago, it is now very clear that they're planning to shift their focus from relying on Microsoft to relying on SoftBank-financed compute under the Stargate umbrella. This is obviously not gonna happen overnight. They still need to raise all the money they said they're going to raise, build all those data centers, and migrate to them. But it shows that their dependency on Microsoft is gonna be dramatically reduced, while the opposite is happening as well: as we know, Microsoft is also developing their own in-house models to reduce their dependency on OpenAI.
So while they're all still saying that they're good friends and everything is good, it is very obvious where the wind is blowing in the slightly longer term.

Now, speaking of Microsoft: Microsoft is preparing its server infrastructure for a significant upgrade with OpenAI's next-generation models, GPT-4.5 and then, shortly after, GPT-5, which is supposedly gonna be released in May. Staying on Microsoft: Microsoft has announced unlimited free access to Copilot's voice capabilities and Think Deeper feature, which is OpenAI's o1 reasoning model on the backend. So you can now use both advanced voice mode and o1 for free, for all users, as of February 25th. If you need access to these capabilities and you do not want to pay OpenAI for them, you can use them for free on the Microsoft platform. They announced this a few weeks ago, but then there were limitations on how much you could use these tools; now there is unlimited usage, for free, on these platforms. This is huge if you depend on these capabilities and you do not wanna pay $20 or $200 a month. There's still the 20-bucks-a-month Copilot license you can pay for, which provides, quoting, "preferred access to our latest models during peak usage, early access to experimental AI features, and additional use of Copilot in select Microsoft 365 apps." So if that's something you're looking for, then paying Microsoft 20 bucks a month is probably worth it. My gut feeling, as somebody who has tested Copilot: I don't find Copilot that impressive. I actually find that the actual OpenAI models work significantly better than they do in the Microsoft 365 environment. So if all you need is the previous models, 4o, o1, and voice, then you can do it for free, and if you're thinking of paying, you're probably better off paying OpenAI for ChatGPT. But that's just my personal opinion.
Now, we spoke many times in the past few weeks about layoffs in the tech field. Well, Microsoft is laying off 1% of its global workforce, which is thousands of employees around the world, and that just continues the waves of layoffs in the tech field. While everybody's saying that AI is not gonna have a significant impact, I strongly disagree with that, and my personal opinion is that AI will take more jobs, at least in the near term, than the jobs it will generate. Will it generate new jobs? Probably. Will it be in the same numbers as the jobs that are gonna be lost because of AI? Maybe; I don't think anybody knows that. And the one thing I'm sure nobody knows is exactly what kind of new jobs are gonna be created. It's amazing to me that every time one of the leading personalities in this field is interviewed about this, they're very positive that it will generate new jobs and endless prosperity, but none of them has a single explanation of exactly what that is going to look like. So until I see some facts and grounding, or at least ideas of how that may happen, I'm gonna stay with my opinion that we're gonna have significant job losses in the next two to three years, driven by the wave of AI.

And from Microsoft, let's switch to Anthropic, which seems to be finalizing a massive $3.5 billion funding round that would value the company at $61.5 billion, significantly expanding from their original $2 billion fundraising target. So almost twice the money is gonna be raised compared to what they were planning when they started this round. If that actually happens, and it seems to be imminent, that will bring the total capital they've raised to about $18 billion, cementing them as one of the leading AI startups in the world right now.
Now, Anthropic has seen a huge spike in revenue this past year, reaching $1.2 billion annualized right now, based on The Wall Street Journal, but they're still operating at a significant loss, and hence the need to raise significant capital. Everybody's expecting them to release Claude 4. So just like ChatGPT, and we talked about this earlier in the show, they released Claude 3.7, which is a step in the right direction, but they haven't released a major model for a very long time, longer than all the other labs. And so everybody's anticipating Claude 4 to come out. I don't know exactly when that is; there aren't even rumors on when that might be happening, but I think it should happen soon, maybe aligned with when we get GPT-5 and DeepSeek R2, which is around the May timeframe. Another big and interesting AI announcement this week is the announcement of Alexa Plus. That was rumored about two years ago, has been in the making for a very long time, and has been held back from release for a very long time. There were a few release dates, and they kept on pushing it back because they felt it wasn't ready. Well, finally, Amazon is releasing Alexa Plus, and it relates to the previous news because it's going to be powered by Anthropic. Anthropic's Chief Product Officer, Mike Krieger, led a dedicated team that worked closely with Amazon throughout the past year to develop Alexa Plus and make sure that they can fully leverage all of Claude's capabilities in this new version of Alexa. Now, the rollout of Alexa Plus is supposed to start happening in the next few weeks. There've been a lot of cool demonstrations by the Amazon team in this announcement, and you can go and check it out. It will be a completely different level of Alexa, but they're planning to make it a paid service unless you are an Amazon Prime member, and I would assume that most people who have Alexas are Amazon Prime members. 
And so, per the current announcement, you will get access to Alexa Plus for free, or to be more specific, included in your Prime membership. I'm sure we will start seeing really interesting use cases in the next few weeks, but this will make Alexa significantly smarter and will allow you to do dramatically more things than you're doing right now. I'm a heavy Alexa user myself, me and my family, so I'm looking forward to seeing what this will do, and I will keep you updated as I learn more from my personal usage as well as what other people are doing. Now, in the demo by the Alexa team, they showed a lot of really cool conversational skills as well as integrations with third-party applications. So you can now create a grocery list while talking to Alexa. You could have kind of done this before, but it was very clunky; now you can do this in a normal conversation, and then order the food via Amazon Fresh, order food through places like Grubhub, hail Ubers and coordinate through your Alexa app, or find concert tickets on Ticketmaster and order them, and so on. So Alexa will be able to connect to significantly more things than it connects to right now in the real world and take actions for you following a conversation that you're gonna have with it. Do I think that's gonna make more sense than doing it through your phone? I'm not sure. I know a lot of Alexa devices also have a screen, which will make a little more sense when you can actually see the list or see where the Uber is, and so on. So I'm not a hundred percent sure how this will work or what the usage of it is gonna be, but as I mentioned, we'll start knowing in the near future, because they're planning to start deploying it in the immediate future. Let's talk shortly about NVIDIA. NVIDIA's CEO, Jensen Huang, remains highly optimistic about the company's future, despite the big drop that they've seen in their stock price when DeepSeek R1 was released. 
So if you remember, their stock took a 27% nosedive when DeepSeek R1 was released, with the market claiming that because it was trained on a relatively small number of GPUs, and even on an older model of chips, there's not going to be a big demand for NVIDIA's chips in the future. And obviously Jensen thinks otherwise. He's saying that, if anything, cheaper access to AI will require more compute rather than less compute, because it will make AI available for more use cases. I don't know if he's right or wrong, but as of right now, the company hit a new record revenue of $39.3 billion for this quarter. And not just that, they're expecting the growth to continue: the company projects $43 billion in revenue for the next quarter. Now, in addition, Jensen has revealed plans to unveil the company's next AI chipset, called Blackwell Ultra, during his keynote at the GTC conference on March 18th. So he already announced what he is going to be announcing, and that's the next generation of their GPUs, which already lead the world. So the next version is already out there. They're putting them out at a relatively high pace, and definitely at a higher pace than anybody has done before. And so they're doing everything they can in order to maintain their lead in both hardware as well as software capabilities, and they're developing a lot of capabilities behind the scenes for entire infrastructures for this industry, including the robotics industry. So while Jensen has to say what he's saying, I am a strong believer in NVIDIA's ability to keep staying ahead, at least in the near future. Switching gears to slightly different news: ElevenLabs launches an AI audiobook publishing platform, competing with Audible. 
So the voice AI company ElevenLabs, which is probably the leading voice AI platform in the world right now, officially opened its audiobook publishing platform for all authors. It allows you to create an AI-generated audiobook from any text and publish it on the platform for people to consume. Now, this is interesting because it comes shortly after ElevenLabs partnered with Spotify for AI-narrated audiobooks. It signals its aggressive push into audio content generation beyond just AI geeks like me who like to use the platform; anybody will be able to use it to create audiobooks and distribute them across multiple platforms, as I mentioned, including their own. They recently secured $180 million in funding. I know that doesn't sound like a lot compared to the big labs, but they don't need to develop a lot of the models; they can use other people's models, and the models they are developing are very, very specific to voice, so it's significantly narrower than what Anthropic or OpenAI need to do. Now, they started testing this a while back, and they're claiming that in the testing phase, the average user spent 19 minutes listening to published books on their app, which they're saying could still change as the program scales. I don't know if 19 minutes is a lot or not. I usually listen to audiobooks when I have longer drives; otherwise, I just listen to podcasts, so when I don't have a lot of time to listen, I will probably not listen to an audiobook. That's me personally, but it will be interesting to see if this actually works out for them. I think ElevenLabs will also face very significant competition. They were one of the first to release very solid and capable voice capabilities, but those are now available from most of the leading platforms, including some open source ones. I myself switched from ElevenLabs to using an open source model that can duplicate any voice and generate it very authentically. 
And so while I really like ElevenLabs and I think their tools are incredible, there's a lot of competition right now that wasn't there in the beginning. Their move into other fields might be a good idea, but it might also come back to bite them, because they're losing their focus on developing better voice models, or at least the tooling around them, now that all the other models are pretty good as well. We spoke last week about Mira Murati's startup, which we now know is called Thinking Machines Lab, and what they're trying to do, which is still very vague. But apparently they're seeking to raise $1 billion at a $9 billion valuation, and if you look at the team that they were able to assemble, they will most likely raise that amount of money, despite the fact that they don't have any clear path or any clear direction, at least based on what they've released, on what they're planning to do. It's gonna be something similar to what Ilya Sutskever is doing with SSI: people are gonna bet on the jockey versus the horse. And in this particular case, they have a very serious team, and hence they're gonna raise a significant amount of money without any product, without any clear timelines, and without any clear idea of how they're gonna do better than what other people are doing right now. So the madness around the field of AI and the leading people in this field is going to continue, and they will most likely raise the $1 billion that they're looking for. Now, if you're asking yourself why leading scientists and leading people in the leading companies are jumping ship, this is the reason, right? If you become a co-founder in a new company that is valued at $9 billion and you own significant shares in it, that is way more than you can make by being an employee, even a successful leading employee, in an already established lab. Let's talk a little bit about governments and their involvement in, or impact on, AI. 
So Qatar has entered a five-year partnership with Scale AI, which is an advanced company that implements AI-powered tools, and they're planning to use this partnership to implement AI across various government services. Scale AI is planning to develop over 50 AI-specific use cases for the Qatari government, focusing on predictive analytics, automation, and advanced data analysis to streamline government operations. I think this is one of the first, but we'll see a lot of governments move in that direction, because AI, like in any company, can allow governments to run more efficiently, which means saving huge amounts of money, which means enabling governments to potentially raise less in taxes, which has a lot of benefits as well. So I really hope to see the US government and other governments in the world follow this initiative. Now, staying on government efficiency and its relationship to AI, President Trump and Elon Musk's Department of Government Efficiency, also known as DOGE, is targeting $2 trillion in government spending cuts. Now, some of the numbers that they have unveiled on different government contracts are nothing short of staggering, with companies getting billions of dollars, and in many cases hundreds of millions of dollars, in consulting fees across multiple aspects of the government. And I am very much for reducing those fees dramatically. But while cutting government expenses dramatically is something I strongly support, and I'm actually very happy it is happening right now, they're also cutting things like the government AI safety department that was set up by President Biden. And I actually think that a government body that helps control and monitor AI safety is not just a good idea; it's a necessity. So there are good and bad aspects of this. I really hope that they will find the balance and that they will keep AI safety as a very high priority for this administration as well, but that doesn't seem to be the case. 
I think it was very obvious, from the moment Elon Musk became a part of this administration, as well as from Vice President Vance's speech a few weeks ago that I shared with you, that AI safety is not their top priority, and AI capability is the top priority of this administration. Now, on a different topic that relates to the workforce, there was a very interesting survey released this week claiming that 75% of knowledge workers are already using AI at work. But the more alarming finding is that 46% of the 6,000 people surveyed claim they're going to continue using generative AI tools even if they're banned by their employers. This push forward, regardless of whatever limitations are defined by the company, is there for two different reasons. One is that those people are claiming they are getting massive benefits that are saving them hours of tedious work per week. The other one is obviously competitive pressure: to be better employees, to be promoted, to get higher salaries, and to get ahead. And so these two things, on the one hand the personal benefits of not having to do work they don't like doing and being able to do things faster, combined with the competitive pressure, are pushing people to basically say, we don't care what our employers are going to say, we are going to keep using generative AI one way or another. In the training session that I did yesterday with executives, that question came up, and what I shared with them is that there is another survey, one that's been around for about a year, showing a big movement to bring your own AI to work. So basically, in companies that block IP addresses and access to different AI tools on company computers, employees are bringing AI to work on their phones, as an example, and they're copying and pasting stuff into that, and they're taking work home to do it on their home computers with AI, because they find it more efficient. 
This is something that employers have to know, and the solution, instead of trying to block different things, is education: teaching people how to use AI in a safe way versus trying to block their access to AI. We cannot pass a whole week without talking about humanoid robots, so two interesting pieces of news about humanoid robots. One comes from 1X, which is a company from Norway, and they released a new robot called Neo Gamma, which is a humanoid robot specifically designed for home use that can perform tasks like making coffee, doing laundry, and vacuuming. They're just beginning their limited in-home testing, and they're emphasizing that the commercialization of this is still far away, mostly because of safety reasons and not knowing exactly how that's gonna work. This is a new trend of more and more companies developing robots for the home versus robots for industry. So the biggest companies in this field, like Agility Robotics, Boston Dynamics, Figure, and Tesla, are mostly focusing on warehouse and factory applications, at least as a first step, and there are a few companies popping up who are developing robots specifically for homes. Another very interesting development in the robotics field, and I must say it's a little creepy, or maybe very creepy, was revealed this week by a company called Clone Robotics, which released footage of a prototype they call Protoclone that has a thousand artificial muscles that make it look a lot more human and move somewhat like a human. Only, in this particular demo, it's hanging from the ceiling, and it looks really creepy. But when you look at it, it looks really impressive; in the way it looks and moves, it's significantly more human-like than all the robots that we're seeing from the leading companies right now. And for everybody who's gonna watch this, it's gonna take you straight into the science fiction movies where robots look and feel like humans. 
And I assume this is exactly what they're trying to create. Now, they're planning on starting to produce these robots for sale this year, with an initial production run of 279 units of what is gonna be called Clone Alpha. Now, will this be more successful than the traditional robots? And when I say traditional, I mean the current humanoid robots, stuff that was developed in the last two years. Time will tell whether this is gonna be more successful or less successful, but it's definitely an interesting, different approach, and I will keep you updated on how this moves forward. I hope you found this episode and other episodes of this podcast valuable. If you have, please share it with other people who can benefit from it. Literally pull up your phone right now, click the share button, and share the podcast with a few people that you know. If you have not rated this podcast yet, then please do that as well: while you're in the app, go and give us whatever star rating you think we deserve, hopefully five, and then write something in there to let people know why you're listening to this podcast. We'll be back on Tuesday with another fascinating how-to episode, where we're going to share a specific AI implementation use case for business people, to allow you to learn how to actually implement AI in your business. And until next time, have an awesome weekend.
