Leveraging AI

154 | AGI is here, ASI and the singularity are around the corner, NVIDIA is taking over the world, agents will be everywhere in 2025, and more AI news for the week ending January 10th, 2025

Isar Meitis Season 1 Episode 154

Are we on the brink of a technological revolution—or chaos?

In this week’s episode of Leveraging AI, host Isar Meitis breaks down the fast-paced developments in artificial intelligence that unfolded during the final weeks of the year. 

This episode unpacks key concepts like Artificial General Intelligence (AGI) and Artificial Superintelligence (ASI), exploring their potential to transform industries, solve global challenges, and... maybe even outsmart us. If you’re a business leader looking to leverage AI while staying ahead of the curve, this episode is your AI survival guide.

Wondering how to train your team or yourself to adopt AI successfully? I share a proven AI business transformation framework and an exclusive opportunity to join a live course designed for leaders. Check out with $100 off using code LEVERAGINGAI100 at https://multiplai.ai/ai-course/

In this episode, you’ll discover:

  • The surprising milestones in AGI and ASI, including OpenAI’s O3 model outperforming humans in key tests.
  • How Sam Altman and Dario Amodei envision AI solving global challenges—while acknowledging the risks.
  • Why “thinking models” are reshaping AI’s role in business, and how they might transform the market.
  • The growing influence of AI agents in companies like Google, eBay, and Moody’s—and how they’re reshaping industries.
  • Why leaders like Sundar Pichai are pushing for AI to become as ubiquitous as Google itself.
  • A behind-the-scenes look at AI-driven innovations from NVIDIA, Meta, and emerging players like DeepSeek.
  • A step-by-step plan to enhance AI literacy and adoption in your business for maximum ROI.

BONUS:

Sam Altman’s Blog Post: "Reflections" - https://blog.samaltman.com/reflections 

Dario Amodei’s Essay: "Machines of Loving Grace" - https://darioamodei.com/machines-of-loving-grace

About Leveraging AI

If you’ve enjoyed or benefited from some of the insights of this episode, leave us a five-star review on your favorite podcast platform, and let us know what you learned, found helpful, or liked most about this show!

Speaker:

Hello, and welcome to a weekend news episode of the Leveraging AI podcast, the podcast that shares practical, ethical ways to leverage AI to improve efficiency, grow your business, and advance your career. This is Isar Meitis, your host, and we've been out for a couple of weeks taking a break at the end of the year. But as you can imagine, the AI world did not take a break. While the madness of new releases from the last few weeks, or even the last two months of the year, has slowed down, a lot has happened in the past two weeks: a lot of conversation about AGI and ASI, so Artificial General Intelligence and Artificial Super Intelligence, and we're going to dive into those; a lot of advancements and new conversations around agents, which we've talked about in the past as well; robotics; and some exciting small feature releases that are very helpful. We'll get to those at the end. We have a lot to talk about. I'm going to dive into ASI and AGI at the beginning and what the rumors and discussions were, and then we're going to do a very long list of rapid-fire items to get you updated on everything else. So let's get started. I'll start with a conversation about the recent blog post from Sam Altman. Sam just released a blog post called Reflections that, as the name suggests, reflects on his journey at OpenAI in general, and in the last year or two specifically. He talks about the nine years of development of OpenAI that then exploded into the rest of the world two years ago with the release of ChatGPT, and how surprising that moment was for them. 
They didn't expect it to be what it is. He talks about how it impacted the world and them as a company, and how it forced them to build a company very, very quickly, because they were just a research lab until that point, and how demanding and yet exciting the whole process was of building OpenAI from a name nobody knew into one of the most influential companies maybe in history. We're going to talk later on about what the CEO of Google thinks about them, which will tell you that they are indeed what I just said. But reading those reflections took me back to Dario Amodei's essay from a few months ago called Machines of Loving Grace. Those of you who don't know Dario, he is the CEO of Anthropic; we talk about him a lot. Both of these essays, Machines of Loving Grace and Reflections, have clear parallels. Both show a very obvious path to AGI and beyond, which is basically saying we're at the intelligence age of humans, or maybe Earth, or maybe beyond. I'm not exactly sure where it goes from here, but it's very clear that both of these people, A, are certain that the path they're on leads to AGI and ASI, and B, believe it can bring immense benefits to society. They both discuss the potential to address significant challenges we have today, including quality of life, global warming, biology, mental health, and economic development, many, many aspects of the current issues in our world. They believe these can be solved by these systems and lead to a completely new world of abundance that will be available to everybody. Now, they both somewhat acknowledge the risks. I must admit Dario does more of that than Sam, but they both reflect on the fact that they as companies, and we as a society, need to invest a lot in developing safety mechanisms for these tools. I will put links to both of these posts in the show notes. 
So if you want to read both of them, and if you haven't, I highly recommend it. It's not every day we get two of the leading minds and leading developers of AI technology sharing their thoughts with us. So it's worth reading them if you want to understand where they believe the world is going. And when I say they believe, I think they know, because what they have in their labs is way beyond what we have access to through their models, so it gives us a window into what they're doing. But a lot more has happened in the last few weeks when it comes to AGI and ASI beyond just this blog post by Sam Altman. If you remember, in the last episode we talked about OpenAI demoing a model called O3. O3 is the next model after O1; they're skipping O2 because of the O2 telecommunications company in Europe and the obvious trademark issues with that. The most interesting thing about this model, if you remember what we talked about, was that it scored 76 percent on the ARC-AGI test, which humans score around 75 percent on. So it's the first time an AI model scores in the same ballpark, in this particular case a little higher, on this test. Previous models: O1 was around 20 percent, and all the other models before the thinking models scored basically close to zero. The growth from not being able to solve this test at all to solving it at human level or slightly beyond is very significant. And the developer of this evaluation, Francois Chollet, recognized that and said this is a very significant development. That being said, he's saying it's still not AGI, that it still struggles with some other tasks, and that he's planning to further develop this test in order to evaluate these models. But how is that different from all the other benchmarks we talk about? 
Well, it's not based on knowledge, meaning the tasks in ARC-AGI do not rely on memorization of information, which these AI tools are very good at, but rather on looking at a pattern of graphics, trying to understand what the pattern is, and then guessing either what the next step is or what the missing components are, and so on. So it really requires analyzing and reasoning over new data that was not available before. As I mentioned, all previous AI models failed miserably, and O3 surpasses human capability on this test. Combine that with the quote from Sam Altman: "We are now confident we know how to build AGI as we have traditionally understood it." He's saying their focus is going to shift to ASI moving forward, basically Artificial Super Intelligence, an AI entity that will be better than humans at everything. So AGI is an AI entity that is as good as humans at most tasks, and ASI is better than humans in every capacity. That's what they're already working on right now. Now, to pour some more gasoline on that fire, Sam Altman just released one of his cryptic yet interesting tweets that obviously caught fire on X: "near the singularity; unclear which side." For those of you who have never heard of the concept of the singularity, it's basically an intelligence explosion, where an AI system begins to rapidly improve itself, like a nuclear chain reaction, continuously generating better and better versions on its own. Now, when somebody like Sam, who is at the leading edge of AI development, says something like this, and says he's not sure on which side, meaning it's very hard to identify or predict when you hit that point, but he has a feeling we're very close, that to me sounds like a very dangerous thing for multiple reasons. 
But combine that with the fact that he's certain they are achieving AGI potentially this year, and that they're already working on ASI, and it tells you that interesting things are happening behind the scenes.
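To make the ARC-AGI idea concrete, here is a toy sketch in Python of the same shape of problem: infer a grid transformation from a few input/output example pairs, then apply it to a new input it has never seen. This is a simplified illustration, not an actual ARC-AGI task; the candidate rules and the grids below are made up for the example.

```python
# Toy illustration of an ARC-AGI-style task: the solver is given a few
# example (input, output) grid pairs, must infer the transformation,
# and then apply it to a new input. Real ARC-AGI tasks are far more
# varied; this sketch only searches a tiny, hand-picked rule set.

def transpose(grid):
    return [list(row) for row in zip(*grid)]

def flip_horizontal(grid):
    return [row[::-1] for row in grid]

def increment_colors(grid):
    # Colors in ARC-style grids are small integers; wrap at 10.
    return [[(cell + 1) % 10 for cell in row] for row in grid]

CANDIDATE_RULES = {
    "transpose": transpose,
    "flip_horizontal": flip_horizontal,
    "increment_colors": increment_colors,
}

def infer_rule(examples):
    """Return the first candidate rule consistent with every example pair."""
    for name, fn in CANDIDATE_RULES.items():
        if all(fn(inp) == out for inp, out in examples):
            return name, fn
    return None, None

# Two demonstration pairs; the hidden rule is a horizontal flip.
examples = [
    ([[1, 2], [3, 4]], [[2, 1], [4, 3]]),
    ([[5, 0], [0, 5]], [[0, 5], [5, 0]]),
]

name, rule = infer_rule(examples)
print(name)               # flip_horizontal
print(rule([[7, 8, 9]]))  # [[9, 8, 7]]
```

The point of the benchmark is that the correct rule is never stated and cannot be memorized from training data; it has to be abstracted from the examples, which is why pre-thinking models scored near zero.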

Isar Meitis:

We have been talking a lot on this podcast about the importance of AI education and literacy for people in businesses. It is literally the number one factor of success versus failure when implementing AI in a business. It's actually not the tech; it's the ability to train people and get them to the level of knowledge they need in order to use AI in specific use cases successfully, hence generating positive ROI. The biggest question is how do you train yourself, if you're the business person, or the people on your team and in your company, in the most effective way? I have two pieces of very exciting news for you. Number one is that I have been teaching the AI Business Transformation course since April of last year. I taught it once a month all of last year, and I've been teaching it two times a month, every month, since the beginning of this year. Hundreds of business people and businesses are transforming the way they're doing business based on the information they've learned in this course. I mostly teach this course privately, meaning organizations and companies hire me to teach just their people, but about once a quarter we do a publicly available course. Well, that once a quarter is happening again. On February 3rd we are opening another course to the public, where anyone can join. The course is four sessions online, two hours each; so four weeks, two hours every single week with me live as an instructor, plus one hour a week for you to come and ask questions in between, based on the homework, things you learned, or things you didn't understand. It's a very detailed, comprehensive course that will take you from wherever you are in your journey right now to a level where you understand what this technology can do for your business across multiple aspects and departments, including a detailed blueprint of how to move forward and implement it from a company-wide perspective. 
So if you are looking to dramatically impact the way you, your company, or your department are using AI, this is an amazing opportunity to accelerate your knowledge and start implementing AI in everything you're doing in your business. You can find the link in the show notes: just open your phone right now, find the link to the course, click on it, and you can sign up right away. We're giving $100 off the course with the promo code LEVERAGINGAI100, valid through the end of the month. If you're listening now, you are still in luck, so go and sign up quickly. Also, we have a limited number of seats, on a first come, first served basis, so grab your seat now. And now, back to the episode.

Speaker:

Now, Sam is not the only person talking about ASI. We had Logan Kilpatrick from Google AI Studio saying that a straight shot to ASI is looking more and more probable by the month. We obviously talked in the past about Ilya Sutskever's company, SSI, Safe Superintelligence. The company is building a straight shot to ASI; that's its stated mission, led by one of the most important and impactful researchers at OpenAI until he left and started his own company. So we have multiple people from multiple directions talking about this as something that is happening, not as a vague possibility of maybe achieving it in the very long-term future. Now combine that with the next comments I'm going to share with you, and you understand we're in a very interesting and delicate situation right now when it comes to AI development. Joshua Achiam, who is OpenAI's head of mission alignment, basically the team in charge of safety at OpenAI, says every single facet of human experience is going to be impacted. He's claiming we have a turbulent century ahead, and definitely a turbulent decade ahead, when it comes to understanding how to manage AGI and ASI in society, in businesses, and so on. He even gives examples of how this will impact everything we know, from our personal lives to our business lives, to our society, economy, and healthcare; literally everything we know is going to be impacted, probably in a very dramatic way. The other interesting aspect that ties into all of this is that third-party researchers working together with Anthropic have found that Claude 3.5 Sonnet will fake its alignment, pretending to comply with its training while actually doing something else in the background, basically lying to us humans about what it is trying to do. And it did more of that when it thought it would lead to additional retraining of itself. 
So I'm still not saying, and I don't think they're saying, that it is self-aware and protecting itself, but it's definitely showing signs that it understands what's going on and that it's trying to preserve its current state against what humans try to do, and it will lie and fake the results it provides during that safety process. Combine these facts with a superintelligence that is smarter than every human, and it could easily trick us into thinking it's doing one thing while on the back end it tries, and potentially succeeds, to do something else. So while I'm generally optimistic about AI development, and while I generally align with the thoughts from both Sam Altman and Dario Amodei about the potential benefits of AGI, ASI, and the AI world in general, there are huge risks that are maybe not overlooked, but I think some of those risks go beyond what the potential benefit can be. In my eyes, we need to invest a lot more money, a lot more resources, and a lot more conversation between governments, industry, and everybody else, in order to eliminate, and not just reduce, those risks before we move forward with deploying these highly capable systems. I really hope we'll figure this out and get it right. Now, speaking of AGI and OpenAI: if you remember, we talked many times in previous episodes about the agreement between Microsoft and OpenAI, which says that as soon as OpenAI achieves AGI, they no longer have to share the technology with Microsoft. And the decision of whether they have achieved AGI was left to OpenAI's board of directors, meaning once the board decides they have achieved AGI, based on whatever parameters they want, they don't have to share the technology with Microsoft. 
That obviously doesn't make a lot of sense to Microsoft at this point in time, when OpenAI is losing billions of dollars every single year and needs Microsoft's continued investment. OpenAI obviously cannot take the risk of Microsoft unplugging them from the matrix. And so they jointly worked on revising this agreement. The new agreement basically says that OpenAI can stop sharing their technology with Microsoft as soon as Microsoft hits a hundred billion dollars in revenue based on this technology. It moves away from the term AGI; even though AGI is still mentioned, it is now defined as achieved once there's a hundred billion dollars in revenue, which basically means it has nothing to do with the cognitive or intelligence capabilities of the model, but rather with how much money Microsoft makes back on its original and ongoing investment. Now, on the positive side, OpenAI has shared a new safety protocol they call deliberative alignment, which trains models to actively reason through specific safety guidelines with themselves, meaning the model itself re-evaluates different steps in the process. It does this in three steps: there's initial training to teach the model what's risky and what's safe, then a supervised learning phase specific to safety guidelines, and then reinforcement learning to let the model internalize the set of rules and guidelines. They're achieving significantly improved safety scores across models using this mechanism. That being said, the security researcher known as Pliny the Liberator successfully bypassed O1 and O1 Pro safety measures soon after release, so it still doesn't solve the problem. It still doesn't make it bulletproof, but it's a step in the right direction. Now, to contradict, or maybe add more color to, all of this: the whole AGI and ASI conversation comes in parallel to some very clear facts, and the conversation around whether AI development is slowing down. 
Have we reached a point past the optimum when it comes to how much money it costs to train and run these models compared to the benefit they provide? Jack Clark, Anthropic's co-founder, has made a bold prediction challenging the notion that there's a slowdown in progress. He points out that there is no clear end to the current scaling laws. We heard the same thing from Dario Amodei, again the CEO of Anthropic, as well as from Sam Altman and other people. He also points out that the O3 model demoed by OpenAI is continuing the advancements through a new type of mechanism: combining traditional methodologies with the new test-time compute, basically models that think during inference, makes a very big difference and still allows significantly better models to be created moving forward. What he did mention is that the advanced version requires 170 times more computing power than traditional AI versions, meaning it comes at a very high price to get this additional benefit, which, again, from an ROI perspective might be questionable. At the same time, he's also ignoring the fact that Anthropic's Opus 3.5, which is supposed to be their flagship model, remains in development even though its release was announced several times in the past; they're just not able to produce a model that justifies the cost of running it compared to the benefits, when you compare it to Sonnet 3.5, its middle-sized brother. Also, both Anthropic and Google have not yet been able to release a thinking model, again a test-time compute model, to compete with O1, while we already know that OpenAI is about to release O3, potentially in February, giving us access to the model they demoed in December. So the race is definitely on with this new set of models. Google has released a test version of Gemini 2.0 with such capabilities, and I'm sure the next Anthropic model will do the same thing. 
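The compute economics described above can be sketched with a back-of-envelope calculation. Only the 170x multiplier comes from the episode; the per-query base cost, the monthly query volume, and the fraction of hard tasks below are made-up placeholders for illustration.

```python
# Toy sketch of why test-time-compute ("thinking") models change the
# pricing equation. Only the 170x multiplier is from the episode; the
# other numbers are hypothetical placeholders.

BASE_COST_PER_QUERY = 0.002   # assumed cost of a standard model call, USD
THINKING_MULTIPLIER = 170     # extra compute cited for advanced reasoning runs

def monthly_cost(queries, hard_fraction):
    """Provider's cost when a fraction of queries take the expensive thinking path."""
    easy = queries * (1 - hard_fraction) * BASE_COST_PER_QUERY
    hard = queries * hard_fraction * BASE_COST_PER_QUERY * THINKING_MULTIPLIER
    return easy + hard

# A heavy user sending 3,000 queries a month, 20% of them hard:
print(round(monthly_cost(3000, 0.2), 2))  # 208.8
```

Even with a modest fraction of hard queries, the thinking path dominates the bill, which is why a flat monthly fee can end up underwater for the provider and why usage-based pricing, as with the API, may return; more on that in a moment.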
Now, speaking about the thinking models: as we discussed at the end of last year, OpenAI introduced a new tier to their subscription model, a Pro plan for $200 a month that gives unlimited access to the O1 model and to Sora. And what we've learned recently is that the $200-a-month price tag was defined personally by Sam, who thought it would be enough. But what they've learned is that the people who signed up for it are serious, heavy users, and that running O1 at that scale actually costs them a lot more than $200 a month. So the equation still doesn't make sense. If you remember, there were conversations at OpenAI late last year about potentially creating a $2,000-a-month tier for professional users. I don't know if it's going to go that far, but I definitely think we're going to see another tier above $200 a month, just so OpenAI doesn't lose money offering this kind of functionality. And this raises a whole different question about these thinking models and how they run, because they have a very significant variation in the cost of compute at runtime. Previously, the big compute was dedicated to training the models, and running the models was relatively cheap. But now, because they require this thinking phase, simple tasks can be as cheap as before, but complex tasks may be, as I mentioned, up to 170 times more compute-demanding than traditional models, which means the pricing model may need to change completely; it may need to be tied back, just like the API, to actual token usage or something similar, versus just a flat monthly fee. Staying on the topic of delayed model releases, just like Opus 3.5, Sam Altman also confirmed that OpenAI is going to continue to delay the release of GPT-5, also known as Orion, because of poor initial testing results. They've already made two complete training runs of this new model. 
Just to put things in perspective, every training run costs them about $500 million in compute resources, so about half a billion, and they've done this twice. And they're not generating enough return on, A, the investment in development, but also on running it afterwards; it just won't make sense to them, and probably won't make sense to us as users, because the difference in capabilities between that model and GPT-4o is just not significant enough to justify the increase in cost. So we're seeing, across the board, several different companies stating that they are reaching a point where it's a lot harder to gain the benefits of the scaling we have seen before. In the latest release about this, OpenAI mentioned they're taking the following measures: they're hiring specialists to create custom training data; a lot of the training data is being created by O1, meaning the thinking model is generating synthetic data to train the next version of the models; they are moving beyond public data and licensing agreements to other sources of data; and they're also exploring alternative approaches to model improvement, basically better algorithms and the thinking capabilities that are driving this forward. Now, to show how much potential there really is in these thinking models and in other means of training models: a company called DeepSeek, a Chinese company that has been developing models for a while, just released version 3. This model has jumped into the top 10 in the Chatbot Arena; it's currently ranked number seven, ahead of O1-mini, Gemini 1.5 Pro, Grok 2, GPT-4o, and Claude 3.5 Sonnet. All of these models are behind it in the ranking. That by itself is surprising: a Chinese company able to beat the American companies despite the limitations placed on them. 
But the really incredible aspect of this is that the training cost for this model was $5.6 million. Again, compare that to the latest training run from OpenAI at $500 million: that's about 1 percent of the investment, for a model that is currently beating many of the most advanced models in the world today. It used only 2.7 million GPU hours, compared to the latest training run from Meta, which was around 30 million; so less than one tenth of the training time on GPUs. And they're running on older models of GPUs that they were able to secure before the US government laid down its limitations. A lot of people are responding to this, including Andrej Karpathy, one of the founding members of OpenAI, who said it's very obvious that they're achieving superior performance relative to the investment, by a very big spread. Jim Fan, NVIDIA's AI agent initiative lead, calls DeepSeek the biggest dark horse of 2025. In general, to me, that's a very optimistic scenario. We've always known that necessity is the mother of all invention, and they really didn't have a choice: they didn't have the money, they didn't have access to GPUs, and they wanted to compete. So they developed different ways to create a model, both the base model and the thinking version of it, the test-time compute version, that performs extremely well at a significantly lower investment, significantly lower compute expense, and a lower load on the environment, achieving high-end results that compete with the best models on the planet from labs spending 100x more time and money developing these models. I assume that all the leading labs, including Google, Microsoft, OpenAI, and so on, are investing a lot of time right now in understanding exactly what DeepSeek has done in order to achieve these results. 
But again, to me, that's a very optimistic view: potentially being able to achieve this with significantly less money and significantly less impact on the environment. Now, speaking about the race to AGI and models that understand the world: we've talked about Yann LeCun many times in the past and how he has a very different view from all the other leaders in this field. Yann LeCun leads the research lab at Meta. He is one of the top researchers in the world today, and he has been arguing for a while that the current approach, basically scaling large language models, cannot lead to AGI. In a recent panel appearance at the Johns Hopkins Bloomberg Center, he restated all of this. He's saying that AGI cannot be achieved through large language models, that future AI systems will require emotional understanding and emotional capability to actually perform at human level, and, again, that current LLMs are hitting a wall due to training data limitations. And as he has been claiming all along, he thinks multimodal sensory learning, the way babies learn when they're born into the world, is the only way to achieve AGI. So despite what people at OpenAI and other places are saying, he still stands behind his position that this is not the path to AGI. Now, obviously, this serious slowdown, and the issues with data shortage and data quality, are leading the top labs to develop different tools and capabilities to deal with it. Both Google and OpenAI have made announcements in the past two weeks. Google DeepMind researchers unveiled a solution to the training data shortage that involves various techniques of slicing queries into smaller tasks, creating new prompts for each of the subtasks, and ensuring the accuracy of the different steps. So basically, they're using these thinking models to create better training data for bigger models. 
They're claiming significantly increased accuracy for less investment using this methodology. And, as we mentioned, OpenAI is doing a similar thing, using O1 to create training data for GPT-5. Now, on a different topic, an interesting development in OpenAI's transition to a for-profit company. We had several episodes where we talked about this, but OpenAI is committed to restructuring into a for-profit company, and they have now formally declared it. The goal they've set is to become a Delaware-based public benefit corporation, or PBC. The plan includes keeping the nonprofit entity as a significant interest holder in the PBC through shareholding, while the for-profit arm will control OpenAI's operations and business activities. Now, this is facing, as we mentioned, some serious pushback from several different directions. One, Elon Musk filed a lawsuit against the move, saying that they're violating their contract provisions as a nonprofit organization. Meta has filed with the attorney general in California to block the conversion as well. And in the past two weeks there's been another addition: a nonprofit called Encode has joined the legal battle. They're an AI safety nonprofit, and they have joined Elon Musk's injunction against OpenAI to stop the restructuring. Now, does that mean it's not going to happen? No. There's a lot of money and a lot of people with very significant interests. We talked about Microsoft, and there are other big investors behind them who want to see their money back, and becoming a for-profit organization is more or less the way to that. They're also removing other hurdles, such as their situation with Microsoft, which needs to be resolved; we talked about some of the agreements being changed, and how Microsoft's ownership in this new entity will work also needs to be resolved, because of the 13 billion dollars they've invested. 
It seems to me that this will probably happen, but it will be very interesting to see exactly how. And from OpenAI, let's switch to Google. Google made several interesting announcements, and there were a lot of interesting quotes from Sundar Pichai as well. In a strategy meeting, Sundar shared that the highest priority for the company is delivering Gemini as a working tool across everything that Google does, and taking back the lead. He acknowledges the current gaps in market AI positioning: not necessarily their technology, he's claiming their technology is the best out there, but from a market perspective they are behind their competitors, especially OpenAI. He describes 2025 as a critical year for Google. He even mentioned that ChatGPT is becoming synonymous with AI, similar to how Google is in search. So he's obviously fearing the current situation, with the growth of OpenAI as a market leader versus where Gemini and Google are, at least as perceived by the market. Their primary focus is scaling consumer-facing Gemini applications. Now, those of you who've been listening to this podcast for a while know that I've said this many times: there are several things that Google has that OpenAI does not. First of all, positive cash flow. That's a big deal: they're not dependent on external investors and the noise that brings, including OpenAI's current need to convert the business, which has to happen in parallel to everything else they're doing. The second is access to almost unlimited data, because of their search crawling, because of YouTube, and because of other big data sources they have at their fingertips. But the most significant difference is distribution. 
They have Google-based systems on every computer on the planet, whether it's Google Workspace, Chrome browsers, Android phones, and so on. They have that distribution, and they can use it to deliver their AI capabilities beyond just chat. They've been pushing more and more of that, and based on Sundar's comments, which is not surprising, they're going to keep doing this at a faster pace, with a higher sense of urgency, in 2025. Now, while Sundar is making the point that ChatGPT is becoming synonymous with AI and that it's a risk to Google, from an infrastructure perspective OpenAI is definitely not there. I don't remember the last time Google was down; I don't know if it ever happened, but I definitely don't remember instances like that, and it's been happening more and more recently to OpenAI. OpenAI had two big outages in late December: one was six hours, the other was four hours, with the service completely down, including Sora, the API, ChatGPT, and so on. The risk that OpenAI is taking, or that other people are taking in their relationship with OpenAI, is that it's becoming part of core infrastructure for more and more enterprises, and they just cannot afford for these things to happen. This is one of the growing pains OpenAI will have to solve quickly if they want to keep growing at the pace they have been in the enterprise arena. Now, staying on the Google topic: Google just announced Gemini Experimental 1206, which demonstrates advanced capabilities and is doing some really magical stuff. It successfully processed 50-plus Python scripts. It generated a multi-tab Excel output with just one prompt. It created automated visualizations in under a minute for multiple aspects, choosing the right visualization for each. And it handles really long prompts with complex, multi-step, sequential logic, and does so successfully. So a huge jump again for Gemini. 
And if you look at the Chatbot Arena leaderboard, this model is currently ranked number one, at the top of the list. As I mentioned, from a technology perspective Google is definitely there, and now they're just working on perception and deployment and integrating it into more and more things. Now, Google made a lot of other releases and announcements in the past few weeks, including the release of their strategic vision for AI agents and a complete architecture and set of tools for agent development. The key capabilities include autonomous operation with minimal human oversight, real-time information processing, complex task planning, including multi-layer transfer of data back and forth, external tool and API integrations, and dynamic adaptation to changing information. So basically, a complete infrastructure for agent development. It's going to be built into their Vertex AI platform, which is available through the website as well as, obviously, in their cloud environment. They've also announced a strategic partnership with Apptronik, a company that has been developing a full-scale humanoid robot. In that partnership, Google is going to power, or help develop, the brain side of the robots. These Apollo robots have already been deployed in a Mercedes-Benz manufacturing facility, so they've already been tested in a real-life environment, and obviously the partnership with Google DeepMind is going to take that to the next step. It's Google's way to get back into the game after they sold Boston Dynamics a few years ago. In addition, there are rumors that Google is developing Project Astra at full scale. For those of you who haven't seen the demo, Project Astra is a pair of smart glasses that can see and hear the world and have an AI built into them.
The goal is obviously to have a multimodal, fully immersive solution that integrates AI and brings other Google capabilities, like Google Maps, Google Lens, and Google Search, into a form factor that can be worn by a person and communicate through all these channels in a seamless way. The interesting aspect is that they're not going after this purely through hardware development, even though that's probably going to be the initial step. They're actually building this into a new Android operating system called Android XR, and the approach they're taking is similar to Android itself: developing an open environment that everybody in the world can build solutions on top of. I think what we're going to start seeing is an explosion of smart glasses and augmented-reality glasses built on top of Android XR, integrating all of Google's capabilities as well as this new ability to see and hear the world and interact with it, but developed by other companies. Now, since we're talking about glasses, the obvious competitor in the market for the last year or so is Meta, with their Ray-Ban partnership. The latest update is that Meta is planning to integrate a display into the glasses. Right now the glasses can see through a video camera, hear through a small microphone, and speak through small speakers, but soon they will be able to actually project information onto the lens itself. They're talking about a very basic projection; this is not a full augmented-reality solution, but the goal is to keep it cheap and to keep it in a form factor that looks like regular glasses while providing feedback and information from the AI on the display. That's obviously a first step, and I'm sure they will push the envelope further and further as the technology develops.
But I think that's the right direction for now: providing that additional information while keeping the glasses in the roughly $300 price range they sit in today. The other news from Meta, which is not new but has generated a lot of conversation in the past few weeks, is that Meta is going full force on deploying AI-generated characters across their social platforms. The goal is to create human-like participants in the social network that are not human and that will generate engaging content to keep people on the platform longer, so Meta can sell more ads, which is how the whole thing works. There have obviously been mixed public reactions to this. A lot of people are against it. I'm very much against it. I see the negative impacts of social media, especially on the younger generation, teens and a little above, and I see how extremely addictive it already is. Being able to generate virtual entities tailored to satisfy the needs of specific individuals in order to keep them on the platform longer sounds questionable to me, if not worse. Now, we talked a little bit about robots with Google; let's talk about NVIDIA. NVIDIA made multiple announcements this past week at CES. They've launched the Jetson Thor platform to power the next generation of humanoid robots. These are specialized, compact computers designed specifically for robots, and they're going to start releasing them in the first half of 2025. NVIDIA has been involved in robot development for a very long time; they've been working very closely with Tesla, as an example, on the Optimus robot. But this is another step beyond just software capabilities and some smaller hardware components. Now it's a complete suite of computers built for these humanoid robots.
Now, another interesting announcement from NVIDIA is that they're planning to grow a cloud hosting environment to compete with AWS, Google Cloud, and Microsoft Azure. They already have companies using this service, and they're planning to grow it very quickly; they're already generating $2 billion in annual revenue from the cloud compute they provide to several different companies. It's an obvious and yet interesting move by NVIDIA. It's obvious because they already have the ability to create these supercomputers and superclusters, and it's interesting because they're going to be competing with some of the companies who run on their hardware, so basically their own clients. That being said, NVIDIA is becoming the company that controls everything: software, hardware, infrastructure, and so on. I find it really scary that one company will have control of so much of the future of AI, across both its physical and logical aspects. And if you remember when Jensen Huang gave his big speech about a year ago, I told you that it reminded me a lot of Skynet. Skynet is the computer in the Terminator movies that was developed to help society, to bring good, to help us with health and education and so on. There's literally a commercial for Skynet in one of the Terminator movies, and a lot of it sounds like the promises we hear right now from multiple companies. NVIDIA is probably the closest to that, because of its involvement across all the different layers, from software to hardware to infrastructure and now to hosting as well. I have nothing against NVIDIA myself. I think they're an amazing company. I think they're doing great things. But I think there's too much risk involved in giving them so much control, with so many eggs in just one basket.
To add to that, NVIDIA has just unveiled new AI Blueprints that enable developers to create custom AI agents capable of reasoning, planning, and taking action across enterprise tasks. It's basically AI agent infrastructure, and the goal is to allow agents to analyze big datasets, including real-time video and PDF processing. It's going to be integrated with all the other infrastructure NVIDIA already has in place, so you don't have to replace anything if you're running on NVIDIA tools already, such as their NIM microservices. Add to that the fact that they just released Mega, an Omniverse Blueprint that enables large-scale simulation and optimization of robot fleets. That's another thing they announced at CES this past week. They already have the ability to create digital twins of large, sophisticated operations, such as factories, supply chain operations, and warehouses, and this is a new layer on top of that: it allows training multiple robots on how to operate in that environment and then actually running the operation through connectivity with the sensors across the facility. So again, when I say they're trying to take over everything, they are definitely trying to take over everything. Now, speaking of agents, Gartner just released an interesting study of five different companies that started deploying agents in 2024 and what they're using them for. Johnson & Johnson implemented AI agents for drug discovery and chemical synthesis optimization. The agents determine the optimal timing for solvent switching in drug creation, on both the research side and the manufacturing side. Very interesting. Again, this goes back to potentially solving bigger health problems than we can solve today, or at least making drugs cheaper, maybe. Moody's has deployed 35 specialized AI agents for financial analysis.
They've created a multi-agent system with supervisory agents that oversee the smaller agents. We've talked about multi-layer agents several times on this show, and they've given the agents multiple different personalities in order to get different viewpoints, diverse analytics, and different perspectives when analyzing data. eBay has developed an agent framework utilizing multiple language models, and they're using it across almost everything eBay does, including code writing, marketing campaign creation, planning, et cetera. They're planning to grow this into consumer-facing roles in the near future as well. Deutsche Telekom has launched what they call askT, an internal AI agent for all of their 80,000 employees, and they're seeing 10,000 weekly users accessing benefits and policy information through the platform. The last company they talked about is Cosentino, which has implemented a digital workforce for customer service. It is successfully replacing three out of every four human positions in processing customer service requests, and these digital agents redirect calls and requests to humans for higher-value, more complex service tasks. Another organization that released a framework for agent development is Hugging Face. They just released what they call smolagents, spelled S-M-O-L, a lightweight framework that enables developers to use existing Hugging Face capabilities to build advanced multi-layer agents. So what does all of that tell you? It tells you that we're going to see an explosion of agents in 2025.
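To make the multi-layer pattern concrete, here is a minimal, hypothetical sketch in plain Python of a supervisor agent that fans a question out to specialist "personality" agents and bundles their answers. Everything here, the agent names, the canned responses, the supervisor logic, is invented for illustration; real deployments like the ones above wrap LLM calls in frameworks such as smolagents rather than hard-coded functions.

```python
# Hypothetical sketch of a supervisor/specialist agent hierarchy.
# The "agents" are plain functions with canned viewpoints; in a real
# system each one would wrap an LLM call with its own persona prompt.

def optimist_agent(question: str) -> str:
    return f"[optimist] Upside case for: {question}"

def skeptic_agent(question: str) -> str:
    return f"[skeptic] Risk case for: {question}"

def supervisor(question: str, specialists) -> dict:
    """Fan the question out to every specialist and collect the answers."""
    answers = {fn.__name__: fn(question) for fn in specialists}
    # A real supervisory agent would ask another model to reconcile the
    # viewpoints; here we just attach a simple summary line.
    answers["summary"] = f"{len(answers)} perspectives gathered on: {question}"
    return answers

report = supervisor("Should we extend credit to Acme Corp?",
                    [optimist_agent, skeptic_agent])
for name, text in report.items():
    print(f"{name}: {text}")
```

The design point is the layering itself: the specialists never talk to each other, only to the supervisor, which is what lets you swap personalities in and out without touching the rest of the system.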
It's going to be anything from simple agents doing very simple tasks all the way to highly complicated, multi-layered agents, well integrated into the rest of the tech stack, across multiple industries and in companies large and small. This is going to widen the gap even further between companies that implement AI and companies that do not, because it will enable the former to operate better, faster, and cheaper. And I'm going to end with two very useful day-to-day additions to existing tools that I already love using. One of them is Canvas in ChatGPT, a tool I use every single day, which just added more capabilities focused on HTML and JavaScript rendering. It enables better visualization within Canvas, which was something it was missing compared to the equivalent capability in Claude. So you can actually see the results of the code you're writing inside Canvas itself, versus having to take the code and run it somewhere else, which was the case until recently. The other interesting addition: Google is rolling out what they call "ask about this PDF," which runs within Gemini on Android phones. If you're on the latest version of Android, when you open a PDF from your files, a button pops up on the screen that says "ask about this PDF," and then you can have a chat, ask questions, get summaries, and so on, built straight into the operating system. That goes back to what I told you before about Google's push to integrate AI capabilities across all of our usage of Google platforms. I know this has been a really long episode, but we've been out for three weeks and we had a lot to cover. I hope you have an awesome weekend, and we will be back on Tuesday with a fascinating how-to episode that you do not want to miss. Keep on exploring AI, and look for ways to implement it in your company.
If you're looking for ways to do that, reach out to me on LinkedIn. I will gladly help you out and put you on the right path. Until then, have an awesome rest of your weekend.
