Leveraging AI

158 | ChatGPT Operator controls your browser, Perplexity assistant controls your phone, Stargate to invest $500B in AI infrastructure, AI develops new drugs in months instead of years, and more exciting AI news from the week ending on Jan 24, 2025

Isar Meitis Season 1 Episode 158

Are AI Agents and Robots Taking Over Jobs—and Extending Our Lifespan?

What if AI agents could book your flights, manage your work, and even double the human lifespan? From jaw-dropping advancements in autonomous agents to robots sprinting at Olympic speeds, this week’s AI news is both thrilling and unnerving.

Want to transform your business with AI? Don’t miss the AI Business Transformation Course. Use the promo code LeveragingAI100 for $100 off the next cohort starting February 17th. https://multiplai.ai/ai-course/ 

In this episode of the Leveraging AI podcast, we dive into OpenAI’s latest release, "Operator," a groundbreaking tool that mimics human actions to complete tasks on your browser. We also explore Perplexity’s AI assistant, revolutionary breakthroughs in robotics, and predictions from leading AI thinkers about a world where humans and AI coexist in the workplace—and beyond.

In this session, you’ll discover:

  • How OpenAI’s "Operator" redefines the role of personal and professional assistants.
  • The growing impact of robotics on industries—and what it means for blue-collar jobs.
  • How AI breakthroughs might enable humans to live beyond 150 years.
  • What business leaders can learn from Salesforce’s CEO about adapting to an AI-driven workforce.
  • Real-world AI applications transforming education, healthcare, and workplace efficiency.
  • The challenges facing AI adoption, from regulation to trust, and how to navigate them.

 If you found this episode valuable, please rate, review, and share it with others who want to stay ahead in the AI revolution. Don’t forget to connect with your host, Isar Meitis, on LinkedIn for ongoing insights into leveraging AI for growth and innovation.

About Leveraging AI

If you’ve enjoyed or benefited from some of the insights of this episode, leave us a five-star review on your favorite podcast platform, and let us know what you learned, found helpful, or liked most about this show!

Hello, and welcome to a weekend news episode of the Leveraging AI podcast, the podcast that shares practical, ethical ways to leverage AI to improve efficiency, grow your business, and advance your career. This is Isar Meitis, your host, and we had a completely explosive week when it comes to AI news. We are going to talk even more about AI agents, about infrastructure for AI in the U.S., about crazy advancements in robotics, and about some incredible scientific discoveries that could potentially expand our lifespan by a lot. So lots to talk about. We'll finish with a long list of really cool features, capabilities, and models that were released this week, but we have a few big topics to dive into first. So let's get started.

The first big piece of news, which will lead us into the discussion about agents, is that OpenAI just released Operator. Operator is something they promised and discussed late last year, and they have finally released it. What Operator does is take over your browser to complete tasks. You can give it tasks such as booking your flights or hotels, ordering food, finding you a table at a restaurant, and a lot of other things. In theory it can do this on any website, because what it actually does is understand the task you're giving it, break it down on its own into subtasks and steps it needs to complete, and then use the keyboard and mouse to manipulate everything in the browser, basically like a human does. Meaning, in theory, it can do everything that we can do as humans with computers, which is most of what we do to run the world in our day to day.

Now, the reality is that right now it's more focused, at least as far as OpenAI has promoted in the videos they've shared, on personal-life stuff: as I mentioned, booking flights, ordering food, finding recipes and then ordering the ingredients, and so on. It seems very promising, and the direction is very, very clear. By the way, right now it's only available to Pro users, the people who are paying $200 a month. Once this thing evolves and it can really do things in our work, beyond just basic day-to-day stuff, it will be worth a lot more than $200 a month, because in theory it can do everything that we can do.

A few things I anticipate. One is that we'll get the capability to train it on specific tasks: basically, let me show you five times, ten times, twenty times how I'm doing this, it will learn from that, and then it will be able to do it on its own, regardless of what the steps are, which software I'm using, or how I'm bouncing back and forth between different tools. The other thing I'm anticipating is that we'll get the ability to give it guardrails, meaning to define, per software, per window, per use case, what it can and cannot do, so we have more control and can make sure it doesn't do anything catastrophic or cause whatever other damage it might. But as a first step, this is really, really promising and really, really scary. Now, a few things OpenAI has already done right: it requires your confirmation before it takes any final steps like booking your flights or ordering food, and it will require your approval or participation for things like entering passwords or credit card numbers. So there are already several different guardrails in place.
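To make that loop a bit more concrete, here is a small conceptual sketch in Python. This is not OpenAI's actual implementation of Operator, just an illustration of the observe, plan, act cycle described above, with a confirmation gate before sensitive steps; every helper in it is a hypothetical placeholder stub.

```python
# Conceptual sketch only, NOT OpenAI's Operator implementation.
# It illustrates the loop described above: look at the screen, let a model
# propose the next keyboard/mouse action, and pause for human confirmation
# before anything irreversible. All helpers below are placeholder stubs.

from dataclasses import dataclass

@dataclass
class Action:
    kind: str                 # "click", "type", "done", ...
    detail: str = ""          # what to click or type
    sensitive: bool = False   # payments, passwords, final bookings, etc.

def take_screenshot() -> bytes:
    return b""  # placeholder: would capture the browser viewport

def propose_next_action(task: str, screenshot: bytes, history: list) -> Action:
    # placeholder: a vision-capable model would decide the next step here
    return Action(kind="done")

def ask_user_to_confirm(action: Action) -> bool:
    return input(f"Allow '{action.kind}: {action.detail}'? [y/N] ").lower() == "y"

def execute(action: Action) -> None:
    pass  # placeholder: would drive the keyboard and mouse

def run_browser_agent(task: str, max_steps: int = 50) -> None:
    history: list[Action] = []
    for _ in range(max_steps):
        action = propose_next_action(task, take_screenshot(), history)
        if action.kind == "done":
            break
        if action.sensitive and not ask_user_to_confirm(action):
            continue  # the human declined; the model can try another approach
        execute(action)
        history.append(action)

run_browser_agent("Book me a table for two tomorrow at 7pm")
```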
It's also not saving and not training on any of the information it captures as screenshots, which is how it knows what's happening on the screen. So there are a few good steps in the right direction as far as safety and security. But there are still a lot of open questions about exactly what it can and cannot do, how we limit its access to things it shouldn't have access to, and so on, including anti-phishing capabilities, because what if it follows something it thinks is legitimate and then shares your information in places it shouldn't? There are a lot of open questions that need to be addressed, but it's a very strong step in the direction of real autonomous agents that can act on our behalf without any technical skills. There are multiple platforms out there today that allow you to develop many different types of agents, but for all of them you need technical skills to some extent, some more, some less, whether low-code or even no-code, but you still need technical skills. This thing requires nothing. You literally just open it, ask it to do something, and it goes and does it. And as I mentioned, I will be really surprised if it doesn't come with a "train me how to do this and I'll do it for you" capability sometime in the next few months.

In parallel, Perplexity launched a new app that is an AI assistant. The app is currently available on Android, and it allows Perplexity to access multiple apps on your phone and help you complete tasks in those apps. Examples could be getting an Uber to pick you up from a specific place at a specific time and take you to a specific location, just by saying it: I need an Uber to take me to the airport at 4:30 PM, and it will do everything it needs to do within the app to make that happen. Or finding and playing music from a movie where you don't even remember who the artist was or what the song was called, but you know it was the theme song of a specific movie; it will find the movie and play the song for you on Spotify. In addition to apps, it's also fully multimodal, meaning it can use your camera to see things in the real world and then take actions based on that, such as looking at a destination and giving you directions on how to get there, or looking at a book and giving you quotes or finding you something similar to read, and so on. Again, this is another step, in this case integrated into our phones and operating apps rather than a web browser, but the direction is clear. Very different from OpenAI, by the way: this functionality from Perplexity is free right now. So if you want to give it access to do things on your behalf on your phone and try it as a super assistant, you can download it and use it right now.

Now, to expand on the implications of these agents and ideas, there was a very interesting panel this past week at the World Economic Forum in Davos that included some big figures such as Marc Benioff, the CEO of Salesforce, Dario Amodei, the CEO of Anthropic, and many other people. They shared a lot of really interesting insights into the future. Nothing really new compared to what they've said before, but it was crystal clear from everything they were talking about that the future, and even the near future, is going to be dramatically different from what we know today.
So Marc Benioff, as an example, shared again that he thinks we're going to see a dramatic, fundamental shift in workforce composition, from CEOs managing humans to CEOs managing humans and AI agents, with probably a growing and growing percentage of AI agents. He was talking about some of the things they're doing at Salesforce. They are planning to transition support agents to sales positions, because AI support agents are doing such a great job that they need fewer people doing that work and can shift them to sales. They're also talking about several employee roles being redeployed as AI handles some of their tasks.

A few things to expand on that. One is that if a company is on a growth trajectory, if you can grab more market share, then instead of letting people go you can take people who understand the company, understand your products and services, and have experience working in your industry, and shift them into new roles, away from the roles that AI can now do for them, which is great. The other thing, on a more granular level, is that it's going to be more and more a task-oriented world rather than a profession-oriented world, meaning AI will be able to do specific tasks out of specific roles, and people's roles will shift into other things as AI can do more and more of those tasks. And that's the lens through which you need to start evaluating your business: look at the task level, figure out what AI can do today and what it will be able to do in the next two years, and how you can get to the most effective outcome, preferably by shifting employees rather than letting them go, and allowing them to do work that will enable the business to grow faster and more efficiently.

Now, at the same forum, as I mentioned, Dario Amodei, the CEO of Anthropic, was there as well, and he made some crazy predictions. One of them is that he believes AI will enable doubling the human lifespan within a decade, so within the next 10 years, allowing people to live to 150 or 160 years old, which I find very, very extreme. He also went back to his claim that he believes AI systems will surpass most humans' cognitive capabilities by 2026 to 2027. He mentioned as well that they're developing a virtual collaborator, an AI agent for workplace tasks, similar to what we just talked about from Salesforce and OpenAI and everybody else: agents that are going to run within the workplace and learn and execute tasks that will replace people doing those same tasks.

Now, the forum also discussed some challenges that may slow down this incredibly fast AI revolution. One of them is physical-world limitations: self-driving cars need to be built, and there potentially needs to be additional infrastructure to make them work more effectively. Obviously, bureaucracy and regulatory hurdles will slow AI implementation down. And then there's public trust. I work with a company that provides call center solutions. As you may or may not know, you can now have voice agents replace human agents in call centers and do most of the work pretty well, as we heard from the CEO of Salesforce. And yet many companies, many clients of my client, are refusing to let AI handle their calls on their behalf because they don't trust it yet. So there are going to be things that slow down the actual implementation.
But from a technological perspective, 2025 is going to be a year of extremely rapid change, and as these agents evolve and become more capable and safer to use, they're going to have a profound impact on literally everything we know, from our day-to-day life, like ordering food, buying clothes, looking for a restaurant, or making travel plans, all the way to our day-to-day work in the workforce.

In addition, I got to listen to a really interesting interview this week with Sam Altman on the ReThinking podcast. I get to listen to Sam Altman a lot, because he does a lot of interviews and I try to listen to all of them to see how he thinks and what he believes. But this one was a little different, because the person interviewing him is a psychologist who looks at human nature and how AI reflects on it, so some of the questions were different. The bottom line is very similar to what Dario is saying: he thinks that by 2026, or maybe 2027, AI systems will surpass most humans at almost everything. He believes that while AI will be highly capable and smarter than all of us in the immediate future, it will have a relatively gradual impact on our day to day, just because of the way the world works. He is claiming that yes, we are close to AGI, and that o3 is incredible, and that if somebody had told us three years ago that we were going to have a tool this powerful within three years, what would we have thought the impact on the world would be? He thinks everybody would have said it will change everything we know, and the reality is that very little has changed. To an extent I agree with him, because of some of the reasons I just mentioned that slow AI adoption down. But the reality is that as people learn, and as groups, governments, and industries learn how to use this, it will have a very, very profound impact on everything we know, literally everything.

Now, a few really interesting things that Sam said that are worth thinking about. One, he said that his child will live in a world where AI is smarter than humans, period, and there are not going to be any humans in the future who will ever be smarter than AI. And he's saying that's not going to be a problem, because that's just going to be the way it is. Part of the conversation was about what's important for humans in the future, and he mentioned that future work will focus more on asking the right questions rather than finding the right answers, because AI will be able to find the answers for you. He said that humans will still stay socially connected despite AI's superior capabilities, because it's our nature to stay socially connected, and we are going to appreciate human relationships even more. I agree with that a hundred percent. I teach it in my courses. I'm a huge believer that human relationships will become even more important, both in our personal lives and in our productivity in business. And the last thing he said, about the qualities people should have in order to be more successful, is that raw intellectual horsepower will become less valuable than adaptability, because AI will know things for us, and being able to use it and adapt quickly to changing tools will become significantly more important. Now, Sam sees AI as the most important and interesting scientific revolution of our lifetime, partially because it's going to enable a lot of other progress in other scientific fields.
I mentioned before that Dario Amodei thinks this can double our lifespan. Well, OpenAI, in collaboration with Retro Biosciences, a company Sam personally invested about $180 million in, announced earlier this month that using GPT-4b, a model they developed specifically for protein modification, they are now able to significantly improve a process based on what are called the Yamanaka factors. That process enables taking human skin cells and converting them into stem cells. Stem cells are the cells that are the source of everything in our bodies; basically, they can turn into any other cell. It's the first step of how babies develop: they start with stem cells that then diverge into all the other types of cells in our bodies. So being able to take an existing cell and convert it into a stem cell could open a completely new universe of healthcare solutions for people and animals. The process was known before, but using this new AI model, GPT-4b, they can do it 33 percent more effectively than any human has done so far.

This episode is sponsored by the AI Business Transformation course. I have been personally teaching the AI Business Transformation course since April of 2023. I've been teaching the course at least once a month, and in many months I've done two courses a month, but most of these courses are private, meaning companies and organizations hire me to train their people. About once a quarter, I have the bandwidth to open a course to the public, which I really love doing, because it enables literally every one of you to come and learn with me, alongside an amazing cohort of people like you: business people who are looking to learn how to implement AI successfully in their business. We just opened registration for the next cohort, which starts on February 17th, a Monday, at noon Eastern. It's two hours a week for four weeks, so eight hours with me, plus office hours every Friday to come and ask me anything you want about your progress or about anything else in AI. And because you're listeners of Leveraging AI, you can use the promo code LeveragingAI100 to get $100 off the course. Don't miss this opportunity. As I mentioned, we open the course to the public about once a quarter, meaning if you want to gain the maximum benefit from AI in 2025, and you should, come join us in February, because most likely the next course will be in the May timeframe. I really hope to see you there. Now, back to the show.

Staying on the topic of discoveries, Demis Hassabis just announced that AI-designed drug trials are projected to start in 2025. Isomorphic Labs is one of Alphabet's companies, so a sister company to Google, and they are working on drug discovery. They have partnerships with some of the largest drug companies in the world, and they're going to begin AI-designed drug trials by the end of this year, the end of 2025, targeting issues like oncology, cardiovascular, and neurological diseases. They are claiming they can shorten the discovery process by 10x, meaning instead of five to ten years, it could happen in a few months. These are all very promising ideas that could help us fight some of the biggest diseases in the world today within the next few years, which may relate to that crazy prediction by Dario Amodei about doubling our lifespan.
Now, if that's not enough, Axios has reported that OpenAI is presumably preparing to announce a breakthrough in agent technology: that they will deploy, or at least have, what they call super agents that are PhD-level on almost every topic. They're claiming that multiple sources indicate AI companies are exceeding their internal development projections and achieving AGI and beyond within this year. That obviously caught on like wildfire on X, and rumors started flying left and right, connecting it to the closed-door meeting Sam Altman is having with government officials on January 30th. The rumors basically went out of control, so Sam Altman went on X and shared the following post: Twitter hype is out of control again. We are not going to deploy AGI next month, nor have we built it. He also shared that the company has some cool stuff coming; we already know some of it, like Tasks and Operator, and maybe there's more coming, but he warned that AI fans need to cut their expectations by a hundred x.

So, contradicting ideas and concepts coming from multiple directions. But what's very obvious is that agents are here. They're going to become more and more capable and more and more available for anyone to develop and use. These agents will be able to do things that we can do right now, and that will in some cases remove jobs and in other cases enhance jobs and the happiness of employees, because you'll be able to do things that are more valuable, and it can impact some really big questions, like accelerating scientific discoveries dramatically.

Now, if you remember, last week we also talked a lot about infrastructure, so that's our next topic today. President Trump, who is now not president-elect but the actual president, announced a $500 billion infrastructure project started by three tech giants. The project is called Stargate and it involves OpenAI, SoftBank, and Oracle. The plan is to initially deploy $100 billion of investment, scaling to half a trillion dollars, all in AI infrastructure on U.S. soil. The project aims to build 20 data centers in the U.S., each at least 500,000 square feet. The first facility is already under construction in Texas, and it aligns perfectly with OpenAI's roadmap that we shared with you last week, and with their estimate that there are $175 billion in global funds currently awaiting AI project investments that OpenAI is trying to bring into the U.S.

Now, while on one hand this is promising and interesting and shows there is really serious investment and that the new administration is fully supportive, there are a lot of open questions, such as where the power is going to come from and where the cooling capacity is going to come from. What the current administration is saying is, A, this keeps the U.S. ahead of the competition, mostly China at this point, and B, it generates new jobs, potentially replacing the jobs that will be lost to AI. This particular infrastructure project is expected to generate a hundred thousand jobs. Now, that sounds like a really big number, but if you think about the fact that AI agents might be around the corner and will be able to do more or less everything we can do in front of a computer, and that robots can do more or less the rest, and that's our next topic, 100,000 jobs is not that many. But at least it's creating some jobs.
Now, going back to the positive side of things, the World Bank just did a very interesting study about AI tutoring in Nigeria. What they learned is that giving every single student an AI personal tutor, in the way they like to learn, where they can ask questions and get assistance at their own pace, made a dramatic improvement in the results of all the students participating. It basically delivered the equivalent of two years of learning in six weeks. The program outperformed 80 percent of other educational interventions in developing nations, and it showed benefits across topics like English, AI knowledge, digital skills, and so on. It even improved performance in curriculum subjects outside of the program, just because it gets the students excited about learning and being more successful.

I've said this many times before: I think the education system has made very little progress in the last hundred years. Most things are still taught by a teacher with a board behind them, in front of a class full of people, and that does not make any sense in the year 2025. While AI represents a very big risk to that system, it also represents the biggest opportunity we've had to really unlock fast learning that is tailored to the needs of the individual, both in the pace they need for the things they get stuck on and in the way they like to learn, whether it's playing games, listening to stories, reading books, watching videos, whatever works for them, or any combination tailored to that specific individual's needs, allowing them to achieve significantly better results faster. And now there's a first piece of research actually showing it: again, two years of learning progress within six weeks of using an AI tutor. Absolutely mind-blowing, and really exciting from my perspective.

Now, I told you we were also going to talk about blue-collar jobs in this episode. Well, a few interesting things happened this week as far as releases. A Chinese company has released a new version of a robot called Black Panther, so this is Black Panther version two, and it can sprint a hundred meters in under 10 seconds. As the name suggests, it looks like a panther, a four-legged animal, but it can run crazy fast, meaning faster than most people on the planet; this is Olympic-record kind of speed. The abilities of these robots are accelerating very, very fast. Take a robot like this and apply it to helping in any kind of natural disaster, where it can run on uneven surfaces and get to places very, very quickly. It's actually relatively small, about two feet high, so it can crawl under and get around things relatively easily; think of it, again, like a big cat. So that's very promising. There's always the flip side: this can be used for military purposes, and then it can be on the battlefield and do a lot of damage if it's not controlled properly. So there's always the good and the bad with each and every one of these releases.

Unitree G1 is a robot we've talked about many times before. Unitree is one of the Chinese companies developing the most advanced robots out there today. They have a big robot called H1, which was the first thing they developed, but their focus recently has been on G1, its little brother. It's about four feet tall, and they're already pre-selling it.
So if you want one, you can go and order one; the price point right now is $16,000 for the base model. That's very reasonable for what it can do. And what they just demoed is the latest version of G1 actually running and jogging across a huge variety of terrains. First of all, it can move a lot faster than the previous version, but it can also do it on side slopes, on very steep up and down slopes, and on uneven terrain with rocks and debris. These robots are becoming highly, highly capable, and they are becoming really cheap. Not yet mass-produced, but we talked last week about Elon Musk claiming that Tesla is going to manufacture hundreds of thousands of these robots in 2026, so a year from now, and potentially millions in 2027. So the mass production of these is just around the corner.

Related to that, the economist Nouriel Roubini, who became known as Dr. Doom for his gloomy economic forecasts, is warning about the rapid development of these robots and their potential impact on the blue-collar job market. He estimates that the market for these robots will reach $7 trillion by 2050 and that they will start disrupting the labor market in the next year or two, so again aligned with the timelines Elon Musk was sharing in his predictions about how many of these they're going to manufacture. If you put all of this together, the capabilities of AI agents combined with the capabilities of robots, you see a huge disruption to almost every job out there. And yes, we talked about things that will slow it down, but the pace at which this is potentially happening is a disruption nobody is ready for.

Now let's dive into some rapid-fire announcements on new features, capabilities, and other news from this week. First of all, Anthropic announced voice chat and memory features for Claude. The features that are coming, and haven't been released yet, are two-way voice conversation, similar to what Gemini and ChatGPT already have; memory of past interactions, again something ChatGPT and Gemini already have; and personalized chat capabilities, which OpenAI also already has. So all three of these capabilities are coming to Claude. I must admit I use Claude a lot and I like it for many different reasons. One of the things that drives me crazy is that it still doesn't have internet access, and yes, I know you can get it to connect to the internet through third-party tools, but I don't understand why Anthropic hasn't released that capability yet. So if somebody from Anthropic is listening, please add that functionality to your tool, because I love using it, but it's really far behind the other tools from that particular perspective.

Now, another really interesting thing that Anthropic just launched, and that is available now, is a feature called Citations. Citations lets you ground AI responses in source documents, dramatically reducing hallucinations, and it provides references to the specific sentences and passages in the original data. It's going to be available through the API for Claude 3.5 Sonnet and Claude 3.5 Haiku, either through Anthropic's own API or through Google Vertex, where you can also get access to the Anthropic platform.
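For those who want to try this from code, here is a minimal sketch of what enabling citations in the Messages API might look like. It reflects how the feature was described at launch rather than an authoritative reference, so treat the exact field names (the document content block and the citations flag) as assumptions and check Anthropic's documentation for the current parameters.

```python
# Hedged sketch: grounding an answer in a provided document with citations.
# Field names follow how the Citations feature was described at launch;
# verify the exact request shape against Anthropic's current docs.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-3-5-sonnet-latest",
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": [
            {
                "type": "document",
                "source": {
                    "type": "text",
                    "media_type": "text/plain",
                    "data": "Q3 revenue grew 12% year over year, driven by...",
                },
                "title": "Q3 earnings summary",
                "citations": {"enabled": True},  # ask the model to cite passages
            },
            {"type": "text", "text": "What drove revenue growth last quarter?"},
        ],
    }],
)

# Each text block in the answer can carry citations pointing back to
# specific passages in the document provided above.
for block in response.content:
    if block.type == "text":
        print(block.text, getattr(block, "citations", None))
```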
Just as a quick note, you can do something like that today in any chat tool you're using. I do this every time I reference information: every time I ask questions about information I provided, I ask for exact quotes, and I ask for the name of the document, the page, and the specific paragraph where it found the quote, so I can go and verify the information it's giving me. That dramatically reduces hallucinations and also allows me to verify the information very, very quickly. But having it available natively in the API is absolutely awesome.

Now, similar to this, Perplexity released what they call the Sonar Pro API, which does similar things. It allows you to get citations and customize sources in the API for the answers it gives you, while focusing on speed and affordability. So again, the problem of hallucinations is going to be reduced dramatically this year, which will give us much better trust in these tools, because right now it's hit or miss. And yes, the hit or miss right now is probably 80 percent correct and 20 percent incorrect, but there are many use cases where that is just not acceptable.

Now, staying on the topic of new models and capabilities, we spoke a lot last week about DeepSeek V3, which came almost out of nowhere and took number seven on the leaderboard of the LMSYS Chatbot Arena. Well, they just released a thinking model that integrates the ability to do test-time compute, so it thinks as it's doing the task, in combination with deep research capabilities, and that model jumped to number four on the Chatbot Arena, actually taking number one on several different aspects. I must remind you that this model was developed for tens of cents on the dollar compared with the amount of money invested in the leading models from OpenAI, Anthropic, Google Gemini, and so on, and because of that, they can provide this kind of capability at extremely low cost. You can use it through the API, you can use it in their chat, and you can also take it and use it however you want, because it's a fully open-source model that you can grab from their website or from Hugging Face, install on your servers or on your computer, and run locally with your data without sending it anywhere else.

This, to me, is a huge promise for a future in which we may not need $500 billion of investment in data centers, with the cooling and power and environmental damage that comes with them, because if they can achieve these kinds of results with significantly less compute and significantly less time, there is hope that other companies will do the same. And if competition tells us anything, that's what's going to happen. That's why I find this to be very, very good news, despite the fact that it comes from China and not from the U.S.

Going back to another piece of agentic news, Microsoft just released AutoGen 0.4, which gives users of the Microsoft ecosystem the capability to develop and integrate agents directly into that ecosystem, offering custom tools to build and deploy agents across enterprises.
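Before getting into what's new in 0.4, here is a small, generic sketch of the pattern this kind of framework is built around, and which the next paragraph describes: several worker agents running in parallel while a central orchestrator merges their output. This is not AutoGen's actual API, just an asyncio illustration, and every worker function is a hypothetical placeholder.

```python
# Generic illustration of the parallel-workers-plus-orchestrator pattern,
# NOT AutoGen's actual API. Each "agent" is a stub coroutine standing in
# for a real LLM-backed worker.
import asyncio

async def collect_data(task: str) -> str:
    await asyncio.sleep(0.1)                 # stand-in for real work / an LLM call
    return f"raw data for: {task}"

async def parse_data(task: str) -> str:
    await asyncio.sleep(0.1)
    return f"parsed fields for: {task}"

async def draft_report(task: str) -> str:
    await asyncio.sleep(0.1)
    return f"report outline for: {task}"

async def orchestrator(task: str) -> str:
    # The "project manager": fan sub-tasks out to worker agents in parallel,
    # then merge their results into a single deliverable.
    results = await asyncio.gather(
        collect_data(task),
        parse_data(task),
        draft_report(task),
    )
    return "\n".join(results)

print(asyncio.run(orchestrator("quarterly churn analysis")))
```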
Some interesting things about this release: they're achieving cost efficiencies that were not possible before, while enabling asynchronous processing, meaning multiple agents can run in parallel, one doing data collection, one doing parsing, one doing reporting, one doing communication and so on, all while being orchestrated by a centralized agent that basically acts as the project manager.

Another interesting tool announced this week comes from Mistral, the French open-source language model company, which came out with what they call Codestral 25.01, now the most accurate code generator out there. They're achieving a 95.3 percent fill-in-the-middle (FIM) accuracy score across programming languages, surpassing the previous leader from OpenAI by 2.6 percent, and it has a context window of 256,000 tokens, which allows you to write or review a lot of code within a single session. I'm reminding you that Mistral is an open-source company, so again, you can either use it on their platform with their API, or just get the open-source model and run it locally for yourself, and now develop code better than anything else. It's very obvious from these tools, as well as from announcements from people like Zuckerberg, that the world of writing code is not just going to change, it's already changing dramatically. If you remember, I told you last week that Zuckerberg said that in 2025 coding agents will be able to do the work of mid-level software engineers, and that they're already doing it at Meta.

And now two really big pieces of news that I kept for the end, because we had too many big items at the beginning. IT spending is projected to hit $5.5 trillion in 2025. That's almost a 10 percent increase over 2024, which was a record-breaking year, and it's driven mostly by AI hardware needs. Now, Gartner just did research that they released this week, and they're saying that generative AI is entering, and I'm quoting, the trough of disillusionment, which basically means there are going to be more and more expenses and less and less ROI. This is a standard phase in the hype cycle, and the main concern is that AI sectors showing insane growth, but not much differentiation between the players, raise questions about the sustainable value creation they can actually keep delivering. Again, time will tell what is going to win, whether it's going to be this amazing technology or society slowing it down, but it's very obvious to me that this is moving forward despite these concerns. That doesn't say anything about valuations, though, because literally every single funding round that happened recently has at least doubled a company's valuation, and the previous valuation was set only six to twelve months earlier. So company valuations are doubling every six to twelve months, and that's obviously not sustainable. Whether the hype is justified, we'll have to wait and see; from everything we're seeing right now I would say yes, but it's too early to tell what the actual ROI is going to be.

The other piece that is really, really interesting is that Google published a research paper about what they call the Titans architecture, and it is potentially the first significant, major advancement since the Transformer paper back in 2017. For those of you who don't know the story, there was a paper called "Attention Is All You Need."
That was the paper Google Research released in 2017, and it's basically the baseline for everything AI that we know today. They have now released a new paper about Titans, which builds on top of the Transformer, so it's not totally replacing it. The idea is to build a neural long-term memory alongside the short-term memory, and to base the long-term memory on what they're calling surprise-based learning, which they claim is how human cognitive patterns work. Basically, it looks for things that are out of the ordinary, that are not aligned with its existing patterns, considers them new, and then learns them as new information. Early benchmarks are already showing superior results to existing language models across several different use cases. So it will be very interesting to see whether this really drives the next cycle of innovation in the actual architecture of how all these AI models work.

We are going to be back on Tuesday with another fascinating how-to episode, with an expert sharing a specific AI use case that you can start implementing in your business immediately. If you have not yet rated and reviewed this podcast, I would really appreciate it. It really helps me get to more people, and it allows you to play an important role in delivering AI education to everyone. So please open your app right now, share this podcast with as many people as you think can benefit from it, and give us whatever rating and review you think we deserve. Hopefully I've earned a five-star review from you, but whatever you think, write it down there and give me feedback. I actually read all these comments, and I would love to hear what you think. You can also connect with me on LinkedIn and let me know what you think about the podcast; I love chatting with you about it and about things you think can make it better. And until next time, have an amazing weekend.
