Leveraging AI
Dive into the world of artificial intelligence with 'Leveraging AI,' a podcast tailored for forward-thinking business professionals. Each episode brings insightful discussions on how AI can ethically transform business practices, offering practical solutions to day-to-day business challenges.
Join our host Isar Meitis (4-time CEO) and expert guests as they turn AI's complexities into actionable insights and explore its ethical implications in the business world. Whether you are an AI novice or a seasoned professional, 'Leveraging AI' equips you with the knowledge and tools to harness AI's power responsibly and effectively. Tune in weekly for inspiring conversations and real-world applications. Subscribe now and unlock the potential of AI in your business.
125 | PhD in one hour with GPT o1, it's raining AI agents (Microsoft, Salesforce, Slack, Workday), and more AI news you need to know from the week ending on September 20, 2024
Is AI about to replace an entire year's worth of PhD work in just minutes? What about an 8-year-old girl creating software with AI?
In this episode of Leveraging AI, Isar Meitis breaks down the latest mind-bending news, and explores how AI’s ability to generate PhD-level results and simplify software development for a child could spell the end for traditional software models. You’ll also get a glimpse into what this all means for businesses, from giants like Salesforce and Workday to your own operations.
In this episode, you’ll discover:
- How OpenAI’s new o1 model is able to generate PhD-level code in minutes.
- A shocking experiment where an 8-year-old used AI to develop software with zero coding knowledge.
- Why Klarna is ditching Salesforce and Workday, and what that means for the future of enterprise software.
- Predictions on the decline of SaaS software as AI builds custom, tailored solutions on the fly.
- How AI-driven agents are poised to revolutionize everything from HR to marketing and sales.
- Insight into Google’s groundbreaking NotebookLM tool that transforms dry documents into engaging podcast-like conversations.
- The timeline for AGI (Artificial General Intelligence), and how soon experts think it will be here.
Want to stay ahead of the AI curve?
Join our AI Friday Hangouts—a laid-back, weekly get-together where we dive deep into the latest AI advancements and discuss how you can apply them to your business. DM Isar on LinkedIn for more details and to join the fun.
About Leveraging AI
- The Ultimate AI Course for Business People: https://multiplai.ai/ai-course/
- YouTube Full Episodes: https://www.youtube.com/@Multiplai_AI/
- Connect with Isar Meitis: https://www.linkedin.com/in/isarmeitis/
- Free AI Consultation: https://multiplai.ai/book-a-call/
- Join our Live Sessions, AI Hangouts and newsletter: https://services.multiplai.ai/events
If you’ve enjoyed or benefited from some of the insights of this episode, leave us a five-star review on your favorite podcast platform, and let us know what you learned, found helpful, or liked most about this show!
Hello and welcome to a weekend news episode of Leveraging AI, the podcast that shares practical, ethical ways to use AI to improve efficiency, grow your business, and advance your career. This is Isar Meitis, your host. Since I got such great feedback on the previous episode and the deeper dive we did into some of the news, I'm going to follow the same format this week. So first of all, thank you to all the people who reached out to me on LinkedIn and on other channels to thank me for the previous episode. As I mentioned, I listened to the feedback, so I'm going to do something similar today. There is a lot to talk about, because even though last week's news was huge, its implications and some other big news came to light this week, including replicating a complete PhD's worth of work in just a few minutes, as well as some other amazing pieces of news that are going to blow your mind.

There were three things that completely blew my mind with AI this past week. And I do this for a living, and I'm in this news every single day. So when I tell you three different things blew my mind this week, it was a big week as far as announcements and interesting technologies. So let's get started.

The background for the first story is a direct continuation of last week. If you missed that episode: OpenAI released o1, a next-generation model that is a completely different kind of model, one that can reason. Meaning, instead of just pulling data out of its database of everything it has scanned, it can actually work step by step on complex problems and solve them the way humans do, by evaluating different options, checking which option works better, and coming up with a solution to a sophisticated problem. That blew my mind last week. And the fact that there were multiple hints and scores showing it can perform at PhD level across multiple tests they ran it through before the release was amazing, and I said it was profound and so on. But those were just messages from OpenAI; they were not actual real-world examples.

Well, this week we already have a real-world example. Kyle Kabasares, a research scientist at NASA, did a really interesting experiment with o1. When he did his PhD, like every PhD student does, he wrote a research paper, and in your research paper you have to state your methods, meaning how you performed your research or your experiment, so other researchers can verify your work. So all he did was take the methods section from his research paper and feed it into o1, asking it to write code to get to the outcome he was able to get to. Now, it took him a year to write that code as part of his PhD work. It took o1 seven prompts back and forth, and then a few seconds, to write code that does what his code does after a year of research and work. He released a video that he recorded in real time as he was going through the process, and his complete shock when o1 was able to do this is worth watching. We will add the link to that in the show notes, or you can just look me up on LinkedIn; that was my post on Friday morning about this topic, so you can find the video over there. It's definitely worth watching just to understand how a successful PhD, who completed his doctorate and is now working as a scientist at NASA, feels when an AI system can do in just a few minutes work that took him a year. So go watch that video.
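By the way, if you want to try a small-scale version of this experiment yourself, here is a minimal sketch of what it could look like against OpenAI's API. This is my illustration, not Kyle's actual setup: o1-preview is the publicly available variant of the model, and methods_section.txt is a hypothetical stand-in for the methods text of a paper.

    from openai import OpenAI

    client = OpenAI()  # assumes an OPENAI_API_KEY environment variable is set

    # Hypothetical file standing in for the Methods section of a research paper.
    methods_text = open("methods_section.txt").read()

    response = client.chat.completions.create(
        model="o1-preview",  # the publicly available o1 variant
        messages=[{
            # o1-preview takes plain user messages; keep the whole task in one prompt
            "role": "user",
            "content": "Below is the Methods section of a research paper. "
                       "Write Python code that implements this analysis:\n\n" + methods_text,
        }],
    )
    print(response.choices[0].message.content)

In practice, like in Kyle's video, you would iterate over a handful of follow-up prompts, pasting error messages back in until the code runs.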
The second thing that completely blew my mind this week is an eight-year-old girl who, in 45 minutes of working with an AI co-development environment, was able to develop some basic software. She's the daughter of one of the executives at Cloudflare, and he was showing her how to use the Cursor IDE. Cursor is one of the companies that got a lot of tailwind when it comes to integrating AI into a new style of software development environment. One of the people who raved about it is Andrej Karpathy, who is one of the big names in the AI world, and he's not affiliated with them; he just shared his personal pleasure and amazement at how efficient he became writing code using AI in the Cursor environment. So this executive taught his daughter how to use English in the Cursor environment in order to write code, and she was able to create some basic software with it.

Now, if you combine these two things together, the capability of an AI to generate PhD-level work based on instructions it's given, together with the capability of an eight-year-old to write software, it leads you to some very interesting outcomes. Every Friday I hold an open-to-the-public AI Friday Hangout. It's a lot of fun. There's a group of people that gets together and talks about what happened in AI this week and how it can impact our lives and our businesses. We talk strategy, we talk tactics, we talk about a lot of things AI. And one of the questions I was asked today is: with all these amazing advancements, especially the release of o1 and everything related to it, where do I see this going? How will it impact businesses?

Now, one of the other pieces of news we're going to talk about later on is Klarna, a tech giant that has already made some incredible moves when it comes to AI, sharing that they're going to stop using Salesforce and Workday. So the largest CRM company in the world and the largest HR company in the world, because they're developing internal AI tools to replace everything in those ecosystems. I want to combine these three points together: the ability to write good code by using simple English, the ability of these models to really understand and reason at PhD level, and the ability to develop tools that are custom tailored for companies. How can that impact businesses moving forward?

I don't know if this is going to happen, and I don't know if it's going to happen in three years, five years, or ten years, but I think that's the trajectory we're on. And I think what's going to happen is that the concept of SaaS software will cease to exist. The reason I'm saying that is this: if we get to AGI on the current timelines, and there's a debate on those timelines, but the news from this week, which is another thing that blew my mind in addition to the other three, is that Noam Brown, the guy who led the team that developed the o1 model (previously code-named Strawberry), was asked on X this week when he thinks AGI will arrive based on the latest advancements. And he said two years. The consensus so far was three to five years; Kurzweil has famously pointed at 2029, and that was roughly the timeframe everybody was talking about, four to five years. He's now saying two years. Add to that the fact that last year he thought it would take them way more than a year to get to the point we're at right now, so maybe even two years
is too long. But I'm putting that aside for a second, because I don't know the timeline, and I don't think anybody knows the timeline. The reality is we don't need to wait for quote-unquote AGI, because these models are already better than most humans at several different things at PhD level, including writing code.

So let's go back to my prediction. I think that X years in the future, any company will be able to create any software it needs for its operation, custom tailored to its specific needs, its specific tech stack, its specific industry, its specific level of security, et cetera, on the fly, because it will have an army of really sophisticated AI agents that talk to one another, that get a task defined in simple English or other languages, and the code is going to get created extremely fast, debugged, and deployed within minutes, hours, or days. It will not require you to buy software and pay huge amounts of money for software licenses. This is my personal prediction. Again, maybe this is science fiction, maybe not, but that's my gut feeling connecting the dots of all the things we talked about at the beginning of this episode. And if I had to guess, the first place this will happen is the app store. While replacing software like Salesforce might be really complex, even though, again, Klarna is claiming they're doing it right now, and Klarna has a lot more resources than the average company, the apps we run on our phones or our desktops, small pieces of software that do something very specific, will definitely cease to exist, because we'll be able to develop them for ourselves: just the features we need, tailored exactly to our needs, in the user interface we want, and so on and so forth, without having to subscribe to a store and pay money for different applications.

Okay, so since we started with the topic of this new OpenAI model, let's continue talking about OpenAI. OpenAI has shared that they are in the process of putting together a new multi-agent research team. The company posted job listings looking for people for those positions. I want to take you back to what we talked about in several different episodes, including last week: OpenAI defined five different levels on the road to AGI. Level one is chatbots, which is what we knew until last week. Level two is reasoners, which is what we got at least a first glimpse of with o1. And level three is AI agents, and agents are systems that can think on their own, collaborate with other agents, and take actions on our behalf. Multiple companies are working on that, and we're going to talk a lot about agents in this particular episode, but OpenAI is now investing in going from step two to step three on the road to AGI. I shared with you last week that I have a feeling, again, I'm not a scientist and I'm not an AI development expert, but I have a feeling that the technical leap from step one to step two, from chatbots to reasoning, is significantly bigger than going from step two, reasoning, to step three, agents. And I think what's going to slow it down is our adoption, meaning how much we as humans, as business owners, as operators, will trust these autonomous systems to do the right thing on our behalf. I think that's going to be the limiting factor.
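If you want to picture what an agent even looks like in practice, here is a tiny, purely illustrative sketch of the simplest version of the pattern: a human pre-defines a recipe of steps, and the system executes it with a little decision logic. Every function, name, and data item here is a hypothetical placeholder, not any vendor's actual API.

    # The human-defined "recipe": triage each support ticket, then act on it.
    def fetch_open_tickets():
        # Stand-in for a call into a helpdesk or CRM system.
        return [{"id": 1, "topic": "billing", "urgent": True},
                {"id": 2, "topic": "feature request", "urgent": False}]

    def draft_reply(ticket):
        # Stand-in for an LLM call that drafts a customer response.
        return f"Drafted a reply for ticket {ticket['id']} about {ticket['topic']}."

    def escalate_to_human(ticket):
        return f"Escalated ticket {ticket['id']} to a human for review."

    for ticket in fetch_open_tickets():
        action = escalate_to_human if ticket["urgent"] else draft_reply
        print(action(ticket))

A real agent replaces that hard-coded if/else with a model deciding which action to take on its own, and that is exactly the trust question I just described.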
And the fact is, as I said, more and more companies are releasing agents. Some of them are real agents, meaning actual thinking systems that talk to one another and do things, and some of them are quasi agents, meaning they have recipes that were pre-built for them by humans, and they can execute on them. I think we're going to see a lot more of the latter as step one, because it's easier to trust: I've defined the path, and the machine knows how to follow that path in a smart, sophisticated way based on data I provide to it or that it has access to, and so on. So I think we're going to see a lot more of that in the beginning, and only after that will we start seeing quote-unquote real AI agents.

Another interesting piece of news about OpenAI from this week. We've spoken several times before about the raise OpenAI is planning. The numbers being thrown around right now are between $5 and $7 billion in this raise, at about a $150 billion valuation. The valuation is obviously extremely high when you compare it to the revenue OpenAI is generating right now, but if you take into account everything we just talked about, that they might unlock the possibility of a completely different future where everything you need can be created almost on the fly, at least on the software side, but the same thing in marketing and sales and HR and so on, you understand why the valuation is there. The amount of money they're raising is actually relatively low. I know it sounds insane that $5 to $7 billion is not a lot of money, but it's not a lot of money because the next model's training is supposed to cost more than that. So this might be an interim stage.

The interesting new piece of news from this week is that they put a minimum on investments in this round of $250 million. Meaning, if you want to invest in this OpenAI round, the smallest check you're going to write is $250 million. Again, compared to $7 billion that doesn't sound like a lot, but if you take into account that the vast majority of VC funds do not have $250 million after they've finished raising money, before they've deployed any of it, you understand that very few companies and organizations can actually participate in this round, unless they aggregate together and build special purpose vehicles (SPVs) to pool different funds in order to have enough money to meet the requirement. Obviously, OpenAI is doing this on purpose. They want the shortest cap table possible and the smallest number of new shareholders, both in terms of dilution and in terms of how big their board is going to be. And we've already seen issues with their board before, even though it's the nonprofit board versus the for-profit board; there's a whole mess over there that ties directly to this.

There are more and more rumors that have been going around for a while, and they're obviously getting stronger as they're getting ready for this raise, that OpenAI is going to shake up the structure of the company and get rid, one way or another, of the nonprofit organization that currently controls and owns the for-profit organization. The current structure is obviously very problematic for investors: their agreement with Microsoft is not an ownership and equity agreement, but rather a capped revenue-share agreement stretching far into the future.
And I don't think there are going to be a lot of other companies willing to jump in for the amounts we're talking about right now without getting seats on the board and without getting equity in the company. So it looks as if this is actually going to happen, and OpenAI will have to change its structure and move away from its origin, which was being an open organization building AI to serve humanity. That seems to be going away pretty fast. Right now, the participants in the round are Tiger Global, with an undisclosed amount; Thrive Capital, which is leading the round with probably a $1 billion investment; Microsoft, NVIDIA, and Apple, who are all in discussions; and Khosla Ventures. For those of you who don't know Vinod Khosla, he's a huge AI expert and investor who has invested in almost every big company you know and has been very, very successful, so he's in as well. And potentially the Abu Dhabi AI investment firm MGX, plus some additional financial institutions that will probably pool money together, as I mentioned, through special purpose vehicles, to be able to participate in this round.

Now, since we mentioned the o1 model, which as we discussed last week was released as o1-preview and o1-mini, two variations that are still not the full o1 model: OpenAI just made these available to their Enterprise and Edu licenses, when earlier last week they had only shared them with their individual paying customers. That's obviously a great capability to provide to larger customers, as well as to research and education, which can dramatically benefit from it. And to share how amazing this tool is and how much it could help in education as well as research: Dr. Derya Unutmaz, an immunologist at the Jackson Laboratory, recently used the o1-preview model to write a cancer treatment proposal. It created a full framework for the project in under a minute, with "highly creative aims and even consideration of potential pitfalls." That's a direct quote from the doctor. And he continues: "This would have taken me days, if not longer, to prepare." The other thing he added is that it even suggested some things he hadn't thought about, despite the fact that he has 30 years of experience in this field. So this model is something very, very different from anything we've seen before, and it could be used across multiple use cases, including medical and other research fields.

Still on OpenAI: according to The Information, OpenAI COO Brad Lightcap shared in an internal message that ChatGPT has surpassed 11 million paying users. Of those, 10 million are individual paying subscribers like you and me, and 1 million are on the higher-priced business plans, Teams and Enterprise. So most of the users are still individuals, but I have a feeling that in the next months, especially with the availability of this new model and its reasoning capabilities, the enterprise side is going to grow faster than the personal side. That being said, that's still a very small number of users compared to the total user base they have. So they still have a huge opportunity for growth, and as they keep coming up with new models, if they don't make them free, that's obviously going to provide them with more and more paying customers.
Now, this leads to the fact that the current conservative estimate is $225 million in monthly revenue just from subscriptions, which comes out to about $2.7 billion in annual revenue from subscriptions, plus about $1 billion in revenue from the API side of the business, which is huge growth on both fronts compared to the previous quarter, and definitely compared to last year.

The last piece of news about OpenAI is that they're restructuring their AI safety committee and their board. They're doing this as an outcome of two things. One is, I think, related to what I mentioned before: they need to change some things in the structure of the organization to lead into whatever moves they're going to make as far as raising money, the equity they need, and how they need to restructure the business to support that. But it's also because of a lot of negative criticism they've gotten about safety recently; just go back and listen to the last ten episodes, and in almost every one of them there was some negative discussion or issue that OpenAI has had around safety. So one of the things they did is remove Sam Altman from the safety committee, and now it's comprised only of other members of the board, led by Carnegie Mellon professor Zico Kolter and some other people, including Quora CEO Adam D'Angelo and retired U.S. Army General Paul Nakasone. Both of these have been on the committee before, but now, again, it's independent of Sam's involvement.

One of the interesting things the new model does is it stops to think, right? It takes its time and thinks about things for a while, or quote-unquote thinks. It doesn't really think; at the end of the day it computes, but it says that it's thinking, which I think is really cool because it allows us to connect with it as humans and say, oh, it's actually thinking about it. To make a long story short, this slows down the process. There are three different companies working very hard and diligently to accelerate the generation of tokens. These are three hardware providers: SambaNova, Cerebras, and Groq. We've talked about SambaNova and Groq before; Cerebras is the third company. And they have been in fierce competition this week over sharing their success with different models. SambaNova shared that they're able to achieve 471,000 tokens per second running GPT-3, an older model. Cerebras immediately after that shared that they can generate 353,000 tokens per second. That's way faster than what you get on the regular GPT models running on the OpenAI platform. And then Groq, which is the undisputed leader right now, came and shared that they're running, on their LPU processor, 18,000 million tokens per second, two orders of magnitude better than the other two competitors. I've used Groq before; I've tried their platform. They're mostly running open-source models, like Meta's. And it's insane to watch: instead of seeing the output slowly appear on the screen, complete pages show up all at once. It's absolutely mind-blowing. What I do not know, and I haven't seen anywhere (I will update you once I do), is whether that makes any difference in the speed of the quote-unquote thinking. My guess is that it does not, because what this new hardware is very good at is generating tokens, and I think the thinking phase of this new model is not token generation.
As I mentioned, I'm not saying that because I'm sure about it. If any of you knows the answer, please write to me on LinkedIn and I will share it in the next episode; it will be interesting to learn. But anyway, there are huge improvements when it comes to hardware and the speed of using the chatbots that we're now using almost every single day.

And from OpenAI to Google, and the third thing that blew my mind this week. I shared with you last week that Google created a new tool, or actually upgraded a tool they released a little earlier, called NotebookLM. NotebookLM basically allows you to analyze, research, and learn new information in new ways using AI. You can upload documents, PDFs, links, Google Slides, text that you paste in, and so on, and you can ask questions about it. You can have it summarize it for you. You can ask it to create a learning guide, and it will create one for you; it will create questions to test you on the material, and it will create the answers to those questions so you can know whether you answered correctly. All of these are awesome, and it's a really, incredibly useful tool for learning new information at scale.

But the coolest feature, the one that blew me away, is the voice summary feature. You just click a button; you don't prompt, you don't need to do anything. About a minute or two later, you get a podcast of two people, a male and a female, having a casual conversation, two people geeking out about the topic in the documents or information you uploaded. The outcome is absolutely mind-blowing. It's nothing like anything I could have expected. I assumed when I heard about it, and when I shared it with you last week before I tested it, that it summarizes information and creates a narration of the summary, but that couldn't be further from the truth. It sounds like two humans who are getting excited, who talk over one another and interrupt each other every now and then, who sometimes contradict each other and sometimes agree with one another. It sounds like a real human conversation about the topic. How do they do that? I don't have a clue, but it's a really cool way to consume information, especially if you're like me. I love listening to podcasts; I probably listen to two episodes a day on average, across multiple topics that I like learning about. And now I can take literally any topic that I want to learn about and turn it into a really fun-to-listen-to podcast with the click of a button. So again, as a podcaster, and also as somebody who's just looking to consume and learn a lot of information every week, I find this absolutely amazing. It also opened my mind to what's possible with this technology right now. This is a huge difference from just reading something in a human voice, or even reading a summary in a human voice. It's a completely new generation of content in a new format that is fun to listen to. And I fed it a very dry document, a how-to user guide to something I share with my clients and with my course participants, and it just created this really cool, fun-to-listen-to 12-minute podcast, which, as I've mentioned six times by now, blew my mind. So go check it out. It's free to use right now. The URL is notebooklm.google.com.
And on the same topic, Google announced this week that Gemini Live is getting additional new voices. Based on TechRadar, Google now has 10 different voices available in the Gemini Live feature, as well as in the Gemini assistant on Android devices. These voices are named after constellations, stars, and astronomical phenomena, such as Orion, Capella, Nova, and so on. Google claims that these new speech engines have significantly improved emotional expressiveness, with the goal of obviously improving human connection and conversation. And after listening to that podcast, or podcasts, because by now I've created more than one, I can tell you that they're definitely achieving that. So it will be very interesting to see how that plays out.

Now, on the same topic, a company called Hume AI has launched EVI 2, which stands for Empathic Voice Interface 2, an enhanced voice model compared to the EVI 1 they released before. This new model has a 40 percent reduction in latency compared to its predecessor and a 30 percent lower cost; it costs just over 7 cents per minute of synthetic speech. And it has enhanced natural speech and emotional responsiveness. The point behind this company is that they have very realistic emotional inflection in the voices they create, which allows them to be used for a much bigger variety of things. So it's not just Google that is working on this; other companies are doing the same thing, and this will probably be the norm moving forward. On one hand that's amazing, because talking to these machines will feel very natural to us, since they will talk back just like humans. On the other hand, it could be really, really scary. It could have a lot of negative implications, whether it's bad actors using this for fraud, or just people getting attached or addicted to talking to these machines, because they're going to sound so human and they'll be able to understand and inflect emotion.

One more comment on Google, since we were just talking about them. Sergey Brin, the co-founder of Google, back before it became Alphabet, acknowledged at the All-In Summit that Google has been too timid in deploying language models so far, and he acknowledged the fact that they are behind in the race right now, despite the fact that they invented the transformer architecture seven years ago and wrote the famous paper that everybody's large language models are now based on. He says it comes down to two reasons: one is the fear of making mistakes that would cause them embarrassment, and the other is the tendency to keep models close to their chest until they're perfect. I think there's another reason, and it's something we're seeing right now with Perplexity. I think they knew that these models being out there would put at risk their main revenue stream, which is ads and paid search. And so I think they were pushing this back because they knew it would put them in a bind, and they knew that if they released it, other people would chase them or do the same thing, which would accelerate the process they're in right now, where their main livelihood, billions and billions of dollars every single quarter, is at risk.

Now from Google to GitHub. GitHub, like other IDEs and code creation environments, has already integrated the o1 model series into its tools.
So now in GitHub Copilot you can choose the new o1 model, which provides significantly better results. In their internal testing, they're showing significant improvements both in code optimization, making code run faster, and in debugging existing code. They're even saying that when they let it analyze existing code, it gave them superior suggestions on how to improve it.

Next, we're going to talk about a company we don't talk about a lot, but they're definitely a big player, and they're going to become an even bigger player with this new initiative. Oracle has been in the IT and database game for a very, very long time, and they've now announced that they're developing the most powerful supercomputer in the world right now, one that is going to have 131,000 NVIDIA GB200 GPUs. To put things into perspective, Elon Musk's supercomputer in Memphis, which we talked about a couple of episodes ago and which just went live, has a hundred thousand NVIDIA H100 GPUs, the previous generation of GPUs. This one is going to have 30,000 more GPUs, all from the newer generation, which are significantly more powerful, making this Oracle machine the most powerful supercomputer for AI training in the world. Oracle has already secured permits for three modular nuclear reactors to power these facilities, which are going to be deployed across nine global regions. So they're not going to be all in one place, but they're going to be interconnected through fast networks, and, as I mentioned, running NVIDIA's latest GB200 GPUs versus the H100s that most companies are running right now, including the biggest current supercomputer, which Elon Musk built and which just went live.

And from them to Microsoft. Microsoft just announced a significant upgrade and a lot of new features for its Copilot tool. Some of these new features include a business chat that integrates web, work, and business data, and a new tool called Copilot Pages, which basically allows you to create dynamic pages built from existing data you have in your company. Think really smart, sophisticated dashboards created by AI, and these dashboards can also include text and summaries of your existing data, and so on. They also added a feature I've been expecting for a very long time, and which we've talked about previously: Copilot in Excel will now be able to create Python code. So far, I was able to do this by connecting the APIs of OpenAI and other tools into Google Sheets and Excel, doing it with external tools; now it's going to be built straight into Excel, which will let you create significantly more sophisticated data analysis just by typing English, with Copilot writing the Python code for you behind the scenes (I've included a sketch of what that kind of generated code might look like right after this segment). They've also enhanced the capabilities of Copilot in PowerPoint, Outlook, Word, OneDrive, and Teams.

They also announced Copilot agents. Remember, we talked about earlier in this episode how a lot of the big companies are pushing in that direction; here's the first such announcement this week. This new Copilot agents capability allows users to build agent-based assistants ranging from a single prompt-and-response agent to fully autonomous agents and anything in between, wrapped with Microsoft's responsible AI and data security, which is supposed to keep the data and the users safe. They also included an agent builder that will allow companies and organizations to build custom Copilot agents.
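As promised a moment ago, here is a hedged sketch of the kind of Python a Copilot-style assistant might generate behind the scenes from a plain-English request like "show me average revenue by region." The column names and data are invented for illustration; this is not Microsoft's actual generated code.

    import pandas as pd

    # Toy stand-in for the spreadsheet data the assistant would operate on.
    df = pd.DataFrame({
        "region":  ["East", "West", "East", "West"],
        "month":   ["2024-01", "2024-01", "2024-02", "2024-02"],
        "revenue": [12000, 9500, 13400, 10100],
    })

    # "Show me average revenue by region" becomes a one-line groupby.
    summary = df.groupby("region")["revenue"].mean().round(2)
    print(summary)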
And they're claiming they've improved the capabilities of the existing Copilots, because they're now running the GPT-4o model on the back end. All of this, I think, is great news, because as somebody who works across multiple AI platforms, since I run different experiments with different clients and every one of them has a different environment, I was really disappointed with Microsoft Copilot's capabilities compared to all the other tools, including closed-source and open-source third-party models.

Now, speaking of agents: if you remember, last week I shared with you that Salesforce is going all in on agents, and at their Dreamforce event, which is their biggest event of the year and happened this week, they made a lot of agent-related announcements. One of the interesting things they unveiled is the Agentforce Partner Network, a collaborative initiative that will allow them and other companies to deploy agents across multiple platforms. These partnering companies include tech giants like AWS, IBM, and Workday. The idea is, first of all, to accelerate the development of these agents, but also to standardize them in ways that make them more accessible across multiple platforms, which is obviously beneficial to everyone involved: as users, we'll be able to use the same kind of agents across multiple systems, the companies can get more clients by developing an agent once, and basically everybody wins. As I mentioned, this push on AI agents is coming from all directions. In this episode alone we've mentioned that OpenAI is hiring a new team to invest more in agents, we talked about Microsoft investing in agents, and we're talking about Salesforce, one of the most successful software companies in history, going all in on agents. It kind of tells you where this thing is going.

Now, at Dreamforce, Salesforce announced a lot of other interesting things. First of all, Marc Benioff, the CEO, shared that he had to redo his keynote just three weeks before the event, which is very, very late for probably any keynote at that level, but definitely for him: he said that at that point, three weeks out, he has usually rehearsed his keynote at least 36 times, and here he had to change the whole thing. And the focus of the whole thing became a criticism of the DIY, do-it-yourself approach that has been promoted by all the other AI vendors. What they're claiming, which is true, is that there's a big overall disappointment right now in large corporations with the ROI of investing in AI across the board, because these companies invested in licenses from OpenAI and Microsoft, et cetera, and got very, very little in return. My gut feeling tells me, and I've shared this with you before, that the reason is lack of training, meaning it's not that the technology isn't good enough; it's that people just don't know how to use it, and because it's not tied to specific use cases for specific individuals, it's just sitting there and costing millions of dollars in licenses. So the new Salesforce approach, which they started implementing already at Dreamforce, is that they brought in 4,000 people to help Dreamforce attendees develop usable agents right there and then, during the show. The idea, again, is to move away from concepts and do-it-yourself to: here, you have a functional agent that you can start using. In a move very typical of Benioff, it's the same exact way he founded Salesforce as a company.
If you don't know the story, it's definitely worth going back and learning about it. He's going after the existing consensus, and he even said that the current Microsoft Copilot is just the new Clippy. If you remember the paperclip that was supposed to be the assistant in early Microsoft Windows systems, it ended up being one of the biggest jokes in Microsoft history. He's claiming that the current Microsoft Copilot is the new Clippy, which I find, A, funny, and B, as I mentioned before, personally, I feel kind of the same way. Microsoft obviously doesn't agree, and they're making significant steps to change that in their systems. But the philosophy behind the new Salesforce approach says that models are just commodities: everybody will have models and everybody will be able to develop models, and the difference is going to be data, metadata, and the ability to tailor them into actions within your systems, which is exactly what they're doing. They're making a very, very big bet on that, and only time will tell if they're right. My gut feeling tells me they're absolutely right, especially with the amount of data that companies currently have under the Salesforce umbrella.

Staying on the topic of agents: Slack is now expanding its capabilities, adding more and more of them. One of them is running agents, not just from Salesforce, but also from various third-party platforms. It will be available to all paying Slack users, and it includes, obviously, Salesforce AI agents, but also third-party agents from the partner network we mentioned before. Some of these agents are going to come from companies like Asana, Cohere, Adobe Express, Workday, and Writer, which means it will become extremely powerful to use Slack and be able to ask questions about the data you have in all these other systems, and take actions in those systems, from within Slack. That obviously puts Slack in a completely different realm of capabilities for an organization, beyond just being a very important communication channel. It may become more or less the interface to many critical business functions, which I think is a brilliant move. The question is how quickly people will adjust to that, whether people will adjust to it at all, and how hard it will be for organizations and individuals to get used to using these systems, at least partially, through Slack, just talking to it and chatting with it in order to get stuff done. Some other cool new AI features coming to Slack are transcriptions of huddles, which is Slack's audio meeting feature; a workflow builder for task automation; and AI search to find answers in file transcriptions and other connected apps. So a lot of great functionality is coming to Slack in the immediate future.

And if you think we've had enough about agents: Workday, the HR giant, just announced that they're coming out with AI agents built to streamline HR and finance operations. There are four new agents coming out, called Recruiter Agent, Expenses Agent, Succession Agent, and Workday Optimize Agent. The names are more or less self-explanatory, and these are all coming out either at the end of this year or in the first quarter of next year. So in all cases the immediate future, meaning we're going to have agents built into yet another large platform.
In general, what I can tell you, and I've shared this with you in the past: if you are a large corporation, or even a smaller company, and you already have a tech stack that you like, and some of that tech stack comes from the giant companies we mention on this podcast regularly, Microsoft, Google, Salesforce, et cetera, don't try to reinvent the wheel and invest tens of thousands, hundreds of thousands, or millions of dollars trying to build custom AI solutions for these platforms, because they will most likely come out with them sometime in the future, most likely the near future, and the solutions they build in house are very likely to be integrated with the platform well beyond the level you'd be able to achieve on your own on a reasonable budget. That being said, there are multiple small add-ons that you can build very, very quickly, without a big investment, that pull data from these tools through APIs, or even through a manual export to CSV, and that can give you huge benefits and significant ROI in the near term. So if it's a small project that you can do quickly and that will give you a benefit, go do it. If it's "I want to find ways to bring AI into everything Microsoft or Salesforce," et cetera, don't do it, because they will do it on their own, and by the time you're done, you'll be able to get it as part of your license.
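To make that concrete, here is a minimal sketch of the kind of quick, small add-on I mean: read a CSV exported manually from your CRM and produce a simple pipeline summary. The file name and column names are hypothetical; adjust them to whatever your system actually exports.

    import csv
    from collections import defaultdict

    totals = defaultdict(float)
    with open("crm_export.csv", newline="") as f:   # hypothetical export file
        for row in csv.DictReader(f):               # expects 'owner' and 'deal_value' columns
            totals[row["owner"]] += float(row["deal_value"])

    # Print each owner's total pipeline, largest first.
    for owner, total in sorted(totals.items(), key=lambda kv: -kv[1]):
        print(f"{owner}: ${total:,.0f} in open pipeline")

Something like this takes an hour, costs nothing, and is exactly the kind of small win I'd chase before any big platform project.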
Now, since we just mentioned Salesforce and Workday, I cannot continue this episode without talking about Klarna. Klarna is a huge fintech company that has already made big announcements about its usage of AI in the past, and they just announced huge workforce cuts that are going to be possible because they're going all in on AI. While a lot of people are saying that AI is not going to cut jobs, that we'll just have to learn how to live with it, and that it will enhance the current workforce, I think market forces will prove this theory wrong, and Klarna is a very good example of a large company that is going all in on AI and is going to reap the benefits. Now, is that good or bad? I'm not going to get into that hypothetical discussion, but the reality is they're doing it. So the question you're probably asking: what does this have to do with Salesforce and Workday? Well, one of the things Klarna just announced is that they're ending their relationships with Salesforce and with Workday, because they're developing in-house AI capabilities that will replace both. These three things combined, cutting Salesforce, cutting Workday, and cutting about 700 employees across various departments, are potentially going to save them $100 million every single year, which is obviously extremely significant and generates a very big ROI, even if they're going to invest $20-30 million a year in developing these AI capabilities. That connects very nicely to the point I made earlier in this episode: I believe companies will be able to develop in-house solutions using AI that will, over time, replace more and more of the SaaS software we're using today. And sadly, employees and jobs as well.

And from Klarna, let's move to another giant, Adobe. Adobe just announced a new version of its Firefly AI technology, specifically focusing on video generation capabilities. It is designed to help editors ideate, explore creative visions, and even fill gaps in timelines and add new elements to existing footage. This comes in the shape of text-to-video, which allows you to just write text and create video, like we know from other tools such as Runway Gen-3. They also have image-to-video capabilities, like we know from all these other tools. But another thing that I find really, really cool is Generative Extend, meaning you can add more frames to existing clips: if you need to fill gaps to fit your timing, your angle, your story, or whatever you need, you can take an existing video that you shot in the real world and extend it, just using AI. These features are going to be connected into the Adobe suite, including Premiere Pro, and these beta features are coming later this year. They're claiming it's going to have rich camera controls, perspectives, angles, motion, and zoom capabilities, as well as, as I mentioned, the ability to create content out of thin air.

Now, since we're talking about video and I already mentioned Runway: Runway just revealed a really cool new technology, video-to-video AI capabilities. You can upload a video that either was shot in real life or was created with AI tools such as Runway itself or others, and then manipulate the video just by adding prompts. You can change the textures, the lighting, the style; you can take a real video of yourself and change yourself to glass, change the background, or change yourself into a sketch or a Play-Doh version of whatever is in the video. Literally anything you can prompt, you can change the video into. If you haven't seen examples of people doing really cool stuff with it, just go and look up video-to-video capabilities from Runway. It's mind-blowing how good and how cool it is, and how creative people are getting with it. So expect a total explosion of creativity in video coming from people using these new capabilities that were just not available to us before.

That's it for this week. I know this was a longer episode and we covered a lot. I hope my insights on some of the things happening right now are helpful for you. Again, I got some really great feedback from you on the previous episode, and that's why I decided to do this again. I would appreciate your continued feedback on everything we're doing, including the news episodes and the specific-use-case expert shows we release on Tuesdays. I have two requests of you. Request number one: rate this podcast. Pick up your phone right now, unless you're driving, and rate this podcast on either Spotify or Apple Podcasts. It helps this podcast reach more people, it helps us grow, and it helps us jointly deliver AI literacy to more people, which will hopefully help us as a society get to a better outcome with this AI tsunami that is just getting stronger and faster. And the other thing I request is, while you're at it, go and share the podcast: click the share button on your podcasting platform and share it with a few people who are going to benefit from it, again for the same reasons mentioned before. I mentioned in this episode that we do AI Friday Hangouts. It's a free thing; it doesn't cost anything. It's a group of really great people. We get together every single week and we just geek out about AI and what we can do with it in our businesses, how it can impact our lives, how we can teach it to our children, and so on and so forth. A very laid-back and fun environment that is extremely educational.
If you want to join, just find me on LinkedIn, Isar Meitis, and DM me, and I will add you to the group so you can join us as well. And on Tuesday we will be back with another fascinating episode, with an expert sharing how to use AI in a specific use case, with a lot of detail that you can implement in your business immediately. Until then, have an amazing weekend.