Leveraging AI

139 | How Enterprise + Big Tech + VC + Government are shaping the AI future, Anthropic and Perplexity are raising billions, Google releases Vids, and more AI news for the week ending on November 8th

Isar Meitis Season 1 Episode 139

Are we on the brink of an AI future we can’t control?

The pace of AI development is astonishing, but as industry giants like Anthropic, Microsoft, and Meta race to expand capabilities, the push for responsible regulation faces major roadblocks. This episode dives deep into the relationship between big tech, venture capital, and government, asking one critical question: can policymakers keep up?

In this weekend’s news episode, we break down the latest headlines on regulation and power in the AI industry—and what it means for C-suite leaders navigating the future of technology. From new proposals in AI oversight to intense lobbying against regulation, discover what’s shaping AI policy now, and why it matters for enterprises everywhere.

In this episode, you'll discover:

  • Why Anthropic is calling for urgent U.S. government intervention—and why they say there’s only an 18-month window to act.
  • The strategic stance of Microsoft and VC powerhouse Andreessen Horowitz on regulation, and why they’re pushing back on AI oversight.
  • A fascinating shift in military AI: the Chinese military’s use of open-source U.S. models for defense purposes—and what that means for open-source access.
  • How the AI arms race is heating up in corporate America: Visa’s AI-driven restructuring, Intel’s removal from the Dow Jones, and xAI’s supercomputer goals.
  • The colossal energy demands of AI data centers and how power shortages could reshape the industry.
  • The future of workplace AI: why Visa and other enterprises are scaling AI applications across their operations—and what it means for the workforce.

About Leveraging AI

If you’ve enjoyed or benefited from some of the insights of this episode, leave us a five-star review on your favorite podcast platform, and let us know what you learned, found helpful, or liked most about this show!

Speaker:

Hello, and welcome to a weekend news episode of the Leveraging AI podcast, the podcast that shares practical, ethical ways to leverage AI to improve efficiency, grow your business, and advance your career. This is Isar Meitis, your host. In the past few weeks, I got a lot of feedback from many of you on LinkedIn that you love these weekend news episodes just as much as the episodes we do on Tuesdays that provide practical guidance on how to implement different AI use cases. But many of you said you wished these episodes were shorter, so you can consume each of them in one drive, or one walk with the dog, and so on. I've been listening, so we're going to try a slightly different format for these weekend news episodes, starting with the one today: I'm going to deep dive into one or two topics, and then all the rest of the news is going to be rapid fire, without me providing my point of view and adding more information to it. The topic we're going to dive into today is the big picture of AI right now: the relationship between the tech giants that are developing the models, the companies that are providing the financing, enterprises in the U.S., and the regulation that is or is not coming from multiple government agencies. That's going to be the topic we focus on, and then there's going to be a long list of very quick updates from multiple companies. So let's get started.

The first topic we're going to talk about is a letter written by Anthropic's leadership calling for much more regulation from the U.S. government, and they're providing a specific window in which they think this has to happen, otherwise it will be too late. That window is 18 months. They're citing multiple benchmarks to explain the sense of urgency right now. I'm quoting from their letter: on SWE-bench, a software engineering benchmark, models have improved from being able to solve 1.96% of a test set of real-world coding problems (Claude 2, October 2023) to 13.5% (March 2024) to 49% (October 2024). So we went from roughly 2 percent to 49 percent, half the tasks in that benchmark, in one year. They also write, and I'm quoting: internally, our Frontier Red Team has found that current models can already assist on a broad range of cyber offense-related tasks, and we expect that the next generation of models, which will be able to plan over long, multi-step tasks, will be even more effective. Now, to add some statistics from other sources: AI systems showed an 18 percent improvement in scientific understanding from June to September. That's one quarter, three months, an 18 percent improvement in scientific understanding. OpenAI's o1, their more advanced, long-thinking model, achieved 77.3 percent on the hardest section of GPQA, where top human experts achieve 81.2 percent, so it's basically at human level on the hardest part of that test. The UK AI Safety Institute found AI models demonstrating PhD-level expertise in biology, chemistry, and so on. Basically, what Anthropic is saying is that the government has to put serious regulations in place and require a lot more transparency from companies before allowing them to release any models. They're also saying companies should be incentivized to adopt additional security standards and management protocols.
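To put those quoted SWE-bench figures in perspective, here is a quick back-of-the-envelope check of the improvement multiples they imply. The percentages are the ones quoted from Anthropic's letter above; everything else is just arithmetic:

```python
# SWE-bench scores quoted from Anthropic's letter: percent of real-world
# coding problems solved by each model generation.
scores = {
    "Claude 2 (Oct 2023)": 1.96,
    "Mar 2024": 13.5,
    "Oct 2024": 49.0,
}

labels = list(scores)
for prev, curr in zip(labels, labels[1:]):
    print(f"{prev} -> {curr}: {scores[curr] / scores[prev]:.1f}x")

print(f"Overall, in one year: {scores[labels[-1]] / scores[labels[0]]:.0f}x")
```

That works out to roughly a 7x jump, then another 3.6x, about 25x in a single year, which is the trajectory behind Anthropic's 18-month urgency argument.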
And they're suggesting what they call surgical rules, meaning they want to address very specific risks rather than impose very broad regulation, and they believe that will (a) solve the more complex security problems and (b) move significantly faster. Anthropic suggests the government follow steps similar to those the company has been taking internally in what it calls its Responsible Scaling Policy, or RSP. They're offering that as a prototype for the government, one that can potentially be replaced with other tools or processes in the long run. The goal is really a proportional risk-management framework that gives the government the opportunity to evaluate potential risks before they happen and react fast. As I mentioned, the company states there's an 18-month window to get this in place, and that it must happen because otherwise what we'll see is what they call poorly designed, knee-jerk regulation that won't actually solve these problems and will just be reactive and hysterical once things get out of hand. They're calling for the attention of both the U.S. federal government and state-level regulators to deal with this. So that sets the stage for today's episode, and I want to touch on a few more topics around the same concepts: not just regulation, but, as I said, the big picture.

We have been talking a lot on this podcast about the importance of AI education and literacy for people in businesses. It is literally the number one factor separating success from failure when implementing AI in a business. It's actually not the tech; it's the ability to train people and get them to the level of knowledge they need in order to apply AI to specific use cases successfully, hence generating positive ROI. The biggest question is how you train yourself, if you're the business person, or the people in your team and your company, in the most effective way. I have two pieces of very exciting news for you. Number one is that I have been teaching the AI Business Transformation course since April of last year. I've been teaching it twice a month, every month, since the beginning of this year, and once a month all of last year. Hundreds of business people and businesses are transforming the way they do business based on the information they've learned in this course. I mostly teach this course privately, meaning organizations and companies hire me to teach just their people, and about once a quarter we do a publicly available course. Well, that once-a-quarter is happening again. On November 18th of this month, we are opening another course to the public, where anyone can join. The course is four sessions online, two hours each: four weeks, two hours every single week with me live as an instructor, plus one additional hour a week for you to come and ask questions based on the homework, things you learned, or things you didn't understand. It's a very detailed, comprehensive course that will take you from wherever you are in your journey right now to a level where you understand what this technology can do for your business across multiple aspects and departments, including a detailed blueprint for how to move forward and implement this from a company-wide perspective. So if you are looking to dramatically impact the way you, your company, or your department are using AI, this is an amazing opportunity to accelerate your knowledge and start implementing AI in everything you're doing in your business. You can find the link in the show notes: just open your phone right now, find the link to the course, click on it, and you can sign up. And now back to the episode.

Speaker:

So another interesting article this week has to do with security and safety and how we should look at these models. It's been in the news that the Chinese military has developed AI using Meta's open-source Llama models, achieving 90 percent of GPT-4's performance. Now, yes, GPT-4 is no longer top of the line, but the fact that countries whose AI development we're trying to slow down can use U.S.-developed large language models to advance that development is obviously not a good sign. They have developed a new model called ChatBIT, built by their military research facilities, and as I mentioned, it's achieving a very high standard despite being based on an older Meta Llama 13B model. There are now far more advanced models, all open source, that they can use to develop the next military variation of this. And it is the first time, or at least the first documented proof we have, that China's leadership is leveraging open-source large language models for military purposes. Now, technically, Meta's license explicitly prohibits military applications, but there's absolutely no way to enforce that, which makes it almost irrelevant whether it's acceptable or not. I've mentioned many times before my biggest fear with open-source models. I'm a big supporter of open-source software in general and of open-source AI, but I think it has to be regulated much more than it is today. So yes, it's still open source; yes, you can still use it; but you should have to sign up, state exactly what you're going to use it for, and get approval to use the models, versus just leaving them on a server somewhere that anybody can download and use. I see that as a very, very big risk, whether it's a different country or individuals or groups that will use this for the wrong reasons. Open-source models are going to keep getting better and better, and the open-source models of next year are going to be better than the top closed-source models we have today, meaning they're going to be extremely powerful and will allow people to do things we don't want them to do. I think that has to be part of policy and regulation, both from governments and from the industry itself.
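To make the enforcement problem concrete, here is a minimal sketch of how anyone can pull open model weights today using the Hugging Face transformers library. The model ID is illustrative; gated repositories like Meta's Llama require a click-through license acceptance and an access token, but that gate verifies agreement, not identity or intended use:

```python
# Minimal sketch: downloading and running open model weights.
# "meta-llama/Llama-2-13b-hf" is illustrative; gated repos need a
# click-through license acceptance plus an access token, but nothing in
# the mechanism reviews what the weights will actually be used for.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "meta-llama/Llama-2-13b-hf"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)

# Once the weights are on local disk, the license text has no technical
# hold on what happens next: fine-tuning, redistribution, or worse.
inputs = tokenizer("Hello,", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The sign-up-and-approval model described above would have to live somewhere in this flow, which is exactly why it's a policy question rather than a technical one.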
Now, staying on the topic of communication between the industry and the government: Microsoft and Andreessen Horowitz wrote a joint letter to the U.S. government urging them to reduce the amount of regulation, or, to be more specific as we dive in: Microsoft CEO Satya Nadella and President Brad Smith, together with Marc Andreessen and Ben Horowitz, the founders of a16z, one of the best-known VC firms in the world. And by the way, Microsoft and Andreessen Horowitz have never been particularly good friends, but here they are advocating together for market-based approaches over government regulation. They're basically saying that states should stay out of the way. They very strongly opposed California's SB 1047, which we discussed at length in the past, lobbying aggressively for that regulation not to pass. That regulation would have required companies and developers of large language models to go through a lot of scrutiny before releasing their models if they were above a certain threshold, plus meet some additional safety requirements. They pushed very strongly against that.

What they're suggesting to the federal government is an approach focused on the outcome, basically the application, versus the models themselves. I'm quoting from their letter: a science- and standards-based approach that recognizes regulatory frameworks should focus on the application and misuse of technology, and on the risk of bad actors misusing AI. Basically, what they're saying is: just like with weapons, don't prevent us from selling guns; prevent people from doing bad things with guns. They're saying this should be similar to the approach to weapons in the U.S., meaning you can go and buy as many guns as you want and we're not going to regulate that, but you should use them for good, and if you do bad things with them, then we'll go after you. I think that's not a good approach, because this technology is far more dangerous: it can change society and the way humanity interacts with itself, and so it should be controlled more like nuclear weapons than like handguns. So I personally disagree with that, but that's what they're pushing for. In addition, they're claiming that these tools should be able to train on basically whatever data they want, any data on the internet. They're calling it the right to learn, and they state the following: copyright law is designed to promote the progress of science and useful arts by extending protection to publishers and authors to encourage them to bring new works and new knowledge to the public, but not at the expense of the public's right to learn from these works. Copyright law should not be co-opted to imply that machines should be prevented from using data, the foundation of AI, to learn in the same way as people. Knowledge and unprotected facts, regardless of whether contained in protected subject matter, should remain free and accessible. So basically they're saying: let us train these models as we wish, regardless of copyright law, because it's learning just like humans learn, and the goal is to provide more knowledge to the public. I must admit this makes some sense, though I also think there should be protection for the people who are creating this content. But to stay on that topic and expand beyond it: as you know, OpenAI and some of the other large language model developers have been sued by multiple parties over copyright. This week, one of these cases was finally decided in court. A copyright violation case against OpenAI argued in the Southern District of New York was dismissed, with the judge citing lack of concrete, actual injury. If you don't know how the law works: when you sue somebody, you have to prove that you actually suffered damages as a result of what they did, and the people suing OpenAI were not able to establish that. The court basically found there's serious difficulty proving harm from AI-synthesized content, and a challenge in applying traditional copyright law, because this is not something we've dealt with before, so the laws don't really apply to it in the normal way. They also mentioned it was very hard to prove statistically that the AI is actually replicating the work, versus synthesizing it and producing new kinds of output.
So if we connect that with the previous point from Andreessen Horowitz, they're basically saying that should be the situation: allow the large language models to train on whatever they want, regardless of copyright, because it is just like people learning. In this specific New York case, the court seems to agree, and for the first time there's something of a legal benchmark. While this definitely shifts the balance in a specific direction, it still doesn't settle anything. I fully expect all of these cases to eventually be decided by the Supreme Court, and only then will we know where this is going to land.

Now, staying on the big picture, big companies, and what is happening in the world: something interesting that happened this week is that Intel was kicked out of the Dow Jones and replaced, surprisingly, by NVIDIA. On November 8, Intel was removed from the Dow Jones after 25 years because it fell below a $100 billion market cap, and it was replaced by NVIDIA, with its $3.32 trillion market cap. These two stocks have been moving in exactly opposite directions this past year: Intel's stock has lost 54 percent in 2024 so far, the Dow's worst performer by a big spread, while NVIDIA's stock rose over a hundred percent in the same period, showing you what the world cares about today: more and more demand for AI chips and AI compute, and less and less need for Intel's offerings. There's a much bigger story at Intel that we're not going to dive into right now, lots of really bad strategic decisions, and it definitely doesn't look good, while NVIDIA's present and future look very bright because of the growth in AI demand. Staying on the topic of large compute needs: xAI, Elon Musk's company, is continuing to develop its incredible supercomputer called Colossus. We've reported on that several times in the past. It is currently running 100,000 H100 GPUs, and they're planning in the near term to increase that to 200,000 GPUs with a mix of H100s and H200s. By the summer of 2025, less than a year from now, they're planning to have 300,000 GPUs, including Blackwell B200 GPUs, the most advanced NVIDIA produces. A huge growth. They've invested $3 billion in that facility so far, and you can do the math on how much more they need to invest: the average price of an H200 is about $40,000, so if they're going to add another hundred thousand of these, that's another $4 billion just in that hardware, not counting any supporting infrastructure (a quick check of that math follows below). Now, if you question whether this can actually happen that fast, asking how they're going to double the facility in just a few months and then triple it in one year: they built the data center from the ground up in 122 days, the fastest any company has done this, by a very, very big spread. Whoever they hired and whatever processes they put in place allow them to build these gigantic data centers very quickly. And they're obviously not the only ones building like this. To continue our story and understand the big picture in the AI world right now, let's talk about the other needs of these data centers: in addition to compute, you need power and you need cooling, and these two things are becoming bigger and bigger bottlenecks.
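Before we get to power, here is that promised sanity check on the hardware math, using the approximate per-unit price mentioned above. Real pricing varies with volume, and this excludes networking, facilities, power, and cooling:

```python
# Rough cost of xAI's planned Colossus expansion, GPU hardware only.
H200_UNIT_PRICE = 40_000      # approximate price per GPU, as cited above
ADDITIONAL_GPUS = 100_000     # planned near-term addition

cost = H200_UNIT_PRICE * ADDITIONAL_GPUS
print(f"Additional GPUs alone: ${cost / 1e9:.0f} billion")  # -> $4 billion
```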
So recently Amazon signed an updated agreement with a nuclear power plant next to one of its facilities to increase its current allocation from 300 megawatts to 480 megawatts, more than one and a half times the power it consumes there today. And the Federal Energy Regulatory Commission, FERC, has rejected that power agreement, basically canceling it. They cite several reasons. One is that it's unclear how Amazon would be charged for that extra power, and that it might tap into the regular grid instead of just bringing in extra capacity. Tapping into the grid basically means borrowing, or stealing, depending on who you ask, electricity from homeowners and other plants and other places that need that power, without clear compensation at the same price those other customers are paying. So what might happen is that this facility, and if you broaden that, any new large facility, would basically make electricity more expensive for everybody else. And because these companies right now are not even looking for profits or positive ROI, they're basically just spending their way to world dominance in AI, that makes the situation for everybody else significantly more complex. So at this particular point, FERC rejected that increase in power, and we should expect this to happen in a lot of other places.

Now, what's the solution? There are two. One is that many of these companies, Amazon included, are planning to build what are called small modular reactors, SMRs, which can be built significantly faster and provide power just for those data centers. That's a great solution; the only problem is it's not going to happen tomorrow. It's going to happen in a few years, when these are built, up and running, and supporting the data centers next to them. The benefit of those is obviously a very low carbon footprint; in the meanwhile, the gap will have to be filled by carbon-based solutions that are definitely not good for any of us. And with the recent elections, I assume they will get those extra sources very quickly from people who will be willing to provide them using carbon-based power. So that's where we stand right now. There's not going to be a stop in building these data centers. Just like Elon Musk is building his, Amazon, Meta, Microsoft, and Google are going to keep building these centers, and they will find ways to power them, which we may or may not agree with, but that's what's going to happen for at least the next few years, until they find, develop, or build alternative power sources. And from that, as I told you, we're also going to touch on the big-picture impact on the enterprise. Visa just announced that they have deployed more than 500 AI-based applications inside the company. They're planning to invest $3.3 billion in AI and data infrastructure over 10 years, and in parallel to all of this, they're planning 1,400 layoffs. They're targeting a ratio of eight to ten AI, quote-unquote, employees per one human supervisor. So let's put this all together. There is an incredible race to provide more and more AI capabilities.
These AI capabilities are risky across multiple aspects, from simply going out of control, and then we don't know what we'll do, to bad players, whether countries or specific companies, using them to do bad things. The companies with the power and the political clout are pushing as hard as they can to reduce regulation and increase their freedom to do whatever they want, both with the data we have all created and with the power we all need for other things. And this is being deployed in companies in ways that take jobs away. Now, this may sound doom and gloom. I'm not an AI doomer; I definitely think there are amazing benefits in AI. But I do agree a hundred percent with the very urgent statement Anthropic has put out there: the government, and governments around the world, have to move very, very quickly to put the right regulations and guardrails in place to protect us from all these negative outcomes, or at least most of them, so we can benefit from the positive implications of using AI for different things. We're walking, or actually flying, very quickly into a future where AI will be able to do more and more tasks in any organization, and that will take jobs away and have a very significant social impact. If a company like Visa is saying they're planning for every employee to have 8 to 10 AI employees, and they're already letting go of 1,400 people, and they're not the only company doing that, we reported on several in the past couple of months, the direction is very clear, and we need to be prepared as a society and as an economy for what that means.

So that's the main section of today's episode, and now we're going to jump into some other interesting topics and run through them very quickly. The first, which still relates to this somewhat, is that Anthropic, Palantir, and AWS are forming a strategic alliance to bring Claude to the U.S. defense sector. They're not the first company to do that, but the goal here is to make Claude AI models available in Palantir's Impact Level 6 environment, a highly protected and regulated environment running on the AWS GovCloud government platform, providing U.S. intelligence and defense agencies access to powerful AI in a secure setting. Another interesting topic related to how fast we're moving: Sam Altman did an ask-me-anything session on Reddit earlier this week. There's lots of interesting information over there, and I highly recommend you go and read all of it, but what caught my attention most is that Sam claimed AGI is achievable with current hardware, meaning there's no need for more hardware, or for a breakthrough in hardware, to achieve AGI. Now, he didn't exactly clarify what he means by current hardware, and he didn't exactly clarify what he means by achieving AGI, but that was his answer. He's maintaining his timeline of a few thousand days to get to AGI, and after that, superintelligence. What a few thousand days means, I don't know; that could be five years, it could be seven, it could be ten. Either way, we're talking about a relatively short amount of time, and it's showing you how unready we are for what's coming.
By the way, in parallel to that, Sam Altman released a statement saying that what's slowing OpenAI's development down is a lack of compute, and that they're not releasing some capabilities, including Sora and other functions, because they don't have access to enough compute power. That sounds a little contradictory to the other statement, but maybe achieving AGI in a lab at small scale, versus allowing the world to use it, is what he means. Either way, there's definitely a very serious race over compute capabilities right now, as I mentioned at the beginning. Another interesting piece of news about OpenAI: they hired Caitlin Kalinowski, who played a very serious role in Meta's AR glasses development. She led the team at Meta that developed the Orion glasses, also worked on Meta's VR hardware, and before that was on Apple's MacBook design team. So she knows a lot about developing hardware for specific, advanced software capabilities, and she's now at OpenAI. What are the plans? Very unclear, but it's very obvious that OpenAI is also planning to get into the hardware side of the AI universe.

Another company we talk about a lot is Perplexity. Perplexity is now looking to raise additional money, most likely $500 million at a $9 billion valuation. The deal is being led by Institutional Venture Partners, IVP, and through it we've learned that Perplexity's current revenue is about $50 million per year, up from $2.5 million in 2023, so a huge spike in the usage of Perplexity. And speaking of valuations and raising money, Anthropic is also looking at its next round. Its biggest investor so far has been Amazon, which has invested about $4 billion in Anthropic previously and is most likely going to invest a similar amount at a $30 to $40 billion valuation. But there's a little trick in this one: Amazon said it will invest that money in Anthropic only if Anthropic starts using Amazon's homegrown training chips on AWS, instead of the NVIDIA-powered servers it currently runs on Amazon and Google combined. So they're basically twisting Anthropic's arm to leave NVIDIA, at least partially, and move its compute to Amazon's homegrown chips, going back to the point from the beginning of the show about how the race over compute is intensifying, not just in volume but also between the different providers. The financials released as part of this news: Anthropic's total funding so far is $9.7 billion, and the company projects its 2024 burn rate to be $2.7 billion. So basically they're losing $2.7 billion this year, which explains why they need a lot more money, and that's before they add and train more models, which we know is extremely expensive. One more quick piece of news about Anthropic that is valuable to every one of you using Claude: they just added additional functionality to Claude 3.5 Sonnet. The coolest addition is that it can now analyze visual PDFs, meaning it can read and understand charts and graphs within PDF documents, which means you can ask questions and get answers on a lot more of the information in the PDFs you load into Claude.
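If you want to try the new PDF capability yourself, here is a minimal sketch using Anthropic's Python SDK. The request shape follows Anthropic's documented PDF support; at launch this was gated behind a beta flag, and model names change over time, so treat the specifics as assumptions and check the current docs. The file name and question are, of course, placeholders:

```python
# Minimal sketch: asking Claude 3.5 Sonnet about charts inside a PDF.
# Request shape per Anthropic's docs at the time of writing; verify
# against current documentation (PDF support started behind a beta flag).
import base64
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

with open("quarterly_report.pdf", "rb") as f:  # placeholder file name
    pdf_b64 = base64.standard_b64encode(f.read()).decode("utf-8")

message = client.messages.create(
    model="claude-3-5-sonnet-latest",
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": [
            {
                "type": "document",
                "source": {
                    "type": "base64",
                    "media_type": "application/pdf",
                    "data": pdf_b64,
                },
            },
            {"type": "text", "text": "Summarize the trends shown in the charts in this PDF."},
        ],
    }],
)
print(message.content[0].text)
```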
And since we mentioned a product release, let's talk about another interesting product released this week. Google has now expanded the availability of Vids, its AI video creation tool, to anybody with any of its business licenses, on all the different tiers. Google Vids is actually a very interesting product. It's not like Runway and similar tools that let you generate videos of whatever you want from images or just a prompt. It's more of a video-integrated PowerPoint, if you want, which allows you to take a topic, build a story around it, and then combine various types of videos and images into it automatically, while intervening in the process and making changes and adaptations as you go. It's part of the Google office suite, Google Workspace, and it will be accessible, as I mentioned, to everybody, allowing companies to create and iterate on video-based or video-infused presentations very quickly. The demo looks very impressive. I haven't had a chance to play with it yet, but I'm definitely planning to, and then I'll report on my findings.

That's it for this week. There's a lot more news, and if you want to find all of it, you can go and sign up for our newsletter; there's a link in the show notes, and there are going to be at least 10 more pieces of news in there. But as I mentioned, I'm going to try to keep these episodes shorter so you can consume all of it in one swoop, whether you're walking your dog, taking care of the yard, washing the dishes, driving, or whatever it is you do while listening to podcasts. I would love your feedback on this new format. Let me know on LinkedIn: just connect with me, Isar Meitis, and tell me what you think, whether it's better or worse, whether you liked the previous format, or any other feedback you have about this podcast. It will be highly appreciated. While you're looking at your phone to sign up for our newsletter, please give us a review and a rating on your favorite podcasting platform, whether it's Apple Podcasts, Spotify, or any other third-party platform. That really helps us spread the word and bring more AI education to more people. And while you're at it, I would really appreciate it if you click the share button and share this podcast with other people you know who can benefit from it, which is any person you know who is running a business or in business, because this is becoming a critical part of our future success as business people. We'll be back on Tuesday with another episode that will show you how to implement a specific AI use case, and until then, have an awesome weekend.
