Leveraging AI

39 | Top AI Wins: Key Business Use Cases from Hundreds of Companies, shared by Demetrios Brinkmann, founder and CEO of the MLOps Community

November 21, 2023 Isar Meitis, Demetrios Brinkmann Season 1 Episode 39

Is your business missing out on the AI revolution?

Artificial intelligence has exploded onto the business scene, leaving many leaders wondering: How can we leverage AI to boost efficiency, revenue, and results at our company? 🤔

Topics we discussed: 💡

  • Tangible ways companies of all sizes are currently using AI in business (no hand waving here) 
  • An insider's analysis of the most in-demand AI applications based on surveys of hundreds of companies
  • Practical frameworks to assess if and where AI could augment YOUR business 
  • Low/no code "starter kits" to get up-and-running with AI quickly 
  • Coding assistants and other tools making AI more accessible to non-techies 
  • Key considerations as AI capabilities continue their exponential growth 

Our guest Demetrios Brinkmann knows the AI business transformation scene inside out. He founded the 18,000+ member MLOps Community back in 2020, sharing practical insights on implementing AI. With so much hype and uncertainty around AI, Demetrios provides a refreshingly realistic, business-focused perspective.

Connect with him on LinkedIn to keep tabs on the latest AI trends and opportunities!

About Leveraging AI

If you’ve enjoyed or benefited from some of the insights of this episode, leave us a five-star review on your favorite podcast platform, and let us know what you learned, found helpful, or liked most about this show!

Hello, and welcome to Leveraging AI. This is Isar Meitis, your host. In today's episode, we're going to dive into the results from a survey done by the MLOps community, in a conversation with the community's founder, Demetrios Brinkmann. At the end of the episode, I'm going to share some huge news that happened this week. I say that a lot of weeks, but the news this week was really absolutely insane, so stay put for that. By the way, I'm going to change the format moving forward. This is the last time that I'm going to do the news as part of the episode. I'm going to start releasing a very short episode every Friday that will focus only on news, and the Tuesday episodes are going to stay just the interviews. It will make it easier for you to consume, and every one of them is going to be slightly shorter. But for now, let's dive into the top use cases that companies are implementing with AI.

Isar:

Hello and welcome to Leveraging AI, the podcast that shares practical, ethical ways to leverage AI to improve efficiency, grow your business, and advance your career. This is Isar Meitis, your host. Many of us started using AI for business purposes in 2023, in the wake of the release of ChatGPT sometime late November of last year. That being said, companies and businesses have been using AI and building machine learning capabilities for years. Our guest today, Demetrios Brinkmann, is the founder and CEO of the MLOps community. This is a global community of people who have been implementing AI solutions for businesses for years. Demetrios founded this community in April of 2020, so he's been having conversations with machine learning and AI implementers, probably on a daily basis, for three and a half years now. The MLOps community has recently performed a couple of surveys among its members, and the surveys help them understand what people are actually working on in businesses: what are the biggest use cases, the things that rise above the other stuff that people are doing in implementing AI in support of business needs, and also what are the roadblocks and what are possible solutions. So this is real-world information from hundreds of companies that are willing to share what they're doing with AI. I find this invaluable for myself and for people who are interested in implementing AI in their businesses, so I'm really excited to have Demetrios on the show today. Demetrios, welcome to Leveraging AI.

Demetrios:

Thank you, Isar. Wow. I realized there are levels to this podcast thing, and you are quite articulate there. I appreciate such a kind intro.

Isar:

This is what I do.

Demetrios:

Yeah, I can tell. It's very well put.

Isar:

Thank you. Thank you. Listen, you really have a view that not a lot of people have, right? Just because you're talking to, as I said, hundreds of actual implementers, people who are doing this. Let's really dive straight into who the people in the community are, just so people have an idea of what kind of companies we're talking about as far as sizes and where they are in the world, so people can understand how wide this thing is.

Demetrios:

Yeah, definitely. As you mentioned, we started in 2020, right when everything was making the shift to go online. Back then it was a couple hundred of us after a few months, and that's when I realized that, wow, there might be some traction here, because people can't find this information on Stack Overflow; the things that we're talking about are a little bit too new for that. So it's not something you could go and Google and find really quality results on. That, combined with all of the pandemic and everyone being inside, made it much more of a homey place, and people started coming. Back then it was very much people at the forefront, and it was easier to say who was in the community, because there was a couple hundred of us. Now there's over 18,000 people in Slack alone. On YouTube, we have something like 18,000 subscribers, we're in over 37 cities around the globe doing in-person meetups, and then there's however many people come to each one of those meetups. So it's a little bit harder to say. I think I can safely say that if you're thinking about a company that's doing machine learning or AI, they probably are in the community. And that means everything from the smallest of the small, the one-person army, to the gigantic companies, Fortune 10 companies driving the stock market, that type of thing.

Isar:

Wow. First of all, incredible. And congratulations, this is really amazing. It's really everybody, right? Anybody who's implementing AI is a part of this and is participating in the conversation. You said something that is very interesting to me, and I think a lot of people don't understand it, and they have serious fears of this whole thing because they feel that they don't know where it's going. I think the beauty is in what you just shared. For those of you who don't know, who are regular business people, not techies: Stack Overflow is the Bible, right? This is where everybody shares every piece of code that they want to share. Some people don't share some of this stuff, but the stuff that is shared is on Stack Overflow, meaning when you want to have conversations about how to write code, how to create this module, how to connect these things, how to do this API, it's all happening on Stack Overflow. And what Demetrios is saying is that the whole AI thing just did not exist, from a conversation perspective, three years ago, which means it's very new to everyone, including on the technical implementation side of things. That being said, it also shows you how quickly this thing has grown, from a few hundred people three years ago to tens of thousands right now. So let's dive into the survey itself and the things that you found relating to what companies, again, big and small, and if you can segment that, even better, are actually doing with AI implementation. And for those of you who are thinking, oh my God, this is going to be a tech discussion: it's not going to be a technological discussion. It's going to be a discussion about actual use cases, looking through the eyes of the tech people who are actually implementing.

Demetrios:

Yeah. One thing that we try and focus on in the MLOps community, again, breaking it down, because I imagine some people are like, what is that word that they keep saying? I understand "community," but maybe the "MLOps" part is a little foreign to people. ML stands for machine learning, which is what we used to call AI; I guess now we can just call it AI. And "Ops" is the operations part. So it's machine learning operations, and it stems from the practice of DevOps, which is a whole other practice around developer operations. We really focus on getting the AI into production. That's one of the main things, and what you'll hear us talking about in the community a lot is the business trade-offs that you're making. I kind of contrast that with research. There's a lot of great AI research happening, and especially these days there are tens or hundreds of papers coming out every day on AI and the newest, best things being done in all of these different universities and research centers across the world. But what we really focus on is: how can you take all of that state-of-the-art stuff, make use of it, and gain value for your business out of it? So when we put the first survey together, and then subsequently the report that came from it, we said, you know what, we want to figure out how people are actually using LLMs in production. LLMs are basically your ChatGPT, large language models. We wanted to know: are people actually using them? And if so, what are the use cases? What are the things people are thinking about? What are the trade-offs we're going through? Because if you just follow Twitter, you think the world's changing and, oh my God, AI is taking over everything and all of our jobs are not going to be relevant in six months. So we set out to set the record straight and see what was going on with the people who are actually using this. And what we found, going back to your question, is that there are use cases that are starting to cement themselves. When we put out this survey, it was around April of this year, 2023, and even from April until now there's been further solidification of use cases. You have these different pieces of LLMs being used for certain things, and you have different spectrums of how complicated you can get: whether I want to just go and grab some kind of third-party tool, or whether I want to host an open-source model in house and make sure I have full control over it. The use cases that I think were probably the coolest and the most worth noting are, obviously, chatbots. I think everybody has at least played with ChatGPT in some capacity at this point, or if they haven't played with it, they've probably heard about it. That's a form of a chatbot. A lot of people had the version where you just stick a chatbot on your home page or your landing page, and that's cool. But the novel way we saw people using chatbots was where you could talk to a chatbot and ask it, if you were a project manager, or just a manager in any capacity, about projects that were happening. Then you don't have to bug your team about certain things, because potentially all of this stuff is documented; it just may be hard for you as a manager, looking over two or three or ten projects, to figure out where it's documented when you just want an answer to one quick question. So you can ask a chatbot that has all the context, understands it, and is continuously being updated on the project's movements and evolution. That was a really cool way that we saw chatbots being used. And then...

Isar:

I just want to pause you for one second. The idea is very simple, right? As business people, we have multiple data points, and in many cases, beyond the fact that there are multiple data points, they're siloed in different boxes. Let's take a project manager: you have project management data within a project management software, and then you have CRM data about the specific clients, and then you have Excel files in a different place looking at the financial side of the project, and then you have all these pieces of emails between you and the client and the integrators and your subcontractors. To really understand everything that's happening right now is very hard, which means it requires regular meetings with multiple people who are not always available, and these meetings waste their time just to give one person an idea of what's going on. All of this can go away if there's a system that can actually look at all the data every single moment and answer any question you want, connecting the dots across all these pieces of data. And this is exactly what these chatbots do. You give them access to all that data, and then you can ask any question and get a short and accurate response based on those multiple data points that is otherwise very hard to get. The same thing is true for the finance department, and the HR department, and the marketing department, and the sales department, and so on, all the way up, if you connect all of these together, to the leadership team and the CEO, who can look across whatever they want to look at. So from the ability to provide internal information to make better, more accurate, timely decisions, this is pure gold. It's something that was very hard to do before. And again, the only way to do this was to get multiple people in a room, and some of them don't always have the information, and it's "let me get back to you" and stuff like that. All of that can go away if you have one of these models built and accessible to the people.
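A minimal sketch of the pattern described above: pull the most relevant snippet out of several siloed sources before handing it to a language model as context. The source names, documents, and the keyword-overlap scoring are purely illustrative stand-ins; a real system would use embeddings and an actual LLM for the final answer.

```python
# Toy "project chatbot" retrieval step over siloed data sources.

def tokenize(text):
    """Lowercased whitespace tokens, as a set for overlap counting."""
    return set(text.lower().split())

def retrieve(question, sources):
    """Return the (source name, snippet) with the highest word overlap."""
    q = tokenize(question)
    best_name, best_text, best_score = None, None, -1
    for name, text in sources.items():
        score = len(q & tokenize(text))
        if score > best_score:
            best_name, best_text, best_score = name, text, score
    return best_name, best_text

# Hypothetical siloed sources a manager would otherwise check by hand.
sources = {
    "project_tracker": "Milestone two of the website redesign slips to March.",
    "crm": "Client Acme renewed their annual contract in January.",
    "finance_sheet": "The redesign budget has 40 percent remaining.",
}

name, snippet = retrieve("When is the website redesign milestone due?", sources)
# The retrieved snippet becomes the grounding context for the model's answer.
prompt = f"Answer using this context from {name}: {snippet}"
```

The retrieval step is what lets one question "connect the dots" across boxes: the chatbot only generates from whatever context this step surfaces.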

Demetrios:

Yeah, you said something there that is probably worth calling out, which is accurate information. The most astounding thing out of the whole survey is that the biggest pain and challenge people are working with right now is the models' tendency to hallucinate. How do I trust that this output is actually true? Because the model is going to tell you what it tells you, and it's going to act like it's true 100 percent of the time, and it's up to you to discern whether it's true or not and figure out ways to make sure that it is. So that was really important, and it was a key challenge that people are working through. That's one of the reasons we later said, okay, our next survey is going to be all about evaluation: how do you evaluate the robustness of different models, what they're capable of, what use cases they're good at, what use cases they're bad at, et cetera.

Isar:

So I want to ask you a couple of follow-up questions. I'll start with the first one: what does a company need in order to even get started with implementing such a chatbot? Okay, they have the data, and like I said, it's scattered across multiple boxes in multiple formats, but what do they need in addition to that in order to get started, and in what ways? You said that there are simple ways to solve it and there are way more complex ways to solve it. So if you can break down the different ways and the pros and cons of each one, I think that's going to be very valuable to people.

Demetrios:

Yeah, I think you want to look at what your constraints are. If you're a smaller company that doesn't necessarily have engineering talent, then the easiest path is probably just to grab some kind of SaaS solution off the shelf that will connect to your data sources, like your Notion or your Google Docs or your email, whatever it may be, and collect all of that; then you basically outsource the hard work to them. Now, I mentioned looking at your constraints, because potentially there's sensitive data you don't want, or can't have, others touching. In that case you have to take a different route and really think it through in a different way. There, I think it's a little bit harder, but you can also get a SaaS solution that is just super locked down, has SOC 2 and whatever other certifications you need to feel comfortable moving forward. That, I would say, is the easy route. And because it is the easy route, you don't necessarily get as much control over it, so potentially you're not going to be able to turn the knobs and tweak it. The joke we always make is: the best part about a managed solution is that it's fully managed and you don't have to worry about anything, and the worst part about a managed solution is that it's fully managed and you can't go behind the veil and tweak anything to your liking. So you have that end of the spectrum: just get something off the shelf. And then...

Isar:

Just to give people an idea, there are tools for this. The one that I see the most is Dante, which is a commercial tool that you can pay whatever, 20 or 40 bucks a month, and you can dump data into it or connect it to data, and then you have a bot that you can ask questions. There's a bunch of others, but that's what they do. They're a very good starting point if you don't have the technological skills in house to do something else, or the budget.

Demetrios:

Totally. Totally. It gets you up and running quickly, and then you can figure out if it's worth it for you. Because the other thing I want to mention from the survey, which was neck and neck with "how do I trust the output of these models," is "how can I truly assess the ROI of using these large language models?" I am probably the most skeptical person out there on whether you actually need AI. Potentially it's not that your whole business needs AI; it's just that for a certain use case, like when your marketers are creating copy for the landing page you're spinning up, they can augment their capabilities with some AI. And then you may not need a fully built-out system. You can just go to some third party; you can even just go to ChatGPT and say, hey, I need help with this landing page copy, how would you edit it? Or inside Notion, Notion AI has that capability, and I think Google Docs even has that capability now. Everyone's scrambling to put it into their product because they understand the value of it. But going back to the point I'm making: try it out, figure out if you really need it. I almost look at it as two different ways of using AI. One is using it internally, with use cases like text summarization or text generation and the chatbots that we were talking about, internal chatbots, external chatbots. And then you can use it in a way where you're putting it into your product. Maybe there are very difficult ways of interacting with your product that only the power users know, and all of a sudden you can create a natural language way of interacting with your product, so someone can say, oh, I want to do X, Y, Z, and then, boom, that gets translated into actually happening in your product.
As we go along that spectrum, this is where it gets continuously harder and harder. You have the off-the-shelf solution, the SaaS solution, where, okay, I just connect stuff up with my data. Or you can go along the spectrum and say, all right, now I'm going to create just an API call to ChatGPT, or OpenAI, or Anthropic, one of these third-party model providers. Because you can do that, you also outsource the heavy lifting of keeping that model up and running. It's debatable how good they are at that, but you know, especially if you're not at a gigantic scale, that if I send out an API call, I'm probably going to get something back. They've been getting better over time; when the whole craze started, it was very hard to trust that you would actually get answers back when you sent out an API call. And for those who are unfamiliar with API calls, this means incorporating the AI capabilities into something without needing to go to the ChatGPT website to get that capability. And then, just finishing up, there's what I would consider the most advanced option, and that is where you say: because of whatever reasons, because of whatever constraints we have, we want the control, we want the privacy, we want something that ChatGPT or these third-party model providers don't give us. So we are going to take an off-the-shelf open-source model, which is out there, and people may have heard of Llama 2, a big one that came out from Meta, and you're going to host that in house, and you're going to make sure that when people are trying to interact with AI in your product or on your data or whatever it may be, you do all the hard work of making sure it's up and running and continuously giving you answers when you ask it.
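To make that middle option concrete, here is roughly what a chat-completion request to a hosted provider looks like. The endpoint, model name, and header shape follow OpenAI's public API, but treat the specifics as illustrative and check the provider's current documentation; the request is only constructed here, not sent.

```python
import json
import urllib.request

API_KEY = "sk-..."  # normally read from an environment variable, never hard-coded

payload = {
    "model": "gpt-3.5-turbo",  # whichever hosted model you are licensed to call
    "messages": [
        {"role": "system", "content": "You answer questions about our project docs."},
        {"role": "user", "content": "What is the status of the website redesign?"},
    ],
    "temperature": 0.2,  # keep answers conservative for internal factual use
}

request = urllib.request.Request(
    "https://api.openai.com/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {API_KEY}",
    },
)
# urllib.request.urlopen(request) would actually send it; the provider keeps
# the model up and running, and your code only parses the JSON that comes back.
```

This is the sense in which the heavy lifting is outsourced: the entire "model serving" problem collapses into building one HTTP request and handling its response.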

Isar:

Yeah, I think you broke it into three buckets that are distinct, and I think they're very relevant. One is a tool like Dante: you sign up for it, you give it your data, and you're up and running. The second one is: I'm going to build my own solution, but I'm going to rely on a third-party large language model, which means I don't need a lot of the work of figuring out what kind of servers I need, how to keep them up and running, how to create redundancy, what I need for security. All those things you do not need to do. So that's the middle-of-the-road solution. And the top of the line, the Cadillac solution (if Cadillac is even top of the line anymore, I don't know), the most complex but with the most control, is: I'm going to host my own stuff, meaning I need actual data ops people to set up servers on some kind of cloud, or even host it locally somehow in a data center, plus computing power and connections and APIs and uptime and all the other baggage that comes with running your own thing. But then you have full control and complete trust; you know where the data is, and so on. And for some industries, they won't have a choice, right? If you're in the legal industry, the healthcare industry, defense, some of these places, it will be very hard to say, oh yeah, we're going to trust Anthropic to do what they say they're doing, which is make sure all our data is safe. I want to ask you a question that I'm personally curious about, which I think falls somewhere between category two and three of what you just defined, which is the big cloud platforms. Azure from Microsoft, AWS from Amazon, and Google Cloud from Google all now have this AI layer where you can basically pick and choose which large language models you want to run with.
And they already have the infrastructure to connect it to your data that's in Azure. How does that exactly work? Meaning, is it really as simple as saying, oh yeah, I already have all my databases in AWS, now I can pick Llama 2 or Anthropic and connect it to the data, and with very little work it will be up and running while still keeping it secure within my sandbox? Or is it really more like option three that we talked about, just with easier setup, because the data is already somewhere?

Demetrios:

I think you're definitely looking at something in between option two and option three. One thing that I wanted to point out is that with option two, where it's just API calls to a third-party model provider, the type of engineers you need don't necessarily need to be machine learning engineers; you don't need to know that much about machine learning or AI in general in order to get value out of it. That's, I think, one of the reasons there was this gigantic boom: all of a sudden, all of these software engineers, and there are hundreds of thousands of software engineers across the globe, could do it, whereas there's a much smaller pool of machine learning engineers, who deal with different types of problems. This is what is now being called the AI engineer: basically someone who can understand what you need in order to get value out of a large language model. It's not that you necessarily need to understand how that large language model works, or how to serve it. All of those things you don't worry about, because, again, you're outsourcing that headache to a third party. You're just making sure: hey, do I have the glue in between? Do I have a database that can correspond with my large language model? There are now types of databases called vector databases, which are very popular. And do I have some kind of orchestration tool that can help make all of this work? If you're an engineer, a software engineer, then it's not that big of a step to understand that. Now, going in between and starting to use an Amazon SageMaker or Bedrock, with open-source models that you're using through SageMaker, the surface area of what you need to understand
just starts growing and growing. It is harder, but not as hard, I would say, as if you were to do it on-prem with a fine-tuned model that you just grabbed, say Llama 2 from Hugging Face, and then decided to make your own model, fine-tune it, and figure out all of the things around it. So yeah, it's a little bit in the middle; I would say it's probably more towards number two than number three.
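The "glue" between your data and the model mostly comes down to similarity search over embedding vectors, which is what a vector database does at scale with proper indexing. The brute-force version below shows the core idea; the three-dimensional vectors and document names are made up for illustration, standing in for real model-produced embeddings.

```python
import math

def cosine_similarity(a, b):
    """Angle-based closeness of two embedding vectors, in [-1, 1]."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Pretend these are embeddings an embedding model produced for stored docs.
index = {
    "refund policy":  [0.9, 0.1, 0.0],
    "shipping times": [0.1, 0.9, 0.2],
    "password reset": [0.0, 0.2, 0.9],
}

# Embedding of a user query like "how do I get my money back?" (made up).
query = [0.8, 0.2, 0.1]

# A vector database answers exactly this question, just much faster.
best_doc = max(index, key=lambda name: cosine_similarity(query, index[name]))
```

A software engineer comfortable with this loop already understands what a vector database and an orchestration layer are doing on their behalf.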

Isar:

Okay, good answer. So as a recommendation, now let's make it practical for people. If I'm a company, what are the considerations? What we already discussed, right? First, data security: how much do you need your data to be in a box that you're certain is your box? The second thing we mentioned is complexity and skills: how much effort and resources and money and time do I want to invest in getting this thing up and running? The third thing we said is the people that I have: I don't have any software engineers in my company, not even third party that I can tap into, or want to tap into; versus I have software engineers; versus I have machine learning and AI engineers. That will help me understand where I fall. And I would assume the suggestion, and I'm asking, not saying, is: start as low as you can on that scale to see, is it really working? Is it really providing you any ROI? Is it really doing what you think it's doing? And then decide if you want to continue down that path and get more and more technical, more and more complex.

Demetrios:

Yeah, exactly. If you can, the easiest way is just to hit a third-party model provider, as much as I don't enjoy saying this. If you're on this path where you're trying to see whether adding AI to your product or your workflow or your service is going to bring you that augmentation, that extra oomph, and add something to your offering, then try to get it up and running, working, and being used by people as fast as possible. That's one thing that I've learned over the years. The bane of existence for the data scientist, who has traditionally been the one doing this with ML, is that you can explore for ages, and it's very hard to know if what you're creating is actually adding value. So you have to tie it closely to metrics, some kind of metrics that people care about in the company. You have to figure out: okay, does this capability actually add to the overall metric that we're trying to move the needle on?

Isar:

Yeah. When I work with businesses, I always talk to them about this. Even when you do small things within the company, pick whatever tool and figure out how to implement it in different use cases, I always go back to: you have to put KPIs in place, and you need two types of KPIs. You need the trailing indicators: okay, this has saved us 150 hours of work this last month, which translates to X number of dollars because the average salary of the people this has helped is X. So you can put a dollar value on it, but to know that, you sometimes need to wait to see it. The leading indicators can be: are people using it? How often are they using it? What results are they getting from it? Are they going back to what they did before or not? You can put other KPIs in place to see that a solution you're testing is actually yielding a result that has business value. Because otherwise: oh yeah, it's really cool, and we just spent two months on it, but either nobody's using it, or they're using it but it's taking us the same exact amount of time it took before. If it's just cooler, then you just wasted two months of salaries and computing power and so on. So I agree with you a hundred percent. I think at the end of the day, people have to remember that this is a business decision that has to be tied to a business strategy and a process and KPIs, to measure that this is actually moving things forward, despite the fact that there's very serious FOMO right now of "Oh my God, everybody's implementing AI and I have to do it as well," and despite the fact that it's really cool.

Demetrios:

I love that. One thing that I will say is that in the companies I've seen doing this well, it's not necessarily that the feature comes from the machine learning engineer or the software engineer. It's more a product engineer or product manager who understands the product, understands the capabilities of the product, and can understand what the user is looking for, with the empathy to see where users are getting stuck right now. If this product engineer understands what AI can do in a real, tangible, feet-on-the-ground way, not the Twitter demos of "AutoGPT is now going to create these agents which will be able to go out and do anything on the internet," which just is not the case for 99 percent of the people who have used these agent tools, because it's very difficult, and we can get into that if you want. I think the product owners and product engineers who understand AI, but are very heavily grounded in their product sense, are the ones who say: oh, maybe we can add an API call here, and we can just magically add this feature of natural language interaction, and then, boom, we can do it. And they understand that there are certain metrics they need to tie things to. I'm thinking about a specific instance, when I talked with this guy Philip from Honeycomb, and he spoke about their complicated product, which is a monitoring and observability platform. To break that down in a very simple way, it answers: are my servers up and running? Are we doing what we say we should be doing? And if we're not, is our product failing, and why? Then we can debug why it's failing. So Honeycomb helps you do that. And Philip talked about certain things he watches: he sees success when someone has accomplished three tasks in the first 30 days of testing out the product.
And so he noticed that when they added AI capabilities, those three things were like 85 percent more likely to be used. And so then, boom, they said, you know what, however much we're spending on API calls to OpenAI, if we just get one converting customer, it makes it all worth it. So he was very clear on what the metrics were that he was trying to change, and how the money they're spending on the API calls using the AI is actually net positive.
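The KPI arithmetic in this exchange — trailing indicators like dollars from hours saved, leading indicators like adoption, and Philip's one-converting-customer break-even on API spend — can be sketched in a few lines of Python. All figures and function names below are invented for illustration; they are not Honeycomb's real numbers:

```python
def trailing_roi(hours_saved: float, avg_hourly_cost: float, monthly_tool_cost: float) -> float:
    """Trailing indicator: dollar value of time saved, net of what the tool costs."""
    return hours_saved * avg_hourly_cost - monthly_tool_cost

def adoption_rate(sessions: list) -> float:
    """Leading indicator: share of tracked work sessions where people used the tool."""
    used = sum(1 for s in sessions if s["used_tool"])
    return used / len(sessions)

def breakeven_customers(monthly_api_spend: float, revenue_per_customer: float) -> float:
    """How many converting customers are needed to cover the LLM API bill."""
    return monthly_api_spend / revenue_per_customer

# 150 hours saved at a $60/hour loaded cost, with $500/month of tool spend:
print(trailing_roi(150, 60, 500))       # 8500.0
# Hypothetical: $2,000/month in API calls against a $5,000 contract value:
print(breakeven_customers(2000, 5000))  # 0.4 -> one customer more than covers it
```

The point is not the arithmetic itself but that each number maps to a KPI someone has agreed to watch before the project starts.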

Isar:

Yeah, I definitely agree. I think it's, first and foremost, a business problem. And then the second thing, and I agree with that as well, it's a product question. And even if it's not a product, like if you're not a SaaS company trying to implement it, then look at the processes that you're doing in house, look at the processes you're doing with your clients, and look at how they are being improved. How much time is it saving? How much more money is it generating? Whatever the case may be, it has to be tied to a business KPI that you can actually measure, to see whether it's worthwhile. And as we mentioned, you can start small. You can start implementing what we said earlier: one of those, not open source, but SaaS solutions that you can connect to data. It takes a non-techie person less than a day to get started and play with it. It's literally going in, setting up your logins to the different things you want it to have access to, and defining some parameters of what you want it to do, and you can test whether it's working or not. It's that simple. And so within an investment of $20 to $50 a month, which from a business perspective is free, you can test the concept, right? And if it's working, and then you're saying, oh, I wish we could do this, now you can continue down the rabbit hole, going further and deeper, understanding that it's going to require more resources and more time, but it will yield more accurate, more consistent, better connected results, whatever you want to challenge in that particular thing. So great conversation. I think this was awesome. I know there was one more use case you wanted to mention as far as coding, and I'm very curious about that. And I wanted to touch a little bit on it, because I don't know if all the listeners care about coding, but I think more and more companies, even if they're not a software company, have people who are writing code to support things that they're doing.
And I think understanding what the capabilities are today for writing code with AI could be another very interesting point to touch on.

Demetrios:

Yeah. And you've seen lots of these coding copilots come up. It's not just Microsoft's Copilot that's out there. I was actually talking to a founder yesterday that has this tool, Codeium, and they've got a lot of users, like over 400,000 users, and they built their own LLM. Again, they're playing on hard mode. They said, okay, we can't differentiate ourselves by just building exactly what Copilot has, because Copilot is always going to be a winner there, since Microsoft invested 10 billion into OpenAI. What can we do that will differentiate ourselves? It comes down to: we're going to build everything from scratch. And they had the expertise. They were a bunch of guys that came from the autonomous vehicle realm; they were working at Nuro, I think it was. So then they said, all right, let's go in, let's build it. And they built it and came out with it. And they said that because of their expertise at the company they were at before this, they understood how to serve so many people at such high scale and so quickly. And so now you can use that tool, or you can use Copilot, or there are a few other ones that I should probably mention. There's one called Cody, from Sourcegraph, I think. And then there's another one called BITO, B-I-T-O. And all of these folks

Isar:

have their own... And there's GitHub's Copilot, right? That's another

Demetrios:

one. That's, yeah, that's like the classic one. I think everybody has probably at least heard of it a little bit. And so these are quite useful. And I know Replit also has one. It's like Ghostwriter or StarCoder, I can't remember if I'm getting them mixed up, because HuggingFace has an open source model that's one of those two. Maybe it's StarCoder, so don't quote me on the title there. But Replit is like a GitHub, and I think all the coding tools are going to have some of this built in, because it just makes sense. It's very useful; it ups the productivity of your developers. Now, I will say this, though. Again, it's easy to say that things are going to be totally different, everything's changed, and now we have a copilot that's going to make any bad engineer an incredible engineer. But what we've seen is that you still have to know what you're doing. Surprise! You can't fully replace experience and understanding of code bases with a coding copilot, because if you don't know how to code, then you don't know if the code it spits out is correct, or whether it's the best way to do that, or why it could be problematic down the line. And that ends up getting you into trouble.

Isar:

And I think another thing, just to clarify this for people: these coding copilots, what they do is spit out code, and by now they know how to do it in multiple coding languages and for multiple use cases. They're very good at saving time on stuff that is just tedious work. Like, I need to write this thing that does that thing, and it's whatever, a hundred lines of code, and it will spit it out for you based on the parameters you told it, and it saves you writing a hundred lines of code. The thing is, as you get further from the code, up into understanding what the system you're trying to build needs to do from an architecture perspective, from an efficiency perspective, from the hardware it's using, there are so many other layers beyond the hundred lines of code you needed to write that you need to understand in order to make your code the most efficient for that particular setup. And that's where these things are not very good, because they're not engineering the system, yet I should say. But as of now, they're very good at writing these short code snippets, if you know how to explain what you want to them, and if you know how to test that the code is actually doing what it should. It could make people, and you may have numbers, I don't know from an efficiency perspective how much more efficient it makes people who have actually played with it and used it, but I'm sure it's tens of percent,

Demetrios:

right? Yeah, for sure. And I don't have an exact number on that, but I will say that it makes you much faster at doing those tedious things. It also can be problematic, because you accept the autocomplete, and then later on you're like, why is this not working like it used to? And you realize, oh, it's because I accepted that autocomplete. And I will say, too, that with Codeium, I dug in when I talked to the founder yesterday and was asking him about that, and he understands those downfalls. He is making a point of trying to take as much context as possible from your code base before giving you a suggestion on where you're going. So it's not that it has no context and doesn't understand what you've been writing, what your system looks like, what hardware constraints you have, et cetera, et cetera. And so in that regard, I do see it getting better really quickly. I feel like it is an amazing tool, and it is one of those use cases, again, that is a shining star. The chat-with-your-data and the coding use cases make it very clear that LLMs are very good at this. And with coding, it's even better, because you can potentially test your code after you accept that autocomplete and see, does the code still run? And so it's very black and white. It's not like testing copy; copy is very subjective, right? If it's marketing copy that you're putting on a webpage, you can't really put that through a yes-or-no, does-the-code-still-run type of check, but when it comes to code, you can. So I see a very long future in this. I wonder, I don't think it's going to be like what we saw on the GPT-4 demo day, where they were like, oh, I just drew this picture and now it spit out this full-blown webpage. I don't see that happening yet, but hopefully it will, because I would love that. I think that would be really cool.
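The "does the code still run" check Demetrios mentions can be made concrete: before keeping a generated snippet, execute it in an isolated namespace and compare it against a few known input/output examples. This is a minimal sketch — the snippet, function name, and test cases are all made up, and a real setup would run the project's full test suite in CI rather than a bare exec:

```python
def passes_smoke_test(snippet: str, func_name: str, cases: list) -> bool:
    """Run a generated snippet and check it against example (args, expected) pairs."""
    namespace: dict = {}
    try:
        exec(snippet, namespace)  # define whatever the snippet declares
        fn = namespace[func_name]
        return all(fn(*args) == expected for args, expected in cases)
    except Exception:
        return False  # code that crashes or lacks the function fails the gate

# A hypothetical assistant suggestion, checked before accepting the autocomplete:
suggestion = """
def slugify(text):
    return "-".join(text.lower().split())
"""
print(passes_smoke_test(suggestion, "slugify", [(("Hello World",), "hello-world")]))  # True
```

This black-and-white pass/fail signal is exactly what marketing copy lacks, which is why the coding use case is easier to evaluate.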

Isar:

Yeah, very interesting. I think a quick thing on the coding stuff, and it's important for people to understand this, by the way, for everything, not just for coding: the AI that exists today is the worst AI we'll ever have, right? We are in version 0.1 of all of that. Even the stuff that is quote-unquote mature is in the very, very early infancy stages. So everything we're saying that is true today may be somewhat wrong in a month, and it may be completely wrong in six months. Meaning, the ability of these things to make progress very fast is nothing like we've seen before. But as of now, this is an amazing time saver for writing pieces of code that you need. And like you're saying, there are already steps in the direction of, oh, I need to understand the broader system and the broader code, and the way this particular person writes and sets up his code and comments and so on, so I want to make it fit that. It will move in the direction where we'll be able to create more and more software. And the beauty of that, going back to the applicability of this to anyone, and that's true for all the AI stuff, is the democratization of things. So far, let's say you wanted to start writing code within your company, regardless of what you do, you either needed to find a third-party company or hire six, seven, eight people to get the job done. And within a year from now, you might be able to get away with hiring one guy, because with these tools he'll be able to create a lot more stuff, a lot faster, and do all the things the other people were supposed to do. And yes, that requires a slightly different set of skills, but I think, again, developers, probably more than any other profession in history, have learned to adapt to new languages and new development platforms and new integration capabilities and new database types, because it's constantly changing.
And so I think we'll see a whole new era of new kinds of developers who will know how to use all these tools in order to do stuff that a year ago would have been considered magic.

Demetrios:

Yeah. So I have two things to mention about that. One is, I contemplate a lot about where we are on the S-curve. Are we at the bottom of that S-curve, and we're about to just see vertical growth for the next six to 12 months, or maybe six years, who knows? Or are we at the top, where we're just going to be improving logarithmically for the next 10 years, slowly getting better, like with the iPhones, where nothing really changes from one model to the next? But if you look back from the one I have now to the one we had when it first came out 10, 15 years ago, there's a big difference, right? I am not sure. I have no idea where we are on that, nor do I think anybody can predict with confidence where we are. And even if they tell you that they can, I would be skeptical, because maybe we've hit a plateau, or maybe we're at the bottom and we're about to get another huge breakthrough, right? And so one thing I've heard people talk about is, how can I prepare, especially as a business owner? How do I prepare if, in six months, all of the stuff that I'm working on right now becomes irrelevant? Or, like you were saying, I go out and I hire seven software developers, and then in six months I only need one. And why don't I just wait six months and get that one? That one is not guaranteed, and I would probably err on the side of: you're probably not going to have just one anytime soon. But the wisdom I've gotten from talking to people is that you want to optimize your system for what is possible now, and then whenever that gigantic breakthrough comes out, whenever it is, you can upgrade. You don't need to make these Herculean efforts to try and get that extra 1 percent. Unless you're Meta and billions are made with that extra 1 percent, then it's obvious. All right, yeah, throw as much money as you can at it, because even a 0.004 percent ads optimization is going to net you a whole lot of cash. For the rest of us, that Herculean effort to try and get that extra 1 percent, you don't necessarily need to do right now, because in six months or in 12 months, we're probably going to be 20, 30, 40 percent better than we are today. And so that 1 percent isn't really going to mean much, right? So that's how I've started looking at it. I don't necessarily want to say, oh, what is that going to look like as far as how many software developers I need to hire? I will just say, yeah, let's try and optimize for the tech that we have now, today, and not get FOMO that in six months, after I have this six-month project and finally put my baby out into the world, there's a whole new technological breakthrough and it renders whatever you created obsolete. So I wouldn't worry too much about

Isar:

that. I love this comment, and I want to generalize it for a second. It's true not just in coding, right? It's true in this machine learning world, or AI world, that we're in, where there are literally hundreds of tools coming out every single day. It's a chase that a business cannot do, because you've got to start doing stuff. And so when I work with companies, it's always a mix of two things. One thing is figuring out the low-hanging fruit: what are the places where one AI tool or another can really yield immediate business results, either more money, more clients, savings, whatever optimization of business processes you can do? Then you can win right now. It doesn't matter if there's a better tool in six months, because right now you just saved yourself 200 hours a month, or you just have 10 percent more clients every month with the same resources you have right now. So this is number one. And number two is really the other end of the spectrum. Okay, let's look at the business strategy as a whole. Will our clients need the same things they need right now two years from now, because AI is coming in? Do we need to make changes in how our business is structured, the type of clients we're going after? The more strategic thinking, knowing that this is going to get better and better, either quickly or slowly, it is going to get better. And so now, through that lens, let's look at our business as a whole, from a strategic perspective. And if you combine these two things, you don't really care about the fact that six months later there's going to be another big thing. And the other thing that I tell people is: figure out how to use AI. Don't figure out how to use this tool; figure out how to use AI for this use case. Because then, if six months from now there's a better tool for this use case, you already know how to implement it and how to use it.
The team is more comfortable doing it, and you can upgrade relatively easily. Versus, oh, I just spent six months learning how to use this tool, and now there's a new tool. So always think about it from the use case, business perspective, and not through the tool's lens. And then the next step will be a lot easier, because now you know how to solve for the use case with AI in general, and not just with this particular tool that you fell

Demetrios:

in love with. Actually, that's a great point, because we've been talking a lot about large language models, which are based around language, but there are two other sectors of AI, I think, that have had incredible progress over the last year and a half. One is voice: either it's us talking and that being transcribed into text, or it's us writing text, or potentially a large language model writing text, and then it being put into our voices, because it has enough of that training data. So voice, the audio side, is incredible. I'm not the biggest fan of music made by AI. I know that's been happening. I'm actually a musician at heart, and I just haven't gotten into it. I do understand that what's happening there is incredible, but I'm not going to be making any AI-generated songs anytime soon. That's a personal opinion, but it's out there too. And the second area, besides audio and voice, is images, and generating images from text. The rate at which that got so good, and how quickly, is absolutely mind-blowing. Especially if you're looking at something like Midjourney, almost everything you can prompt it with looks great when it comes out, especially on some of these newer versions of Midjourney. So it goes back to what you were saying. I just wanted to bring it back to this idea of: know where these different capabilities lie, and how you can augment your own workflow, or your company's workflow, with these capabilities.
So maybe you can go and find a tool that can create 20 different logo versions if you're a design shop, and then you have the professionals do the last finishing touches. Or, I know there are a ton of other use cases you can do in this area. My friend actually has this company called Storia, and he is focusing on... when you create a movie, the directors first have their storyboards of the shots they want to get. So he is helping directors create storyboards with these AI-generated images.
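Isar's earlier advice — solve for the use case, not the tool — has a direct analogue in code: keep the business workflow behind a small interface, so that swapping one image model or LLM vendor for another later means writing one new adapter rather than rewriting the workflow. Everything below (the class names and the stand-in implementation) is a hypothetical sketch, not any vendor's actual API:

```python
from abc import ABC, abstractmethod

class Summarizer(ABC):
    """The use case the business cares about, independent of any vendor."""
    @abstractmethod
    def summarize(self, text: str) -> str: ...

class NaiveSummarizer(Summarizer):
    """Stand-in adapter; a real one would call some vendor's API here."""
    def summarize(self, text: str) -> str:
        return text.split(".")[0].strip() + "."

def weekly_report(notes: str, tool: Summarizer) -> str:
    # The workflow only knows the interface, so the tool underneath can change.
    return "Summary: " + tool.summarize(notes)

print(weekly_report("Sales rose 10%. Churn flat. Hiring paused.", NaiveSummarizer()))
# Summary: Sales rose 10%.
```

When a better tool shows up in six months, only a new `Summarizer` subclass changes; the team's workflow and everything built on it stays put.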

Isar:

again, now, Adobe Premiere, last week at their big conference, they're doing this on the fly, based on the script. They're creating those storyboards, and they can even then render a video of how the storyboard will actually look once you shoot it. And just to take things in the direction you're going, X number of months into the future, I don't know if that's six or 12 or 18, but somewhere in that range, it will be able to actually render high-end-quality video, which means you may not need to shoot the video; you will just have it out of the stuff you've already created. Listen, I think this summary is a great summary, right? Find the use cases, find the places where it's the most useful, start using it, learn how to use it, and then adapt as it's moving forward. This was an amazing conversation. We touched on a lot of really important stuff that is relevant to probably any person in business, but definitely any decision maker. If people want to learn more from you, follow you, connect with you, work with you, what are the best

Demetrios:

way to do that? I'm probably most active on LinkedIn these days, so it's just LinkedIn slash D P Brink M, and I do a bit of trash talking on there about the whole AI ecosystem too. So hopefully it's fun for people to follow.

Isar:

Awesome. Demetrios, thank you so much. Great conversation, lots of value. I appreciate you, and I appreciate you taking the

Demetrios:

time and sharing with us. Yeah, man. It's been great. Thank you for having me. This is awesome.

What a fascinating conversation with Demetrius. I must admit, I'm not surprised by the use cases that rose to the top, but maybe it's a chicken-and-egg thing: we know them because a lot of people are doing them, or maybe it's the other way around. But definitely, creating chatbots that can connect to as much data within the company as possible, initially for internal uses and then for customers as well, is going to be a focus for probably many, many different companies in 2024. And now let's jump to the crazy news that we had this week. As I mentioned, I'm going to start releasing a Friday news edition of this podcast, but there are a few really big ones this week that, if you haven't heard of them yet, you probably want to hear right now. The biggest news in the AI world, maybe ever since the launch of ChatGPT, was that Sam Altman, who is the CEO of OpenAI, the company that created and gave us ChatGPT, was fired on November 17, in what seemed to be a coup, very well executed by people on OpenAI's board. Immediately after firing Sam, they spoke to Greg Brockman, who was the president and the chairman of the board, and they demoted him as well, basically revoking his seat on the board and obviously his chairman position, allowing him to keep only the president title. He immediately resigned after that move and the move of firing Sam Altman. And the board announced that Mira Murati, the company's CTO, will act as interim CEO until they find a new one. This news was obviously really shocking, because Sam Altman is the person that led the company to the launch of ChatGPT and from anonymity to maybe the most spoken name in the tech world today, a company that has created a whole new era in technology, that was able to keep ahead of Google and Microsoft and Amazon and IBM, everyone, and still leads the generative AI race today. Why was he fired?
Still not completely clear, but I want to explain something. OpenAI was founded as a nonprofit company, and at a certain point, when they understood that the nonprofit could not raise enough money, they created a for-profit company under the nonprofit. Why does that matter? Because the board that makes decisions for OpenAI is still a nonprofit board, with no representation for the big investors, at the top of which is Microsoft, which has committed 13 billion, with a B, to OpenAI. Presumably, Ilya Sutskever, who is also one of the founders and one of the board members, was not happy about the speed at which the for-profit company was running, when the nonprofit was established in order to create safe AI for everyone, and he apparently felt that's not what the for-profit company is doing anymore. The alleged reason why Sam was fired, and now I'm quoting, is that he was "not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities." That opened a Pandora's box. Everybody was starting rumors about what he's going to do next. Is Brockman leaving with him? Are they going to start a new company together? Hundreds of employees shared their disapproval of the move and even threatened that they would leave with him if he leaves. The whole company has about 700 employees, so if hundreds of employees actually end up leaving, that leaves OpenAI with, well, not enough employees to run the company. So I don't really know what's going to happen, but the latest news from today, Monday morning, is that Sam Altman and Brockman are joining Microsoft, the biggest backer of OpenAI, and also that Mira Murati is not going to be the interim CEO. So who is going to be the interim CEO of OpenAI, the company that launched ChatGPT, one of the most successful products in history?
The person is Emmett Shear. You probably don't know the name, but Emmett Shear was the CEO at Twitch, an incredibly successful streaming business that is owned by Amazon. So he's stepping into the very big shoes of Sam Altman. Sam and Brockman are moving to Microsoft to run what they call a new company, or a new division, within Microsoft, which is obviously going to be the one implementing AI tools for Microsoft across probably everything they do. As I said earlier, they've bet 13 billion on this, and they're literally changing the entire Microsoft strategy around AI. So for them, I think it's actually a very big win to bring Sam and Greg Brockman over. How is the saga going to end? Nobody knows. This is the Monday update; as I mentioned, I will update you further this Friday. But before this nuclear news happened, a lot of other interesting stuff happened this week. The world's first AI-powered humanoid robot became CEO of a rum company called Dictador this week. The robot was built by a company called Hanson Robotics, who has built these kinds of robots for several years now. And while there are a lot of moves toward having robots perform different activities, being the CEO of a company is something I did not anticipate happening that quickly. How exactly is it going to work? How does it do all the human aspects of being a CEO, like hiring, inspiring, and leading employees? I don't know, and I don't know if anybody knows, but it will be very interesting to follow. Another big piece of news from OpenAI themselves: as we mentioned last week, they launched GPTs and GPT-4 Turbo with a lot of new capabilities, and that dramatically increased the demand for ChatGPT. So OpenAI had to pause new subscriptions and upgrades to ChatGPT Plus. If you already had one, meaning the paid version, you could keep on using it.
There were some issues with access and availability that I at least experienced, and I know a lot of other people did too. But if you did not, or still do not, have one, you cannot upgrade to the paid version currently, because they have bandwidth issues. So this is before they let their CEO, and probably some leading engineers, walk away: they were already experiencing peak demand for their product and service. At the same time, their backer and closest partner, Microsoft, held their Ignite event, and they made some huge AI announcements. The biggest one is probably the launch of Copilot Studio, which is a no-code automation system that runs within the Microsoft environment and allows anyone with the right privileges to create new, advanced chatbots that can connect to multiple sources of information, from within the organization and outside it as well, and deploy those within the business, allowing people in the company to ask questions that connect the dots across multiple aspects of the business. This is obviously an extremely powerful capability that will become available to Microsoft Copilot users. If you connect all the announcements together, you learn that everything in Microsoft is going to be called a copilot for something, which will be the theme across everything they're going to deploy with AI, both at the end-user level and at the enterprise level. Another big announcement from Microsoft was that they're adding multiple models to Azure that people can use with their data. These models include Mistral, Jais, Llama, Stable Diffusion, CLIP, Whisper V3, BLIP, and SAM. Some of these I've never even heard of before, but they're going to become available to people who want to use these models with their data that is stored on Microsoft Azure.
We've seen similar moves from Amazon and from Google, allowing multiple models to run within their hosting environments in order to allow maximum flexibility to the companies who host their data with their platform, or that want to. There's some other news from this past week that I'm not going to share, and as I mentioned, I'm going to start recording Friday episodes with more news in a much shorter format. If you enjoyed this episode, please give us a five-star review on your platform. It allows us to get to more people and bring bigger guests, so you can learn more stuff and everybody wins. And until next time, have an amazing week.