Leveraging AI

78 | From Concept to Action: The Chief AI Officer's Guide to Practical AI Business Transformation with Matt Lewis

April 09, 2024 · Isar Meitis, Matt Lewis · Season 1, Episode 78

Step inside the strategic mind of Matt Lewis, a visionary Chief AI Officer, and learn the practical steps to integrate AI into your business framework effectively.

In this webinar, Matt will share his firsthand experiences and strategies for fostering an environment where AI not only thrives but also aligns with regulatory standards.

Learn how to:
• Identify the right AI opportunities
• Encourage team experimentation
• Implement AI solutions that drive tangible business outcomes

Whether you're an entrepreneur eager to harness AI's potential or a business leader seeking to chart the AI landscape within a regulated industry, this session is for you.

Gain actionable insights and learn from Matt's experience of leveraging AI to enhance operational efficiency, decision-making, and innovation in a complex, regulated sector.

About Leveraging AI

If you’ve enjoyed or benefited from some of the insights of this episode, leave us a five-star review on your favorite podcast platform, and let us know what you learned, found helpful, or liked most about this show!

Isar Meitis:

Welcome everyone to another live episode of the Leveraging AI podcast, the podcast that shares practical, ethical ways to leverage AI to improve efficiency, grow your business, and advance your career. We have a special guest for you today. Unless you've been hiding under a rock for the past year and a half, and if you're listening to this podcast, you haven't, then you know that AI has taken the world by storm, and companies large and small are trying to understand how to do something about it. The problem is most of them don't know what to do, and even fewer know how to actually do it. Now, many smaller companies, and large companies as well, hire consultants like me to help them understand the processes and so on. But some companies luck out when they have somebody in-house who has exactly the right expertise to lead the charge on AI for the company. One of those people is our guest today. Matt Lewis has been holding senior leadership positions in the healthcare industry for over 20 years, in almost every aspect of the business. He has held SVP and C-suite positions across various parts of the healthcare industry, which gives him a very deep understanding of the business as a whole. In the past five years, he has held multiple senior leadership roles in data management and data analytics, which makes for a perfect transition into the AI world, which is based on data. The combination of these things, his deep understanding of the business across multiple aspects of it, together with his understanding of data management, and now AI, made him the perfect choice to be a Chief AI Officer. Now, I never got a chance to interview a Chief AI Officer, so when I saw his title on LinkedIn, that caught my attention. We had a very good conversation about it, and I'm hence very excited to have him as a guest on the show. Matt, welcome to Leveraging AI.

Matt Lewis:

Thank you so much for having me. I really appreciate it. And thanks for the time, and thanks to the audience for listening in.

Isar Meitis:

Perfect. Let's really start from the top and then we can work our way down into more and more detailed stuff. What does a Chief AI Officer do?

Matt Lewis:

Yeah, it is a great place to start. I was actually reflecting on this right before we joined each other today. I was with our chief people officer, commenting and reflecting back. I've almost been in the role for a year now. It seems like it's been a decade, but it's actually only been about a year. And it's true that I actually don't have a job description. What I do and where I focus has evolved a little bit in that year, and I would have to imagine that everyone in a similar role has maybe a slightly different focus or a slightly different set of work streams they align themselves to. I now liken the work to having a bunch of C-suite roles all packaged up into one position. For me, it's a mix. There's a chief learning officer piece, where a lot of my time is spent working with our teams that are responsible for things like learning and development, training, people support, and the like. Within our division, we're a little under 3,000 people, and most of them have doctoral degrees in science: physicians, pharmacists, PhDs, that type of thing. They're very skilled people, and they're looking to advance their skill set with regards to AI and data science, so there's a way of future-proofing their skill sets for what's coming. I do a lot of that same consideration out in the external environment, at conferences, at professional meetings, at workshops, helping to identify the best practices and standards people can consider, so that as they look to apply AI in their work, they have something practical to take back. So a lot of teaching and coaching and facilitating and training in that respect. It is like a CLO, chief learning officer, type of responsibility, and I'd say that's probably at least a quarter or a third of my work.

Isar Meitis:

I want to pause you just for one second to say how much I love what you just said. You do this for your very large corporation; I do this as a consultant for many companies, and I teach courses. I have a checklist of things you need to do, and the first thing on it is continuous education. You have to figure out how to continuously educate everybody in your company on how to leverage AI and everything that comes with it, across the different departments and across the different levels, from leadership all the way to the last person in the trenches. So I love that the first thing you mentioned is education, because I think that's where it all starts.

Matt Lewis:

Yeah, and it is a very different disposition now than it was, say, seven, eight years ago, before I was in this role, when I was a chief data and analytics officer. When I had that type of role, there would be times when the teams I worked with internally would want something done from a decision or data science perspective. They'd ring the bell, so to speak: send an email or a Teams or Slack message or something, and they'd want us to get involved with a project that was data science related. But they didn't really want to be taught anything. They just wanted us to do it, go off and make it work, get it done. It's very different now. Very different. There are lots of reasons why, but it's just a completely different environment than it was many years ago. Now people do want to be engaged. They want to experience and experiment directly, and they want to be taught how to change their world and their work. So a lot of the work is coaching and teaching and training and all the rest. And I think we realized it early, but at the same time, we didn't get it all perfect or right the first time around. Just like many firms, right out of the gate, back in November, December of '22, we had a policy that was a little bit restrictive on ChatGPT and the rest. We realized the error of our ways and very quickly relaxed that, so that we now encourage experimentation across the whole of the organization. We want people to try, see what works, and try to scale it across the business. We learned that trying to prevent people from experimenting on their own just doesn't work, as people will find ways around it and do it on their phones instead of their desktop computers and all the rest. It's not helpful or healthy to try to constrain innovation. So a lot of it is related to that. Then the second large piece of my work is what I might call chief innovation officer responsibilities, where, based either on what we're intending to do as an organization or on what individual contributors think is relevant for them, we set about developing experiments for the teams and groups with whom we work, so they can figure out how to make the work smarter for them, if you will. For the most part that's leveraging generative, but also other types of AI where relevant, maybe machine learning or NLP. It's often the case that someone will come to us and say, oh, I've been doing X for 18 years and it's just really painful, I hate doing this, there must be a better way than the way I've been doing it. Can we think of an alternative? We'll put our heads together and think about a possible solution. Sometimes it's a homegrown solution we can apply directly to their problem. Sometimes it's something off the shelf. Sometimes it's through a partner. But we'll set about developing a target, a baseline, a KPI, and see whether what they have in mind actually is meaningful and can demonstrate an improvement. So that's a big part of what we do as well. We're trying to see how we work and improve the way we work, so that our teams are more engaged, the work is faster and more efficient, and hopefully, ultimately, our customers are happier with what we're doing.

Isar Meitis:

Yeah, I love the last thing you said. At the end of the day, a lot of people are afraid that AI is going to take their job, or that it's going to change it. But the reality, at least for now, to be fair, is that what AI will most likely take is the tedious, annoying, repetitive part of your job that most people don't like doing anyway. Just like you said: I've been doing this for the last 18 years, but I hate the stuff I have to do that is not fun, not interesting, not allowing my brain to evolve and think; I just have to do it. So if you find ways for AI to relieve people from those tedious, annoying daily tasks that are part of any job, or at least reduce them, right, maybe you can't eliminate them, but you can reduce them from 70 percent of what I do to 20 percent of what I do, you get happier employees, which by definition will get you better results.

Matt Lewis:

Yeah, I definitely think that's true. What I've also seen over the last year in this role, and I've been in AI for probably 14 years at this point, is that, especially with generative, there's a direct correlation between experience leveraging generative and both excitement about what's possible and lack of concern about your role being eliminated, robots taking your job, that type of thing. It tends to be that the people who are most scared of those things have the least experience. They also tend to comment most vociferously and vocally about lots of bad things happening that they actually don't have any direct experience with at all. And as soon as they get some experience and see what's possible, that view tends to be tempered. I can relate a story. We did a pilot internally. We do a lot of work in the medical space, with scientific literature and reports of clinical studies and the rest, and as a result we have to be very careful to make sure that the evidence we're supporting is actually referenced, is in the literature, and is well supported, because if you include something that isn't factually true, it can change someone's treatment outcome. Tracing the evidence back to the source is very tedious; it takes a lot of time. So we had this project looking at making sure the evidence has the right traceability, if you will, using AI instead of the traditional path. At the beginning of it, one of the team leads said, I'm really worried that if we do this correctly, the robot is going to take my job. And we didn't just say, no, that's not true; you have to experience it to know that's not the case. At the end of the project, two, three months in, she came to me and my colleague and said, not only am I not scared that the robot is going to take my job, I now realize that it's really all about the people. If you don't bring the team along, you're not going to get anything out of this. The team has to be trained, we have to change our workflow, we have to think about what's possible. Yes, AI can do something, but it's not going to take my job or the team's job. But it took that whole length of time for her to realize that. And this was someone who was really worried at the front end that she was going to be on the bread lines, and that's not a concern at all once you're actually deep in. You've got to get people to take some time and really get their hands dirty.

Isar Meitis:

So I want to ask you a few follow-up questions, but first a quick recap. We talked about the main pillars. The first thing was coaching and education, right? That was the number one thing you talked about. The second thing you talked about is encouraging people to use it instead of trying to block them from using it, which I think is a very big deal in many organizations, especially in the world you're in, the more regulated environments, whether it's healthcare or finance or legal. In the stuff that's more regulated, it makes a very big difference. And like you said, you can't prevent it; the only thing you'll achieve is not being able to monitor, or have people share what they're doing, because they're going to hide it from you. So finding the right way to do this is very important. The third thing you talked about is the innovation side, which on its own feeds back into the ecosystem, because people will see the actual results and see what it's taking off their plates. The big picture of what you talked about, which I agree with a hundred percent, is that at the end of the day it's a change management process, like any other change management process in history, which is all about the people. The tech is nice, it's awesome actually, but the tech will not do anything unless the right people in the organization, and over time all the people in the organization, actually jump on the bandwagon and use the technology. And that's what I want to dive into right now, unless you have any other hats you want to talk about, as far as the pillars go. So I'll ask that first.

Matt Lewis:

There are a bunch of other little hats, if you will. The big one for me is what I might call almost a chief customer officer role. There are a lot of people in the business who interact with our customers, but on the AI side, most of our customers that are looking to do something with AI generally want a subject matter expert who understands where the field has been, where it's going, and what's possible. They might have a data science team that can actually do the model building, or implement in cloud, or work with a partner to get it done, so to speak. But especially in large enterprises, they want someone, an executive, to have a conversation with about what their business is capable of and what can be achieved in a reasonable period of time. I tend to get pulled into a lot of those discussions, where it might be a billion-dollar firm, a multi-billion-dollar global firm, that knows that by 2028 they need to transform 20 percent of their business using generative AI, and they just need a sense check as to what's possible now versus what's possible in three years. A lot of that consideration comes from our engagements together, if you will. Some of it is model building, some of it's consulting, some of it is architecting a narrative. A lot of my time goes there, to help them reasonably pace out that plan, so to speak. And the other things I mentioned, the innovation, the coaching, the training, all the people pieces, almost feed into that, because they help them have the confidence that we know what we're doing. It is important, because I think no one wants to just all of a sudden change. As you said, they want to know that it's metered and reasoned and well considered.

Isar Meitis:

Yeah, I agree 100 percent. I think you touched on a lot of great points. For those of you who are with us live right now, either on the Zoom call or on LinkedIn, we're monitoring the comments, so if you have anything you want to add, any questions you want to ask either me or Matt, feel free to throw them in there. The first one who already did is Aaron. He said that large language models were a shock, but as time has passed, the open source tools are becoming more and more available and the models' accuracy is greatly increasing. I agree with both. If a year ago you had to figure out how to connect to the ChatGPT API, and that was more or less your only option, now there's a plethora of options, and many of them are built into the ecosystems in which the data is stored anyway. Meaning, if you have your data in any of the three main cloud platforms, whether you're on Google Cloud or AWS or Microsoft Azure, they now provide, on the platform itself, an underlying layer of AI capabilities that you can connect to and use to talk to your data. And you can choose whether you want to use open source or anything else, which means you stay in control. It's always a balance between how much effort you want to put into it versus what level of privacy and security you can keep, but every organization will have its sweet spot, and these platforms will provide all the capabilities to do it. So great comment, Aaron, I appreciate that. I want to change gears, Matt, and ask you about the how, right? You talked about the what several times, so let's go one by one. You started with the training. How do you establish the right training for the right aspects of the business? What kind of training are you providing? Who is providing it? All the how questions you can think of, for people who are a year behind you or six months behind you in the process and trying to figure out: okay, I know I need to train the people, how do I actually do that?
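To make the "talk to your data through a hosted model" point concrete, here is a minimal sketch. It assumes an OpenAI-compatible chat-completions endpoint; the endpoint URL, model name, and the load_context helper are illustrative placeholders, not anything named in the episode.

```python
# Minimal sketch: asking a hosted LLM a question grounded in your own data.
# Assumes an OpenAI-compatible chat-completions endpoint; swap API_URL for
# your cloud provider's equivalent.
import os
import requests

API_URL = "https://api.openai.com/v1/chat/completions"  # placeholder

def load_context() -> str:
    # Placeholder: in practice this would query your warehouse, vector
    # store, or document index for the records relevant to the question.
    return "Q1 support tickets: 1,204 total; top issue: billing (31%)."

def ask(question: str) -> str:
    payload = {
        "model": "gpt-4o-mini",  # assumption: any chat model your platform hosts
        "messages": [
            {"role": "system", "content": "Answer using only the provided context."},
            {"role": "user", "content": f"Context:\n{load_context()}\n\nQuestion: {question}"},
        ],
    }
    headers = {"Authorization": f"Bearer {os.environ['API_KEY']}"}
    resp = requests.post(API_URL, json=payload, headers=headers, timeout=60)
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

print(ask("What was our most common support issue last quarter?"))
```

The effort-versus-control trade-off Isar mentions shows up right here: the same few lines work against a public endpoint or against a model hosted inside your own cloud tenancy, where the data never leaves your environment.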

Matt Lewis:

Yeah, it is a great question again, and it's not something I think anyone has really completely figured out. We're still constantly revising and thinking about ways to improve our own processes, even as I think we are somewhat far ahead in this environment. What we did at the outset, very early on, was a needs assessment across the entire organization, our 15,000-person-strong company, but also each of the divisions. We recognized that certain parts of the organization were more mature than others. Some are back office, some are front office, and some are going to need more education, both in terms of quantity and in terms of the depth and breadth of what they're actually exposed to. Everyone needs something, but not everyone needs the same things. We came in front of the entire organization very early with direct discussions about the significance of this innovation, generative AI, what it meant for the business, and what we were putting into it to invest appropriately in our talent, our people, and the organization, so that we could stay on the cutting edge and help our clients, customers, and stakeholders progress forward. And we do that on a regular basis: we communicate, we train, we educate, and all the rest. We also realized that the scale of content and the pace of change is so different with this than with so many other topics that a traditional training model wouldn't really apply the way it has for other things in the past. I think you've said it yourself: it's really more continuous education than a snapshot of education, because you can't just go to a group of people in a big company and say, oh, it's April 4th, 2024, we're going to do our training, and then you won't get another training until September 4th. That's silly. As I've said to my team, they're probably sick of me saying this, but it's true: a week in generative is like a quarter in the real world. If you did a training on April 4th, by May 4th almost a year's worth of content has occurred, because things change so fast in this space. If you were on vacation last week, you'll have missed the whole Databricks model eruption, if you will, which is relevant in our space. So it's a different consideration to do education here than it has been in any data science space over the last decade, and it needs a much more nuanced approach, both in terms of the needs of the audience, the segments we intend to educate, and how we actually get content in front of them. Within our organization we have a learning management system, and we push out content to folks. We do have third parties helping to populate and build content for our ecosystem, many of whom are well known, both within academia and in industry, and we work with a number of different partners. We felt early on that you could get good content from a large academic institution that people recognize, but some people might favor a different approach, from a big tech company, for example, and we don't want to have just one option for everyone and force them to get their content there. So we have multiple paths people can walk, so to speak.
One of the other things we did very early on is that we helped cultivate what the diffusion of innovations literature refers to as champions. These are people within the organization who have raised their hand and said, hey, I'm not an expert in the space, I'm not a data scientist, but I think there's real promise here and I want to be involved. Put me in, coach, type of thing: help me help my team. Across our business we have a lot of these people, passionate early advocates of the technology, and we meet with them on a regular basis to hear what they're hearing from their peers and their customers. We leverage that understanding and expertise both to develop our content and curriculum, and also to test the narrative, test our early content, and massage it and move it forward so that it reflects the actual feedback coming from the groups. It's been instrumental in our progression. So it's not just me in the C-suite saying this is what we're doing; it's the organization really saying and considering what it needs for itself, and then that responds back to the organization as appropriate. One of the other things that has been really important in our space, the regulated industry space, is that it's always the case with any innovation that the innovation precedes regulation. Whether it's in financial services or healthcare or mining or life sciences, you're going to see generative AI and other technologies, whether it's quantum computing or anything else, out long before there's government regulation and legislation. But in the actual world, it's tough for practitioners to do their business when there are no regulations or legislation in the area, and there won't be for a while. So most of the people working in these regulated industries primarily look to the professional societies they're already aligned with to understand what the best practices are and what the consensus might be for these specific areas of consideration. We've helped contribute to an understanding of how, for example, responsible AI should be considered from a medical writing standpoint, or what is appropriate for a physician or patient to potentially interpret at the point of care. There will eventually be regulation and legislation from the U.S. government, and from other governments as well, but by the time it comes, you'll have two-plus years of actual work from the industry at large, and we can't wait for things to eventually happen. So there's some direct understanding with people doing the work, through our champions activities and the curriculum, and then there's some indirect work in professional societies and patient advocacy groups as well, to understand what the community needs en masse, if you will, so that we're reflecting the broader considerations as well.

Isar Meitis:

Awesome. First of all, I want to relate to some of the comments from the audience. One is from LinkedIn, from Gary Pearson. He's saying: I noticed the word strategy is not used in the title nor in the content so far; I love that. Was that a conscious decision? As the one who set up the content for this particular session, I'll take this question. Yes, in this particular one it was totally on purpose. The reason is that I very rarely get to talk to the people who are running the show at companies of this size, and in this particular case I'm less interested in the strategy behind it and more in how it's actually done, really the day to day, because I think even companies that can figure out the strategy would still struggle with the day to day, with how it's actually being done. Which leads to my next question, and I hope I answered your question, Gary. You said a few very important things. You said champions in different departments. You said that you started with an assessment, a survey across the organization, to really understand who needs what, so you can prioritize the needs. And then you said there's an LMS. What's the actual delivery? Is it self-paced? Is it half days where this or that group gets together for some kind of lecture? Is it a cohort where they meet once a month? What's the actual implementation of the training process for the people who are going through it? Because as you said, there's no "you take the class and then we'll see you next year to take the class again." It's not one of those.

Matt Lewis:

Yeah, it's all those things. There's a self-paced component that everyone across the organization can access, with content on everything from what generative is to how to prompt. There's specific content on prompting itself, which I'm personally not a huge fan of, but some people are very deep in on that; you can't win all those battles. There's a tremendously wide and deep curriculum on every topic you can imagine for people to get self-paced learning, if they're so interested. There are required sessions that we throttle and require people to engage with. And then there are also a lot of very business-specific and function-specific expectations that are related to the way we approach the work. It's a little less about strategy and a little more about our thesis. Our thesis is really one of augmentation. My title, which says artificial and augmented intelligence, reflects where we see the role of AI: as a complement, the ability to optimize decision-making for the medical professional, if you will. Our thesis is that if we can help people recognize the value that AI contributes, it'll make our teams, and ultimately clinicians and patients, more effective in the medical interface. So everything we do is really a matter of helping people understand what's possible, so they can think about how their work works, and then how they make it work better, faster, stronger, and get it done quicker, so that clinicians make better decisions and patients have better outcomes. That's really what we're about. The strategy is interpolated from that, if you will, how we get that done, but it's really all about augmenting intelligence, not replacing people with robots.

Isar Meitis:

Awesome. I want to summarize a little of what you said, because I think it's very critical, and then I want to dig a little deeper. At the end of the day, it's about helping the company achieve its goals in a more effective way, right? It doesn't matter what the company does. In your case it's healthcare, but this could be manufacturing, consulting, other services, software engineering, whatever the case may be. The company has a goal. By the way, going back to the earlier question, the goal might need to be reevaluated, because some industries are going to go through a very dramatic shift in the needs they're serving right now, and some of those needs are going to disappear, so by definition you'd have to change what your company does. But I'm putting that aside for a second, assuming it's roughly stable. In your industry, a lot of people will be sick and will need healthcare. So how can we provide that healthcare with the best results for the people who need the help and the least amount of effort for the organization, which means we'll be able to provide more health to more people for less money? If you look at this from a strategic perspective, you can then start translating it down to, okay, what does that mean day to day? I know you have something to say, but I want to ask you a quick question, and then you can combine the two together.

Matt Lewis:

Yeah, I'm going to say what I'm going to say anyway, but sure. Okay.

Isar Meitis:

Okay, so say your thing, and then I'll ask the question.

Matt Lewis:

I do want to answer Daniel's question too, which I know he's asked five times. But it's one of these things in business. I've been working for 26 years, and people always focus on what can be achieved through an intervention: can we do it faster, can we do it cheaper, can we do it at higher quality? In the old world, you could get one of those things, but you could never get the other two. You'd get faster, but it wouldn't be cheaper and it wouldn't be better quality. That's just the way it was, with what we had at our disposal. I think now, with generative, it's realistic that we can achieve two of those three, and for us, for the most part, that's faster and better quality. When I say faster and better quality: if a group we're working with, say a large company that runs a clinical research study, a drug company, for example, has to get that information out to doctors so that patients can make decisions, typically they'll work with medical writers who write the information by hand, and it takes a long time from when the study is done to when the summary is written. And the actual work product, the thing that's written, is variable in quality. Sometimes it's written at an 11th-grade reading level; sometimes it's written in a way a patient would understand; sometimes it's written at a 4th-grade reading level and everyone understands it. It's not consistent. Using generative AI, you can shrink the completion time from four weeks to four days and make the reading level consistently a fifth-grade reading level in the U.S. and the U.K., all the time. So the quality is consistent and the time to completion is 90 percent faster. Those types of variables are very well received by the world, because people get information that clinicians value, faster and at higher quality. It's not yet possible, it might be in the future, but it's not yet possible to do the third one you mentioned, to also decrease the cost. That might be coming, but right now we're thrilled if we can just make it faster and also improve the quality. That was never possible five years ago; now it actually is within reach. That's where most of our conversations end up circling: improving the quality, getting it done four times faster than was humanly possible, and then hopefully improving people's lives.
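A minimal sketch of the consistency idea described here: draft the lay summary with a model, then verify the reading level and retry if it drifts. The endpoint and model name are assumptions, not anything named in the episode; textstat is a real readability library whose flesch_kincaid_grade function approximates the U.S. grade level being discussed.

```python
# Sketch: generate a patient lay summary, then enforce a target reading level.
import os
import requests
import textstat  # pip install textstat

API_URL = "https://api.openai.com/v1/chat/completions"  # placeholder

def rewrite(text: str) -> str:
    # One LLM call asking for a fifth-grade rewrite that preserves the facts.
    payload = {
        "model": "gpt-4o-mini",  # assumption
        "messages": [{
            "role": "user",
            "content": ("Rewrite this clinical study summary for patients at a "
                        "US fifth-grade reading level. Keep every factual claim "
                        f"unchanged:\n\n{text}"),
        }],
    }
    headers = {"Authorization": f"Bearer {os.environ['API_KEY']}"}
    r = requests.post(API_URL, json=payload, headers=headers, timeout=60)
    r.raise_for_status()
    return r.json()["choices"][0]["message"]["content"]

def lay_summary(source: str, max_grade: float = 6.0, retries: int = 3) -> str:
    draft = rewrite(source)
    for _ in range(retries):
        # Measured grade level at or below target: accept the draft.
        if textstat.flesch_kincaid_grade(draft) <= max_grade:
            return draft
        draft = rewrite(draft)  # still too complex: simplify again
    return draft
```

The check-and-retry loop, rather than the prompt alone, is what makes the output level consistent run after run.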

Isar Meitis:

Yeah. I want to relate to you what Philip is writing on LinkedIn, and then I'll go back to the question I promised you. He's talking about how ironic it is that many companies are waiting for legislation before they start doing stuff, because they're afraid the legislation is going to contradict what they're planning to implement, and so on. And I'm very much with you. Yes, there are things where you cannot use the "ask for forgiveness" rule. But I think as long as you're using judgment and you understand the thought process behind the current legislation, let's take HIPAA in your industry as an example, you understand the role of HIPAA, you understand what it's trying to achieve, and you understand that's going to be the baseline for whatever future law comes out. If you're aligned with that, you might actually help set up the new legislation, because, okay, we've tried this, it's aligned with all the rules we have right now, it achieves better results, whatever that is, so maybe the new legislation should be built on top of that. Then you're helping shape what it is, and in the meanwhile you're not waiting two and a half years, delivering not as good a service as you could, because you're waiting for legislation. So I think it's a great comment. Here's my question, and it relates to some of the things we talked about, and very much to what you said just now about getting this to be higher quality and faster, and maybe later cheaper as well. Do you figure out use cases on your own, or with a team, or with those, what did you call them, not experts, champions in the different departments? Is that a big part of what you do? Like, here's a use case for this department in this particular scenario, here's the tool you're going to use, here's the prompt you're going to use, here's the process you're going to use, here's the outcome you'll most likely get. Is this a big part of the process, or just a part of it?

Matt Lewis:

Yeah, the use case consideration is, I think, a ubiquitous component of our world, both for our own entity and for almost every group we talk to. When my role was created about a year ago, myself and some of the senior executives within our firm had an offsite retreat and put together a list of just shy of maybe 400 use cases across the business. That might seem like a lot, but you have to keep in mind that we're a multi-billion-dollar firm with tens of thousands of employees, so it really wasn't that many, and I think if we'd been there longer, we probably could have come up with many more. Later, we ended up working with our strategy consultants at McKinsey to pressure test the existing use cases and also add some more. So our current use case list is far in excess of that original number; it's over a thousand. But just because we have over a thousand use cases doesn't mean we have a thousand products in development. These are just areas that could potentially benefit from some deployment from a generative perspective, if there is a need, if there are resources, if there is priority. It's not challenging to come up with use cases; you just have to think about any place where there's been friction in the business that has gone largely unsolved for long periods of time and that can now be addressed with current compute, the models, what they can do, and the rest. The challenge we have is not so much coming up with use cases; it's thinking about where we can get the most value out of actually deploying against them, whether there is value to be realized, and whether it's scalable. It might be that a single champion is working on a very interesting project in a particular disease area, say dengue fever, for example, and it's really interesting and could potentially be solved by this particular use case. But even if it is solved, it has no applicability to asthma or heart failure or something else, and if we spend a huge amount of money and time solving it, it won't help the rest of the organization, so it's a lower priority than something else. I'm not saying that's a real example, just a hypothetical, but we have to make those decisions, because we have finite resources and there are literally thousands of use cases that do exist. Your earlier question or comment about regulation, though, is an interesting one. In collaboration with the head of the generative AI program at NYU, late last year we co-published a white paper on this topic. In our space, the regulated industry space, I think there are a couple of things going on. One, there are a lot of people living in regulated industries who have a risk-averse mentality, and regardless of what the regulation says or doesn't say, they operate from a position where they feel like they can't, or won't, do certain things because they're regulated. It's more about mindset than about what's actually permitted. In the white paper we published, we talk a lot about this scarcity-by-default mindset that we grew up with as humans over the last 100,000 years; people are just not used to the idea of having, or being able to pursue, more than their share, if you will.
And it lives extensively across the regulated industries. I think what regulated really means is that the government, especially in this country, regulates the way in which products are consumed or interact with consumers, patients, that type of thing, but not the whole of the business. So if you look at, say, the clinical encounter: most hospitals, and especially clinics or pharmacies, are for the most part looking at generative in the back office, or in places where the clinical encounter is not the primary part of the transaction, like medical scribing, taking notes from the encounter. The actual diagnosis, the treatment, those things are not high priority right now. They will be over time, but not right now; those are closer to regulation. The things that are further from regulation, like the scribing piece, prior authorization with insurance companies, anything that's administrative, are not as close to regulation as the things I just mentioned, and as such they're eligible for discussion. The same is true in our world: what's more back office, or communications, is less directly subject to regulation, whereas if it's touching a patient, there's more concern, because if the FDA comes in and says this actually is something you can't do, you can get fined a tremendous amount, you can lose your ability to license in the space, or be denied review coverage the next cycle. There are major repercussions to going against their guidance, or going against the regulation, whether it exists yet or not. So there is legitimate concern there, but I think it's less of a concern further upstream than further downstream, if you will.

Isar Meitis:

Yeah, it makes perfect sense. I want to touch on three points you mentioned, and then go to a few interesting questions from the audience. First of all, you mentioned collaboration, and I think that's a huge point people need to understand. This field is evolving so fast that you cannot do it on your own. You just can't. Collaboration can mean a lot of things depending on the size of the company, the industry, whatever, but even just what you said: let's find somebody on the academia side of this, or on the legislation side of this, think about it together, and write a paper we can share with other people who can then add more to the conversation. That's one example. You also mentioned developing content: we're using other people to help us develop training content, because we have other stuff to do in our business, and so on. All of these are very important to remember. The other thing I work a lot on with my clients, as well as in my courses, is how to pick the right use cases. As you mentioned, the list becomes very long very fast, because once people start to understand there's budget behind this and there are people in the organization working on this, they'll raise their hands and say, can you solve this, can you solve that? You'll quickly have a long list. Then there are two of the most critical aspects. There's a list of six or seven things that fall into the consideration Venn diagram, but these are the two maybe most important ones. The first is how much value it creates for the organization, and you said that for each organization that's different; I just want to generalize it. Does this provide value to the organization? In most cases that means: is it generating more revenue, is it reducing cost, is it making our clients happier? These are the three main questions you need to ask yourself, and the more of those you hit, the more value it's going to provide. The other thing is risk. Let's say something is amazing, but it introduces high risk to the organization for whatever reason. In your case it might be exposure on the legislation side or other legal aspects; then you obviously have a problem you don't want to face. Every business will have different kinds of risks, whether it's exposing clients' data or special pricing, whatever the case may be. So if you can find use cases that provide high value to the organization and at the same time do not generate high risk through the same process, you've got yourself a good use case to go after.
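As a toy illustration of the value-versus-risk filter just described, here is one way to score and shortlist candidate use cases. The example cases, the 1-to-5 scales, and the risk ceiling are all made-up assumptions for the sketch, not anything either speaker prescribed.

```python
# Toy use-case triage: rank by value (revenue + cost + client happiness),
# then drop anything over a risk ceiling.
from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    revenue_impact: int   # 1-5: does it grow revenue?
    cost_savings: int     # 1-5: does it cut cost?
    client_delight: int   # 1-5: does it make clients happier?
    risk: int             # 1-5: regulatory / data-exposure risk

    @property
    def value(self) -> int:
        return self.revenue_impact + self.cost_savings + self.client_delight

candidates = [
    UseCase("Lay-summary drafting", 2, 4, 5, 2),
    UseCase("Autonomous diagnosis", 5, 3, 3, 5),
    UseCase("Back-office report builder", 1, 5, 4, 1),
]

RISK_CEILING = 3  # assumption: anything riskier needs a separate review

# Highest value first; among ties, lower risk wins.
shortlist = sorted(
    (c for c in candidates if c.risk <= RISK_CEILING),
    key=lambda c: (c.value, -c.risk),
    reverse=True,
)
for c in shortlist:
    print(f"{c.name}: value={c.value}, risk={c.risk}")
```

In practice the scores would come from the champions and business owners closest to each workflow, not from a single central scorer.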

Matt Lewis:

Yeah, I agree with that framework implicitly. I would just caveat it by saying that the actual use case itself is not directly transferable from business to business, and it's not contextually transferable business to business even in the same industry. For example, take what I mentioned earlier: preparing a summary of a clinical research study for a patient, which is called a patient lay summary in my space. If you're a large pharmaceutical company and you want to do that type of thing, and there's another large pharmaceutical company, one maybe based in Belgium and the other based in Manhattan, it might be really attractive to the company based in Belgium and really unattractive to the company based in Manhattan. There are lots of reasons why that might be the case, but just because it works really well for company A in Europe, it could work really poorly for company B in the U.S. It's not intuitive that because the use case works really well for one, it works really well for the other. It's not, oh, I solved this use case for one pharma, done. It doesn't work like that. You have to really know the business, the company, their current state of affairs, how well they're doing, and lots of other things to know if the use case works. And even if it does work, and you find it is on paper transferable, you can't actually take the product, so to speak, I see some of the questions here about that, and just pick it up and kick it across the fence and say, hey company B, I did this over at company A, here's the thing, it's ready to go. It doesn't work like that at all. It's helpful as a conceptual guide, perhaps, but in the real world it's not that transferable.

Isar Meitis:

Yeah, I think it very much depends, but in general terms I agree with you, because companies do even the same things in different ways, not to mention that many companies do different things entirely, and then obviously they don't transfer. But there are things that are transferable, like sales processes, marketing processes, data collection; those kinds of things can transfer. I agree with you that there are still local adoption hurdles you have to clear every single time. I want to go back to a question from earlier, from Daniel. He asked about the org chart and the roles and responsibilities within your organization. While you're orchestrating all of this, who under you actually does the execution? I think that's what the question is. Do you have a project manager who goes and does it? Because it's a multidisciplinary thing; like you said, you have all these different hats, you work with all these different departments. Somebody needs to actually go and put these things in place: create the use cases, get them in the hands of the right people, sign people up for the training, all these things. How do the actual roles underneath you work, and how is that different from a project manager in traditional companies, if it is?

Matt Lewis:

Yeah, I know Daniel asked about this: product managers, especially on the AI side of the organization. This is something that's been hotly discussed within the last nine months or so. I can't say we have any dedicated AI product managers at the current time. It's something we're still evaluating, whether that's actually a need or something we need to resource for in the foreseeable future. It has not been a historical need, nor something that has been required or that clients or internal teams have requested, but I could see that changing in the evolving future. As for the way teams have historically been deployed: first of all, the types of solutions we tend to put into market are varied. There's work we do that's pure consulting, so the leads who deploy that are consultants; there's no need for a product manager in consulting work. Then there are things that are product led, typically software-as-a-service type products, and we do have people leading them, but they tend to be lead data scientists and people involved in the software development and design process, UX and UI people, as well as project managers and the rest. They tend to work across multiple SaaS products, not a single product, which is why we've historically stayed away from having individual product managers who own a specific product. I know other organizations don't do it this way, but it's worked well for us. As we've moved into the generative space over the last year or so, there has been a call for dedicated AI product managers for this type of work as things evolve. The challenge, though, is that because of the variability of use cases, as I was just hinting at in the previous question, the difference between the actual implementations is, at least in our world, significant enough that they're not really products. On paper they could be close enough to being called products to be worth the discussion, but they're more like somewhere between a configuration and a customization, for those of you in the product environment; they require extensive modification from deployment to deployment, and the people who would typically gravitate to a product manager role may not even be comfortable in work like that. The structure we've had historically, between data science, UX and UI, and the other teams with project management, has worked well over the last seven years, so we haven't moved there yet. But it is a discussion we've been having for a long time, and I think maybe as the field matures a bit and we see more consistency in each individual use case, it'll make more sense to have separate swim lanes for each type of deployment. I'm certainly not averse to it; the type of work and the way it's evolved just hasn't required it historically.

Isar Meitis:

Great answer, and I agree with everything you said. There are two, again, very tactical questions coming from the audience that I think are great. One, just so we have a more tangible understanding: what are some things you've deployed in the actual realm of your role in the past six months, use cases that were implemented and are being used? And the second question is, what's the initial beta test before you roll things out on a larger scale? So let's start with the first one, actual examples of things you've deployed recently, and then we can dive into the steps you take before, how things are tested before you roll them out on a bigger scale.

Matt Lewis:

Yeah, I was afraid you were going to ask that.

Isar Meitis:

But it's still worth a good discussion.

Matt Lewis:

Yeah, okay. In terms of deployed: one example of something that's in market, that has been deployed, it has legacy AI involvement and now more of a generative component, is work that's done, in this case, to your earlier commentary, also with a partner, so you have a sense of the broader ecosystem. It's designed to query, interrogate, interpret, analyze, and then surface insights across a medical organization. It directly interfaces with an organization's CRM and then pulls forward any voice-of-customer information across the ecosystem that aligns back to medical strategy. The historical version of this was an NLP-only solution, which we had in market as early as 2018. Then we upgraded that to machine learning, NLP, and some point-of-care analyses that were available for business intelligence users, as it were, in 2020. And then we upgraded to generative at the end of last year. The generative component is primarily on reporting; it's not content generation. It allows the user to build bespoke reports in the tool, so they don't have to use dashboard functionality; it's really a custom kind of analysis, as it were. It speeds time to decision by a significant amount; I think we're clocking 87 to 90 percent improvements over the traditional deployment, and it allows for much faster commercialization. The teams are using it mostly in cancer at this point. That's a good example of both: the team is led without a product manager; it's data science led, supported by strategists, consultants, and others who help interpret and contextualize the relevant information it takes to stand it up. It does require a long lead time for testing, because it's built on an ontology of the customer environment; it's fit for purpose. We have to build, for example, an ontology in lung cancer or brain cancer or whatever it might be before it can actually be implemented, and then it's tested before it's fully deployed. It typically takes three or four months of testing before we can actually build the first instance, and then another couple of months of training with the teams to get used to the platform and deployment. So all told, it's six months of work until it's really fully implemented, if you will.
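A rough sketch of the retrieval-plus-generation report pattern described above: filter voice-of-customer records through a small domain vocabulary standing in for the ontology, then ask a model to draft the bespoke report. The vocabulary terms, the record format, and the generate stub are all illustrative assumptions, not the actual product.

```python
# Sketch: ontology-filtered retrieval feeding a generative report builder.
import re

# Toy stand-in for a disease-area ontology (the real one would be far richer).
LUNG_CANCER_TERMS = {"nsclc", "egfr", "osimertinib", "pd-l1"}

def relevant(record: str) -> bool:
    # Keep records that mention any term from the disease-area vocabulary.
    words = set(re.findall(r"[a-z0-9-]+", record.lower()))
    return bool(words & LUNG_CANCER_TERMS)

def generate(prompt: str) -> str:
    # Placeholder LLM call; swap in a real chat-completions request as in
    # the earlier sketches.
    return f"[model response to: {prompt[:60]}...]"

def build_report(records: list[str], question: str) -> str:
    # Retrieve the on-topic evidence, then let the model draft the report
    # grounded only in that evidence.
    evidence = "\n".join(r for r in records if relevant(r))
    prompt = f"Using only the excerpts below, answer: {question}\n\n{evidence}"
    return generate(prompt)

crm_notes = [
    "Oncologist asked about EGFR resistance data for osimertinib.",
    "Pharmacist question on storage conditions.",
]
print(build_report(crm_notes, "What are clinicians asking about most?"))
```

The long lead time Matt mentions maps to the first step: building and validating the ontology per disease area is the slow part, while the generative reporting layer sits on top of it.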

Isar Meitis:

And by then there's a new, cool model that does it faster and better, and the whole thing starts all over again.

Matt Lewis:

Yeah. And it's not even so much that. The problem, I think, is that a lot of clients now, a lot of customers, have this expectation, almost, that whatever they see in the world is going to be in the thing we're doing with them. That was never the case 9, 10, 12, 14 years ago. We would deploy something in 2015, and whatever we said we would do, we did. Now we deploy something in 2023 and they ask, why doesn't it work on its own, autonomously, and also put the whole 25-slide PowerPoint presentation together for my boss? And I'm like, first of all, that's not possible, and two, we didn't say we would do that, so why do you think that's going to happen here? We get that all the time now. It's very challenging for the teams, because no matter what we deliver, it's never good enough, and we're always running the risk of making ourselves obsolete with everything we deploy. That's a constant challenge. We're also doing a lot of work in digital avatar environments, synthetic media, or deepfakes if you want to call them that. This is less partner than vendor, but it's still an allied consideration. We're working with all the leading firms in this space. We're the only company, the only professional services firm, that is a certified partner with Synthesia, and we're also in discussions with D-ID and MRF and all the other groups, HeyGen, all the groups that are out there. We work with Synthesia a lot. The use of Synthesia and these types of cloning platforms in medical is primarily a training consideration, where large groups of teams in far-flung countries, Israel, Saudi Arabia, Australia, Morocco, Korea, need to get medical content really fast. We can't take experts into a room with a green screen and translate their content; it takes forever and it's quite expensive. With an avatar we can use the same approach and get them that content quite quickly, typically about four times faster than the traditional route. We've implemented this across a regulated environment, and it's been very successful. So that's another quick example. We have a couple of other things out in the world as well that are looking at the ingestion of peer-reviewed literature and clinical trials data to predict where the landscape is and what's forecasted to be of import for teams doing research and planning studies. Those are similar to what I mentioned before: insights products that started with a machine learning and NLP consideration years ago and are now using a generative component for report building and helping analyze what's relevant. That's where a lot of our work is. Mostly, the field right now is moving into more of a discussion around real-time intelligence and real-time content production in the bigger spaces where there's a clinician-patient encounter, things like medical information or medical meetings. So that's where our discussions are.

Isar Meitis:

Matt, I think we could keep doing this for hours, but we are running out of time. This was a fascinating conversation. I think we covered so much, from conceptual ideas that any company should take away, to practical, tactical things, to change management, to actual examples. By the way, just one small comment on the Synthesia, HeyGen, D-ID stuff, for those of you who don't know them: these are tools that allow you to create a digital clone of a person, or just create an avatar that looks real even though there is no real person it cloned, and then give it content to say. That content can be translated automatically by the platform itself into, I don't know, like 80 different languages that it can then speak. So if you need to deliver content to people very quickly, before it gets outdated, in multiple languages, as if somebody is there in the room or on Zoom, it's a very effective way to do that. For those of you who didn't understand that part of the conversation, that's what it does, and I can add links to those tools in the show notes. They're all really amazing tools. I just want to thank you. This was a really great conversation. Anybody listening now, or who is going to listen once this goes out on the podcast, I'm sure will take a lot from it. I really appreciate you taking the time. Share with us: if people want to follow you, find you, learn from you, work with you, what are the best ways to connect with you?

Matt Lewis:

LinkedIn is the best way. All my content is on LinkedIn, and I'm happy to chat with anyone for any reason. All my presentations and podcasts and everything I do is on LinkedIn, and I'm happy to connect with anyone. Thank you again so much for the opportunity and for the time.

Isar Meitis:

Thank you. This was awesome. Until next week.