Leveraging AI

30 | Unleash the Power of Your Business Data: An Actionable Guide To Training Models Leveraging Company Data. With Mayo Oshin, Founder & CEO of Siennai Analytics, a private chatbot development company

September 19, 2023 Isar Meitis, Mayo Oshin Season 1 Episode 30

Could AI Really Transform Your Business Overnight?

In this episode, AI expert Mayo Oshin reveals how you can leverage your existing data to unlock transformative growth.

We uncover the step-by-step process to implement AI in your business and start seeing results fast. From goal setting to data preparation to model training, you'll learn the full playbook.

Topics we discussed:
✅ Why starting with a clear business goal is crucial
✅ Taking stock of your data across structured, unstructured, proprietary and public
✅ Data cleaning and preparation best practices
✅ Model selection: custom vs out-of-the-box solutions
✅ Typical project timelines and budgets
✅ Evaluating success and tracking ROI
✅ Deployment strategies and measuring impact

Mayo Oshin is an AI consultant and educator. He helps companies implement AI to transform their businesses.  

About Leveraging AI

If you’ve enjoyed or benefited from some of the insights of this episode, leave us a five-star review on your favorite podcast platform, and let us know what you learned, found helpful, or liked most about this show!

Isar Meitis:

Hello and welcome to Leveraging AI. This is Isar Meitis, your host. In today's show, we're going to talk about a question that a lot of people are asking themselves: how can I use my existing data, like customer service records, marketing data, CRM, recorded sales calls, financial data, project analysis, and so on, and train my own models so I can learn faster and get better results than my competition? We're going to do this with the help of Mayo Oshin, who is a global expert that has been providing this as a service to businesses of different sizes for a while. At the end of the episode, as always, I'm going to share some exciting news, and there was a lot of big news from most of the big players this past week. But for now, let's dive into the conversation on how you can train AI models on the data that you currently own to gain business benefits. Hello and welcome to Leveraging AI, the podcast that shares practical, ethical ways to leverage AI to improve efficiency, grow your business, and advance your career. This is Isar Meitis, your host, and I've got a special episode for you today. You know, one of the big promises of AI is getting us better results in business, higher efficiencies, and so on. But the biggest promise is allowing businesses to leverage their existing data, stuff they've collected through the years, any kind of data, in order to build their own models, get higher efficiencies, grow their business, and so on. And that's a question I get asked a lot. A lot of people ask me, how do I do that? How do I go about leveraging the data that I have today, CRM, customer service, marketing data, sales data, et cetera, in order to get better results? And while I'm not necessarily an expert on this topic, our guest today, Mayo Oshin, is an expert on it. So Mayo has been involved in this world of AI and developing these capabilities around it for a while.
He's been one of the first contributors to LangChain, the platform that everybody's using to connect AI systems to one another. He has helped companies large and small go through these processes and implement AI, as well as leverage their own models, anything from small businesses to large giants like PwC. He's currently the founder and CEO of Siennai Analytics, where this is what he does: he consults, trains, and helps companies, mostly B2B organizations, utilize their existing data goldmine of any kind, whether it's documents or videos or databases or knowledge bases, et cetera, in order to improve customer satisfaction, reduce cost, accelerate growth, and so on. So he's the perfect person to have this conversation with, and like I said, since this is a hot topic that I get asked about a lot, I am very excited to have him as a guest on the show. Mayo, welcome to Leveraging AI.

Mayo Oshin:

Thanks, glad to be on.

Isar Meitis:

Mayo, let's start with people who don't really know what to do with this. Okay, they hear about this concept that they can use their data; they're just not sure exactly how. Can you give us some examples, some use cases from clients you've been working with, or that have implemented this, on how people are using their existing data and building AI models on top of it to gain business benefits?

Mayo Oshin:

Yeah, so recently we just finished work with a client, a pretty sizable training and consulting company, and they have a ton of videos. And the issue, from the user's perspective, is, look, when you go through these training videos, whether it's internal or external, there's a ton of content to go through. And so a lot of times people give up; they don't even complete the training program, or they just don't get the full value of what they paid for. So the task was to improve retention, to improve satisfaction, and basically add more value to what the users paid for. How do we do that? We took the videos, and each module typically has videos, PDFs, exercises, and all these other things, and transformed that into a user interface like ChatGPT, where the user can ask questions and get references they can click, and it would send them to the point in the video where they can continue to watch. So imagine instead of going through all the videos and watching one by one, you can ask a question, and it provides an answer based on the content, and also a source reference to where that video came from. Another use case would be a legal firm, one that drafts tons of agreements, maybe loan agreements and all kinds of incorporation agreements, and that's something they would have to do manually all the time. The task was to automate that process so that when another inquiry comes from a client, the AI generates a draft of the agreement, and then a senior lawyer can step in and effectively review it. So yeah, those are two use cases, but there's many others with PDF documents, databases, customer analytics, support documents. There's a lot that can be done. So as we talk, I can reveal more use cases as well.

Isar Meitis:

So I love the two examples you gave, because I think we can generalize from both of them. One of them is about video content for any purpose, whether it's for training, for marketing, and so on. It becomes a lot more impactful if you can easily find content within those videos. So not just categorically, okay, these videos are about this topic, but at minute 13 and 52 seconds we cover this very specific topic, so now we can watch 30 seconds or three minutes instead of an hour and get exactly the answers that we need. This is good, obviously, for marketing, for customer service, for training, for any place where you have video content that now can be cataloged not by the overall content of the video, but by the content of every second of the video, and then can be chopped up into many more use cases than just the long-form content. So that's one example. The other example, if we generalize it: you have an existing set of documents, and you need to keep generating similar documents that are not exactly the same. Instead of starting from scratch, or even starting with a template, you can start with something that is already adapted to the outcome that you need: the specific client, the specific case, the specific scenario, the specific market, whatever the case may be. The AI knows how to take that quote-unquote template, adapt it to whatever the scenario is, and now give you a very mature starting point, saving you a lot of time. So I think both of these use cases are relevant to a very large number of businesses, and hence I love the fact that you picked these two examples. The next question that I think most people have is, what do I do? How do I get started? I'm a CEO of a company, or I'm in a leadership position, or I'm in charge of implementing AI at my business, and we have a lot of data. And a lot of companies say, we don't have any data. That's usually not true, because even if you just have a CRM that's been running for three years, you have a lot of data. You have marketing automation tools; you have a lot of data. Anything that you have in the company, any Google Docs with interactions with different clients, is data that you can use to train these models and get efficiencies. But the question I think most people ask is, okay, let's assume I understand I have all this data. What are the steps? What do I need to do to go from having that data to having a tool that can either be used internally or be customer-facing?

Mayo Oshin:

So I think, speaking to leadership, the first question is, what's your goal here? And the reason for that is, if that's not clear, and it's just jumping on a trend or just being reactive to competitors doing things, then you might not get full buy-in from everyone on your team. They might get excited in the beginning and give up when they start to see results that are not as they expected. So I think it's about clarifying that goal, and to clarify that goal, thinking first and foremost about the business outcome. Is it that you want to improve retention of customers? Do you want to strengthen your brand? Where can it potentially show up in your income statement, for example, so that you can point to something tangible as the project goes on that the stakeholders would buy into? Because if it's just a case of, oh, this is gonna help us, that's too big. If it's a case of, here's how this could potentially cut support costs by 20% and improve our gross margin, then that becomes a lot more interesting to other stakeholders as well. So I think the clarity of that goal is very important. Ultimately, all you're doing is replacing a manual task that a human being would do with a machine. That's all that's being done here, to make it more efficient and more scalable. And once that clarity is there, the next step is, okay, what data do we currently have? If you are in any type of business, even if you are a brick-and-mortar business, you have been collecting valuable data. It could just be email addresses, it could be the demographics of your customers, it could be a process that you have going on internally that could assist with this. And the best way to think about data is like a table: on one side you have, let's say, structured and unstructured,
and then you also have your proprietary and non-proprietary. Proprietary data is obvious: this is things in-house that you don't share with the public; only your staff or stakeholders know this information. It could be data about your company or customers, or research that you've done in your industry. The non-proprietary data is typically stuff open to the public, or that you will share with customers. This can include stuff on your website, maybe other public material, or things you shared in conferences or speeches. Anything public will be in the non-proprietary quadrant. And then you also have unstructured data. Unstructured data is just data that's not in a table form, so anything that's PDFs, like the legal drafting example I gave, docs, anything that can be put in that kind of unstructured form. And then your structured data is more like tables of stuff: this is your CSVs, your databases, and so on. So you can see in these four quadrants you can have combinations: a proprietary database, for example, or a non-proprietary PDF. And if you're able to figure out, okay, where's the valuable data here, and see where it fits in this quadrant, that's another case. A use case of that, which I didn't mention in the first portion of this, was an example where I built a chatbot to analyze Tesla's annual reports over the past three years. And so this was a case of going through SEC filings, parsing through and extracting tables, because they contain the financial statements. So this was an example of non-proprietary data, because it's public, but also structured data, which I then turned into a chatbot that allows investors to make a decision on the stock. So that would be the next phase: uncovering the data you have. Once that's done, it's about cleaning the data, because a lot of times companies are only starting to think about data; they didn't start off thinking about data.
So it's a whole digital transformation process to basically start to curate this data in a way that's usable. And so there's a whole process of getting the buy-in from employees to make sure they fill in certain things, and then your IT team to make sure they centralize the data in a particular place. And then there's the cleaning of the data, making sure it's in a consistent format that can be used. So that would be the first place.

Isar Meitis:

Can you explain what cleaning means? I think it could mean different things for different people, and obviously from your perspective, as somebody who implements this, it means something very specific.

Mayo Oshin:

Yeah. For example, a company can have maybe 1,000 PDFs, and half of them are scanned; the other half are just text PDFs. Now, the scanned PDFs are harder to deal with, because when you do OCR and all that kind of stuff, it can end up with special symbols or whatever. Or a lot of times companies save documents that are corrupted; those are then difficult to deal with. Then there's also the structure of the data: how has it been saved? Maybe they've got a database, but then it's all over the place; it's not really structured in a consistent way. Maybe the names, or the way different things are named, are different, or they mean different things. So all these things basically make it difficult to just extract the data out of the gate without some sort of way of making sure everything's clean, restructured, and then ready for training with the AI.
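To make the cleaning step concrete, here is a minimal Python sketch of the two problems Mayo names: OCR leftovers in scanned PDFs and inconsistent field naming across sources. The symbols handled, the field aliases, and the sample records are all illustrative, not from any client project:

```python
import re
import unicodedata

def clean_ocr_text(raw: str) -> str:
    """Normalize text extracted from scanned PDFs: strip stray
    symbols and collapse the whitespace OCR tends to leave behind."""
    text = unicodedata.normalize("NFKC", raw)
    # Drop control characters and odd OCR glyphs; keep basic punctuation.
    text = re.sub(r"[^\w\s.,;:!?'\"()\-/%$#]", " ", text)
    return re.sub(r"\s+", " ", text).strip()

# Harmonize field names that mean the same thing in different systems
# (the aliases here are invented examples).
FIELD_ALIASES = {
    "CustomerName": "customer_name",
    "cust_name": "customer_name",
    "e-mail": "email",
    "Email Address": "email",
}

def normalize_record(record: dict) -> dict:
    """Rename known alias keys to one canonical schema."""
    return {FIELD_ALIASES.get(k, k): v for k, v in record.items()}

print(clean_ocr_text("Invoice\u00a0 #123 \x0c Total: $4,200 ©®"))
print(normalize_record({"CustomerName": "Acme", "e-mail": "a@acme.com"}))
```

Real cleaning pipelines go much further (deduplication, corrupt-file detection, OCR confidence checks), but the shape is the same: normalize text, then normalize structure.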

Isar Meitis:

Interesting. Okay, question about this: this sounds like a very technical process where you need to know what you're doing. Is that something that a company like yours would typically do? Or is it something the organization itself needs to do ahead of time? Or is it a mix of both?

Mayo Oshin:

You mean a mix of whether it's done inside of work?

Isar Meitis:

Is it something where you would come in, look at the data, say, okay guys, you need to do this and that, work on this for a week, call me when it's done? That's to me the mix of the two.

Mayo Oshin:

It's up to the company. Some companies have the resources in-house to do this kind of data restructuring and cleaning. There is a point, though, where the person who's going to be in charge of training the AI model will probably want to be involved, just to make sure that it's in the format that would work best for training the model. But ultimately, most companies don't have that in-house capability to do the restructuring and the cleaning.

Isar Meitis:

Okay. So what's the next step? So now I have a clean set of data. What's the next step?

Mayo Oshin:

Okay, we're moving now to actually training the model. Yeah, so once the data's been cleaned, the next step is, okay, we want to decide on how we want to present this to the end user. Obviously this would've been discussed initially, but the next process is actually building this out. So typically there's a UI, and that's the process that begins. But at the same time, there's what's known as ingestion. Typically with my company, my two teams work simultaneously: the UI team is working on the front end, and the backend ML team is working on this ingestion. So what's ingestion? Ingestion is just taking all the documents and transforming them into a format that the computer can understand, because the computer doesn't understand text. So we need to transform it into numbers so that we can perform different calculations on it down the line, which I'll explain. Then we store these numbers in a special database called a vector store, and this place will basically house your data, but the data will also have those associated numbers. These are typically in chunks, because it makes it easier to perform what's known as retrieval, which I'll speak on in a second. And so what happens is, when the user asks a question... for example, let's say we did ingestion on... give me a book you recently read.

Isar Meitis:

I'm reading AI for marketing right now.

Mayo Oshin:

Okay. So let's say there's a part of the book where you wanted to ask a question like, okay, what is a good AI marketing strategy for a small business? If you ask that question to Google or a typical search engine, what it does is perform what's known as a keyword search: it's going to look for similar keywords, and it's going to return results that have those similar keywords. But what the ingestion process does is effectively allow you to search by meaning, by semantic search. So it's going to look at the context of the question you're asking, and then it's going to look at what we, quote unquote, turned into numbers in that initial ingestion phase, and then retrieve the relevant chunks of your book that are associated with, or semantically similar to, the question. And what happens is the model then uses those semantically similar source documents to produce a final result. So this would be the equivalent of opening up ChatGPT and copying and pasting sections of your book underneath the question you ask. It's basically the programmatic way of doing the same thing, but no one is going to have time to do that, or rather, you're not going to be able to copy and paste your entire document into ChatGPT. So this is, effectively, how it goes from your document or your data to a UI like ChatGPT: the user asks a question, they get a response, and then they get a reference, to the page in your case, where the answer to that question came from.
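The ingest-then-retrieve flow Mayo walks through can be sketched in plain Python. The vectors and chapter titles below are toy values; a real pipeline would get the embeddings from an embedding model and store them in a proper vector database rather than a list:

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy "vector store": document chunks with hand-made embeddings.
store = [
    {"text": "Chapter 2: email marketing strategies for small budgets", "vec": [0.9, 0.1, 0.2]},
    {"text": "Chapter 5: enterprise data warehousing",                  "vec": [0.1, 0.9, 0.3]},
    {"text": "Chapter 7: social media ads on a small budget",           "vec": [0.8, 0.2, 0.3]},
]

def retrieve(query_vec, k=2):
    """Return the k chunks most semantically similar to the query."""
    ranked = sorted(store, key=lambda c: cosine(query_vec, c["vec"]), reverse=True)
    return [c["text"] for c in ranked[:k]]

# Pretend embedding of "What is a good AI marketing strategy for a small business?"
query_vec = [0.85, 0.15, 0.25]
context = retrieve(query_vec)

# The retrieved chunks get pasted under the question, exactly as Mayo
# describes doing by hand in ChatGPT, just programmatically.
prompt = "Answer using only this context:\n" + "\n".join(context) + "\n\nQuestion: ..."
```

The retrieval step is why chunking matters: similarity is computed per chunk, so chunks that are too big or too small both degrade what gets pasted into the prompt.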

Isar Meitis:

Okay, so I wanna do a quick summary and then ask a bunch of follow-up questions. So you said, first of all, start with a goal, and I agree with that a hundred percent. At the end of the day, you're trying to solve a business problem or to advance a business. So start with a goal: what is this supposed to help us do, either reducing a pain or increasing the potential of something we couldn't do before? Then you said define the quadrants of the data: is it structured or unstructured? Is it proprietary or non-proprietary? And this is one of my follow-up questions in a minute. Then you said, okay, go collect the data, clean the data, and we talked a little bit about that. Then train the model, define the user interface, meaning how is that going to be engaged with, and then do the ingestion process in order to get the data into the actual structure so you can then ask it those questions. So this is more or less the process. A few questions. Question number one is the whole aspect of proprietary information, regulated information, personal information, like PII. What do you do if you have a lot of that? Do you separate it out? Do you run an open-source, locally installed model? What are the solutions that companies have if a lot of the data they have is not stuff they want to share with the world?

Mayo Oshin:

Yeah, I think you basically touched on the key ones. I think this is where you have a choice as a company between an out-of-the-box, powerful, accurate, closed model like OpenAI's, versus an open-source model that hasn't been widely tested and will require its own infrastructure, deployment, and support. And so the company needs to think about two things: what resources do we already have in-house, and if we don't have those resources, what is our budget to get someone who can do this? I'll start with the closed one, because that's what everyone's familiar with. So if you're going with OpenAI or Anthropic, any of these closed solutions, you are going to get something that works out of the box. It's going to perform really well. It's going to be a very good proof of concept if you are just trying to get buy-in from stakeholders, and you are probably going to be relatively happy with the results. It's just an API that your developers can connect to, and you get decent results. Now, OpenAI recently has released reports, privacy policies, documentation, just reassuring enterprises: look, we're not going to touch your data, and so on and so forth. Now, whether you believe them or not, that's a different question. Same question whether you believe Microsoft or not. If you're comfortable with Microsoft, then I say, why not? It's all within the same family anyway. In the middle, what we tend to do, for example, we had a client who wanted to work with OpenAI, so we came up with a hybrid solution: we stripped away PII, so any personal information, emails, anything that could suggest any entity, before we went through the process of showing the model the information. And that was a hybrid approach that the client was happy with. But you do have clients that don't even want the context, or the surrounding information, even if it's not PII, to be shown to the model.
And that's when we move to the other extreme, where there's an open-source solution. You find one of these up-and-coming models: you have one called MPT by MosaicML, Llama 2, there's another one called Falcon. And you essentially deploy one of these models. It can cost a company anywhere from $600 to maybe $1,500 a month to host these models so that they'll be running 24/7 typically, or per time of usage. And then you basically have to collect a training set from your company, which means someone needs to come in, collect all the data, train the model, deploy the model, and then you need someone to do what's known as machine learning operations: now you need maintenance of the model. If you don't have the in-house capabilities for this, you either need to find a consultancy to do it for you, or you need to hire a machine learning, MLOps person who can help you run through this process. And not all businesses have the budget for this. So that's the challenge.
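A rough sketch of the PII-stripping half of that hybrid approach. These regex patterns are illustrative and only catch easy cases; production redaction typically layers entity-recognition models on top of rules like these:

```python
import re

# Patterns for the obvious PII Mayo mentions (emails, plus a couple of
# common US formats). Deliberately simplistic; a real system needs more.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each PII match with a placeholder label before the
    text is ever sent to a hosted model."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Reach Jane at jane.doe@corp.com or 555-867-5309."))
# → Reach Jane at [EMAIL] or [PHONE].
```

The placeholders also make the process reversible on the way back: responses can be re-personalized locally without the model ever seeing the original values.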

Isar Meitis:

Question about that. I think you made it very clear: you go with one of the closed models, it comes ready out of the box. It's an API, it's ready to go, you don't have to host anything, you run on their servers and just talk through the API. It's easy to deploy, it's easy to maintain, but you're taking some level of risk on where that data is going. Versus: I'm not willing to take any risk, I'm going to run the model locally, and that means I need some kind of tech infrastructure in order to do this, which needs to be set up and maintained, which costs money. Two questions. One is, I know that all the big hosting platforms today, whether it's AWS, Azure, or Google Cloud, now provide such infrastructure basically built into their tools, meaning you can now get these containers with the models already running on top of them, and you can basically pick and choose, at least in some of them, what type of infrastructure you want with what kind of model on top of it. That gives you the flexibility to do these things without having, or maybe with less, MLOps knowledge in-house. Is that a true statement, or will time tell if they're really taking us there or not?

Mayo Oshin:

Yeah, you would still need someone to do the training. You still need someone to collect the data, to process the data, to make sure it's ready. If you don't get the training set properly done, then the results are not going to be great, again, because these models are not as strong as the closed models, so you need to make sure that everything is done really well. Got it. Okay. And so you still need that machine learning personnel, in my opinion, anyway, or a consultancy that can help you set up the whole thing.

Isar Meitis:

Second question; this one is more of a balanced question. So now I have a set of data. Let's take ChatGPT as an example, because that, and probably Claude 2, are the two most advanced models out there today. And I give it access to all this data and allow customers, or maybe internal users, and we can talk about this in a minute as well, to ask questions and get results based on this data. Does the company have control over what percentage of the answer comes from ChatGPT as an engine, versus what percentage of the answer comes from "just look at the data that I gave you, and all the answers have to come from there"? Is that something as a business that I can control?

Mayo Oshin:

Yeah, in fact, I would say that's probably 90% of what we focus on: what's known as reducing hallucination. Hallucination is when the model effectively veers off the context related to the company's data and generates a response that's either inaccurate or not completely true. And a lot of the clients we work with are in industries where you cannot afford inaccuracies, right? So there's a lot of time spent on what's known as enhancing the retrieval and augmentation, and also ensuring that the model can see the context. And if it's not sure, there's a guard clause that's typically used: you say to the model, if you don't know what the answer is, just say that you don't know. So it's part of the prompt engineering to make sure that it doesn't just veer off and make things up.

Isar Meitis:

So there's nothing in the setup process of the model that increases its reliance on the new data versus what it knows as a model out of the box?

Mayo Oshin:

That's mostly done through prompt engineering. The more you reinforce in the prompt that it should only use the context, the more it's going to do that. And also we found, and many other people in the space have discovered, that GPT-3.5 does a particularly bad job of following what's known as the system prompt; this is the prompt instruction. GPT-4 is very good at following the system prompt. So if you literally tell GPT-4, if the answer is not in the context, just say you don't know what the answer is, it's going to listen to you nine times out of ten.
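A minimal sketch of how such a guard clause ends up in front of the model: the "if you don't know, say so" instruction and the retrieved context are packed into a system message, and the user's question rides along separately. The prompt wording and company name here are invented for illustration:

```python
# Illustrative guard-clause system prompt; not an actual production prompt.
SYSTEM_PROMPT = (
    "You are a support assistant for Acme Corp.\n"
    "Answer ONLY from the context below. If the answer is not in the "
    "context, say \"I don't know\"; do not guess.\n"
)

def build_messages(context_chunks, user_question):
    """Assemble the chat-style payload the end user never sees."""
    context = "\n---\n".join(context_chunks)
    return [
        {"role": "system", "content": SYSTEM_PROMPT + "\nContext:\n" + context},
        {"role": "user", "content": user_question},
    ]

msgs = build_messages(
    ["Refunds are issued within 14 days."],
    "What is your refund window?",
)
```

This same structure is what makes the system prompt "hidden": the application owns the system message, and only the user message changes per question.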

Isar Meitis:

Follow-up question to this. You said it's a lot about the prompt engineering. But if I'm going to make this a tool open to the public, let's say this is to help customer service, right? I want to be able to have people chat with my customer service database. It's like a really smart FAQ, right? Because it sees everything that happened before, it can find similar events, it can find what the answer was, and it can give people a good answer on what to do, without me having to assign another person to answer that call or that ticket or whatever. This is a very obvious use case, but the end user doesn't know he needs to define those things. So does that mean that when you implement such a solution, there is a prompt that the user doesn't see, that's always there when you actually send it to the large language model? There's a backend process that sends kind of a pre-prompt, and then you just take the end user's prompt and attach it to that. That's the way it works, correct?

Mayo Oshin:

Yeah. So what people don't realize is, with ChatGPT for example, there was a leak; people try to hack around and find the prompts of all these popular tools. And so they found the prompts for Notion, they found the prompts for Bing and ChatGPT. Basically, it's like a constitution. It's like four or five paragraphs about what it should do, what it shouldn't do, and what happens if someone asks a particular question. Because if you do try to ask certain types of questions, it just tells you, look, I'm not trained to do this. And that comes from the prompt that's hidden from the user.

Isar Meitis:

Got it. So basically with any prompt that you send, you can set up an additional set of guides and rules that get sent to the actual model regardless of what the user wrote. Yeah, it's called a system prompt. System prompt. Perfect. Another follow-up question. There's a big difference, I think, in my eyes, between creating a tool for internal usage and opening the tool to clients, definitely in terms of risk. If it's an internal tool and it does stupid things, you still have somebody from the company looking at it and saying, this doesn't make any sense, maybe I should go and do some additional research, versus the same answer going to the end user. Are there any best practices on the steps of deployment? Okay, now I have done all this process, I have done all the stuff we talked about, I've created a model, I've created the user interface, I'm ready to go. What are the steps you recommend to clients to make sure that they're, A, enjoying the best benefits possible, but also reducing the risk of shooting themselves in the foot by deploying this to the world before it's fully tested?

Mayo Oshin:

Yeah, so we spend a pretty good portion on authentication, right? Because if you just have the live URL, then anyone can go there and basically use your application. So the first question is typically, who do you want to have access to this information? And then we have preset emails, usually provided by the company, of those individuals. And you can see when they've logged in, when they've logged out, the questions they've asked, which again is something very important for ongoing evaluation and testing, because you can see the questions the user has asked and the responses that came back from the model. So authentication is a big part of this. Another thing is moderation. So if you have users asking inappropriate questions, they get flagged, right? And eventually you can blacklist them, so you can actually block them from asking questions. Another measure is known as rate limiting. You don't want a situation where someone just keeps spamming questions, either themselves or using some sort of bot, which will cost you a lot of money and cause your system to crash, right? So again, there are rate-limit mechanisms in place. So those are the three typical things that are done, so that from a security point of view, at least the authentication is in place and you don't have randoms who can just use the application.
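The rate limiting Mayo mentions is often implemented as a token bucket: each user gets a budget of requests that refills over time. A minimal per-user sketch, with the capacity and refill rate chosen arbitrarily for illustration:

```python
import time

class TokenBucket:
    """Simple per-user rate limiter: up to `capacity` requests,
    refilled continuously at `rate` tokens per second."""

    def __init__(self, capacity=5, rate=0.5):
        self.capacity = capacity
        self.rate = rate
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Spend one token if available; otherwise reject the request."""
        now = time.monotonic()
        # Refill proportionally to the time elapsed, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(capacity=3, rate=1.0)
results = [bucket.allow() for _ in range(5)]  # a burst of 5 requests
print(results)  # the first requests pass, then the burst is throttled
```

In a deployed chatbot you would keep one bucket per authenticated user (keyed by the preset email), which ties the rate limiting directly to the authentication layer Mayo describes.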

Isar Meitis:

application. Interesting. Okay, so my next question is, how do you measure the success of these things? Because I go back to what you started with. I'm like, okay, I define a goal. That goal hopefully has like a success criteria, right? I want to improve my customer service efficiency by 20%. That means that I can either spend 20% less money answering the same number of, customer service calls, or I can do 20% more with the same amount of resources that I have. Doesn't matter. How do you go back and measure what's the act? But the, and again, I go back to this problem that I just mentioned. There's five other moving things that are happening at the same time, right? I've hired additional more people. We've upgraded our ticketing system. Is there a good way to measure the actual benefit and impact on the business of deploying such tools that are. Quote unquote embedded into the The actual solution, so I can track what it's actually doing from a business performance

Mayo Oshin:

perspective. So part of the prelim process involves the client providing what we call an evaluation data set. What this is, essentially, is: what is a typical question being asked, and what is a satisfactory answer for the end user? And the goal here, effectively, is that if we can replicate what a human being would respond with 80% or more accuracy, then you can deduce from that the amount of support you can cut down. Yeah. So we start with the evaluation data set provided by the client, and a huge part of the process is: how do we keep optimizing this model to get closer and closer to what the client considers a good result? In the case of the training company, they already knew the material inside out. They already knew what would constitute the typical questions the users would ask and the answers they were expecting. And they knew the cost of having a coach or some other support person manually provide those answers. So when we hit those goals for them, it was like, oh wow, this has essentially automated what a human being would do. So there has to be a benchmark in the evaluation process, or in the prelim process, that the client or the company can look at and say: if we can replicate these answers, then we can automate away the cost and, effectively, grow our business.
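Editorial aside: the evaluation loop Mayo describes (client-provided question and answer pairs, scored against a target like 80%) can be sketched in a few lines of Python. The token-overlap scorer below is an illustrative stand-in; real projects typically use human review or model-based grading, and all names, data, and thresholds here are assumptions, not Mayo's actual tooling.

```python
# Illustrative sketch of the evaluation loop described above: the client
# supplies (question, reference_answer) pairs, and the system's answers
# are scored against a pass threshold. The overlap scorer is a crude
# stand-in for human or LLM-based grading.

def overlap_score(candidate: str, reference: str) -> float:
    """Fraction of the reference answer's words that appear in the candidate."""
    ref_words = set(reference.lower().split())
    cand_words = set(candidate.lower().split())
    return len(ref_words & cand_words) / len(ref_words) if ref_words else 0.0

def pass_rate(eval_set, system_answers, threshold=0.8):
    """Fraction of questions whose answer meets the similarity threshold."""
    passed = sum(
        1
        for (_question, reference), answer in zip(eval_set, system_answers)
        if overlap_score(answer, reference) >= threshold
    )
    return passed / len(eval_set)

# Hypothetical client-provided evaluation set
eval_set = [
    ("What is the refund window?", "refunds within 30 days"),
    ("Who do I contact for help?", "email the support team"),
]
answers = ["We offer refunds within 30 days of purchase.", "Call sales."]
rate = pass_rate(eval_set, answers)  # only the first answer passes
```

The point of the sketch is the shape of the loop, not the scoring function: swap in whatever grader the client trusts, and track `rate` against the agreed benchmark as the model is tuned.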

Isar Meitis:

Interesting. So basically you set the benchmark upfront to the level that it needs to perform at, knowing that if you hit that level, that's the level of savings or growth opportunity that you're generating on the other end.
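Editorial aside: returning for a moment to the three access controls Mayo described earlier (an email allowlist for authentication, a blacklist for moderation, and rate limiting), a minimal sketch could look like the following. All names and limits are hypothetical; a production system would use a real auth provider and persistent storage.

```python
import time
from collections import defaultdict
from typing import Optional

# Hypothetical access-control layer: an email allowlist (authentication),
# a blacklist (moderation), and a fixed-window rate limiter.

ALLOWED_EMAILS = {"alice@acme.com", "bob@acme.com"}  # preset by the company
BLACKLIST: set[str] = set()   # users flagged for inappropriate questions
RATE_LIMIT = 5                # max questions per window
WINDOW_SECONDS = 60

_recent_asks = defaultdict(list)  # email -> timestamps of recent questions

def can_ask(email: str, now: Optional[float] = None) -> bool:
    """Return True if this user may ask a question right now."""
    now = time.time() if now is None else now
    if email not in ALLOWED_EMAILS or email in BLACKLIST:
        return False
    # Keep only timestamps inside the current window, then check the count.
    recent = [t for t in _recent_asks[email] if now - t < WINDOW_SECONDS]
    if len(recent) >= RATE_LIMIT:
        _recent_asks[email] = recent
        return False
    recent.append(now)
    _recent_asks[email] = recent
    return True
```

Every question would also be logged alongside the model's response, which is what enables the ongoing evaluation Mayo mentions.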

Mayo Oshin:

Yeah, because again, what is AI, and what is all the fear and excitement about? Automation of human manual tasks. So whether you have a sales team doing something over and over again, whether you've got junior legal practitioners drafting agreements over and over again, or the courses example, or any other case, the point is: if a machine can do it with high accuracy and consistently, then you don't need as many employees, or you don't need to pay as much. And you can do the math on the machine side, because I think OpenAI charges, depending on the model, anywhere from approximately $0.04 per, I think it's a thousand tokens, which is about 750 words. And there are other mechanisms to save costs; we can talk about caching and stuff like that. So yeah, it must, because otherwise that means the client didn't set their goals, or their goals weren't clear. Part of setting the goals must be: what does

Isar Meitis:

success look like? Awesome. I love that. Since you started talking about money and the business aspect of this: roughly how long does a process like this take? Obviously it depends on the amount of data you have, but let's say for a small to midsize company, with the very simple use case we talked about before, customer service. How long should a process like this take, from starting the work to having a model that we've tested internally, that is mature enough, that is providing the business result, and that we can actually start deploying? So how long does it take, roughly how much does it cost to set up, and how much does it cost to operate moving forward?
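Editorial aside: the rough pricing Mayo quoted a moment ago (approximately $0.04 per 1,000 tokens, with 1,000 tokens being about 750 words) turns into a quick back-of-the-envelope cost estimator. Treat the constants as placeholders, not current OpenAI pricing, which varies by model and changes over time.

```python
# Back-of-the-envelope cost math using the figures quoted earlier:
# roughly $0.04 per 1,000 tokens, with 1,000 tokens ~ 750 words.
# Both constants are illustrative placeholders, not current pricing.

PRICE_PER_1K_TOKENS = 0.04
WORDS_PER_1K_TOKENS = 750

def estimated_cost(words: int) -> float:
    """Approximate dollar cost of processing `words` words of text."""
    tokens = words * 1000 / WORDS_PER_1K_TOKENS
    return tokens / 1000 * PRICE_PER_1K_TOKENS
```

For example, under these assumptions a 7,500-word support document costs on the order of $0.40 to process once, which is the kind of per-task number to compare against a human doing the same work.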

Mayo Oshin:

Yeah, so if a client already has an idea of the data sources they wanna work with. So let's say for customer service: they have tons of website pages, and maybe they have some other PDFs or internal docs.
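Editorial aside: preparing sources like these (website pages, PDFs, internal docs) for a chatbot typically starts with splitting the text into overlapping chunks for embedding and retrieval. A minimal, library-free sketch, with illustrative sizes:

```python
# Minimal sketch of a common data-preparation step for a retrieval-based
# chatbot: split each document into overlapping chunks that can later be
# embedded and indexed. Chunk size and overlap are illustrative.

def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into `chunk_size`-character chunks, each sharing
    `overlap` characters with the previous chunk so context isn't
    lost at the boundaries."""
    if chunk_size <= overlap:
        raise ValueError("chunk_size must exceed overlap")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks
```

Real pipelines usually split on sentence or section boundaries rather than raw character counts, but the shape is the same.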

Isar Meitis:

Yeah. And the ticketing system that they've used, with all the open cases and closed cases and what the process was like. All of those things. Yeah. Yeah. So if they have

Mayo Oshin:

that, and then they want to use OpenAI, just to keep it simple? Yeah: four to six

Isar Meitis:

weeks. Oh, okay. So it's not a huge project. No, it's still... I'm

Mayo Oshin:

speaking for myself; I dunno what others

Isar Meitis:

No, what I'm saying is: what you gave is a great answer. I was wondering if this is a month, two months, six months, a year and a half. So a month to a month and a half is a very reasonable timeframe. Yeah. Because

Mayo Oshin:

the main job for me is providing clarity, 'cause a lot of people just want to experiment with this stuff; they don't really know what they want. So my job is to provide clarity and to ensure the data's processed properly. That's a lot of what I spend time on. Once all that preparatory stuff is done, we know what we need to do, and it's a lot more straightforward. A lot of the issues come from clients not really knowing why they're doing it; they're just doing it out of FOMO, or they just wanna keep up with competitors. Yeah. So

Isar Meitis:

I have another question related to that. There are more and more no-code, do-it-yourself tools that do this process, right? Where I can go in and upload data, or give it access to something, and then get a chatbot that works out of the box. What are the biggest differences and considerations, good and bad, between using one of those tools and hiring a company like yours for a more robust process? Or maybe, what are the use cases or cutoff points where, okay, if you want this, you should go with one, and if you want less than that, you should go with the other? Yeah,

Mayo Oshin:

I think it's the same as any other industry before AI, right? There are always gonna be out-of-the-box solutions that you can use. But at the end of the day, if you want that personalized, high-accuracy solution, you have to go custom. There's no way around it. And I've got friends who are running some of the most popular no-code solutions; what they've done at scale is impressive, but on an individual level, you're just never gonna have that. It's like food as well, right? You can go to a restaurant where a chef makes food particularly for you, versus going to some mainstream chain. So what do you want as a company? And for people who are new to the space, I think it's important that their first experience is mind-blowing, right? If they go and use some out-of-the-box solution, or something that's just mediocre, then it proves their doubts, 'cause a lot of people are coming into this saying, okay, look, it's all hype. So for me, I would rather their first impression be something custom that blows their mind, and then they get excited and tell the stakeholders, look, we need to bring this in. Now, if it's just some generic solution: one, it's gonna be cheaper for you, so I'll talk about the pros. It's gonna be cheaper, and you can get up and running immediately; I would say that's another benefit, 'cause it's a SaaS solution. Yeah. Outside of that, the cons are, as I've said: with the custom solution, it is definitely gonna be a lot more pricey, and it's gonna take time, because you need to be involved in terms of getting everything up and running.

Isar Meitis:

So basically what I hear you saying is that the out-of-the-box solutions are good if you wanna... you know what, I'll rephrase that. I think if you're willing to accept a lower level of accuracy in the outcome, then it's a good enough solution, whether you just want it for experimentation or as an actual solution. If you're in an environment, or your use case is such, that working 70% of the time is good enough, then perfect: you've got a quick and dirty solution you can do in-house without going through a bigger project. But in cases where you want a higher level of accuracy, which is probably most things in business where clients are involved, you probably want a more custom solution. Would that be a good way to classify the two solutions? Yeah, I would

Mayo Oshin:

even... I would not even say it's up to 70%, and I'm not saying this because I'm promoting my own stuff, but yeah, it's closer to 50%. And also your options are limited: they focus more on simple stuff like PDFs, maybe a website, stuff like that. It's more straightforward. Once you start wanting certain requirements, maybe you've got some other type of database, or some other quirks: you don't want certain names or certain emails to show up; maybe you want it in a particular design, for example. And a big one for a lot of clients is they want to be able to see the chat conversations with users and have control. Basically, if you want more control, then I think custom is the way to go. It's yours, it's your IP. You can do what you

Isar Meitis:

want. Last business question on this: we said it's about four to six weeks. How much are we talking about, budget-wise, that a company needs to prepare in order to say, okay, let's take this first step and see where it goes?

Mayo Oshin:

Yeah, so on the low end, you're looking at five to ten thousand dollars. Okay, that's just a basic plan. You can imagine it as: a lot of the stuff is similar to ChatGPT, but it's trained on your data, typically PDFs and, yeah, stuff that, from a data person's point of view, is not too crazy. But you do get your own application, effectively, with authentication, all that kind of stuff. Ten to twenty thousand is where you're starting to look at a combination of two things: you're working with things like databases and APIs, things that are a bit more complex and require more security measures, and you're wanting to place a little more emphasis on accuracy. So the basic tier is more about building the solution for you; it's done, it's good. But if you want to go from good to better, then you're looking at, okay, we need some sort of way to keep improving this. And twenty thousand and up is when, yeah, now you're talking full-on, basically full-scale: ongoing evaluation, making sure all the security boxes are ticked, just a ton of testing. I'll say twenty thousand plus is usually enterprises: a lot of testing, a lot of security checks, trying to get it as perfect as possible, I guess is the way to say it. So obviously each of these requires different resources from a development point of view, which is why you have those different tiers. And I would caution anyone listening: I've noticed that some people try to shop the market and find lower and lower prices, go offshore, do all these things, but then you risk having something that's not secure; you don't know what they've put in there. So listen,

Isar Meitis:

I'd just be careful with that. I think it's very simple, right? You gotta think about the level of risk this can put you in. If this is an internal tool that's going to assist with something, okay, you can take some bigger risks and maybe invest a little less upfront. If this is basically going to allow your customers to query your company, and it impacts large business processes, whether it's sales, marketing, or customer service, then the future of your business depends on this thing working perfectly, and hence you've just gotta define how much budget you're willing to put into this. And at the end of the day, I think if this thing works, if you applied it to the right problem, the ROI should be very easy to prove. Let's say you invested a hundred thousand dollars in this, but it's gonna save you $1.5 million in manpower in year one, or make you $10 million more on the top line in year one. It's a no-brainer, a very easy investment to make. And I love that we went full circle: it goes back to picking the right goal, where investing in building the right solution will yield a high enough ROI to make this a very easy business decision. Yeah, that's

Mayo Oshin:

exactly it. Because, again, as I said before: who are you replacing? Okay, let's say in the case of a private equity firm, you managed to replace two financial analysts. How much would that have cost you? We're talking about at least $200,000. So yeah, $200,000 versus the price range we're talking about. I think, again, like you said, it's an

Isar Meitis:

investment. Awesome. Mayo, this was great. I think we covered a lot of stuff, and we managed to do it without going too technical, so it's still a great conversation for any business person who's considering this, who can now go and understand the process, the steps, and the pros and cons. We really talked about a lot. I appreciate you taking the time and sharing your knowledge. If people wanna follow you, learn from you, or work with you, what are the best ways for them to connect with you? Yeah,

Mayo Oshin:

so my website is siennaianalytics.com. That's S-I-E-N-N-A-I, analytics.com; that's the consulting website. I'm on Twitter as well, at M-A-Y-O-W-A-O-S-H-I-N, and I'm pretty active there; I share the latest things I come across. I also recently launched a newsletter for leaders interested in building AI chatbots, which I think you can find there as well. Those are the three main ways. I'm also on LinkedIn, if you wanna say hi. So yeah, I'll be happy to even provide just general strategy, if that's something you're interested in for your business; I also offer consulting for that, just AI strategy and how to set it up. It's a very exciting time, and I know it can be overwhelming to know where to get started, so that's what I try to do: simplify this stuff and take the noise away.

Isar Meitis:

Awesome. Thank you so much. This was really valuable. I appreciate your time, and thank you for joining us. Awesome. Thank you. Great conversation with Mayo, and it definitely helps demystify some of the concepts and needs when it comes to training models, and what you need to prepare or be ready to do if you want to go down that path. I definitely do not suggest that as step one for any business: you should start with low-hanging fruit, gaining efficiencies by learning how to prompt better and by using the existing models that are out there, across more or less every aspect of the business. But this is definitely a logical next step, especially if you're a bigger company with a lot of solid, well-organized data across different aspects of the business. And now to some exciting news from this week. As in previous weeks, and specifically last week, there was a lot of really important, big news, so I'll try to go through it quickly, but I will also add additional news in the show notes that did not make the cut; it's still big news, just not important enough for me to share in the recording of the podcast, but there will be links if you want to find out more.
We'll start with government regulations. Two bipartisan senators, Blumenthal and Hawley, are planning to introduce a comprehensive AI regulatory framework. To do that, they've met with the biggest names in the AI world, including leaders from Microsoft and NVIDIA, Elon Musk, Satya Nadella, and Sam Altman, in order to come up with what the framework is going to include. The framework includes provisions for AI licensing and auditing, a new AI oversight office, definitions of company liability, privacy and civil rights protections, data transparency, safety standards, and so on. The goal is obviously to reduce the risk that AI represents to society while maintaining the innovation and capabilities we gain from it, on both the personal and business sides. It's definitely an important move forward, and hopefully it turns into law in the near future. There are obviously pros and cons to the fact that the leaders of the industry are the ones helping dictate the law, because it might become a self-fulfilling prophecy of what they want. I really hope the lawmakers on the Hill take this a little deeper and maybe consult with additional people from outside the industry to balance those views. But either way, I see this as a very positive move. Another regulatory action: the FTC is looking into potential antitrust problems in the AI space. They fear that incumbents may try to leverage their power against new generative AI companies, hence reducing competition. This is obviously a serious risk, because some of these big companies are gigantic and control software, hardware, engineers, and so on, so their ability to limit access for newcomers is significant. And the FTC, I'm quoting, "appears ready to intervene aggressively" if they feel anti-competitive actions are being performed in or around the generative AI space. This is already problematic if we combine it with the previous piece of news, because the fact that some of these large companies, which have already violated multiple laws, are now helping write the new laws that would prevent new companies from doing the very things they did to train their own models, is already a problem. Still, I see this as a positive move, and I really hope we'll see a full ecosystem flourishing, including smaller startups that can compete with the bigger players, at least in niche markets or on specific subjects. A few huge companies have released information about new AI capabilities this past week. One of them is Salesforce.
Salesforce's Dreamforce conference happened this past week, and on September 12th they introduced Einstein Copilot. Back in March, Salesforce introduced Einstein GPT, a smaller version of this, which allows users to use GPT capabilities to write and create content. But the idea behind Einstein Copilot is the ability to basically perform and query anything within Salesforce in natural language. As an example, a salesperson can use this to research new accounts or new customers, a customer representative can look back and see the results of similar cases or previous customers, or a product manager can create a storefront for a new product that he or she wants to launch. Really, for almost any action within the Salesforce world, instead of users having to ask other people in the company how to do it, or having to know the whole set of clicks and menus to go through to find the right buttons, set the right queries, et cetera, they'll be able to just ask for what they're looking for and generate whatever they need in natural language while conversing with the Salesforce copilot. They also added what they call the Einstein Trust Layer, with two goals: one is to reduce AI hallucinations and false responses, and the other is data security. In addition, they've announced Einstein for Developers, a coding tool built exclusively for Salesforce-specific coding languages, meaning developing new capabilities within Salesforce will become significantly easier and more efficient. My take on this: it's not surprising. It continues a trend we've seen from Salesforce, and that we'll see from every other large software provider, because otherwise they will lose market share to competitors offering these kinds of capabilities.
And to prove my point, Ernst & Young, also known as EY, just announced that they have invested $1.4 billion in developing their own large language model and AI platform. They call it EY.ai, which includes EYQ along with data management, use cases, and framework analysis specifically for AI adoption. The goal is to position Ernst & Young as a global leader consulting company for large organizations that want to implement AI, which is basically any organization out there. This, again, doesn't come as a surprise if you look at their industry: peers like KPMG, Accenture, PwC, and Deloitte have all made similar announcements in the past few months, each varying between one and three billion dollars of investment, so EY falls within the ballpark. And if we're already on the topic of large consulting companies, BCG, the Boston Consulting Group, just announced an alliance to deliver enterprise AI to clients with none other than Anthropic, the creator of the Claude 2 large language model. The goal is to give BCG's clients direct access to Anthropic's Claude 2 AI assistant, with BCG as the advisory company that will help those clients implement these models in the most effective way. It's obviously a big win for Anthropic to be brought on board with BCG, one of the largest consulting companies in the world. But again, it falls in line with what we're seeing in the industry. I think we'll see more and more of these partnerships between the companies who create the large, more successful models and big companies who want to provide these capabilities without necessarily developing them from scratch, like we saw in the previous example with Ernst & Young. The last two pieces of news relate to how effective AI is in different business components. The first is research performed by professors at the Wharton School of the University of Pennsylvania, who were trying to check who is better at generating innovative ideas: their MBA students, or ChatGPT?
And ChatGPT won by a landslide. If you look at the statistics and read the article, they found the following: ChatGPT can generate ideas much faster and cheaper than students, and on average, ChatGPT's ideas were higher quality per the survey. ChatGPT generated 800 ideas per hour, compared to five by the humans. Now, this was not just a quantitative test but a qualitative test: at the end of the day, when they picked the top 10 percent of ideas generated across humans and ChatGPT, 87.5 percent of those best ideas came from ChatGPT. In addition, the cost per ChatGPT idea was $0.63, while the cost per human idea was $25, based on an average salary they assumed for these people. So ChatGPT won on speed, it won on quality, and it won on price. That's not very good news for anybody who considers themselves an innovation consultant, or just somebody who is an innovator. The flip side is that it will allow us as a society to innovate much faster, which overall is a great thing. And the last piece of news that has to do with the productivity AI brings is research done by the Nielsen Norman Group. They did three different studies across multiple companies, with the goal of evaluating the improvement in efficiency across different aspects of the business. Study number one looked at customer service. Study number two looked at writing routine business documents, like sales proposals. The third study looked at coding small software projects. The findings were not surprising in direction, knowing that working with AI is going to be more efficient than working without it, but the numbers are pretty astonishing: on average, productivity across these projects was 66 percent higher using AI versus not using AI, with a very big spread. Customer service gained only a 13.8 percent improvement in inquiries answered per hour, which is what they were checking.
Writing business documents, on the other hand, saw a 59 percent increase in efficiency, measured in documents created per hour. And writing code saw a 126 percent improvement, where what they measured was how many tasks a programmer can complete per week: that more than doubled using AI compared to not using it. What does that tell us? Well, if you can improve the efficiency of things across your business by 50, 60, or a hundred percent, but let's take the average of 66 percent, one of two things can happen. Either you find new clients that allow you to use the extra capacity to grow the business, which would be amazing. The reality is that's not always the case; it's definitely not the case right now in a problematic economy, and it's definitely not continuously the case across all sectors. Which means that companies who do not have the flexibility to grow fast enough to make use of the extra bandwidth will cut costs by letting people go. And anybody who says otherwise just does not understand how effective this technology is, meaning we will see significant job losses in the immediate future, and definitely moving forward. By the way, I know I said I'd already covered the last piece of news, but this relates directly to it: Forrester's 2023 generative AI job impact analysis anticipates that 2.4 million US jobs will be replaced by generative AI by the year 2030. I think this is totally underestimating it; I think the impact is going to be much bigger, and it's going to happen much sooner. But who am I to question Forrester's analysis? They also anticipate that 11 million jobs will be influenced and will require retraining to work alongside AI. I a hundred percent agree with that, and again, I think that will happen way faster and at a much higher scale.
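Editorial aside: the arithmetic behind that 66 percent average is simple enough to sketch. A productivity gain of 0.66 means each person completes 1.66 times the previous output, so a team's effective capacity, measured in pre-AI headcount, scales the same way. The numbers below are illustrative, using the study's average.

```python
# Illustrative arithmetic on the ~66% average productivity gain reported
# above: a team of 10 with a 0.66 gain has the effective capacity of
# 16.6 pre-AI workers. That surplus either absorbs growth or shows up
# as cost cuts, which is the podcast's point about job impact.

def capacity_multiplier(productivity_gain: float) -> float:
    """A gain of 0.66 means each worker now does 1.66x the prior output."""
    return 1.0 + productivity_gain

def headcount_equivalent(team_size: int, productivity_gain: float) -> float:
    """Effective size of the boosted team, in pre-AI headcount terms."""
    return team_size * capacity_multiplier(productivity_gain)
```

Run against the study's per-category figures (0.138 for customer service, 0.59 for documents, 1.26 for coding), the same function shows how unevenly that surplus is distributed.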
The good news is, if you're listening to this podcast, it means you're one of the people that actually cares and understands that the world we know today in business is changing, and changing fast. You understand you need to acquire new skills in order to learn how to work with AI and become more efficient, which means you have a much higher likelihood of staying ahead of the curve, keeping your job, or maybe even starting your own company or landing a better job within your company, because you will have those skills. That's it for this week. Have an amazing week, explore AI, share what you find with the world, and share it with me on LinkedIn.