Leveraging AI

144 | Hallucination-free business research - Better and faster decisions with AI with Bryan McCann

Isar Meitis, Bryan McCann Season 1 Episode 144

Unlock the power of AI to transform business research. In this exclusive session we'll walk you through a groundbreaking AI-driven research workflow designed to streamline complex decision-making. If your current process involves countless hours of data searching, compiling, and analysis, this webinar will introduce you to a faster, smarter way.

Our guest expert, Bryan McCann, CTO and co-founder of YOU.com, will demonstrate how custom agents revolutionize research by integrating and analyzing both public and proprietary data.

With years of experience in AI and NLP innovation, Bryan has led the charge in making AI solutions practical for business people, helping them save time while boosting accuracy. You'll see how AI can turn weeks of work into minutes, providing you with comprehensive decision decks at the click of a button.

Bryan will present real-life examples that illustrate how business people across industries can leverage AI to gain competitive insights faster than ever. Expect a hands-on demo, clear steps, and the chance to ask Bryan anything about implementing this tech in your own workflow.

About Leveraging AI

If you’ve enjoyed or benefited from some of the insights of this episode, leave us a five-star review on your favorite podcast platform, and let us know what you learned, found helpful, or liked most about this show!


Hello everyone, and welcome to another live episode of the Leveraging AI podcast, the podcast that shares practical, ethical ways to leverage AI to improve efficiency, grow your business, and advance your career. This is Isar Meitis, your host, and we've got a very interesting topic for you today that's going to combine the present and the future, and what a lot of people do today. Let's get started. Companies and organizations have many departments, right? And each department has multiple roles. And yet there are tasks that cut across almost every role in almost every department. One of these tasks is research. People need to research a specific client or a specific prospect in order to know what to offer them and what the best solutions are for them. We do market research, right? To understand where the market is going and where our niche is going, to know how to plan better from a strategic perspective. We research prospects to know what kind of deals to offer them, what kind of services would serve them best. We research potential investments, either personal investments or investments in companies or new technology or infrastructure. Almost anything you do in business, and a lot of what you do in your personal life, requires research. But research is not that easy to do because it has multiple steps. First of all, you need to find data sources. These data sources might be internal to your company, the different databases that you have, your CRM and so on, or they might be external, but you need to find those data sources. Then you need to find relevant, reliable data within those data sources. And because we have access to so much data, that's not easy to do either. And then you need to analyze the data to figure out what actually helps you in whatever it is that you're trying to do, and then you need to summarize the data. And in many cases, that data comes in different shapes and sizes and forms, so it comes in different formats.
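The research steps Isar lists here, finding sources, filtering for relevance, analyzing, and summarizing, can be sketched as a small pipeline. This is purely illustrative Python; the function names and placeholder data are invented for the example and do not represent any real product.

```python
# A minimal sketch of the manual research pipeline described above.
# All function bodies are illustrative placeholders, not a real product API.

def find_sources(question):
    # In practice: internal databases, CRM exports, web search, and so on.
    return ["crm_export.csv", "market_report.pdf", "news_article.html"]

def filter_relevant(sources, question):
    # Keep only sources likely to bear on the question (toy rule here).
    return [s for s in sources if not s.endswith(".html")]

def analyze(sources):
    # Normalize different formats into one structure before analysis.
    return {s: f"key findings from {s}" for s in sources}

def summarize(findings):
    # Collapse per-source findings into one cohesive brief.
    return "; ".join(findings.values())

def research(question):
    sources = find_sources(question)
    relevant = filter_relevant(sources, question)
    return summarize(analyze(relevant))

print(research("Should we enter the EU market?"))
```

The point of the sketch is that each stage is a separate manual chore today, which is exactly the surface area an agent can take over.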
And so it's not very easy to go through all of them to actually create a cohesive outcome that helps you make proper decisions. That's why most people are not very good at research: because it's not easy to do. The good news is AI is actually really good at many of these steps. AI can help you find those resources, summarize data, analyze it in different formats, and so on. But that's just step one. The reality is we're walking into a future, which is now becoming the present, of agents within AI. And agents can do a lot of these steps for us. So it's not that we have to apply multiple AI tools at multiple steps of the process; it will figure it out on its own. Now, our guest today, Bryan McCann, is potentially the perfect person on the planet to talk to us about agents and how to use them in research with AI. His background is multiple years in AI research and AI for research, including several years at Stanford, as well as being the lead research scientist at Salesforce. So he has done this both on the academia side and on the business side for more than a few years. And he's currently the CTO at You.com, which is a company that allows you to do research using AI agents. So he knows literally more than 99.9 percent of the people on the planet about this topic. And since I think this is one of the biggest breakthroughs we're going to have with AI, and we already have access to it right now, I find this really exciting. And so I'm really excited to welcome Bryan McCann to the show. Bryan, welcome to Leveraging AI.

Thank you so much for having me. Yeah, I'm stoked to talk about the technology that's going on right now, and maybe the future a little bit as well.

Yeah, I think we're at the point where the lines between the future and the present become very blurry, where people are talking about, oh, we'll have AI agents and they'll be able to do all these things. And a company like yours has been doing it for a while.
So I don't see what's futuristic about that. And there's a lot of things like that with AI right now, where many people still haven't figured out the basic usage of AI. I teach courses. I have a cohort starting on Monday that I teach to business people. By the way, for those of you who are with us right now, or those of you listening to the recording in time: we have the AI Business Transformation course that we've been teaching since April of last year. We have another cohort starting this Monday. So if you still want to join the course, you can join with promo code AI2025 to get a hundred dollars off. And this course teaches people how to use AI. But the reason I'm saying all of that is not to promote the course, but to say that there are a lot of people who still don't know how to prompt ChatGPT. And yet there are tools like yours out there that allow you to just send an AI to do a task, and it will figure it out, do the whole process, and give you a beautiful summary. And so I think from an education perspective, what we're going to do today is going to be extremely valuable to people, to understand what's possible today. And like you're saying, okay, when this gets perfect, what does that mean in the next two to three years?

Yeah, we've seen a huge change in user behavior. You.com started in August 2020. That was when my co-founder and I were excited enough about the inflection point that was happening in the research world that we just had to get out in the wild and start experimenting with what new products would look like, and new ways of search, but really more about just interacting with information, broadly speaking. What we saw, or part of the hypothesis, was that if AI could understand language better, it could do a lot more for the user.
So we always talked about moving from being a search engine to being more of a do engine, something that actually does something for you and fulfills that intent end to end, all the way, so that it's just done. And there are various versions of this, right? There are all these assistants. I think a year or two ago, when ChatGPT first came out, people talked a lot about chat as assistants, and now the current word is agents. There are some differences between these things, and we can dig into those. But even those lines blur a little bit because, once it's cool to be an agent, a lot of assistants become agents, and a lot of just plain software seems to be becoming agents. But really the exciting part about agents themselves is letting the AI decide in a very different way than we used to in the past. Over the course of these last four years, we've seen people's queries change, broadly speaking, going from basic Google-like keyword searches to really complicated prompts, to entire workflows. And that's where we live now, as we've transformed over the years as well, with the technology, trying to get at the heart of a deep research use case where it's not so much about quick factoid answers that you might have gotten from Google before, or some of the other chat products out there, even something like ChatGPT. We're not optimizing our system, which is dozens of models behind the scenes, to give you the absolute fastest, shortest, most concise answer to your question. Instead, what we've been seeing with our users, and it dovetails really nicely with where the technology seems to be going, is this desire to have absolutely accurate responses, leaning slightly towards comprehensiveness over conciseness. And in some cases that's actually quite extreme, so extreme comprehensiveness, where we're not giving you the answer the way it sometimes seemed Google was becoming, like the information god.
Before all this started happening, it was: oh, you go to Google, Google has the answer, and that's just the answer. You accept it, you take it for what it is. And maybe it's one little line in a box. Instead, now we're seeing an appetite for: show me all of the information that you can possibly find on the internet, or within my company's data, or both. Go to every tool you can possibly think of and bring it all back. Still synthesize it, obviously, and summarize it to some extent. But provide extreme transparency and attribution so that, while we're in this phase of people gaining trust with this technology, you can click on all those citations. You can click on the charts. You can look and see where that data is coming from. Which I think is exciting, because it opens up a lot of new things we can do, more complex technology we can deploy. But it also feels like maybe we're growing up as an information society. It's, oh yes, it is really important to be able to track attribution and transparency. And even that brings up a lot of questions about where that information is really coming from behind the scenes. So it's a good one for us, I think.

I want to touch on two things you said that I think are critical, and then we're going to dive right into how this is done and what your tool actually allows people to do, and so on. One is really the question of agents versus assistants, right? I'll say what I think the definition is, and then I would love to hear your thoughts. But before that, I will say my two cents about the last sentence that you said. I think we live in a world where we're getting more and more extreme with opinions, whether it's in politics or thoughts about society or thoughts about many different things. And it's because social media, and with that I include YouTube as well, serves us what we want to see.
So we have consistently, for the past X number of years, seen more and more of the same, instead of seeing a plural version of the world, which is the real world, and which would actually serve us better as a society. So I think moving in a direction where you're going to get a variety of opinions, a variety of sources, a variety of information, and not pick the one, forget about pick, not be served the one that aligns with your existing thoughts, is actually good for society. So that's my two cents. But now for agents. I think to me the biggest differences are the following. One, it can do more complex tasks, meaning it knows how to take a goal, versus a very simple task, analyze it, and break it down for itself into the tasks it needs to perform, where an assistant doesn't know how to do that; you have to give it the step-by-step process. Two, it knows how to monitor itself and correct, right? So as it's performing the task: oh no, this didn't work well, let me try something else. Which some of the assistants know how to do to an extent, but not really. And three, which to me is the biggest difference, is it has access to different tools, meaning it doesn't live in the box it's in right now. It can go to your CRM, it can go to the internet, it can go to different research sources, it can enter information into a form. It can do stuff that the assistants cannot do at this point. So to me, these are the biggest differences. If you have anything to add, or to correct me on, I'll gladly hear your thoughts about it.

Yeah, I think I pretty much agree with you there. The way that I often frame it for people is: assistants are just this technology, in some form, that we build to help people, right? Whether it was Siri, Cortana, all of these kind of day-to-day tasks or work tasks.
That's, to me, a closer-to-the-user framing. Whereas you might use agents to do some of that stuff; maybe there are multiple agents behind the scenes that are powering assistants. So for me, although the word agent is a little bit more hypey right now, I actually think the distinction becomes quite blurred and probably should not exist so much for an end user, like on the consumer side. But technically speaking, I think you're right on. Assistants tend to be very structured, the way that they've been made in the past. They're highly structured, they don't have a lot of flexibility. A lot of times it's mostly rule-based or something like that. And then there was a wave of trying to introduce LLMs into it, and some deep learning, but it still didn't look like the assistant was making the decision. It looked like the code you were writing dictated what to do, maybe based on a couple of models that were classifying which decisions to make. But in an agentic setup, for agents, I feel like the key there is the agency, which you alluded to, right? It's not just a classifier that's set up to decide between a few options. This LLM gets to ingest context and decide what to do next in a far more open-ended way, whether that's picking up new tools or spending more time thinking, whatever it is. The space of possibilities for that agent is much greater. And so it feels like it has more agency to it.

Okay, awesome. So now that we've had this amazing theoretical conversation, let's bring it down to earth and show people how to actually use these kinds of tools for research. And I know you have different examples for us and different ideas on how to show it. So I will give you the stage and let you show whatever you want to show. And by the way, for those of you with us live, there's a bunch of people both on LinkedIn and on Zoom.
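The "agency" distinction Bryan draws, a model choosing its next action from an open-ended tool set rather than following hard-coded rules, can be sketched as a simple loop. Everything here is illustrative: `decide` stands in for an LLM call, and the tools are invented placeholders, not any real agent framework.

```python
# Hedged sketch of an agentic loop: the model, not the code, picks the
# next tool. `decide` is a placeholder for an LLM call.

TOOLS = {
    "web_search": lambda goal: f"search results for {goal!r}",
    "read_file": lambda goal: f"file contents relevant to {goal!r}",
    "finish": lambda goal: f"final answer for {goal!r}",
}

def decide(goal, history):
    # A real agent would ask an LLM, given the goal and the history of
    # observations so far, which tool to invoke next. Canned policy here.
    if not history:
        return "web_search"
    if len(history) == 1:
        return "read_file"
    return "finish"

def run_agent(goal, max_steps=5):
    history = []
    for _ in range(max_steps):
        action = decide(goal, history)
        observation = TOOLS[action](goal)
        history.append((action, observation))
        if action == "finish":
            break
    return history

steps = run_agent("research NVIDIA")
print([action for action, _ in steps])
```

A rule-based assistant fixes this sequence in code; an agent makes `decide` genuinely open-ended, which is exactly what makes it both powerful and harder to keep grounded.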
First of all, if you're just listening to this recording, we do this every single Thursday, so you can join us live with people like Bryan, who are really at the top of the game when it comes to AI and the implementation of it. You can join us here, either on LinkedIn or on Zoom, every Thursday at noon Eastern.

So I'll show a pretty standard example for us. If you go to You.com, the kind of setup that I'm talking about is this research setup. I recorded a little video for us here so that we can just zoom through it. We have Smart, we have Genius, you can play with different models if you want to. But the one that I'm really passionate about myself is this Research mode. That's where I'm pouring a lot of our time and energy, into seeing what we can do. In this example, we're just trying to see: should we invest in NVIDIA? And one of the things that we try to really emphasize is that users can upload their own data. So you can transfer this over to enterprise use cases as well. But you also have access to public web data. So this one's going to show how we're uploading NVIDIA's Q2 quarterly earnings and things like that. And so what the research agent is going to do is go through that file, go through that PDF. But it's also going to generate a bunch of things that, for this very abstract question, you probably should be asking or doing. And this is part of the agentic workflow, for the researcher at least. Okay, we're going to extract these key financial metrics. What's happening behind the scenes there? We're going to parse the PDF, parse all the images, parse all the plots. We're going to pull those in, summarize them, try to find the key metrics, probably look online to determine even what those key metrics should be. So there's a whole suite of models going into just pulling information from these. We can dig into those later, too.
Then we're going to go do a bunch of research on recent stock performance, because you don't want this to be out of date; every minute, if you're in a waking hour, these things can change. We're pulling in recent news, we're pulling in things from the competitors, et cetera, et cetera. All of these are not set in stone; these are just generated by an LLM, by the agent.

Just a second, for those of you who are listening to this on the podcast: what we're basically seeing is we wrote a question, should we invest in NVIDIA, uploaded a file, clicked go, and it generated a list of ten things it is going to do on its own, based on the very short and general request that we made. And it finds on its own the things it needs to do in order to provide us the best answer.

And to harken back to what you were saying earlier, this is part of bridging that gap for a lot of users who themselves don't necessarily know how to get the most out of this technology yet, right? I'd argue you should be able to ask a very basic question like that and then have the agent take care of the rest for you. So moving forward, these workflows, as you can see, are where you see what's happening with the agent. So there's a thinking step. It's researching now, actually going out and getting all of these sources live. So we're going out to the web and crawling whatever we need. You can see a little list of the sources; it looks like we're crawling about 140 in this case. So we've gone out, picked up those pages, mixed it with the financial PDF that we uploaded ourselves as a file. And then, instead of getting a quick answer box or something like that, you're going to get something optimized, again, for comprehensiveness. And you can see the citations here. These are the things that are specifically pointing back to the file you uploaded, because they're all pointing to the first source.
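The plan-then-execute pattern shown in this demo, expanding a vague question into concrete research subtasks before any retrieval happens, can be sketched like this. The subtask list and function names are invented for illustration; `generate_plan` stands in for an LLM call, not You.com's actual implementation.

```python
# Sketch of plan-then-execute: expand a vague question into subtasks,
# then gather sources per subtask. All data here is illustrative.

def generate_plan(question):
    # A real agent would prompt an LLM; these are example subtasks.
    return [
        "extract key financial metrics from the uploaded filing",
        "research recent stock performance",
        "pull in recent news",
        "compare against competitors",
    ]

def execute(subtask):
    # Each subtask might parse a PDF, crawl pages live, or call a tool.
    return {"subtask": subtask, "sources": [f"source for {subtask!r}"]}

def research_agent(question):
    results = [execute(task) for task in generate_plan(question)]
    total_sources = sum(len(r["sources"]) for r in results)
    return results, total_sources

results, n_sources = research_agent("Should we invest in NVIDIA?")
print(n_sources)
```

In the real demo each subtask fans out to dozens of crawled pages, which is how a one-line question ends up grounded in roughly 140 sources.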
All of these are clickable, so that you can actually go into the source and see where we're pulling that information from. And then, as you go further down into growth prospects for AI market leadership, data center growth, now we're pulling in information from sources 2, 3, 4, 5, 6. These are coming from the sources that we've gone out and collected from the public web. And you can see that the response just keeps going for a while. It took a few seconds to actually go do that research and crawl all those pages, but the type of answer you're getting is much, much closer to a research report than it is a quick Google factoid-like answer. And it's got that attribution and transparency baked in, so that users can build trust with it over time, which I think is really important as you give these agents more control and more deciding power over what's happening. In the future, when it's making use of your credit cards and your bank accounts and all sorts of other things, you're going to want to have built up this sense of trust with the agent. So that's one that uses the public web and some files that you upload. And you can upload a lot of files. If you're in the enterprise setting, you can upload many, many files; we've had people uploading 20 million PDFs and 20 million CSVs that go with those PDFs on clinical trial data. And then you'll see the agents having additional tools at their disposal, text-to-SQL tools that perhaps we've created, or customers have created, and that gets back into that kind of tool usage you were alluding to as well.

Okay, so I've got a bunch of questions, because this was great, right? A couple more demos later, too.
Because if you're in any business, this gives you an idea of the things you can do: you give it your own data, but on its own it knows how to go search for relevant data externally, and it knows how to combine all of it together into a very detailed report that is structured properly, that is easy to follow, and that also has links to the sources. That's obviously very powerful. So I have three immediate questions, and you can answer them in whatever order you want. The first one is: you alluded to the fact that it's accurate and grounded information. And that's one of the biggest problems that people have with these tools, right? It feels like the more open the question is, the more they hallucinate; they make stuff up because you give them the freedom to do that. And so I'm wondering, in a tool like this, that actually starts with research, starts with actual factual information, how much do you see hallucination reduced compared to when you're just using ChatGPT? That's question number one. Let's start with that, and then I'll ask you the follow-up questions.

Yeah. We have essentially the lowest hallucination rates out there. There are actually a couple of independent academic research papers written that measured us against ChatGPT, Perplexity, and I think Bing Chat as well. I can find that and post a link here, but yeah, that basically shows that we have the most accurate and faithful answers. I can talk a little bit about how we're doing that.

Yeah, I'll look it up while we're talking about it.

I don't want to misquote them, but the way that we're doing that is we're tapping into this trade-off, right? We're taking a little bit more time. This is a paid version of the service, so there are more complex models behind the scenes that are only really enabled when there's some money going into it. But then it's not just a single
RAG setup. It's not: okay, we're going to get a bunch of sources, drop them into a context window, and have the LLM summarize things. We're identifying, across a huge range of intents for the user, what category of question this is falling into, and that basic intent is used for many decisions down the line. And that intent-understanding system is the system, essentially, that we've been building even before ChatGPT, going back to 2020. That was the first system, and it has now seen all this data of user behavior changing. And I saw You.com be very popular with developers and students and researchers broadly, and financial users. So it's seen a lot of this data on: okay, do I need to go to the public web? Do I need to personalize this answer for this user? Is this a developer-type query? Is this a biotech kind of query? And with that information, we then dynamically construct prompts. So prompts for us are not really something that's set in stone anymore. It's not something that even the user is really writing; we're trying to get it so they can ask basic questions and not worry about prompting. The system generates prompts based on all of this information and constructs them. And then, once we get candidate sources, because we take a little bit more time here, we actually go crawl those pages live, so it's up to date. A lot of search engines or chat services are going to use a more traditional search index to pull in information from whenever the last time they crawled was; they're not going to crawl it live and get the most updated information. So that's another first step towards making sure there's freshness of the information itself. Once you have it, and you have those different sources, part of what is making Research mode hallucinate less is that we're not tuning this model for making a decision and making choices between different sources.
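The intent-driven prompt construction Bryan describes, classify the query first, then assemble the prompt from that intent plus freshly crawled sources, can be sketched as follows. The intent categories, routing rules, and prompt templates are all invented for the example; a real system would use trained models rather than keyword checks.

```python
# Illustrative sketch of intent classification feeding dynamic prompt
# construction. The user never writes this prompt; the system does.

def classify_intent(query):
    # Placeholder for a trained intent model: crude keyword routing.
    q = query.lower()
    if any(word in q for word in ("stock", "invest", "revenue")):
        return "financial"
    if any(word in q for word in ("error", "function", "api")):
        return "developer"
    return "general"

def build_prompt(query, intent, fresh_sources):
    # The prompt is assembled per-query from the intent and live sources.
    header = {
        "financial": "Answer with figures, dates, and cited filings.",
        "developer": "Answer with runnable code and doc references.",
        "general": "Answer comprehensively with cited sources.",
    }[intent]
    sources = "\n".join(f"[{i+1}] {s}" for i, s in enumerate(fresh_sources))
    return f"{header}\n\nSources:\n{sources}\n\nQuestion: {query}"

query = "Should we invest in NVIDIA?"
prompt = build_prompt(query, classify_intent(query), [
    "nvidia earnings page (crawled live)",
    "news wire on NVDA (crawled live)",
])
print(prompt.splitlines()[0])
```

The design point is that prompting moves from the user's job to the system's job: the same vague question produces different prompts depending on the detected intent.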
It's okay, from a comprehensiveness standpoint, for it to say: this source says this, and that disagrees with this other source, right? So even that allows it to not necessarily say this is right, this is wrong, or make something else up entirely. It's really tuned to say: this is what this source says. And maybe these four or five sources agree, so it's doing that work for you, but it's not oversimplifying to some extent. And for any particular fact, we have models that run to determine whether the LLM's response is actually entailed by that fact, and vice versa, right? Does what the LLM says entail that fact? You try to make sure that there's not any extra information in either direction, because that's hallucination. So we're minimizing those as well. And when you do that across, as you saw, 180, 200 sources, you start to narrow in on: these are the things that 90 percent of those sources are agreeing on. Maybe there are some outside perspectives, but if you've tuned the model, and you prune any part of the response that deviates from being grounded and faithful in that way, you get this kind of research report.

Awesome. Great answer. I think it's important for people to understand what's happening behind the scenes in order to get those results. The other question that I have, which was actually asked by somebody in the chat as well, is sources. And I have two variations of that question. One, on the external world: can I point it to specific sources, either in a setup, I only want it to look at this, or as I'm writing the request in real time, saying I just want to look at these sources? But also, from the internal perspective, can I give it access to an API, to my CRM or my ERP or my customer service ticketing system, or whatever other enterprise system I'm using? Can I connect it so it's always available, so I can query that as well? Or do I have to upload documents and do stuff like that?

Yeah, great questions.
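The grounding step described here, checking each statement against the sources and pruning anything unsupported, can be sketched like this. The word-overlap check is a deliberately crude stand-in for a trained entailment model, and the statements and sources are invented for the example.

```python
# Sketch of entailment-based pruning: keep only statements that enough
# sources support. `is_entailed` is a placeholder for a real entailment
# model run in both directions.

def is_entailed(statement, source_text):
    # Crude heuristic: enough word overlap counts as support.
    words = set(statement.lower().split())
    return len(words & set(source_text.lower().split())) >= len(words) // 2

def prune_ungrounded(statements, sources, min_support=2):
    kept = []
    for statement in statements:
        support = sum(is_entailed(statement, src) for src in sources)
        if support >= min_support:
            kept.append(statement)
    return kept

sources = [
    "nvidia revenue grew strongly in the data center segment",
    "data center revenue at nvidia grew again this quarter",
    "analysts disagree about valuation",
]
statements = [
    "nvidia data center revenue grew",
    "nvidia acquired a competitor",  # unsupported: should be pruned
]
print(prune_ungrounded(statements, sources))
```

Run across a couple hundred sources, this kind of filter is what narrows the draft down to the claims that most sources actually agree on.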
And then I found that research paper; I'll show you that too. So for internal documents, you can upload them yourself through this interface, the way I just showed in that demo. Or, in some of our enterprise settings, and this will probably come further and further out into the future as more out-of-the-box solutions as well, you can just essentially have a bucket, a blob, or some sort of cloud storage bucket you're constantly uploading data to, and we handle all of the indexing and permissioning and everything outside of that. So you can have those kinds of cron jobs running on your side that are constantly pushing information in. Where there's a limit right now is just on the kinds of connectors we have already pre-built. If you want to connect your Gmail or something like that, that one's coming soon. But if there's some obscure CRM service, that's probably not in our system yet. You can create your own export that every day runs a query, finds the differences, and pours it into this data lake or repository or something, and then we know how to crawl that and find information there. So there's still a workaround even where we haven't built the connector ourselves. And there are custom integrations or connectors we've built in partnership with companies, where they have specific tools that, say, their biotech researchers use, and then we point this agent at that as another tool it can use. And we've seen really great results with things like, as I was mentioning before, text-to-SQL, if there's a large database, like a SQL database, or if they have a bunch of spreadsheets or something like that. Very effective, and it improves accuracy a lot. And some have their own sorts of dashboarding, plotting, charting kinds of technology, and being able to just read from those charts and plots can also make a pretty big difference over trying to do some lossy transformation from that into another format before you pass it into an LLM.
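The text-to-SQL tool pattern mentioned here can be illustrated minimally with an in-memory database. The schema, data, and the canned translation in `to_sql` are all invented for the example; in a real system an LLM would be prompted with the schema to generate the SQL.

```python
# Minimal illustration of a text-to-SQL tool: a natural-language question
# becomes a SQL query against structured data. All data is invented.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trials (drug TEXT, phase INTEGER, enrolled INTEGER)")
conn.executemany(
    "INSERT INTO trials VALUES (?, ?, ?)",
    [("drug_a", 3, 450), ("drug_b", 2, 120), ("drug_c", 3, 800)],
)

def to_sql(question):
    # Placeholder for an LLM given the schema; canned for this one question.
    assert "phase 3" in question
    return "SELECT drug, enrolled FROM trials WHERE phase = 3 ORDER BY enrolled DESC"

def ask(question):
    return conn.execute(to_sql(question)).fetchall()

print(ask("Which phase 3 trials enrolled the most patients?"))
```

The accuracy win Bryan mentions comes from letting the database do the aggregation exactly, instead of asking an LLM to do arithmetic over millions of rows stuffed into its context.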
And I'll show you real quick this paper that was, perhaps surprisingly, pointing out that SearchGPT was the least accurate system when using citations. They go through and analyze all of these different situations around citation accuracy. Sometimes there are uncited sources, sometimes there are one-sided answers, things like this. There's You.com, Bing, Perplexity, and SearchGPT. And you can see the most green and the least red is over here on the You.com side, on all of these different metrics. Whether it's citations, actually being able to trust the citations, the kinds of sources we bring in, which is really important, right? Because if you're citing untrustworthy sources, then that kind of leads to an infinite regress problem: where does this information actually come from, and how do I actually trust it? And the answers themselves, the actual construction of the answer: whether you're being one-sided, or you're optimizing for representing the different perspectives amongst the sources; whether you're being overconfident, or you're willing to acknowledge that it's not a hundred percent true or a hundred percent confirmed; and the sense of relevance of each statement as well, because you could imagine saying a lot of things that are relevant, but then maybe they're in a sea of irrelevant statements.

Awesome, this is great. Yeah, there's a question: was that research done on the Smart, Genius, or Research mode of the product, the particular thing you just showed?

I'm glad they asked, because this one was, I believe, on the Smart version. And so the Research version is significantly better than that.

Interesting. So this is like the low end, and it still beats all the other search tools. Yeah. Let's do this. This was great, but a lot of theory. Let's dive into one more example.
So people have a different example than the one we did so far, one that's not investing related, and then we'll be good. So people have just another point of reference, and I think then we can let you go.

Yeah. So let's look at whether you should buy some solar panels. If you live in the Bay Area, do you buy solar panels for your house? A little bit of a toy example, to some extent, but you can see it working through this workflow, thinking.

Again, for those of you not seeing this, the question is: I live in the Bay Area, should I buy solar panels for my house? It's a very generic question. It doesn't give a specific address, it doesn't say what kind of solar panels, it doesn't say how much electricity you consume. It's a very general question, and just like before, and then I'll let you take it from here, it generates all the stuff that it wants to do.

Yeah. So we went out and got 120 sources, which again is already quite a few more sources than normal; even SearchGPT and Bing Chat are not going to go get that many, and they're usually not going to be crawling those sources live. And then it's writing out this plan around: what do I actually need to know to answer this question? I should investigate the climate and average sunlight hours in the Bay Area. I should research current electricity rates, energy costs, federal rebates and incentives, all these different things. And this one's pretty fun, because what we're going to see is it starts generating some code here, and this code is going to get converted into a chart, which is being pulled from all this data from all these other sources. And what I love about this is that even those charts have the same functionality: if you were to come up here and see, okay, in San Francisco it was 47 cents per kilowatt-hour on average in this chart, you can actually click on that and go to the source from which we've gotten that information.
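The clickable-chart idea described here, where every data point keeps a link back to the source it came from, can be sketched as a data structure. The rates, cities, and URLs below are invented for the example; this is not the actual chart format You.com generates.

```python
# Sketch of chart data where each point carries its own citation, so a
# click on a data point can resolve to the page it was extracted from.

chart = {
    "title": "Average residential electricity rate",
    "unit": "USD per kWh",
    "points": [
        {"city": "San Francisco", "value": 0.47,
         "source": "https://example.com/sf-rates"},
        {"city": "Oakland", "value": 0.45,
         "source": "https://example.com/oak-rates"},
    ],
}

def source_for(chart, city):
    # A click on a data point in the UI would resolve to this lookup.
    for point in chart["points"]:
        if point["city"] == city:
            return point["source"]
    raise KeyError(city)

print(source_for(chart, "San Francisco"))
```

Keeping the citation attached to the datum, rather than to the whole chart, is what lets attribution survive the agent's synthesis step.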
Even for these generated charts, there's also a little bit of that agentic flavor to what's going on here, because it chose to write out this chart, right? It chose which information to pull in. And being able to do that, and then ask follow-up questions that are personal to you, as to whether it's worth it based on your bill and your income, you can get all sorts of interesting functionality. You could do this yourself, right? You could even use ChatGPT or something like that to try to flesh out the kind of questions you would want to ask. But being able to just have it all done in a matter of minutes is the point. This one too, you can just click on the data point there. You wait a little bit longer, but the results are so much better and so much more work is being done for you that it's worth waiting.

No, I think this is amazing. And I think this is the kind of thing people need to see, because there's more and more chatter about agents, and a lot of people are either clueless or confused about what that means. And I think the two examples you gave, if we generalize them, apply to any business question you want to ask: should we raise the prices of this thing that we're selling? Should we change our customer service rules? What are the things most of our customers complain about when it comes to customer service, and is it the same in the rest of the industry? And then it will know how to research your stuff, plus whatever external sources, and really give you what you need. And what I really like about this is, just like you said in the beginning, it doesn't give you the "here's the answer," but it tells you, okay, here are the things you need to know for you to make a decision. Which is very different than "here's the answer." And I think for any business decision, that's what you want to get. You don't want the machine to tell you what you need to do.
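[Editor's note: the agentic flow described in this demo — write a plan, fetch live sources, attach a citation to every finding — can be sketched roughly as follows. Everything here is hypothetical: the plan is canned, the search step is stubbed, and none of it is you.com's actual implementation.]

```python
# Hypothetical sketch of a plan-then-retrieve research loop. The search
# and planning steps are stubbed so the control flow is the focus.

def make_plan(question):
    """A real agent would ask an LLM to break the question into
    sub-questions; here we return a canned plan for the solar example."""
    return [
        "average sunlight hours in the Bay Area",
        "current electricity rates in California",
        "federal solar incentives",
    ]

def retrieve(sub_question):
    """Stand-in for a live web search returning (snippet, url) pairs."""
    return [("placeholder data about " + sub_question,
             "https://example.com/source")]

def research(question):
    """Every finding keeps a pointer back to its source, which is what
    makes the clickable-citation behavior in the demo possible."""
    findings = []
    for step in make_plan(question):
        for snippet, url in retrieve(step):
            findings.append({"step": step, "snippet": snippet, "source": url})
    return findings

report = research("I live in the Bay Area. Should I buy solar panels for my house?")
for item in report:
    print("-", item["step"], "->", item["source"])
```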
You need the machine to help you make a decision by providing you as much information as possible, in a way that's helpful and reliable. And this is what this does. And again, it does it in an amazing way. And like I said, it's a little longer, but when people say a little longer, it's not three weeks, right? Instead of giving me the answer in six seconds, it takes a minute.

And we see the hunger. We have some customers that are like, oh, I don't even need this answer in a minute; this is a week's worth of work, so what if I gave you an hour? There's this completely different shift in thinking amongst users, which is really exciting. I think that's what's going to really open up the new possibilities. And you see a similar theme, right, with OpenAI's o1 models, yeah, like extending inference time. Even though they're focused more on this mathematical reasoning component, we've touched on this similar theme of: okay, if we give it more time, what can we do? And then it's really just about keeping it from hallucinating at each step and preventing those compounding errors. So you just have to have some railroad tracks for the train. They're very powerful new engines; it's just that sometimes you have to have a track there for them.

Yeah, Alan in the chat is saying that he can't remember the last time he was patient with his tools, so he's surprised by the fact that people are. And I'll tell you what I find: I usually use multiple tools at the same time, and I work on other stuff if these tools take longer. The tools that take the longest for me are obviously when I'm rendering videos, and that might take, using some of the free tools, 45 minutes. If you use the paid tools, it's still going to be three to ten minutes. And I'm cool with that. It's running in the background. It's doing its thing.
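[Editor's note: the "railroad tracks" idea above — verifying each step of a long chain so one hallucination can't compound into later steps — can be illustrated with a toy loop like this. It is purely hypothetical; the verbatim-match check is the simplest possible stand-in for a real grounding check.]

```python
# Toy illustration of step-wise verification in a multi-step chain:
# any step whose output can't be grounded in the sources is discarded
# instead of being built on, so errors don't compound.

def verify(claim, sources):
    """Crude grounding check: a claim passes if some source contains it
    verbatim. A real system would use a much richer entailment check."""
    return any(claim in s for s in sources)

def run_chain(steps, sources):
    accepted = []
    for produce in steps:
        claim = produce(accepted)  # each step sees prior accepted claims
        if verify(claim, sources):
            accepted.append(claim)
        # an unverified claim is dropped rather than propagated
    return accepted

sources = [
    "Bay Area homes average 5.5 peak sun hours per day",
    "PG&E residential rates exceed 40 cents per kWh",
]
steps = [
    lambda prev: "Bay Area homes average 5.5 peak sun hours per day",
    lambda prev: "Solar panels generate power at night",  # hallucinated step
    lambda prev: "PG&E residential rates exceed 40 cents per kWh",
]
print(run_chain(steps, sources))  # the hallucinated step is filtered out
```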
I have it open in a tab somewhere, and I will go to my emails and do other stuff. I always think about it as having more assistants, right? If I send an email to an assistant, I don't expect the assistant to give me the answer six seconds after. I know it's going to happen, because they're my assistants and they're going to try to help me, but it's not going to be immediate. And I want them to take the time to give me the right answer versus a quick answer. That's literally the way I view it: either as an assistant or as a consultant. If I'm sending an email to a consulting company, they're not going to give me an answer in six seconds. They're going to take the time, and they're going to say, hey, we're working on it, which is what this thing does, and then it will get back to me when it actually has a good answer.

This was fantastic, Bryan. I think this was highly educational and valuable to people. If people want to follow you, want to learn more about you.com, want to connect with you, what are the best ways to do that?

Yeah. The best thing is, you should go to you.com and you should play with it. That's the first thing. You can definitely find me on LinkedIn; that's where I do most of my little videos and stuff as well. And all my links are actually on my website, so you could just go to my personal website, bryanmccann.org, and you can read about my research, you can read a little bit about you.com and how we started the company. You can find links to LinkedIn and Twitter and all that as well.

Okay, there's one last question that I think is cool and interesting, which is: why did you call it you.com?

Yeah, I think it's a good question. Richard, my co-founder, and I, when we were starting the company, I would say it was more serendipitous than anything. We wanted to start this next-gen search AI company that would be able to do more for you.
And there were ideas in our head around having this be something like a digital version of you, or being able to do all the things that you would do online, but more personalized for you. So this was a you-centric product: instead of you being the product, like you are for Google and some of those others, this is a product for you that actually works for you. And then, coincidentally, the main investor of our seed round, a guy that we had worked with for four years at Salesforce, his name's Marc Benioff, he'd had the domain you.com for 25 years. And yeah, it just worked out. That was part of his investment, an in-kind investment: here's the domain that I own.

That's awesome. That's a great story. So yeah, just before we go, if you're still on this call and if you're still interested, if you want to join us, there's a link in the chat, or just go to multiply.ai and click on courses and you can find it. It's an amazing course. We've been teaching it since April of last year, so hundreds of people have been through the course and are making a difference in their companies with AI because of it. But with that, thank you so much, Bryan. This was really brilliant. Those of you who don't know you.com and are listening to this, hopefully you're going to go and check it out, because it's an amazing research tool. And there's a free version; you can do a lot of stuff with the free version. So thank you, thank you, thank you for taking the time and sharing all that information with us.

Thanks so much for having me.

Thanks for listening, everyone that's online. Thanks everyone. Have a great day.
