Could AI Be Your Most Valuable (Yet Affordable) Business Advisor?
Finding world-class advisors to help grow your business often costs a fortune. But what if AI could give you instant access to an elite personal board of experts...without breaking the bank?
On this episode of Leveraging AI, Isar Meitis unlocks the power of large language models like ChatGPT and Claude 2 with Raphaël MANSUY to get tailored advice from an AI-powered dream team of marketing, legal, HR, and other genius advisors - all completely customized for your specific business needs!
Raphaël MANSUY is the CTO and Co-Founder of Elitison and has years of experience leveraging AI. He is a seasoned data and AI strategist on a mission to democratize data management and AI.
About Leveraging AI
If you’ve enjoyed or benefited from some of the insights of this episode, leave us a five-star review on your favorite podcast platform, and let us know what you learned, found helpful, or liked most about this show!
Hello and welcome to Leveraging AI. This is Isar Meitis, your host, and I've got a special and very interesting episode for you today. Wouldn't you like to have people like Jim Collins, Elon Musk, Seth Godin, Gary Vee, Jack Welch, Jeff Bezos, Gino Wickman - people like that - on your advisory board? We all would want that, or at least some of them, or at least the heroes and the people that we really believe in and that we think have done things in a way that we would like to follow. Well, that's obviously impossible, right? Because these people are either dead or would require a lot more money and influence than most of us have in order to have them on our advisory board. Well, in the current era of generative AI, you can actually have these people, or at least their digital representation, to consult with on any business or strategy question that you have. If you want to learn how to do that, stick around, because this is exactly what we're going to share in this episode. This episode is brought to you by Multiplai. Multiplai is spelled like the word multiply, but instead of Y at the end, it's AI at the end, so M U L T I P L A I dot AI. If you go there, you'll be able to find multiple AI educational concepts: anything from this podcast to many other forms of education, such as board and C-suite level masterminds, courses, and consulting services that help businesses understand and implement AI, from the basic tactical level all the way to business strategy. At the end of the episode, like always, I will share the most exciting AI news from this week. And now let's learn how you can create the dream advisory board for your business.Isar Meitis:
Hello and welcome to Leveraging AI, the podcast that shares practical, ethical ways to leverage AI to improve efficiency, grow your business, and advance your career. This is Isar Meitis, your host, and we have a very special and interesting topic today. Maybe the most underestimated and underutilized capability of large language models like ChatGPT and Claude and Bard, etc., is using them as an ideation and brainstorming partner. Most people jump first and foremost to generating content and maybe summarizing documents, but the reality is they are extremely powerful tools to ideate and iterate around ideas that you want to develop, either for yourself or for your business. To put some numbers on this, a recent study from last week, actually, done by professors of the Wharton School at the University of Pennsylvania, compared innovative ideas generated by ChatGPT to those of their MBA students. So they ran this contest between the students and ChatGPT, and what they found is that ChatGPT came up with more ideas, better, faster, and cheaper. They ranked all the ideas that came from both the students and ChatGPT, and within the top 10 percent of innovative ideas in the experiment, 87.5 percent came from ChatGPT and not the students. So hopefully this intro puts the thought in your head of, how can I use that knowledge, meaning the ability of large language models, to make better decisions in my business? If that's what you're thinking, you came to the right place, because this is exactly the question that we are going to answer, and we're going to take it even a step further. We are going to show you how you can build an amazing C-suite or advisory board for your business with top-notch A players from your industry, without paying them anything, just by using large language models. Our guest today is Raphaël Mansuy.
He is the CTO and the co-founder of Elitison, a company that helps other businesses take concepts and ideas, develop them into services and products, and sometimes even spin them off as separate companies. They've been doing this process at scale for other businesses since April of 2020, way before the GPT craze started at the end of last year, which makes Raphaël the perfect person to learn this process from, and hence I'm truly excited and humbled to have him as a guest on the show today. Raphaël, welcome to Leveraging AI. Thank you.Raphael:
very much. Yeah, I'm happy to be hereIsar Meitis:
today. No, go ahead. I really want to dive right in. And my first question is, what problem are we trying to solve here? Because this is obviously the first question, right? What's the issue, and what are we trying to solve?Raphael:
Yeah. So many small businesses struggle because they don't have the staff. They don't have the expertise that a big company has. Before my venture, I was a CEO of a subsidiary of a Fortune 500 company. So when I worked for this big giant company, I had at my disposal a lot of experts: HR experts, supply chain experts, legal experts. So every time I had a problem, I could find someone in the group who could help me. But it's not the case when you are at the helm of a small business, because you have a small team, they don't have all the knowledge, all the capabilities, and you need to solve your problems yourself. There is this idea that because large language models are trained on so many books, so much material you can find, you can use the power of these tools to create your advisory board. And as you said before, it can be very easy, it can be cheap, and it creates a lot of value. Of course, it's not always perfect, but I will guide you step by step through how to do it, and how to start in a simple way using ChatGPT or Claude, tools that already exist. And if you want to go further, you can create a very sophisticated AI model that can even beat the experts.Isar Meitis:
So I love what you're saying so much. Again, I think this is maybe the most underutilized part of large language models. And I think you've taken it a step further by basically saying, I'm not going to ask it general questions. I'm going to give it a specific hat, make it an expert on something very specific, and then I, quote unquote, have that person on my board or on my C-suite or as a consultant, but without paying them $5,000 an hour, which these people will actually charge you, if you could ever get to them. So let's start the process. What is the first thing that I do in order to identify the problems or the people that I want to address? FirstRaphael:
of all, as in a real company, you need to recruit your board. You need to choose and select which experts you want to see on your board. So for example, you can recruit an HR expert, a marketing expert, a legal expert, and so on, and for each of them, you describe who they are and what they are experts in. And, very important, you need to find books that crystallize the most condensed expertise on each aspect. For example, if it is marketing, you choose the three top books you love the most about marketing. When you design your prompt, you say, okay, my board is composed, for example, of a marketing expert, and this expert specializes in this aspect of marketing and knows very well the story described in this book, because there are different ways to do marketing. Each book is a school of marketing, so you need to choose your school. So you select each expert using what I call a mega prompt. A mega prompt is a prompt that describes who these experts are and what their knowledge is. And that's all for the first part of the prompt. Then you need to describe yourself, because for an LLM, for ChatGPT, context is everything: from the context, it can answer the right question. But if the model doesn't know who you are and what kind of business you are talking about, it cannot give you good advice. It is the same if you want to see a lawyer: at the first meeting, you need to describe who you are, what kind of business you are doing, and what your problem is. The same with GPT. I see very often that people just ask very short questions and write very short prompts. But it doesn't work like this, because if you just say, okay, can you solve my problem, you don't get a very strong answer from that. So you need to describe honestly who you are, what you are doing, and what your pain points are. So in the first part of the prompt, you select your team and you describe who they are.
Second part of the prompt, you describe what your business is, who you are, and what your pain point is. And the third part of the prompt is your specific question. For example, you are the CEO of a small consulting company that, say, specializes in Microsoft technology, and the business is good, but you don't know how to grow. It's difficult, for example, to recruit people, and they are expensive; they often leave your company, or something like this. You describe exactly what problem you want to solve the most, and you ask the question to the board. And the very fun aspect is, because it is a board composed of different kinds of professions, your question will be analyzed by each expert, and you can get extraordinary answers from each expert. Of course, once you get all the answers, you decide. You are the pilot; the AI is not the pilot, it is the copilot, someone to help you, that gives you all the knowledge and the best advice, and with this advice, you decide. You are the boss.Isar Meitis:
So I want to pause you just for one second, because there are a few very important points that you touched on, and I want to summarize them very quickly. The first thing is to make people understand that what ChatGPT, or Claude, or all these large language models know is basically any data that is out there and available. That could be, like I said, books, it could be research papers, it could be articles that people have written. So you can identify people, in many cases people you already follow; you've read two of their marketing books because you think they're brilliant, or their general leadership books, or whatever topic you really like. You can use that as the resource that the large language model will use in order to model the answers that it's going to give you. So you're not just giving it the name of a person. You're giving it the name of a person and a list of resources to pull from, which makes it very specific, because this is a model that you trust and that you've seen produce results, either for yourself or for other people. So this is number one. Number two: it made me smile when you said you've got to write the mega prompt and not write two lines for it. Because think about it: if you would have done this in real life, let's say you had access to somebody who is a global expert that you would pay $5,000 to $50,000 an hour to come and consult you, how much effort would you put into preparing for that one hour that you're going to pay $5,000 or $50,000 for? You're going to put a lot of effort into giving them every possible piece of information and explaining exactly what you need, so that hour will be the most productive possible. And yet, when we get it for free, because it's ChatGPT or Claude, we're like, oh, I am this and that, you're this and that, tell me how to proceed from here. And I'm like, no, it just doesn't work that way.
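The three-part mega prompt the conversation describes can be sketched in plain Python. Every detail below - the expert roles, the book titles, the business description, and the question - is a made-up placeholder to show the shape, not a recommendation:

```python
# Sketch of the three-part "mega prompt": (1) who the experts are,
# (2) who you are and what your business is, (3) the specific question.
# All roles, books, and business details here are hypothetical placeholders.

def build_mega_prompt(experts, business_context, question):
    """Assemble the three sections into one prompt string."""
    board = "\n".join(
        f"- {e['role']}: an expert whose advice follows the ideas in {e['books']}."
        for e in experts
    )
    return (
        "You are my advisory board, composed of:\n"
        f"{board}\n\n"
        "About me and my business:\n"
        f"{business_context}\n\n"
        "My question, to be answered by each board member in turn:\n"
        f"{question}\n\n"
        # Asking for a minimum length, as discussed later in the episode.
        "Answer with at least 1,500 words, using headings and bullet points."
    )

experts = [
    {"role": "Marketing expert", "books": "'Purple Cow' and 'This Is Marketing'"},
    {"role": "HR expert", "books": "'First, Break All the Rules'"},
]
context = "I run a 12-person consulting firm specializing in Microsoft technology."
prompt = build_mega_prompt(
    experts, context, "How do I grow without burning out my recruiters?"
)
print(prompt.splitlines()[0])  # You are my advisory board, composed of:
```

The same function can be reused with a different expert list or question, which is the "snippet" idea discussed a little later in the episode.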
So the more effort you're going to put into explaining, like you said, your business, the client's business, the context, what you're trying to achieve, what the roadblocks are - like, write a story, prepare it as if you're preparing for a very high-end consultant - the more you will get results that are at that level. And if you won't, the results may disappoint you. So these points are critical and amazing. The last thing you said that I also really liked is that you can build a board, meaning you can create multiple of these personas, ask them the same question, and get their viewpoint and their approach, which, A, lets you consider this from multiple angles, but, B, I assume you can then also use ChatGPT as an expert consultant from McKinsey to help you figure out, between all these approaches, how to proceed. So what's the next step? We got all the answers. What do I do then?Raphael:
So once you get your mega prompt with all the stuff, you submit this mega prompt to the system. Because it's a mega prompt, it's better to choose a large language model that supports a very large context. I got my best results with Claude 2, because Claude 2 supports 100,000 tokens as context. So it's large enough, of course, to take your very big story and then to give you your very big answer. One important point: when you ask the system to answer, you need to give the system a minimum number of words, because if not, the system will optimize to give you the shortest answer. It's better if you say, for example, okay, I would like you to answer me with at least 7,000 words; then you get the most out of it.Isar Meitis:
So I want to pause you for two points there, because they're both very important. One is to explain what a context window is, for people who don't know. Each large language model, within each chat, has a limited memory that it will use for that chat. And it's counted in what's called tokens, and a token is less than a word. So if you have a hundred thousand tokens, that's about 75,000 words. So that's the limit in Claude 2. I think ChatGPT now for paid users is like 30,000, which is still a lot, but it's significantly less than Claude. And what happens - think about it - you wrote your very long question, you get a very long answer, and you're going to keep going back and forth because you want to drill deeper to get better things. Once you hit that limit, the chat, the large language model, will forget what happened in the beginning. And the more you add, the more it will forget, which means it will become less and less efficient, because it won't have all the information that was included. So it's important to pick one that has a long enough window. So that's one aspect that is very critical. Now let's continue with the process.Raphael:
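The token arithmetic above can be sketched with a rough rule of thumb of about four characters per token for English text. The exact count depends on each model's tokenizer, so treat this as an approximation for budgeting, not a precise measurement:

```python
def estimate_tokens(text: str) -> int:
    """Very rough token estimate: ~4 characters per token for English text."""
    return max(1, len(text) // 4)

def fits_context(prompt: str, expected_reply_tokens: int, window: int = 100_000) -> bool:
    """Check that the prompt plus the expected reply stays inside the window."""
    return estimate_tokens(prompt) + expected_reply_tokens <= window

mega_prompt = "word " * 7_000  # a ~7,000-word mega prompt
print(estimate_tokens(mega_prompt))  # 8750 under this rough rule
# Plenty of room in a 100,000-token window, even with a long reply:
print(fits_context(mega_prompt, expected_reply_tokens=10_000))  # True
# But the same prompt would not leave room in a much smaller window:
print(fits_context(mega_prompt, 10_000, window=15_000))  # False
```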
Okay, so how it works, actually: a large language model is just a function, a prediction, a function that predicts the next word from a few words. But in order for that function to work for your question, you need to put constraints on the input, so that the story that will be elaborated is constrained by the input. It's like a latent space, like a hyperspace: the more you contextualize, the more you shape your story into a specific area. And that's why it's very important to describe your problem, describe who the personas are, and how they will answer yourIsar Meitis:
question. Yeah. And I want to add one more thing to something you said before, as far as defining the format. You talked about how long you want it to be; I go a layer deeper. I actually tell it exactly which format I want it in: use bullet points, use headings, use subheadings, how many levels of subheadings it should use. It should be modeled around this kind of formal standard document that is delivered, it should be written as a business plan - like, you can tell it how you want the outcome to be, and not just how long you want it to be, in order to fit your need. You may want it as a script for a video that you're going to record with somebody else. Whatever you want the outcome to be, you can ask for that outcome in the format you want, instead of you now having to reformat it into the format you want. And if you don't like the outcome, you can just ask it to reformat it, or change whatever you want to change, and it will do it for you. Yes,Raphael:
exactly. The simple process is: okay, you buy access to Claude and you create this mega prompt. But if you want something better, because there is this limit on context, maybe it's good to ask the question not to all the experts at the same time. What you can do is create your prompt for each expert, and you ask the same question of each expert. If you do that, you can actually use the 100,000 tokens for each expert. So suppose you have 10 experts: you multiply the memory by 10. And you can even program it, because doing it all by hand is not fun, but if you know how to program, I've created an example of how to do this kind of stuff. What you do is, you just ask your question one time, and then you send the question to each expert. Each expert has his own context. That's all. And if you do that, you get more than creating one big prompt. And there is another thing that is good: because each expert is very constrained by their expertise, the answers will actually be better than if you ask all the experts at the same time.Isar Meitis:
I agree. So I want to touch on a few points you said, and then I have a follow-up question. One is, how do you really duplicate this across several use cases? I will explain how I do this, and then I will let you give your solution on how to get this multiplied by 10 or 12 or 5 or however many you use. I have what I call a prompt library, where I have complete prompts, but I also have snippets of prompts, and I use a tool called Magical, which is just a text expander for Chrome, in which I save all these things. So I can type a few letters, which is the shortcut, and then it pops up a whole segment. So if we take the use case that Raphaël just shared with us, you can have a snippet of the description of who you are, what your background is, what your company is, and what the problem is, that will be duplicated across prompts. You can have a snippet of the problem as a separate snippet that you may want to use in some contexts; in others you want to use something else, and then you just combine these together. So when you want to create the long prompt, you don't have to copy and paste it and then make changes to it because it's a different character and a different expert. You literally just use a slightly different combination of shortcuts in order to get each and every one of them, and still get a very long prompt by typing four words instead of 4,000. So that's how I do this. How do you do that? How do you duplicate across the different experts? Oh,Raphael:
actually I use the same process when I'm doing my experimentation, because you first need to tune all your prompts so they work well. So you need to experiment. But I'm a developer, so what I've done is create a full web solution. In this web solution, I can enter my prompt, and the system is actually implemented with what is called LangChain. It uses the same idea of snippets: a prompt with variables, and we just substitute the variables, for example, for each expert. So what I'm doing is exactly the same, but I've created an application for it. It's just like ChatGPT, but a ChatGPT with many experts. So once I ask a question, the question will be sent to each expert. The prompt will be created specifically to represent that expert. And that's all; then the answers will be collected in parallel, because each expert replies in parallel, and I get all the replies and I do a summary of each reply.Isar Meitis:
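The fan-out pattern just described - one template, one substitution per expert, replies collected in parallel - can be sketched with only the standard library. This mirrors the LangChain-style variable substitution Raphaël mentions without depending on that library; `ask_model` is a stand-in for whatever real API call you use, and all roles and books are placeholders:

```python
from concurrent.futures import ThreadPoolExecutor
from string import Template

# One reusable snippet-style template; variables are filled per expert.
EXPERT_TEMPLATE = Template(
    "You are a $role. Your advice follows $books.\n\n"
    "About my business: $business\n\n"
    "Question: $question"
)

def ask_model(prompt: str) -> str:
    """Stand-in for a real call to ChatGPT, Claude, etc."""
    return f"[model reply to a prompt of {len(prompt)} characters]"

def ask_board(experts, business, question):
    """Send the same question to each expert in parallel, each with its own context."""
    prompts = [
        EXPERT_TEMPLATE.substitute(business=business, question=question, **e)
        for e in experts
    ]
    with ThreadPoolExecutor() as pool:
        replies = list(pool.map(ask_model, prompts))  # preserves expert order
    return dict(zip((e["role"] for e in experts), replies))

board = [
    {"role": "marketing expert", "books": "'Purple Cow'"},
    {"role": "legal expert", "books": "standard contract-law references"},
]
answers = ask_board(board, "a small Microsoft-focused consultancy", "How do we grow?")
print(sorted(answers))  # ['legal expert', 'marketing expert']
```

Because each expert gets its own call, each one also gets its own full context window, which is the memory-multiplying trick described above.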
So I love this. So this basically saves you the effort of opening all the different chats separately, copying, and pasting in. For those of you who do not write code, you can do this with no-code, low-code solutions like Make and n8n. I love n8n. It's my new favorite toy. So it's N the letter, the number eight, then N, dot com. And it's basically a completely flexible flowchart. So if we take this particular use case, you can create an open field, like in a form, like a Google Form, where you will type the question that you want answered this time. And then think about a view of a flowchart splitting this out to the eight different people, already adding the background of your company that it already knows, adding the specific definition for that particular expert, the books it's going to use, all the stuff that we talked about before, and then it will send it separately and will collect all the answers. So you can do this without writing code, just by using these tools. I want to ask an interesting follow-up question, because I know you've done this a lot, and I know a lot of people also fall short in this aspect. Once you get the first answer, how many times do you go back and forth with each expert to really hit gold? Because I assume in many cases, the first answer is not the final one. Yeah.Raphael:
You're right on that. Very often, actually, it depends on the expert. Some experts are actually not very relevant for your question or your problem. They just give you some advice, but okay, you cannot do better than that. For example, if it is a legal expert and you have a supply chain problem, the legal expert will give you something, but your problem is really a supply chain problem. So once you get all your first replies, you choose which experts can actually answer your question, and you go to each expert and try to understand why this decision was made and how to implement it. And if it is a plan for something you have never done before, it can give you tips on how to implement it, for example, how to optimize the supply chain or something like this, or it can recommend a book, it can recommend training, or something like this. So once you get your generic answers, you need to select your experts, and you continue the dialogue with each expert, and that's all. And of course, at the end, you make the decision. You are the decision maker.Isar Meitis:
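Continuing the dialogue with a selected expert means keeping that expert's running message history and appending each follow-up to it. A minimal sketch, where `ask_model` is a placeholder that just counts turns rather than a real API call:

```python
# Sketch of multi-turn follow-up with one selected expert: keep a per-expert
# message history and append each question and reply. `ask_model` is a
# placeholder; a real implementation would send `history` to a chat API.

def ask_model(history):
    return f"[reply #{sum(1 for m in history if m['role'] == 'user')}]"

class ExpertChat:
    def __init__(self, system_prompt):
        self.history = [{"role": "system", "content": system_prompt}]

    def ask(self, question):
        self.history.append({"role": "user", "content": question})
        reply = ask_model(self.history)
        self.history.append({"role": "assistant", "content": reply})
        return reply

chat = ExpertChat("You are a supply chain expert.")
chat.ask("How do I cut lead times?")
print(chat.ask("How would I implement step one?"))  # [reply #2]
```

Because the whole history is resent each turn, this is also exactly where the context-window limit discussed earlier starts to bite.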
Again, I love what you said. The idea is, you need to remember when you use these models that in many cases the first answer, even if you've written a very detailed and great prompt, is going to be a good answer, but maybe not exactly the answer you're looking for. Or maybe it's a great answer that gives you follow-up ideas and questions that you want to ask. Keep on digging. It's a conversational solution, not a one-shot solution. Keep asking questions, keep iterating, ask for clarifications, ask for a process, ask for a system, ask for recommendations for a book to read, ask who else you may want to ask about this - literally what you would do in real life with an actual consultant - until you have a solution that can really support your decision. So again, extremely powerful suggestions. What else can a company do in this realm, this concept of building an advisory board? Yeah.Raphael:
So actually what I described is just a very cheap solution, because we just take a model that already exists on the market. It doesn't give you a competitive advantage, because of course, certain professions are very well known, very generic. But suppose you work in a field like industrial gas, as I did earlier in my career. Industrial gas is a very specific business. It's an oligopoly; there are just three companies in the world doing this business. It's a very good business model, but actually nobody teaches you in school how it works. You need to be an employee inside to know how it works. So suppose that you are a company that is unique and you want to leverage very specific knowledge. Actually, the best asset you have in your company is your documentation, the documentation of your processes - if you took the chance in the past to document very precisely what your processes are, what your business model is, how you operate. Industrial gas companies have this. I remember my former company, Air Liquide: they have a book called the Blue Book. It's like the Red Book of Mao Zedong. The Blue Book contains all the operational procedures. So when you are a CEO of a subsidiary and you are far away from headquarters, you have this book with all the procedures, what you can do, what you cannot do. Suppose you take this specific knowledge and you use it to do what we call fine-tuning of a model. So what is fine-tuning? For example, if you take OpenAI's ChatGPT, if you subscribe to their API, you can create your own specific model. To create your own specific model, what you need to do is feed and train a model with your specific data. So you take
all the information from the book, and you create question-and-answer pairs or other chunks of knowledge - just pieces of knowledge, and the more you have, the better. If you have very good quality pieces of content and you feed and train a model with that, you add all your specific knowledge to the layers of the model, and you can create the best expert on the market, because only you have this knowledge. Of course, it's more expensive; you need an IT team to do that. But that's if you want to compete with the best.Isar Meitis:
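Preparing question-and-answer pairs for fine-tuning usually means writing them out as JSON Lines of chat messages. At the time of writing, OpenAI's chat fine-tuning format looks roughly like the sketch below, but check your provider's current documentation for the exact shape; the Q&A pairs here are entirely hypothetical:

```python
import json

# Turn internal Q&A pairs (e.g. extracted from an operations manual) into
# the JSON Lines shape commonly used for chat-model fine-tuning.
# The pairs and system prompt are hypothetical examples.
qa_pairs = [
    ("What is the maximum safe tank pressure?",
     "The maximum is specified in section 4.2 of the blue book."),
    ("Who approves an emergency shutdown?",
     "The site manager, with notification to headquarters within one hour."),
]

def to_training_lines(pairs, system="You are our internal operations expert."):
    """Yield one JSON line per training example."""
    for question, answer in pairs:
        yield json.dumps({"messages": [
            {"role": "system", "content": system},
            {"role": "user", "content": question},
            {"role": "assistant", "content": answer},
        ]})

lines = list(to_training_lines(qa_pairs))
print(len(lines))  # 2
# Each line round-trips back to the original question:
print(json.loads(lines[0])["messages"][1]["content"])
```

In practice you would write `lines` to a `.jsonl` file and upload it through the provider's fine-tuning API.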
Yeah. So this is really the next evolution of what we discussed before, and this is for a different scale of company, right? The first solution works amazingly well for small businesses. This solution works extremely well for - it doesn't have to be a really large business, but it has to be a larger business, for two different reasons. One is you need a lot of proprietary data, otherwise it's not worthwhile doing. Two is you need the resources to do this thing. And to add another point to what you were saying: yes, you can take ChatGPT and train it, that's true. In many cases, what you'll probably do is take Llama, which is an open-source model, install it locally on servers that you own, that are safeguarded, that nobody gets access to, and then you can train an open-source model that is now installed on your servers. And when I say your servers, they don't have to be on premise; they can be in the cloud, but it's your server, not shared with any other resource. Then you can be 100 percent sure that it's guarded behind some kind of a firewall and that the data is not getting used by anybody else to do anything else. And then, like you said, now you can apply the same model of asking questions, getting answers, and developing ideas based on proprietary data that nobody else has access to, which means by definition you're getting a competitive advantage. But instead of before, where there were three people - the CEO, the founder, the president - who knew some of it and you had to go to them, now literally every person in the company that you give access to this has the ability to ask questions and ideate based on the entire history of knowledge of your business, which is extremely powerful.Raphael:
Yes, exactly. As you said before, it helps you to be more creative, because a system like OpenAI's, a large language model, is very powerful at connecting the dots. It's an invention machine, actually. It's exactly that. When you work for a big corporation, there are specialists in every aspect, and very often they don't communicate with each other. And where does innovation come from? The innovation comes from the coffee machine. They discuss: oh, I have this problem, or I have this problem where this factory doesn't work. Oh yeah, it's the same problem, and we can solve it. Innovation actually takes place like this in a big company. But I believe that if you have a model and you have stored all the knowledge of the company in this model, you can connect the dots. You can play what-if scenarios: what if, for example, you apply this marketing trick to this supply chain problem, and so on. And sometimes it can be very stupid, but a system like OpenAI's is just something that creates correlations; it's very easy for it to find similarities. And why does it work like this? Because a large language model is basically a system that allows you to do math. Here is how it works: you take a text, and what ChatGPT does is transform the text into what we call an embedding. An embedding is just a vector, a representation of words as a vector that captures the meaning of things, with maybe 4,000 dimensions. So if two sentences, for example, are related, their two vectors are close to each other. When you ask ChatGPT, for example, to compare this to that, it's actually just doing math, because it transforms one part of your sentence into a vector, another part into a vector, and then it just does a calculation of the distance between the vectors.
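The "distance between vectors" idea can be illustrated with a toy example. Real embeddings have thousands of dimensions and come from the model; the hand-made 4-dimensional vectors below just show the mechanics of cosine similarity, the measure typically used:

```python
import math

# Toy illustration of comparing embeddings. Real embeddings have thousands
# of dimensions; these hand-made 4-d vectors only demonstrate the math.
def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: close to 1 means similar."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

supply_chain = [0.9, 0.1, 0.0, 0.2]  # "optimize our supply chain"
logistics    = [0.8, 0.2, 0.1, 0.3]  # "improve warehouse logistics"
marketing    = [0.1, 0.9, 0.7, 0.0]  # "launch a marketing campaign"

# Related sentences end up closer together than unrelated ones:
print(cosine_similarity(supply_chain, logistics) >
      cosine_similarity(supply_chain, marketing))  # True
```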
And maybe at the intersection of these, there is another concept, and it can define this concept because it can find a vector for it. And then it can translate this vector into words. That's how it works. It's just a way to do math, to do ideation with words. And it's phenomenal. This isIsar Meitis:
the best explanation I've heard of this, ever. So thank you for that. And I want to repeat it, A, for myself, so I can let it sink in, and also for the audience. The idea is, the reason large language models are so good at finding similarities and context behind things is because they're doing what computers have always done, which is run statistical models and math. And the way they do this is they convert words and sentences into vectors, which are basically mathematical representations of the words. Now they're playing like they always played: is this like this, from a statistical perspective? If it is, they know how to combine things together and then do the reverse translation back into words. Absolutely brilliant. I love this explanation. And I want to end it here, because I think we covered a lot of really important stuff, and I think it was a great summary for everybody: for those in a large company, what they can do with proprietary data, and also for small businesses who do not have access to all of that but can still gain huge, amazing benefits by using free or $20-a-month models to get results that were literally impossible to get before without paying tens of thousands of dollars to really expensive consultants that you can't afford. Do you have anything to add before I let you tell people how they can find you and work with you? Oh, actually you canRaphael:
do more than that. Okay. The next step: suppose that you create the system with all these experts, and then you equip them with specific knowledge, because you fine-tune some experts with your specific expertise. Then you give these experts tools. For example, in my former company, I created a full optimization system that can, for example, optimize the supply chain. It's a mathematical model that helps you optimize. So you can have these experts formulate your problem as mathematical equations, and then give those mathematical equations to tools that solve the problem. And then, when you create tools, you can create tools that give access to your data and your systems. Because, okay, when you describe who you are and what you struggle with, it's good. But suppose you have a larger business, or even a small one: if the system has access to your accounting system, your HR system, all your data, these experts can make better decisions, based not just on your interpretation of things, but on access to real data. So when you can leverage expertise, tools, and specific data that you own, you can actually improve your productivity. You can design better ways to grow. And the potential, for meIsar Meitis:
is phenomenal. I love this. So you're saying the full dream, the full-scale thing, is to train a large language model, give it access to all your data across departments, databases, and systems, so your CRM, your ERP, your HR system, all the different components, whatever you're using, and give it access to tools: analytics tools, mathematical tools, whatever tools it needs. Now you're really gaining the most benefit, because you're allowing it to use all the data and all the tools that you would use in real life.Raphael:
Yes, exactly. Of course, when you do that, you need to be careful. At the first step, if you take a model and ask a question, and the question is rubbish, the answer is rubbish, but it doesn't have a big impact on the world, because you're a smart CEO and can make your own decisions; you can see what's bullshit anyway, and at worst you thought it was bullshit too. So it's not a problem. But once you have a system that can take decisions, that can use tools, that can get data, you need to be careful, because some industries are really regulated. For example, industrial gas is a dangerous business, because a factory can explode, and it does explode from time to time. One time I needed to go to the US and couldn't, because the factory had exploded just the day before. So these industries are regulated, you need to follow the regulation, and in a regulated industry a human must always remain in the loop. It cannot be a system that takes decisions on its own, and there are ways to implement the system to be safe. But even if you're not regulated, you need to design this kind of system so that at each step a human is in the loop and takes the decision. A system like this is something that helps you, that gives you more power, but it doesn't replace you, because otherwise something tragic will happen, and we don't want that.Isar Meitis:
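The human-in-the-loop gate Raphael insists on can be sketched as a simple rule: the AI may propose actions, but nothing risky executes without explicit approval. The names, risk labels, and approval policy below are illustrative assumptions, not anything described in the episode.

```python
from dataclasses import dataclass

@dataclass
class ProposedAction:
    description: str
    risk: str  # "low" or "high" (a real system would need a richer policy)

def execute(action: ProposedAction, approved: bool) -> str:
    """High-risk actions always require a human decision before running."""
    if action.risk == "high" and not approved:
        return f"BLOCKED (awaiting human approval): {action.description}"
    return f"EXECUTED: {action.description}"

# The system proposes; a person decides.
plan = ProposedAction("Adjust gas-plant valve pressure schedule", risk="high")
print(execute(plan, approved=False))  # blocked until a human signs off
print(execute(plan, approved=True))   # runs only after explicit approval
```

The point of the design is exactly Raphael's: the tool gives you more power, but the decision stays with you.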
Raphael, this was really brilliant. You obviously have a lot of experience doing this, and you gave a lot of really interesting examples and things to think about. If people want to follow you and your content, and I didn't say this at the beginning, Raphael makes the coolest content on LinkedIn today. It's always structured very well, it's always very well thought out, it always has the same style of cartoonish images in it, very easy to follow, but very in-depth as far as the thinking behind it. So if people want to follow you, connect with you, or work with you, what's the best way to do that?Raphael:
The best way, actually, is to follow me on LinkedIn. My name is Raphaël Mansuy; you can find me on LinkedIn, where I write, and on my blog on Medium. And of course I'm a consultant, and I'm open to opportunities. I have a big network, so if you want to do something with me, I'm always open. I have broad experience, because I'm a nerd in a sense, but at the same time I'm someone who has managed businesses before, big ones. I'm based in Hong Kong, but I've traveled everywhere in the world, so no problem: you can contact me, I'm always open to discuss.Isar Meitis:
Awesome. Raphael, thank you so much for your time and for sharing your knowledge with us.
What a great conversation with Raphael. This concept is obviously really cool and can be extremely powerful and productive. If you listen to this podcast, you know that I'm a huge believer that maybe the most overlooked capability of generative AI is the ability to consult and ideate with it, and the concepts presented by Raphael take it to a completely different level, because you can consult with specific people, specific authors, books, or sources on specific issues in your business or personal life. And now let's dive into this week's news. The first news comes from MidJourney, still in my eyes the number one image generation tool. MidJourney kind of released its first mobile app. What do I mean by kind of released it? Well, MidJourney partnered with a Japanese gaming company called CZG, and they've released an app called Niji Journey. The initial goal is to address the Japanese market and let users create anime-style art, but that being said, it's not limited to anime: you can create the full range of MidJourney styles with this app, and it's available on iOS and Android. MidJourney did state that they're planning to eventually launch their own mobile app, but they did not mention any timeline or scope. As you know, the other and more common way to access MidJourney is through a Discord server, which is not the most user-friendly approach, and that has kept a lot of people away from MidJourney. That's a shame because, as I said, I think they have the best model out there right now. They did say that once they launch the next version of MidJourney, they may steer it away from Discord. The other thing MidJourney announced is the ability to upscale images all the way to 4096 by 4096 pixels, which is significantly better than the previous limit of 1024 by 1024.
This obviously comes as part of the arms race in text-to-image generation, where DALL-E 3 was announced as part of ChatGPT. DALL-E 3 can now upscale to 729 by 1024, which is better than before, and Adobe's Firefly 2, released last week, allows you to upscale to 2048 by 2048. So MidJourney keeps the upper hand in upscaling as well. At this point it's a very, very big difference, because 1024 by 1024 is not bad, but you cannot use it for high-res demands such as printing or even large web images, and now you can. It does take significantly longer to generate than before: where you used to wait a few seconds, maybe 15 to 20 seconds, you may now wait a couple of minutes to get the image. But if you want a high-res image, you can now do it in MidJourney. And since we've already touched on this topic and mentioned Adobe: as I mentioned last week, Adobe made a lot of announcements at their MAX conference about what they're going to be releasing. They have now released Premiere Elements 2024, as well as a new version of Photoshop, and they've included a lot of the capabilities they presented in these products, things such as being able to select a background or an object just by saying what you want to grab, which it will then remove, filling in the hole created in the image, or making changes and variations such as transforming skies to different times of day, different weather, and so on. All of these are now available as part of the Adobe offering. They also announced that Premiere will be able to create highlight reels from existing video footage that you already have, which is obviously really cool and will save a lot of time.
My take on this is that we're going to keep seeing this unstoppable and continuously accelerating arms race across all the big players that know how to build these models, creating more and more capabilities and integrating them into products and systems. The biggest difference here is obviously that MidJourney does only this and provides its engine as an engine, while companies like Adobe, Microsoft, and Google are going to integrate these capabilities into existing tools that we're already using, in a user interface we're somewhat used to. Staying on the topic of image generation, OpenAI revealed that they're internally debating whether and when to release an AI-generated-image detector. The tool can detect images created, as of now, with DALL-E 3, but potentially later on with other diffusion models, and they're targeting 99 to 100 percent accuracy on unmodified images. By unmodified, I mean pictures created by DALL-E 3 and nothing else. They're also saying it's 95 percent accurate after significant image modification: if you take an image created by DALL-E 3 and then edit it with Photoshop or any other tool, making significant changes, the detector still says with 95 percent accuracy that it was created by AI. The main point of debate about whether to release it is the philosophical question of what constitutes an AI-generated image versus human-generated art. If you created an image with AI and then significantly modified it, is it still AI-generated, or is it human art? They're actually asking the community of artists to give them feedback on that question so they can act accordingly. I think this is a very worthy and important discussion.
I just wish there were a way to take all the people involved in this, all the big players, and have them come up with some kind of standard that would allow all of us to make sense of what's real and what's not, what was created by AI and what wasn't, even if it's somewhat modified. I think we would want to know that. Maybe there would be different levels of certification or qualification telling us what percentage of an image was created by AI. A stamp of approval that is consistent across all AI creators is, I think, extremely needed in order to keep operating the way the human race is used to operating, meaning the ability to trust the things we see through digital communication. Expanding a little from image generation to other types of generation, researchers were able to develop 3D-GPT, which generates highly detailed 3D models and scenes from text prompts. The quality is still not photorealistic, but it shows real promise, and it's definitely good enough for gaming and the creation of other 3D assets. The really interesting thing here is that instead of building the tool that actually creates the 3D content from scratch, they generate the 3D models using existing 3D software. My take is that this might be the most interesting part of this piece of news, and that's why I included it: it means that AI today can use existing tools that we already have. So you can develop an AI worker that is extremely capable at something very, very specific and just teach it how to use existing tools, which makes the AI development significantly easier and still delivers amazing efficiency with the tools we're currently using. I find this fascinating.
And I think that's a direction we're going to see a lot more of coming into 2024. Another big announcement comes from China. Baidu, the Chinese tech giant, sort of their parallel to Google, just released Ernie 4, their new large language model. It can do a lot of the things the models we know from OpenAI, Anthropic, and Google can do: it can generate text, images, and videos, and it is also natively integrated into Baidu's apps and products. So think about the Google suite of tools, from maps to email to communication to sheets, but in China, with AI fully integrated into it. Not a big surprise: I don't see any big tech player, and eventually medium and small ones as well, that will not have AI capabilities integrated into their tools and products, because otherwise they just won't be able to compete. Once these tools are mature and make things extremely easy, nobody will find the way we do things today acceptable anymore. So if you want to stay in business, you will need to provide these kinds of capabilities across everything you do. The last piece of news I want to share this week, and maybe it's not news but more of an interesting point of view we all need to be aware of, is a debate that developed this week between Yoshua Bengio, who is the founder of Element AI and a professor at the University of Montreal, and Yann LeCun, who is Meta's chief AI scientist. The debate was about AI safety. Yoshua questioned the wisdom of open-sourcing powerful AI systems, which Yann LeCun has been the biggest supporter of. Meta, for years, and even more so recently, has been releasing every research result, tool, and capability they develop as open source for other people to use.
And Yoshua questioned whether that's a smart thing given the incredible capabilities these tools have today, since releasing them as open source enables anybody to take them and use them. LeCun's answer was that they emphasize designing for safety rather than imagining the worst thing that could happen. The biggest takeaway is that there's a huge debate going on right now between closed and open-source models over what's safer. Is it safer to give a few companies like Google, OpenAI, and Anthropic the controls to decide what's safe and not safe when releasing these models, or is it safer to release the models as open source and let the market figure out how to keep them in check, putting them in the hands of more people who can understand what safety can look like and help get there? I'm definitely not the one who will have the right opinion on this, but what it shows is that even the smartest people in the world, the biggest experts, are debating this topic, which is not necessarily a good thing for all of us, because one of them is wrong; we just don't know which one. And currently both approaches are running forward at alarming speed. On this not-so-positive note, I want to thank you for listening. Explore AI, try things out, share what you learn with the world, and share it with me on LinkedIn; I would be glad to hear about it. And if you want to expand your education about AI, if you want to make real changes in your career and your business, check out multiplai.ai, spelled like the regular multiply, just with AI instead of Y at the end. Until next time, have an amazing week.