Leveraging AI

105 | How to maximize ChatGPT: 7 powerful use cases you need to know with Artur Sossin

July 09, 2024 Isar Meitis, Artur Sossin Season 1 Episode 105

In this Live Episode of Leveraging AI, Artur Sosin, a seasoned tech leader and AI creative, will guide you through the intricate world of ChatGPT and his latest experiments. With a rich background in research, Artur's insights on prompt engineering and leveraging Generative AI are invaluable. Don't miss the chance to learn from a top LinkedIn voice in AI creativity.

This webinar will cover 7 powerful ChatGPT use cases:
1. Your writing style clone
2. Your personal librarian
3. Your research analyst
4. Your image editor
5. Your data analyst
6. Your PPT designer
7. Your data scientist

Artur Sosin is a tech leader with a deep understanding of AI creativity. He has worked with AI tech for 6+ years and has spent hundreds of hours with generative AI.

Resources:
https://artur-content.notion.site/Maximize-ChatGPT-2dbfcfcc5896449ea1ee4e58435aa6da?pvs=4 

About Leveraging AI

If you’ve enjoyed or benefited from some of the insights of this episode, leave us a five-star review on your favorite podcast platform, and let us know what you learned, found helpful, or liked most about this show!

Transcript

Isar:

So welcome everyone to another live episode of the Leveraging AI podcast, which shares practical, ethical ways to leverage AI to improve efficiency, grow your business, and advance your career. This is Isar Meitis, your host. Many people now understand that AI is not just a buzzword and that it can actually provide business benefits, and there are different ways you can gain ROI by using AI. But the reality is that most people don't exactly know how to do that, and definitely not across multiple aspects of the business. There are a few people, though, who have dived deep into this and figured out multiple ways to gain ROI and business benefits by using AI across different parts of the business. One of those people is Artur Sosin, who is our guest today. Artur is a scientist and researcher by trade and an AI superhero, like Bruce Wayne and Batman, if you want. Because of his background in science, it's amazing to see how methodical he is in researching everything he does. He has come up with multiple ways to use ChatGPT in a business, and he has documented every one of them into a process that other people can follow. Today he's going to share seven really fantastic business use cases for ChatGPT across multiple aspects of the business, from writing to creating images, to data analysis, to creating PowerPoints, to being a data scientist. Very detailed and useful use cases. Now, seven use cases in one live show is something we've never done before, so that's a record, at least until Monday. I'm really excited about this. I really love the content that you're sharing, and I know you have a lot of good stuff to show. So I'm really excited to welcome you to Leveraging AI.

Artur:

Happy to be here, Isar. Happy to be here. All right.

Isar:

I will let you run this however you want. I know you have a list and a lot that you prepared, and I'll just try to ask smart questions as we move forward. For those of you who are with us live, either on Zoom or on LinkedIn, please feel free to ask questions or share your knowledge or thoughts, and I will bring them up at the right time in the show. Thank you so much for joining us.

Artur:

All right. Yeah. So welcome everyone. I'm going to present some powerful use cases of ChatGPT; some I've shown in my posts, others in my newsletter, and some are new. I'm going to start at a high level with what prompting is, what the difference is between prompt writing and prompt engineering (a lot of people mix up those terms), and what the good practices are for writing effective prompts. Most people are going to stick to writing, not engineering. The difference is pretty simple: a lot of people who say engineering actually mean writing, because engineering is essentially when you change the model, when you have a systematic way of testing your prompts and then adjusting model components, or even switching out the model itself. Most of us do writing, where we basically write in a fashion the model understands so that we get the most effective results for our given use case. With that out of the way, how do you do that effectively? You've got to think of generative AI as a student, because despite the impressive use cases and demos we've seen so far, it's still pretty stupid: it's a probabilistic machine that is just mimicking our language and the interactions it absorbed during training and alignment. You have to keep that in mind. It cannot make your decisions; you need to make the decisions. You have to steer it, and you have to iterate. I think the number one rule is to treat it like a conversation: a good prompt is the start of a very good result, but you have to work on it. I've always gotten the best results when I worked on it progressively. Instructions should be clear, with low ambiguity, so really be specific about what you want.
Give examples; that is the best, because then you have a higher chance of getting exactly what you want. And be mindful of hallucinations, especially if you're not using web search or documents to support your use case; then something like 20 to 40 percent of what ChatGPT says will be hallucinated, depending on which paper you choose to trust. Even if you do have that kind of reference data, like documents or web search, you still have maybe a 10 to 20 percent chance of hallucination. So trust, but verify. And for some use cases, not something I'm going to show today but I'll still mention it, it's very effective to give ChatGPT, or any other generative model, a personality, especially for use cases like ideation, creativity-related work, strategy, and coaching. If you want to play out a coaching session with ChatGPT or any other model, give it a personality; really describe who it is: a man, a woman, a child, what kind of backstory, et cetera, everything that makes a person, so to say. Then you get more creative, more humanized results. I found that very effective for simulating customers or doing ideation. With that in mind, I've crafted...

Isar:

I want to pause you for one second to follow up on what you said. When I do this with clients or when I teach courses, I tell people to think about AI as the best intern on the planet, but it's still an intern. You're not going to take an intern who just walked into your company for the first time, say "go do this," and expect them to actually do a great job. You're going to take the time to explain to them about the company, about your clients, about the project, and about the specific task you want them to do, give them instructions, and provide them the tools. You've got to do all of that for a beginner intern. Even if it's an awesome intern with a PhD in something relevant to your field, they still know nothing about your business, your customers, your processes, or your internal needs; they know nothing. And that's basically these large language models. In addition to the other stuff Artur said, like the fact that they hallucinate, the biggest thing is that they really lack context that we sometimes assume, because we know our stuff and we talk to other people who know our stuff. That's the biggest gap. If you do all the things that Artur said, you will dramatically improve your chances of getting good results. So again, just think of it as an amazing, but brand new, intern.

Artur:

Yeah, absolutely. We'll see examples of those approaches in the prompts for these use cases, but not all of them. I just wanted to point that out before we start. I'm going to be using multimodal prompts specifically, because I'm focusing on more advanced use cases where we combine the different types of inputs and outputs we can work with in ChatGPT; I think the only thing that's not present is vision. For multimodal, it's actually even trickier, and I think it's even more important to give the context. The other thing is to break the work into manageable tasks; you can actually do that in the prompt, and I'll show you. Breaking it into tasks basically allows the model to focus all of its capability on the very specific thing you want solved, and then, again, your chances of getting a good result are much higher. All right. So what I'm going to be talking about today: we're going to copy your style, or rather your formatting; I'm going to show you that. Then we're going to look at ChatGPT as a personal librarian, so talking to your documents. Then research analyst, so researching data; I'll show you a more advanced use case of web search and how to use it most effectively. Then how to upgrade your images, doing image processing right out of ChatGPT, whether you have a DALL-E image or your own image. Then data analysis: you can drag the data in there, or you can import it from Google Sheets or OneDrive these days, whatever you choose. Then presentations: how to create a full presentation from start to finish with ChatGPT, using web search, using DALL-E, using code, and the file export.

Isar:

And as of last week, it can export a PPT format file, which it couldn't do before, so that always helps.

Artur:

Yeah. Yeah, exactly. Exactly. So now you can do that. And number seven is data scientist, something I explored recently, trying to push the code generation capabilities of ChatGPT by creating a machine learning model and really running it on your data. That is also doable. But anyway, without further ado, I'll just go and run with it. I will share my screen, and I hope ChatGPT won't decide to have a bad day on us. If so, we've got a backup. Let's see. All right. So the first thing I'm going to show is copying your writing style, or rather, in this case, format. I'm just going to copy the example prompt here, and we're going to go through it. Essentially, I'm just setting a role, which is typical but optional: top copywriter. We're going to say LinkedIn, because that's our example. The task is to copy the writing style of the input text: strictly keep the style and formatting; completely adhere to the structure, titles, paragraphs, and other style elements; respect the flow of the text. Before proceeding, you will always let me know if the analysis was successful. I will then provide you a text to reformat, in this case. What I'm doing is also splitting the task. I could have said, okay, I'm going to give you both the reference and what to reformat, but that has a lower chance of success; that's why I'm doing it this way. In this step, I'm just focusing on copying the style. What I'm going to do is take my LinkedIn post, a version of today's post, and ask it to copy that. And I'm using GPT-4 explicitly, because GPT-4 is better at following instructions. Note that down; at least for today, it's better

Isar:

than 4o?

Artur:

Yes. 4o is faster, but it tends to run too fast sometimes, and it does something weird or deviates from the instructions, sometimes from the first prompt, or at least by the second or third. So for this one, at least today, I really recommend GPT-4 rather than 4o. Okay, so it already says it analyzed it, and it gives me some notes on what it absorbed, so to say. Now I'm just going to dump in a Perplexity output I did, where I researched five presentation tools; this post was actually about the top three tools I use: Perplexity, Canva, and ChatGPT Premium. So it's pretty similar in the sense of the content I'm putting in there. And maybe another note I can make is that you can make it format anything into this kind of structure, but then you get garbage in the end. So you also have to think about whether what you're trying to reformat more or less fits, because otherwise it's going to invent things while trying to fit the content in your input to your target format. Here it fits, but it's really badly formatted, and we want beautiful copy for LinkedIn, and that's what we're getting: "Top three presentation tools. Choosing the right tool can transform your presentations." There we go. So it basically adhered to the styling and everything. Maybe descending versus ascending is not always on point, but it's pretty close and gets you 70 to 80 percent of the way. And that's how I write some of my posts, actually. If I have posts that performed well, I mark them down and use them as references, and if I have other content I want to talk about, I can use this approach to make that happen. It's similar to how people used templates before; you can do that with ChatGPT as well. You can ask it to create a template instead of just copying. Either way works perfectly fine.

Isar:

So a few things I want to add, or maybe summarize and then add. To summarize, for those of you not watching the screen: there was really a two-step process. Step one was giving ChatGPT an instruction saying, "I'm going to give you another post; I want you to learn the format of that post." That post had emojis, lines between sections, and a list structure with different sections in the list. Literally all Artur did was copy a previous post and ask ChatGPT to learn the formatting, and it did: it said, okay, it's a list, it has emojis, it has this, it has that. In step two, he gave it new information and asked it to use the same kind of formatting. That other information came from Perplexity. If you haven't been listening to the podcast before: I always like to joke that Perplexity is what you'd get if ChatGPT and Google Search had a beautiful baby together. It's basically the perfect mix between a large language model and a search engine, which makes it the best research tool out there today. So you can research a topic, get the content from Perplexity (which will be in Perplexity's format), dump it in here, and create a post out of it very quickly based on the format of your reference post. Elsa on LinkedIn says "amazing stuff, Artur," and that she really loves this concept. I will add one more thing. Some of you are just getting started and maybe don't know how to create content, but there are other people you follow who create great content, and you know their content gets shared and liked and gets engagement. You can use their formatting and their flow in this prompt. Use the same approach Artur showed: say "learn this," but give it somebody else's
format and structure, dump it in here, and ChatGPT will learn it. Then throw your unstructured content into it, and it will create a structured post that looks like the other one and follows the same concepts, which will help you a lot if you're not a professional writer and haven't created a lot of content on LinkedIn.
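For readers who want to reproduce this two-step flow outside the ChatGPT interface, the pattern described above (first teach the format, then supply the new content) can be sketched as a chat message sequence. The helper function and all prompt wording below are illustrative assumptions, not the exact prompt used in the show:

```python
# Sketch of the two-step "style clone" prompt flow described above.
# All wording here is a guess at the shape of the prompt, not a verbatim copy.

def build_style_clone_messages(reference_post: str, new_content: str) -> list:
    """Build a chat message sequence: learn a format first, then apply it."""
    system = (
        "You are a top copywriter for LinkedIn. "
        "Task: copy the writing style of the input text. "
        "Strictly keep the style and formatting: structure, titles, "
        "paragraphs, emojis, and other style elements. "
        "Before proceeding, confirm whether the analysis was successful."
    )
    return [
        {"role": "system", "content": system},
        # Step 1: give the reference post so the model learns the format.
        {"role": "user",
         "content": "Analyze the format of this post:\n\n" + reference_post},
        # Step 2 (sent only after the model confirms): the content to reformat.
        {"role": "user",
         "content": "Now reformat this content in the same style:\n\n" + new_content},
    ]

messages = build_style_clone_messages(
    "🚀 My top 3 tools...", "Raw research notes from Perplexity...")
print(len(messages))  # 3: one system message plus the two user turns
```

In an interactive chat you would send the second user turn only after the model's confirmation, which is exactly the "split the task" advice from the episode.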

Artur:

Yeah, absolutely. This gets you 70 to 80 percent of the way. The only note I'll make is that you need to know what good looks like. You still need to learn how to write good copy, and you still need to learn the red flags, the typical AI words, because sometimes it does that. In this case I don't think it did (I tried it one more time before the show and it didn't), but sometimes it puts the "elevate," the "delve," and so on in there, and you have to have the recognition to remove that. So it's trust but verify, basically, but it will get you most of the way.

Isar:

By the way, I find that Claude does very little of that. So when I write content, I use Claude more than I use ChatGPT. I use ChatGPT a lot more to analyze data and do some of the other stuff you're talking about. So there are also other tools for some of the use cases.

Artur:

Oh, absolutely. I think I'm a creature of habit; Claude has only been available in the EU for about a month, so I still have to dive deep into it. All right. Since we're talking Perplexity, I think the next one fits very well, which is advanced web search. I'm going to show you how closely ChatGPT with the updated web search (they have been updating it progressively) performs compared to Perplexity. What I'm going to do now is open a new chat and go to 4o, because 4o is actually better in this case. Before we dig deep, the limitations you have today: you don't get as many sources as you do with other tools like Perplexity. There are paywalls it just can't handle, so it steps out. There's a timeout, so if it searches for too long, it will stop. And if the query is just too complex, it'll drop out and you get little to no sources. Those are some of the limitations you should be aware of. I'm going to drop in the whole Notion doc; all my notes are in there, so you can read through them after the show. But the point is, for some use cases it still works. If you just need, say, 5 to 10 sources, you want to look for trends or search for specific information, and you're okay with a few sources, that is perfectly fine. And the links have become, I would say, predominantly correct, because that was also an issue before: the links were broken. But let's see what we get this time around. I'm just going to copy one of the example prompts. We're going to set the role of a research analyst, and we're going to look for five trends in marketing this year. So I'm saying: you will search the web and find 10 unique sources to support them; we need a citation for each source; use this format. I'm very specific on that: 20 to 30 words for each trend description, and a bulleted list with at least three unique citations from web sources, with links. Okay, let's try that. And what's it going to do now?

Isar:

As we're waiting for this to run, I will say two important things about the way the prompt is structured. One, asking for citations is huge, because it forces it to ground itself in actual facts rather than making stuff up. And two, the fact that you define a format and ask for citations makes it a lot easier to check whether what it gives you is real, because it will structure the output so you can click each link and see if the information actually makes sense.

Artur:

Exactly. And it is done; that was fast. As you can see, we have quite a few links. I trust HubSpot; they have amazing content, and you can even see some previews by hovering. Let's just go there: "Top marketing trends to watch 2024." Yeah, that looks good enough. But again, I would go through a few of these and see for myself, especially if you want to publish this somewhere. Generally, as I said, it's gotten better. This kind of structure makes it easier, and you can reformat it into a table if that's what you're into. Basically, for certain use cases, web search in ChatGPT is more than capable, to be honest, because in Perplexity, sometimes I just feel overwhelmed with the 30-plus sources I get. I'm like, okay, which one should I check first? And that's another thing: Perplexity hallucinates too. That happens, right?

Isar:

Yeah, they all do. By the way, Perplexity, depending on what you pick, sometimes runs ChatGPT in the background. So it's not like it's a different model; it's the same model, just rigged to do something very specific.

Artur:

Yeah. Yeah. Yeah, I think ChatGPT could do more if they just upgraded from Bing, because it's Bing that's being run in the background.

Isar:

True.

Artur:

All right.

Isar:

Yeah. So, to summarize this use case: you can very quickly create amazing summaries of literally any topic you want to research, with a short description and a link for each source, saving yourselves hours of research and aggregating the information in one place.

Artur:

Yeah, and by the way, we could use the previous prompt here as well: just say, okay, now copy this style and apply it to the web search results we got.

Isar:

So chain the two together to get an actual post from it. I agree, a hundred percent.

Artur:

All right, so let's do the librarian. I'm going to switch to GPT-4 again, because I've tested this one, and sometimes 4o is too fast but not too good, for some reason. Generally, I found that if your prompt complexity is higher, with a lot of instructions, GPT-4 is safer and you get better results. Okay. So what I'm doing here is, again, setting it to a research analyst. You could try a librarian; I don't think the role matters as much as the instructions do in this case. The job is to answer questions using the uploaded files as your knowledge base. And actually, let me just do that: I'm going to upload the files, and we'll read through the prompt. So I'm uploading some LinkedIn wisdom files that I have: the algorithm report for last year, a post from Yasmin Alec on unskippable LinkedIn posts, and one about increasing inbound leads from Ipokima. And we're going to ask our files for advice on topics we want answers on. What I'm noting here: your answer should be detailed and clear; you always go through all the sources to provide the best possible answer to the question; and if the files do not contain the answer, simply answer "the files do not contain the answer." I'm also defining a very strict output format again: citations from the documents, with locations. Then I'm asking, "Are you ready to begin?" just so it doesn't start running off with the files already, but pauses and lets me query it. All this fancy markdown is up to you; it's just for readability. The main points are the citations, strictly saying answer only based on the documents, and the locations, because the interface can't render the pages for us, but it can at least tell us what page to look at if we want to dive deeper.
A side note: I like to pair this with a custom GPT, so I can have a librarian for a given topic, attach a lot of documents, and then query them, keeping that private, of course.
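The three rules discussed here (answer only from the files, cite with locations, define an explicit refusal phrase) can be captured in a reusable prompt template. The wording below is a hedged sketch of such a prompt, not Artur's verbatim text:

```python
# Illustrative "personal librarian" prompt builder, following the three rules
# discussed in the episode. The exact phrasing is an assumption.

REFUSAL = "The files do not contain the answer."

def build_librarian_prompt(refusal: str = REFUSAL) -> str:
    return (
        "You are a research analyst. Answer questions using ONLY the uploaded "
        "files as your knowledge base.\n"
        "- Go through all the sources before answering.\n"
        "- Output format: the answer, then citations with document name, "
        "page number, and paragraph title.\n"
        "- If the files do not contain the answer, reply exactly: " + refusal + "\n"
        "Are you ready to begin?"
    )

prompt = build_librarian_prompt()
print(REFUSAL in prompt)  # True: the refusal phrase is embedded verbatim
```

Defining the refusal phrase as a constant makes it trivial to detect programmatically whether a given answer was grounded in the documents or refused.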

Isar:

I'll add my two cents to what you said, because I do something almost identical, and I teach it in my courses. First of all, the three main things: tell it to only use information from the documents, ask for specific citations, and tell it what to do when it does not find the information. Otherwise it will still provide you an answer, because these models want to answer. So if the information doesn't exist, it's going to either make stuff up or find something from the internet, which is not what you're looking for. But if you tell it what to answer when it doesn't find the information, it will tell you the information is not there. That could be any phrase you choose: "information doesn't exist," "not in the document," "insufficient information," whatever you ask it to say. The one thing I would add, when I use multiple documents, is to ask for citations in the form of document name, page, title of the paragraph, and then the citation, because that lets me check things much faster. If it's 73 documents, each one 70 pages, I don't want to go hunting for where the information is. If I ask for a very specific format, again very similar to what you're doing, I know exactly where to go and check the information it's giving me.

Artur:

Yeah, of course. Based on the complexity of your documents, you can break this down further. Fully agreed; good points. Okay, so let's ask it something: what are the top five trends on LinkedIn in 2023? I would assume it will most likely go to the algorithm report. This looks good, and it's giving me citations and the page number. Okay. Next: I want to start posting on LinkedIn. What should I do?

Isar:

Okay.

Artur:

That looks like solid advice, so some insights from the algorithm report again. I would expect it to also use the other documents. That's interesting. Next: how do I format my posts? As you can see, it's not perfect. Okay: clear, concise language, that looks good. Timing, again referring to the algorithm report. Interesting. So that's interesting, because it prioritizes the first document, so I can ask it about the other documents, the formatting in particular. Oh, that's interesting; I've never seen this before.

Isar:

what we're saying for those of you who are, so those of you are listening to this, the reason we're excited or like surprised is because It on its own kind of created a table with two sides and it's providing different answers or in theory because the other one is not showing up, from the different documents. So it's giving advice separate instead of condensing it into one answer. It's like looking at each of the documents separately.

Artur:

Yeah, that's interesting. I definitely prefer this one, because now it actually did refer to the carousel by Yasmin Alec; that makes sense. I ran this a few times before the show and it also referenced the other documents, but for some reason now it prioritizes document number one. Again, as you can see, it's not perfect. I would just note that it may make sense to add additional instructions to the prompt saying: go through all the documents, check every document, give me an explicit answer for each document; something that forces it to really go through all the documents and not just give you the answer, because that's what it's doing: giving you the fastest answer it can get.

Isar:

Yeah,

Artur:

And the fastest one is probably simply the first document that was uploaded, for some reason. Okay, what do we have next? Images. Yeah, I think we've had enough text; let's go to images. I'm going to switch to 4o. It doesn't really matter much here, because these are simpler prompts; GPT-4 and 4o both work really well. Upgrading images: what can you do in ChatGPT? In general: upscaling, sharpening, transformations, filters, mostly basic things. You can't do background removal or things like that just yet; it doesn't have the power underneath to generate that kind of code. But anything else works, anything related to filtering. I actually use this from time to time when I generate a DALL-E image and just want to sharpen it or make color corrections, something simple. The prompt, in that sense, is also very simple. I'm going to show two examples here, since they're fast. "Generate and execute code to convert the original image to a sketch drawing, display the results, then provide a download link for the processed image." Instead of a sketch drawing, you can ask for anything else, or you could just say "sharpen the original image" or "upscale the original image by a factor of two," et cetera. You can chain them: start with one image and say, okay, upscale first, then sharpen, then color correct, and that will all be applied sequentially. The key here is "generate and execute code," so that it doesn't just create the code for you or do something weird. It really creates code for that task and then executes it; since it's multimodal, it can do that. I'm going to use an outside input image. We could generate a DALL-E image, but I'm going to just use an image I already have; both work.

Isar:

Let's try this guy,

Artur:

Darth Vader, that's what I wanted. Okay, so what it's going to do now, behind the scenes, is generate the code to load the image and transform it, then show us the result and give us the download link. That's going to take maybe a minute, depending on how well the environment is feeling right now. Oh, very well. Okay, so that worked, very fast: a pencil sketch of the original image. We can also enlarge it, and I know what kind of transforms it's doing behind the scenes. That's something you can use: if you know how these transforms work, you can guide it better, because sometimes it has a different perception of what a sketch image is and how it's supposed to look.
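Artur's point that knowing the underlying transforms lets you guide the model better is worth illustrating. A common pencil-sketch recipe, and a plausible guess at what ChatGPT's generated code does here (the actual code it writes may differ), is invert, blur, then color-dodge. Here is a dependency-free sketch of that pipeline on a tiny grayscale grid:

```python
# Pencil-sketch transform: invert -> box blur -> color dodge.
# Pure Python on nested lists so no imaging library is needed; code generated
# by ChatGPT for a real image would typically use PIL or OpenCV instead.

def invert(img):
    return [[255 - p for p in row] for row in img]

def box_blur(img, r=1):
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[yy][xx]
                    for yy in range(max(0, y - r), min(h, y + r + 1))
                    for xx in range(max(0, x - r), min(w, x + r + 1))]
            out[y][x] = sum(vals) // len(vals)
    return out

def color_dodge(gray, blurred_inv):
    # Dodge brightens everywhere the blurred inverse is bright, so flat areas
    # become white "paper" and only edges stay dark, like pencil strokes.
    return [[255 if b >= 255 else min(255, (g * 255) // (255 - b))
             for g, b in zip(grow, brow)]
            for grow, brow in zip(gray, blurred_inv)]

def pencil_sketch(gray):
    return color_dodge(gray, box_blur(invert(gray)))

flat = [[128] * 4 for _ in range(4)]  # a featureless gray region
print(pencil_sketch(flat)[0][0])      # 255: flat areas dodge to pure white
```

Knowing this, you can steer the model with prompts like "use a larger blur radius for softer strokes," because the blur radius is what controls the stroke width in this recipe.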

Isar:

Yeah.

Artur:

Okay, so I'll get out of here. Yeah. Okay, we also got a download link for that. Just going to restore that. Here we go.

Isar:

And I think the cool thing you're doing here is not just creating images, which everybody more or less knows how to do, but manipulating existing images. Those original images can be either AI-generated or not. You're literally asking it to create a new version of an existing image for whatever need you have. Sometimes you need it in a different format, sometimes in a different resolution, sometimes in portrait instead of landscape because you're doing a story versus a PowerPoint; whatever the case may be, you can transform existing images into other outputs using this methodology.

Artur:

Absolutely. And maybe I'm just going to show one example since it's fast and just show you how you can also compare it side by side. I'm just going to put another image. I have here of a model that I was using for branding purposes and let's try the other prompts. Okay, we're going to sharpen the image and we're going to say compare it saying display side by side. Still getting the download. Okay. So it's basically doing the same thing. It's just gonna input two images side by side, but the rest is the same. And as you can see here, I'm sharpening. I'm not doing a sketch drawing. You could do sepia. You could do gray scale. You could do, I don't know, some kind of toning. Anything like that is pretty possible. And yeah, there we go. So we can't see it from this scale. Eh, it does look sharper. Yeah. It does look sharper. Indeed. It does look sharper. Alright. You could try all sorts of other transformations. They're in the notes. So basically use the same simple prompt and you can exchange the transformations, chain them, and we can switch to, I would say my favorite cases, which involve data analysis and the machine learning. So data analysis is number five. I'm just going to get out of here. And here I actually use 4. 0, but only because I want to use the new beta feature, which is the interactive charts that are native in chat GPT. Again, if you're doing advanced data analysis with a lot of. Instructions like machine learning case use for forget the interactives. Probably they're going to, make it available for all models at some point. And I'm just going to copy the prompt here. Okay. You're an expert data scientist. The task is to perform a detailed analysis of this data set. You will then propose. Different types of charts based on your analysis, and then you propose only charts that support your new interactive feature. Now, as I said, this is optional. At some point, this will be not needed. Then you create the charts, display them. 
We'll do this step by step. You'll let me review the results at every stage. Notice what I'm doing here: I'm breaking it down, because for each chart it's going to scroll through the data, analyze it, generate the code, execute the code, visualize, right? And one note I'll make here is that you can leave out the specifics, but then it tends to break. So it generates the wrong code, executes it, goes "oh, I did this wrong," and then redoes it. The problem with the environment is that when it does that too many times, it runs out of context, and for some reason it breaks. It just says "error in generating" or something like that. For text you usually get a "continue generating" button, something like that. But if in one message you get a lot of code generations, one after the other, and it breaks, and then it tries a new one and can't finish, then it bails out and you have to do it all over again.

Isar:

So a few points. I'll let you run the prompt as you're doing this, and I'll give a few additional hints and tips. For best results, don't upload XLS files, or whatever the Google Sheets extension is, but actually CSV; it just works a lot better. So that's exactly what I'm doing. I know; I'm telling the people who are listening or watching us. So CSV files work much better. They're much smaller in size, and they're also easier for a computer to read. The other thing, as far as the data you want to upload: as humans, we like to decorate Excel charts and tables to make them fancy, meaning we add one column of space between each month and two columns of space between every year, and then we color things, and then we add another table on the side that does something else because we want to reference one against the other while we're looking at this. It confuses the hell out of the machine. So if you want to use the data for ChatGPT analysis (and for those of you not watching, it's generating amazing charts that you can interact with, change the colors, change the type of chart, and do really cool things with very quickly in ChatGPT), you need the data to be as clean, simple, and sterile as possible, and then you'll get amazing results very quickly, together with the analysis that Artur will share with us in a minute. Two tips: one, CSV; two, clean the data before you upload it. Because if you don't, it does exactly what Artur said: it's going to try to clean the data on its own, it's going to waste a lot of time, and in many cases it's just going to fail.
So save yourself that step. For people who use regular data that they have in their company regularly, whether it's financial information, marketing information, ERP data, whatever data you have: export it as a CSV. Or in Excel, just create another tab that has the exact same information as your fancier table, but just the data, and upload that as a CSV to the model, while you can still run your fancy version of the data.
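The "clean tab" advice can also be done in code before uploading. A minimal pandas sketch of the kind of flattening that helps; the column names and spacer layout are made up for illustration:

```python
# Flatten a "fancy" spreadsheet into the clean CSV that works best as an upload:
# one header row, no decorative spacer columns, no blank rows.
import pandas as pd

# Stand-in for a sheet with a decorative empty column and a blank spacer row.
raw = pd.DataFrame({
    "customer": ["A", None, "B", "C"],
    "spacer": [None, None, None, None],   # decorative empty column
    "monthly_charges": [70.5, None, 89.1, 55.0],
})

clean = (
    raw.dropna(axis=1, how="all")   # drop fully empty spacer columns
       .dropna(axis=0, how="any")   # drop blank/decorative rows
       .reset_index(drop=True)
)
clean.to_csv("clean_data.csv", index=False)
```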

Artur:

Fully agree. And we can go through the analysis; it went through in one shot, and we can interrogate it later. So the new interactive feature: as you can see, we have an interactive table, so we can actually ask questions with respect to columns or rows or segments of the data. That's now possible. What else? Okay, so it recognized what's in there. I used a classic churn dataset for us. And then it said, okay, the number one insight is whether customers churned or not, and how many. So it did a pie chart of that for us, then churn rate by contract type. So we have one year, two years, month to month, and again we have no, yes. And yeah, you've seen this feature by now. It's still pretty limited. You can actually tell ChatGPT to use a library called plotly to create HTML graphs for you, and there you can embed stuff in your data points and whatnot, much more than this. I'm not going to show it today, but I'm just saying that's probably where they're headed. Today it's just colors, and that's not much. That may be helpful if you're more sensitive to some colors than others, but I can't even take out the legend, so to say, toggle contract types, for example, and focus on one. Or change the

Isar:

font or the size, you're very limited with what you can do.

Artur:

Exactly, but it's beta. I think what is helpful when you have a lot of data is that you can really go through here and see what the numbers are. That's helpful. Here it's small, but when you have more, like this one here, I think this is very good, line charts like this. So I explicitly asked for the three types of charts that work today with the beta, more or less, and here we have churn rate over tenure, comparing churn and no churn, right? And we can ask it questions about this. We can ask it for more charts. We can ask it to regenerate them in some other form. But what I'm going to do is go here and just ask it a question about the data. For example, I'm going to say, monthly charges. So how do monthly charges

Isar:

relate to churn or not churn, for example,

Artur:

right? And here I hadn't used any advanced techniques, nothing, like literally nothing. So it's really going in from a high level. I have a structure; it's structured, yes, it's a table, a CSV dataset, so that helps. What's in your dataset is very important, I think. Without structured data it's going to have an issue; essentially, it's going to try to structure it and most likely fail. So if you have tabular data, perfect, then it can really work wonders with that. And you can really talk to it like you would talk to a data analyst, essentially, right? Okay, so: generate a box plot of the distribution of monthly charges for customers who have churned versus those who have not. Here, a few observations: customers who have churned tend to have a higher median monthly charge compared to those who have not. Okay, there's a wider spread of monthly charges among customers who have churned. So as you can see, we could go on, right? We can interrogate all the other characteristics that we have in our dataset. We can do other plots and basically have an intern data analyst at our side, right?
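The box-plot step being described corresponds to a few lines of pandas and matplotlib. A sketch on synthetic churn data; the values are invented for illustration:

```python
# Box plot of monthly charges, churned vs not, on a tiny synthetic dataset.
import pandas as pd
import matplotlib
matplotlib.use("Agg")  # headless backend, no display needed
import matplotlib.pyplot as plt

df = pd.DataFrame({
    "churn": ["Yes", "Yes", "Yes", "No", "No", "No"],
    "monthly_charges": [95.0, 88.5, 102.0, 45.0, 60.5, 52.0],
})

# Median monthly charge per group: the "insight" ChatGPT reads off the plot.
medians = df.groupby("churn")["monthly_charges"].median()

df.boxplot(column="monthly_charges", by="churn")
plt.savefig("charges_by_churn.png")
```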

Isar:

Yeah, I'll say two things about this, because I'm with you. I think this is one of the most incredible capabilities that they've added. I used to run a pretty large travel company, like a hundred-million-dollar travel company. And I had a data scientist team, and we had huge databases and did a lot of cool stuff. But every time I needed something, I needed to write a really long, detailed email to explain exactly what I wanted from the data team. And then they would spend either a few hours or sometimes a couple of days figuring out how to write the code to make it happen, and then create the dashboards for me so I could see what I wanted. And literally now, with this capability, I can get the answer faster than it would have taken me to write the email, forget about waiting for them to actually write the code and do the thing. And the other thing is the iterations, which Artur mentioned. You can keep on going, because every time, what would happen when I went to the data team is they would give me what I asked for, which sometimes would be exactly what I envisioned and sometimes wouldn't. So either I had to change it, or it was exactly what I wanted but then I had a follow-up question that I didn't think about because I hadn't seen the data yet. And now you have to write another email and wait another day. Here, it all happens in seconds or minutes, and you can just iterate and keep diving deeper. And you can do this for data sizes that you cannot even handle in Excel. I played with this with data files of 250,000 rows, which you just cannot work with in Excel; it just won't run. And you can do it here and get results. It takes a little longer, but okay, you're going to wait a minute and you're going to get an answer. It's absolutely magical.

Artur:

Yeah. Many use cases. So customer churn, yes, but you can think leads, you can think, I don't know your marketing campaign data, whatever,

Isar:

financial information, hiring flow, like literally any numerical data that you have, you can run through this, going as many years back as you want. Pretty

Artur:

much. Yeah. Until it runs out of RAM, I think. By

Isar:

the way, the other interesting thing, going back to this before we switch gears to the next one: you can upload two separate sets of data, and if they have some connecting information, it will know how to connect them together. A simple example: if we look at churn, okay, we upload this churn data, but let's say you also have data from your financial platform saying who paid late. If you have the same customer names, then it will know how to connect the two together, and now you can do more analysis. And let's say you also want to upload external information about how the economy is doing. Now it's going to use dates as the connecting information. It knows how to do all of this on its own, and now you can do more and more detailed analysis, combining different data sources, without having to know how to do the actual technical work behind the scenes.
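Joining two uploads on a shared key is essentially a one-line pandas merge under the hood. A sketch with made-up customer data; the column names are illustrative:

```python
# Connect churn data to payment data on a shared customer column.
import pandas as pd

churn = pd.DataFrame({
    "customer": ["A", "B", "C"],
    "churned":  ["Yes", "No", "Yes"],
})
payments = pd.DataFrame({
    "customer":  ["A", "B", "C"],
    "paid_late": [True, False, True],
})

# Inner join on the shared key; now late payment can be analyzed against churn.
combined = churn.merge(payments, on="customer", how="inner")
late_and_churned = combined[(combined["paid_late"]) & (combined["churned"] == "Yes")]
```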

Artur:

Absolutely. Having the domain knowledge helps, because you can guide it better, but even if you don't have it, this is how you get going. I explicitly showed this one first. Number seven will have more domain knowledge, let's say, related to data science, but this one specifically starts very high level, to get you going.

Isar:

awesome.

Artur:

And now we're going to switch to presentations. So I'm just going to start a new chat, in GPT-4 again, since it's more complex. And we're going to look at GDP per capita in the EU, right? So

Isar:

I'm going

Artur:

to let this run, and then we're going to read through the prompt. I think I'm going to make this shorter so that it doesn't overload the message cap. I'm just being mindful of the last use case that we have; that's a thought that just came into my head. So we have five slides here; I'm going to put it to maybe four. Essentially.

Isar:

Yeah.

Artur:

Maybe even three. Okay, now, let's keep it at four. So we have an introduction page, a trend description. Ah, but five is too many things to change. Let's keep it five; if anything, we'll switch to four. That's always a good fallback. So, no input here. It's going to do everything natively. So what are we doing? We have something that is skilled in presentation creation. I'm not giving it a very specific role, like a comms expert or something; as I said, it doesn't really matter. The task matters, and the instructions matter most, I think, at this point in the models' evolution. Anyway. Okay: striking PowerPoint presentations, that's what we want. We want a short presentation about GDP per capita in the EU, aimed at a non-technical audience, and a step-by-step guideline. Retrieve the data by searching the web. Write clear and concise content for X number of slides. Then there's a description of the layout, okay, and the formatting. Then we want five supporting images. Let me review the images and the text before proceeding; that, again, splits the flow into segments so that we can inject our corrections if we want to, and so that we limit the model's attention to a subset of the task instead of generating everything in one shot. If you do this in 4o, despite me entering "let me review this" everywhere, it still just goes. Okay, execute code to convert to PNG, because WebP is the native image format and that is not supported for exporting as PPTs. The last step is just going to combine all of that; it's going to write code behind the scenes to combine the text and the images into an exportable PPT file for us. And it's already started. So as you can see, it searched three sites, got some data on GDP per capita, already created a layout for us, and yeah, started generating images. It selected the images for us; this time it didn't let me review them.
How nice. Okay. So yeah, as you can see, some of these images could use some work for sure, especially with the text and everything. So I would basically edit this out, either here natively, or I would just let it regenerate. But for the sake of time and messages, I'm not going to do that. I'm just saying that this is possible if you're not satisfied with the images. I chose a minimalistic style explicitly so that we don't have excessive complexity in them; still, it inserted some data, some names, since we're talking about countries and GDP, right?

Isar:

Yeah.

Artur:

But you do see recognizable landmarks here. Okay. So then again, it's repeating the layout and asking me to review it. I'm going to say, okay, just proceed. As I said, we could edit it; we're not going to do it today.

Isar:

I'll add two things about editing. Because we're doing multiple steps, you can edit specific steps. There's a little pencil icon next to your prompts, and you can go back, if you guys don't know that, and edit specific things. But in this particular case, there's a really cool new feature (new as in they've had it for about a month). It basically gave Artur five slides: what's going to be the header, the topics, and the text in each slide. Let's say all of them are good other than one. You can literally highlight a section of the answer that ChatGPT wrote, and a little quotation-mark icon shows up, and then you can ask it to change just that. So it gave you five slides; you just want to change slide number three. You highlight slide number three, and everything else comes back exactly the same, word for word. It's not really regenerating it; it's only regenerating the section that you highlighted and changing that based on what you tell it to change. So that's a very useful trick that's now available both here and in Gemini, for the last month and a half or so.

Artur:

Oh, you see, it made an error. That's what I was talking about. So that happens. It generated faulty code and it's retrying. But anyway, I would say here, if you really want to get the best out of it, break this down further and add details, to the layout specifically and to the images that you want. So really do it iteratively. Maybe kick-start with the general task, let's say, or maybe step one, but then research, get the content in, and then iterate, one prompt after the other. Okay, let's work on the text and the layout, polish that, then go to DALL-E, make detailed prompts for each slide. Then say, okay, I'm happy with this, now package it. So just use elements from this prompt to package the final result, because then you will have better raw components to work with, so to say. And that's the whole point with this: it's not going to give you a polished presentation. Even if I'm very explicit with the details of the layout, et cetera, it's still not going to give me a polished presentation. I think the value here is two things. One: you get raw components in a PPT format, and you can work with them there. And we have it now; let's see what we got. I'm just going to download it here and open it. So that's one benefit. The other thing: I had one comment on this post saying, yeah, what's the point? Presentations are not about this; they're more about communicating with the audience. And I agree, but in some cases, in companies in particular (and that's where I learned the fact that you can run code and generate PPTs), it's used for documentation. PPT is a documentation medium in the company for certain things. And then it's not about being polished, or a one message, one slide kind of thing. No, it's about documenting in a PPT format, because that's the recognized format in their organization. And then you don't want to spend time doing all this manual labor of dragging things around in PPT, resizing, whatever. You just want a standardized way of your results being deployed to a PPT. And that's where something like this can come in handy.

Isar:

Yeah. And by the way, connecting this back to your very first example, you can upload your existing presentations and say, these are the formats we're looking for. It's not going to follow the color guides and stuff like that, but it is going to follow the layout. So if it's always side by side, 50 percent image, 50 percent text, a certain size of header, all of that, it will know how to learn and recreate that. So when you open it in PowerPoint in your company, all you will have to do is apply your company template, and it's not going to make everything look really weird, because it's going to more or less follow that same formatting to begin with. So you can do that as well, as far as saving yourself steps and manual work afterwards.

Artur:

We also have dedicated tools for this, like Gamma or Tome, for example. So you have options, I would say.

Isar:

Yeah. Okay. So now

Artur:

That's important, yeah. Okay. This is not the best result I've gotten, but I'm going to show it anyway, so it's as unscripted as it gets. Yeah, there we go. I think that's visible. For some reason we got an empty slide; never mind that. Okay, GDP per capita, and this is what I'm talking about: it just dumped the image over the text, right? The text is over here. I have gotten better results where the text would actually be on top of the image or to the side of the image. And that's what I'm saying: I'm not being specific enough with respect to my layout in this prompt. That's what I recommend as a direct upgrade. But in general, it took all the images that it had per slide, took the text, made some conclusions, observed the trends over here, and we have our raw materials to work with. And then you can just apply templates here and continue. I'm going to shut this down. And then, last but not least, I hope this runs, or at least runs without breaking: machine learning, right? So this is building on case number five, where we just did simple data analysis, and I'm going to do something more advanced. I'm going to use a bank loan dataset, which is related to leads. Actually, we want to identify the customers for remarketing in this case, based on the features. And here's our setup: I have two datasets, which have the same type of columns, with one exception in the test dataset, where we don't have a loan approved column. That's what we're trying to figure out. So in the train dataset, we have an approved loan column, which we're going to use to train the machine learning model that we're going to apply to the test data, which is basically new data that's coming in. Think of it this way. And I have a template here, but the example is already pre-filled. So the template, let's say, is more free to use. You can really do a lot of things with this. But the important part is two things.
So one is, again, a step-by-step instruction, and it's much more detailed now. I'm just going to launch it so that we can read it in the meantime while it's running. So: expert data scientist, your goal is to use the dataset described in context. The context will contain the datasets and their descriptions. You will use the datasets to train a machine learning model in five steps. You will avoid displaying intermediate information while performing each processing step. Display the summary of each step. Stop before proceeding to the next step and let me review your progress. You will ensure that all variables are passed from one step to the other and nothing is forgotten. Here are the steps to train the machine learning model. And then I break it down into steps. You can do an even more granular breakdown; each of these steps contains a couple of sub-steps. So: cleaning the datasets, then we check for outliers, NaNs, missing data, whatever; feature engineering. So this can be customized to your dataset. In this case, I'm really using some of the knowledge I have about the data, about what is in there, and I think that's important for successful results. I highly recommend that; most of the time you do know your data, you do know your variables, so include that. Much better performance this way. After that, we perform preprocessing: we split into input and output, we split into test and train. I make some references to libraries, since it doesn't have all the libraries in its environment. I make some, let's say, nudges here, because sometimes it just leaves out all the things that it did in the previous step for some reason. So I've got to remind it: okay, avoid the NaNs, et cetera. Even so, it might break. Let's see. Then you train the model. So everything's prepped, right? We've cleaned the data, we've done some feature engineering, we've done the preprocessing. Everything's ready; we just have to fit the model, and that's what it's doing here.
And then some parameters: confusion matrix, ROC curve. The ROC, or receiver operating characteristic curve, is one way of measuring performance for machine learning models. We're going to look at that to see if our model is doing something weird or looking good. Then, once I'm satisfied, we can apply it to the testing set, make sure that preprocessing matches the one performed for the training set, and then display certain visualizations. You can pick and choose whatever you want, and then suggest thresholds for hot leads based on conversion probabilities. All that to get to a point where I have a model that makes a prediction for me on which customers are hot leads, gives a recommendation based on that, and actually gives reasoning for it. Okay, the context basically contains all the columns that I have in this dataset and the CSV and their descriptions. This will help. You can leave that out and let ChatGPT figure it out, but this will just guide it better and reduce the probability of it breaking down in the middle, right? Two datasets, train and test. And you'll notice that I'm using curly brackets all over; this is just to focus the model on certain aspects of the prompt, because this is a big prompt by now, right? So we want it to really focus on certain key elements of that prompt. That's why I'm using these curly brackets. And yeah, it's done the data cleaning, did a summary for me, and is asking me to proceed. I'm just going to say proceed, because I've seen this before. So while
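The five-step flow in Artur's prompt maps onto standard scikit-learn code; the prompt essentially drives ChatGPT to write something like the condensed sketch below, step by step. The synthetic loan data and the feature names are illustrative assumptions:

```python
# Condensed train-and-evaluate flow: synthesize data, split, fit a logistic
# regression, then compute the confusion matrix and ROC AUC on validation data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 400
income = rng.normal(50_000, 15_000, n)
credit_score = rng.normal(650, 80, n)
# Approval loosely driven by both features (illustrative rule, not real policy).
approved = ((income > 45_000) & (credit_score > 600)).astype(int)

X = np.column_stack([income, credit_score])
X_train, X_val, y_train, y_val = train_test_split(
    X, approved, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
probs = model.predict_proba(X_val)[:, 1]  # conversion probabilities per lead

cm = confusion_matrix(y_val, model.predict(X_val))
auc = roc_auc_score(y_val, probs)
```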

Isar:

we're waiting for it to do its thing, I will again summarize this very quickly. What this allows us to do is take pre-existing data that we have and use it to train a model, without really knowing anything about how to train models. And again, as Artur said before, if you know what training models involves, it helps, because you know what to ask for and what to beware of. But even if you take the prompt as-is from Artur's document that he's going to share with us, you'll be able to do most of this just by adjusting it to your columns and so on. But let's take it back to the earlier example. You can take the data of all the customers you've had who churned, together with all the information you have about them: when they paid, what they paid, what they've used, what licenses they have, how many people, like all the data that you have. Put it in a table, ask ChatGPT to train on that data, and then load the data of all your existing clients, and it can tell you which ones are likely to churn and why, based on the historical data. This is obviously a goldmine for predicting anything, whether it's financial information, marketing information, spend, inventory, churn, literally any information you can imagine that you have historical data for and that you want to project from your existing data.

Artur:

Yeah, absolutely. And one more thing: I used to spend hours on this, like literally, because I've done machine learning, I've done deep learning, I've done generative AI. Machine learning, although it's less complex since it's supervised, you still have to put a lot of effort into feature engineering and preprocessing, and this is just faster. So again, this might not deliver the, say, end result that I'm satisfied with, but it gives me a head start. I know I already have clean data. It tells me what it's doing, what is wrong with the data, et cetera. Usually you have to investigate that and write code for that; that's how the process worked, and that would be hours. Now it's minutes, literally minutes. And what it has done so far: it went through the cleaning, it did feature engineering, that was successful, it did preprocessing as we instructed. So it split it into a training subset and a validation subset, did class rebalancing, or rather it's trying to do class rebalancing, and I hope it won't break. And this is what I'm saying: this already gets to a level of complexity where it is retrying despite the specific instructions, because it used a library that I actually explicitly said not to use. I said use something else, but it's still using imblearn for some reason. But now it's done it, so it stumbled once, twice, but we have preprocessing. And now we can actually proceed to training the model. And I'm using a logistic regression here. Any classic machine learning model can be trained in this environment; some neural networks, basically lightweight ones; some small unsupervised approaches like clustering also work. But deep learning you can forget, because the RAM is limited and you don't have GPUs. Even if you don't have GPUs you can do it on CPU, but the RAM is just limited; those models are huge, and image data is also big, so that will not run. But that's just an environment limitation. It's not a generative AI limitation per se.

Isar:

No, I think this is amazing, right? I've done very little at this level, and I think it's absolutely amazing. This is next-level stuff, where you have the data and you can now do things that before required an actual data scientist on your staff, or hiring a third-party company to do this for you. And as you're saying, this may not be good enough for everything you need to do, but it's good enough for some of it, and it may give you great ideas on how and where to start to find useful information.

Artur:

Yeah. And you can always tell it, explain this to me, I don't know anything about data science, or whatever, so you can break it down further, abstract it away. And I think that's where the value lies. One thing though, again, back to hallucination: it can hallucinate code sometimes. It helps to at least check that it's actually working with the data that you have and not inventing something. With huge datasets I haven't seen this, but when I was working with lists sometimes, like when I was building lists from web search, it literally generated fake entries into the list rather than adding the real links. So when I looked, it happened

Isar:

to me as well, so I can verify that it does that. I was trying to use it to, quote unquote, scrape data that I needed for a test case, so as not to use actual client data for something I teach in my courses. And I was trying to collect data from a website into a table, and it created a perfect table, and it seemed perfectly fine. And literally a hundred percent of the data was made up, even though it all existed on the website I gave it. So yes, it definitely does this sometimes.

Artur:

Okay, now it says it's taking too long and our dimensionality is too big. So, apparently... I think we're done here.

Isar:

Listen, I think people get the point, right? You can do basic machine learning and then data analysis using datasets that you have, and whether this one works this time or not is not a big deal. I need to show

Artur:

you how this looks, because, yeah, the

Isar:

outcome looks like, yeah, that would be awesome.

Artur:

Yes, exactly. So, sketch, churn analysis, what do we have here? Yeah, machine learning. There we go. Okay. So that's where we're at, I think, right here. So: training dataset. If it succeeds there, it will give you the confusion matrices, its interpretation of the parameters, and the visualization of those parameters, essentially the figures that we asked for, again interpreting those as well. Then it'll proceed to the testing set and predict the probability of conversion, in this case of an approved loan. And then it says to me, okay, a threshold of 0.5 is good, because then we don't get too many false positives. And then I say, okay, optimize, adjust the threshold, and it says, okay, these are the options we have. And then I say, use cost-benefit analysis to optimize it, and it does that. It makes some assumptions on the cost of a false positive, a false negative, and the gain we can get from a true positive, does some optimization, and with that, I also asked it to visualize it; the visuals are, for some reason, not rendering. But the point is that it really went deep, and you can actually give it something rather than use its assumptions: say, okay, the true negative or the false positive would cost this much, but the true positive is that much gain for us, these are your boundary conditions, and then it would do that. So that's how it can look. As I said, this prompt is pretty complex, and sometimes, somewhere in the middle, it just takes a detour, let's say.
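The cost-benefit threshold optimization described here can be sketched directly. All costs, gains, and probabilities below are invented assumptions, just as in the demo:

```python
# Given per-outcome costs and gains, pick the probability cutoff for "hot lead"
# that maximizes expected value on a set of scored leads.
import numpy as np

# Predicted conversion probabilities and true outcomes (made up).
probs  = np.array([0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2, 0.1])
actual = np.array([1,   1,   1,   0,   1,   0,   0,   0  ])

GAIN_TP = 100   # revenue from a correctly targeted hot lead
COST_FP = -20   # wasted outreach on a cold lead
COST_FN = -50   # missed hot lead

def expected_value(threshold):
    predicted = probs >= threshold
    tp = np.sum(predicted & (actual == 1))
    fp = np.sum(predicted & (actual == 0))
    fn = np.sum(~predicted & (actual == 1))
    return tp * GAIN_TP + fp * COST_FP + fn * COST_FN

thresholds = [i / 10 for i in range(1, 10)]       # 0.1 .. 0.9
best = max(thresholds, key=expected_value)        # highest expected value wins
```

With these numbers the optimum lands at 0.4 rather than the default 0.5, which is exactly the kind of adjustment the cost-benefit step surfaces.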

Isar:

This is awesome. Artur, this was amazing. I really didn't think we'd be able to do all of this in an hour, but you prevailed. We covered a lot of stuff in a really short amount of time, but in a lot of depth and a lot of detail. Artur is kind enough to share a file with all the stuff that he just showed, so that will be in the show notes once the show goes live, on the Leveraging AI podcast as well as on the Multiplai AI YouTube channel. On both of those, we will have a link to the document. Artur, this was absolutely amazing. Literally pure gold, across multiple aspects of the business. A lot to learn from, very practical, useful stuff for literally every part of the business. If people want to follow you, learn from you, work with you, what are the best ways to connect with you?

Artur:

LinkedIn is the best. Just reach out. I'm there almost every day.

Isar:

Awesome. Thank you so much.

Artur:

Thank you. Thank you for having me.