
Leveraging AI
Dive into the world of artificial intelligence with 'Leveraging AI,' a podcast tailored for forward-thinking business professionals. Each episode brings insightful discussions on how AI can ethically transform business practices, offering practical solutions to day-to-day business challenges.
Join our host Isar Meitis (4-time CEO), and expert guests as they turn AI's complexities into actionable insights, and explore its ethical implications in the business world. Whether you are an AI novice or a seasoned professional, 'Leveraging AI' equips you with the knowledge and tools to harness AI's power responsibly and effectively. Tune in weekly for inspiring conversations and real-world applications. Subscribe now and unlock the potential of AI in your business.
190 | Behind the Curtain: AI Image Generation That Actually Works for Business with Ross Symons
👉 Fill out the listener survey - https://services.multiplai.ai/lai-survey
Most business leaders know AI can generate stunning visuals.
But turning that potential into real, consistent, on-brand creative?
That’s where most teams get stuck.
In this live session, we’re pulling back the curtain. Ross Symons will walk us through the exact workflows he teaches marketing teams worldwide. You’ll learn how to pick the right image-generation tools for the job, how to maintain visual consistency, and even how to integrate your product photos into ad-ready AI-generated assets.
Ross has spent the past decade blending creativity with code — from viral origami animations to helping global brands reimagine content with AI. He’s not just using the tools — he’s pushing them to their limits and building systems others now rely on. And now, he’s here to show you how it’s actually done.
If you're a business leader, marketer, or content creator looking for real, tactical value from AI — you’ll want to be in the room for this.
About Leveraging AI
- The Ultimate AI Course for Business People: https://multiplai.ai/ai-course/
- YouTube Full Episodes: https://www.youtube.com/@Multiplai_AI/
- Connect with Isar Meitis: https://www.linkedin.com/in/isarmeitis/
- Join our Live Sessions, AI Hangouts and newsletter: https://services.multiplai.ai/events
If you’ve enjoyed or benefited from some of the insights of this episode, leave us a five-star review on your favorite podcast platform, and let us know what you learned, found helpful, or liked most about this show!
Hello, and welcome to another live episode of the Leveraging AI Podcast, the podcast that shares practical, ethical ways to improve efficiency, grow your business, and advance your career. This is Isar Meitis, your host, and as you probably know, everybody loves creating visual content with AI. First of all, it's really fun. You can create really cool stuff that you just couldn't create before. But more importantly, if you know what you're doing, you can also make it a very effective business tool to promote whatever it is you want to promote. But the key in that sentence is if you know what you're doing. And to be fair, most people do not exactly know what they're doing. We're kind of winging it when it comes to creating graphics or videos with AI, myself included. And there are some very serious issues when you try to create images with AI for business purposes. One of the biggest ones is consistency. And consistency is key when you're doing business-related image generation for several different reasons. One, it's your brand, right? There are brand guidelines you're trying to stay aligned with. And the other is, if you're selling a product, well, guess what? The product needs to look like the product, your logo needs to look like your logo, the people who represent it need to look consistent, and so on. And the same is also true when you're doing things just for fun. If you're just trying to do storytelling, whether with your kids or for any other purpose, then the characters need to stay consistent. And so knowing how to do that is an interesting mix between science and art, right? On one hand, you need to understand the science of how these AI tools work so you can make things look consistent. And on the other hand, you need the other side of your brain to be creative and come up with ideas that will be interesting and will capture people's attention.
And so, as I mentioned, it's this balance between right brain and left brain in your use of AI, and this is why I'm really, really excited about our guest today, Ross Symons. Ross started his career as a developer writing code and shifted to the creative side, where he's been really successful. And so he really is this amazing mix of left-brain, right-brain success, which makes him the perfect person to, first of all, explore this on his own, but also share with us specific ways to do it effectively. Ross became obsessed with creating images with AI before ChatGPT even came out, so way before most of us. He has poured himself into this, and he is delivering amazing training to people and creative teams in enterprises on how to leverage AI to generate business-related images and videos. And today he's gonna walk us through the process and the key points that we all have to know in order to create effective AI visual content that maintains style, brand, and character consistency, which all of us desperately need, because it's the future. And it allows us to create amazing content much faster and much cheaper than we could have done so far. Since this capability is something everybody needs, I'm really excited to welcome Ross to the show. Ross, welcome to Leveraging AI.
Ross Symons: Thank you, Isar, thank you so much for having me. Brilliant introduction. Thank you so much. I'm quite humbled by that.
Isar Meitis: Interesting. I'm good at introductions. All I need is smart people to do everything else so well.
Ross Symons: Well, you've got me. Whether I'm smart or not, we'll decide, you know, through the course of the day, but thank you so much for having me. Yes, as Isar mentioned, I come from a technical, creative background. I did many things across my journey, and I run a company called Zen Robot. We focus on AI training for creatives and marketing teams, and we also do creative production, which is, you know, a field I was in many years ago. And just being part of this whole journey and the whole AI space and sharing ideas with creatives: I'm very active on LinkedIn, and I feel like, you know, when I create a piece of content, it's work for me, but it doesn't feel like work. It really is just something that I have such a passion for. There are so many options, so many different ways of using this technology. I think it stems from me having a computer in front of me since I was, you know, six years old, and just continuously working with it. At the time, you know, you get this thing and you think it's just for games, but if you sit behind it for long enough, you realize it's pretty powerful. So yeah, it's an honor to be here. Thanks for having me.
Isar Meitis: No, I'm really excited. Like I said, you know, I create images with AI tools, the usual suspects: Midjourney, Flux, Ideogram, ChatGPT, Gemini, depending on what I'm trying to create, and we can talk about all of that afterwards, I'm sure we will. I get what I want in many cases, but I struggle in others. And part of it is because I don't have a very well-defined system and process. I experiment more or less from the beginning every time, depending on what I'm trying to do. And I know you're the other way around, or not the other way around, you're just a few steps ahead of most of us in this exploration process. And so I think learning how to approach this in a way that works consistently will be fantastic.
Ross Symons: Amazing. Cool. So let me share my screen here. Look, to explain what you've just asked in 45 minutes is near impossible, but I'm gonna do my best to build a foundation so that, you know, hopefully anybody listening to this will be able to walk away and go, ah, okay, that guy said the one thing and this is gonna help me going forward. So we'll get into the brand stuff towards the end, but I wanna take you through three tools that I use regularly. One is Midjourney. The other is ImageFX, which is Google's image generation tool. And the third is the image generation portion of ChatGPT, and, you know, I don't know if it's called DALL·E 3 or something else now, but regardless of what it's called, that's the thing. So firstly, a couple of challenges that I find. Obviously consistency, like you mentioned: consistency in imagery and consistency in style is always important when you're part of a brand team. So I'm gonna show you how I would build on a prompt, starting with a very basic prompt and slowly adding parameters and more context. Because with all of these AI tools, it's all about context. It's about how much relevant information you can put in. There's an old saying in development, which is garbage in, garbage out, and it's exactly the same with all of these tools. And many people get frustrated. You know, I've asked so many people, have you used Midjourney? They're like, yeah, I tried, but I just got bad results. I'm like, well, the problem's not Midjourney, unfortunately. It's actually you. And I'm not passing any judgment; it's just that there's a new way to engage with these tools, and it's a new understanding that we have to develop in working with them.
And a lot of the workshops that we do are about that. It's not about which tools you use. That's one of the other things, and this is why I'm gonna go through these three tools. I don't see these as the most powerful; they are just tools that you can use. But hopefully I'll give you an understanding of what you are able to do and which tools are better for what, because that's another thing we get asked all the time: what's the best tool to use for images? It's like, I don't know, what's the best car in the world? What's the best clothing brand? There's no right answer. And we are at the stage now where the maturity of these AI tools has reached a point where there are best tools for specific jobs and better tools for other jobs, which is great. Right at the beginning it was so frustrating, using tools where you'd try to create something and it was just like, come on, surely it can be better than this, because it's a computer and surely the computer should be doing better. We're in the future and it's not working. So right now we are at a place where, from a visual perspective, maybe not so much with video, but definitely with imagery, you can create pretty much anything.
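The layering Ross describes, starting with a bare subject and progressively adding framing, lighting, lens, and physical detail, can be sketched as a small helper. This is a hypothetical illustration in Python; the parameter names and example layers are my own, not any tool's actual API or Ross's exact workflow:

```python
# Sketch of layered prompt building: start with a bare subject and
# progressively add framing, lighting, lens, and physical detail.
# Layer names are illustrative assumptions, not any tool's API.

def build_prompt(subject, framing=None, lighting=None, lens=None, details=()):
    """Compose an image prompt from a base subject plus optional layers."""
    head = f"{framing} {subject}" if framing else subject
    parts = [head]
    if lighting:
        parts.append(lighting)        # e.g. "natural soft light"
    if lens:
        parts.append(lens)            # e.g. "85mm lens, shallow depth of field"
    parts.extend(details)             # physical attributes, not "fluff"
    return ", ".join(parts)

# Each version adds one more layer of context:
v1 = build_prompt("a woman")
v2 = build_prompt("a woman", framing="close-up portrait of")
v3 = build_prompt("a woman", framing="close-up portrait of",
                  lighting="natural soft light",
                  lens="85mm lens, shallow depth of field",
                  details=["highly detailed skin texture", "light freckles"])
```

The point is not the code itself but the discipline it encodes: each extra layer narrows what the model is free to improvise.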
Isar Meitis: I'll add a couple of things. First of all, I want to thank everybody who's joining us live, on Zoom and on LinkedIn. Feel free to ask any questions; write them in the chat, either on LinkedIn or on Zoom. And for those of you who are not watching us live and cannot see the screen, I wanna say two things. One, we'll explain everything that we're seeing on the screen; we're gonna read the prompts and explain in words what we're actually seeing. But also, if you have the option to watch this, meaning if you're not driving your car or walking your dog right now, we have a version of this on our YouTube channel. There's a link to the YouTube channel in the show notes, so you can literally click a button from where you are right now and go watch the YouTube version of this as well. Or you can do both: listen to us while you're in the car, and then, when you get home, if you wanna see the actual images and the outcome of everything that we're doing, you'll be able to. The other thing I wanna say piggybacks on two things that you said. One, in these tools, everything, including text, context is everything. If you give it more context on who you are, what you're trying to do, what's the target audience, what's your brand, all of that adds to your ability to get better results. And the second thing that you said that is very important, and I actually say it a little differently than you: yes, different tools are better for different things, but from a visual perspective, all these tools should be good enough for probably 90% of use cases. Once you get to edge use cases, yes, Midjourney is gonna be better for this, and Ideogram is gonna be better for that, and ChatGPT is gonna be better for this.
But when it comes to, I don't know, 80 to 90% of what we need day to day for marketing, all these tools will be good enough. Some will be easier, which is a whole different story, right? You can get to the outcome; it's just gonna be easier with this tool versus that tool. But now I will let you dive into the details, because I'm sure people are really curious to see what you have to share.
Ross Symons: Okay, cool. So yeah, like I said, I've got these three tools that we are gonna work through. So let's start with Midjourney. I have created a simple prompt here, let me just check: portrait of a woman. Yep. So now, if you look at this, is it a portrait of a woman? Absolutely. Is it a hyper-realistic portrait of a woman? Absolutely not. Is it a painting? Maybe. Is it an illustration? I don't know. Context: how much information have I given the machine? I've told it I want a portrait of a woman, and it has delivered. And I think this is where it gets frustrating for people, because they're like, this is not what I asked for. And the reality is, it's exactly what you asked for; they just don't get it. These tools have been trained on vast amounts of data. I don't think you can really comprehend how much information has gone into training these models. So the deeper you can get into what it is you're asking for, the better, and the more specific you can be from a physical standpoint. That's another thing that people forget: you are describing physical attributes of an image. Leave out the fluff, leave out the 'beautiful' and the 'fantasy' and all these wonderful words that you think are gonna help. They would help a human feel an emotion towards something, you know, excitement. Create an exciting image of a portrait of a woman: what does exciting mean? Maybe there are elements that would suggest that this person is excited, but are you excited because you're seeing the image, or is the image showing you someone who is excited? It doesn't know; it can't translate. You get where I'm going with that? So this is an example of 'portrait of a woman'. Amazing. Cool. We've got what was asked for here. So, exactly the same prompt in this tool, which is Google's ImageFX. Okay, portrait of a woman. Exactly the same thing.
A couple of things to point out here. Firstly, these are, in my opinion, hyper-realistic photographs. They're all looking directly at the camera, with a studio sort of vibe. One thing that I have to point out is there is racial diversity here, which the other tool doesn't produce. I think it's just white girls in the other one. That's the truth of the matter.
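Ross's advice a moment ago, drop the subjective "fluff" and describe physical attributes instead, can even be mechanized as a toy prompt linter. The word list below is a small illustrative sample I chose, not an exhaustive or authoritative one:

```python
# Toy prompt linter: flag words that describe a viewer's feeling
# rather than a physical attribute of the image. The word list is
# a small illustrative sample, not exhaustive.

SUBJECTIVE_WORDS = {"beautiful", "exciting", "stunning", "amazing", "fantasy"}

def find_fluff(prompt):
    """Return the subjective words found in a prompt, sorted alphabetically."""
    tokens = prompt.lower().replace(",", " ").split()
    return sorted(w for w in set(tokens) if w in SUBJECTIVE_WORDS)
```

A prompt like "an exciting portrait of a beautiful woman" would be flagged, while "close-up portrait of a woman, light freckles" passes clean, which is exactly the distinction Ross is drawing.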
Isar Meitis: So again, for those of you who are listening: we have a redhead, we have a darker-skinned woman with short hair, we have a very white-skinned girl, and we have an older woman wearing more formal clothing. But each of them is definitely a portrait that looks like a picture taken by a professional photographer.
Ross Symons: And all a portrait of a woman. There we go.
Isar Meitis: And all a portrait of a woman. Correct? Exactly.
Ross Symons: Cool. So far, all of these tools are adhering to what we've asked for. Let's go to our good old friend ChatGPT. Now, ChatGPT works kind of differently. I was under the impression that it also used a diffusion process, which is the process of generating an image; I'm not gonna get into that. ChatGPT is a large language model, so it uses text as opposed to imagery, but behind the scenes they've worked out a very clever way of generating images. It's gonna be, I think, the standard going forward for a lot of these tools. If you've ever used ChatGPT's image generation before, it used to be terrible. It was really bad. DALL·E 2 and DALL·E 1 were unusable, in my opinion. They were fine for making little kid pictures, random sketches, whatever. But they upped their game about two and a half months ago, when it just went viral and everybody knew what it was doing, because you could turn yourself into a boxed character, or yourself as your dog or your cat, or a Studio Ghibli version of yourself. Anyway, we're not gonna get into that. So I've just said, go ahead, please create an image. Just for the sake of speed, I've generated all these images already so that we don't have to wait for the process. On that note, out of the three that I've just shown, Midjourney, ImageFX, and ChatGPT, ChatGPT is the slowest.
Isar Meitis: By a big spread. And you get only one image every single time.
Ross Symons: One image, yes. So two things there. Context is extremely important here, and you're probably gonna waste your time and get frustrated, because you're not gonna get the thing unless you're asking for something specific, like I said, the Studio Ghibli version of yourself, which was a trend a while ago, whether you saw it or not. So again: portrait of a woman. Do we have a portrait of a woman? Absolutely. This, I think, is maybe similar to what was produced earlier; not exactly the same, but they could fall into the same category. Another thing that ChatGPT does, which is a bit frustrating for me, is it puts a sepia tone on the images. I don't know if you've noticed that it creates this yellowish tone, which I have to take into Lightroom or Photoshop and remove every time. They will sort that out, but that's just one thing I have noticed. Anyway, okay, let's not get too bogged down in this. So now we've got a portrait of a woman. Let's go to the next version. I've added context to this: I've said a close-up portrait of a woman. Did it adhere to the prompt? Absolutely.
Isar Meitis: So again, for those of you who are not seeing this: the image is now the face of the woman filling up the entire screen. You can't even see her entire hair, you cannot see her neck, you cannot see what she's wearing, which you could in all the previous images.
Ross Symons: Yeah, exactly. Because contextually we haven't told it, well, we haven't programmed it, in this case, because Midjourney requires some referencing, we haven't told it to choose a specific character. We've just said create a close-up portrait of a woman. So again, it's definitely adhered to the prompt. Let's look at this: this is ImageFX, close-up portrait of a woman. Again, we've got diversity here. The portraits are a little bit more AI, I mean, you can see the skin. There is something about that AI skin look, which is just too perfect. But there's a workaround for that as well. Oh, my bad, that just disappeared. There we go. So that is ImageFX, looking good, by the way.
Isar Meitis: Another interesting thing: in this particular case, yes, they're a little more close-up than we saw in the first version from ImageFX, but they're not as tightly cropped as the Midjourney ones. So Midjourney really cropped to just the face, and Google did just a slightly more zoomed-in version than what we got before.
Ross Symons: In my opinion, it honestly feels like Google is going for the very, very safe option, where it's kind of like, yes, we don't wanna suggest too much in terms of diversity or in terms of the actual image; it's not gonna go too far. I often get blocked with images as well: 'this does not adhere to our content policies.' And I try to search for the word that could have blocked me, and I still can't find it, because there are some words it's naturally gonna block.
Isar Meitis: The same thing happens to me in ChatGPT sometimes, weirdly, like on an image of me. But it's me, and I told you it's me, so why are you blocking this? And then you ask for something that should have been blocked, like a Mickey Mouse version of whatever, and it's like, yeah, sure, no worries.
Ross Symons: Totally. So we've got portrait of a woman. Then, because we're in one context window... okay, this is another thing. When you're creating images in ChatGPT, just know that if you create one image and then want to create a completely different image, you need to open another browser window. This is a very important factor, because, and you'll notice through this thread, I've said 'a close-up portrait of a woman,' and it's taken the woman that I created initially and created a close-up portrait of her. So now, if I get upset because that's not what I asked for, it doesn't understand. Contextually, you are in the same browser window as you were at the beginning: you've asked me to create something, this is what I'm creating. And I think, in terms of consistency, with ChatGPT, if you're gonna create one single image and keep it in one thread, you will get consistency, provided you have the right prompt at the beginning. And if you aren't getting the right result, move to a different tab and start a completely new thread. So you can see where I'm going with this now, as I'm building on...
Isar Meitis: Let me say one thing about what you said right now, which is, to me, the biggest benefit right now of using ChatGPT versus the other tools. And by the way, you can use the Imagen 3 image generator in a Gemini chat, which gives you the same benefit you're getting from ChatGPT. It follows a conversation, which none of the other tools do. If you're in Midjourney, you just gotta prompt better to get a better version or a slightly different version, whereas in ChatGPT you can explain in simple English and iterate on stuff, because it's a single conversation and it understands what you mean. Does it always work? No. Does it work most of the time? Yes. Can you do magical things like upload your brand guidelines, a 20-page-long PDF, and ask it to adhere to that, and it knows how to read it? Yes. And you cannot do that in Midjourney. So when I said some tools are now easier to use: they don't necessarily give you a better outcome, but they're easier to use. One of the reasons ChatGPT is easier to use is that it just understands English. You don't need to know how to prompt as well as you need to know how to prompt in Midjourney.
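The distinction Isar draws here, a chat thread that accumulates context versus a generator that sees each prompt cold, can be sketched with a toy model. This is purely conceptual, an assumption-laden illustration; neither tool is actually implemented this way internally:

```python
# Conceptual sketch only: neither tool actually works this way internally.

class OneShotGenerator:
    """Stateless: the model only ever sees the current prompt."""
    def request(self, prompt):
        return prompt

class ConversationalGenerator:
    """Stateful: every new instruction is read alongside all prior ones."""
    def __init__(self):
        self.history = []
    def request(self, instruction):
        self.history.append(instruction)
        return "; ".join(self.history)   # effective context grows each turn

chat = ConversationalGenerator()
chat.request("portrait of a woman with light freckles")
followup = chat.request("now a close-up of the same woman")
# `followup` still carries the original description, which is why
# iterating in one thread tends to keep the character consistent.
```

It also shows why Ross recommends opening a fresh thread for an unrelated image: in the stateful model, the old description keeps leaking into every new request.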
Ross Symons: Exactly. And Midjourney, from that angle, for me, allows you so much more control. It does give you more options, and it's quicker. I find when I'm working on a very creative project, Midjourney is my first port of call. I'll use ChatGPT to help me write my Midjourney prompts, maybe, and then bring it in here and be like, okay, cool, let's iterate on that and change. Okay, so you can see where I'm going with this now. I've created a whole bunch of prompts here, and as we go up, we have a close-up portrait of a woman looking directly at the camera. These maybe could have fallen into the same category, but we've just specified we want the woman looking directly at the camera. Great. Okay, so now we've added more to the prompt. We've said close-up of a woman looking directly at camera, natural soft light, 85mm lens, shallow depth of field. Natural soft light is pretty self-explanatory, but the 85mm lens is good for close-up portrait photography. This is something that I don't think many people bother with, and it's not essential to know. But if you want to get really granular in creating and generating images, understanding a little bit about photography, about angles, about lighting, and which lenses and cameras are best for certain things will definitely help you tell better stories. Because at the end of the day, that's what we're in this for.
Isar Meitis: And I agree with you a hundred percent. There's actually a question that I didn't ask because it came a little too early, but it's the perfect time to ask now. So Stefan, who is a regular on these shows and also on my Friday Hangouts, asked, when I show him that I'm using actual camera definitions: how the hell am I supposed to know that if I'm not a photographer? And the reality is, you don't have to, but it helps a lot. You can search for a photography cheat sheet for AI image generation, and you'll get an amazing page that will explain which lenses, which lighting, which cameras, which film, if you wanna be really specific, you need to use, and even setups of specific aperture values and shutter speeds and stuff like that. You can find multiple cheat sheets out there today. Just experiment, play around, and say, oh, now I understand what he's doing, and then you'll know. The cheat sheet is a very good starting point.
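A minimal version of such a cheat sheet can even live in code. The pairings below are common photographic conventions; the exact entries are my own illustrative choices, not taken from any particular sheet:

```python
# Tiny photography cheat sheet for image prompts. The pairings are
# common photographic conventions; exact entries are illustrative.

LENSES = {
    "portrait": "85mm lens",           # flattering compression for faces
    "wide": "24mm lens",               # interiors and environments
    "macro": "100mm macro lens",       # extreme close-up detail
}

LIGHTING = {
    "soft": "natural soft light",
    "golden": "late-afternoon golden-hour light, warm tones",
    "studio": "studio lighting, softbox key light",
}

def photo_prompt(subject, lens_style, light_style, shallow_dof=True):
    """Append lens and lighting vocabulary to a subject description."""
    parts = [subject, LIGHTING[light_style], LENSES[lens_style]]
    if shallow_dof:
        parts.append("shallow depth of field")
    return ", ".join(parts)
```

For example, `photo_prompt("close-up portrait of a woman", "portrait", "soft")` reproduces the kind of prompt Ross builds on screen, without having to remember the photography vocabulary each time.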
Ross Symons: Absolutely. You know, we've got a masterclass that we do monthly, and that's a massive part of it. It's like a cheat sheet where you've got a list of all the things, because I'm not gonna remember all of these; it's just a touch point that you can use. So as I'm scrolling up here, I'm getting more granular in terms of the details. The last one is a close-up portrait of a woman looking directly at camera. We've kept the lens, the soft natural light, the shallow depth of field. Now we are adding physical attributes, which are gonna enhance the realism: highly detailed skin texture, light freckles, realistic pores, clean minimal hairstyle, no makeup, a neutral background. So you can see now how this becomes a lot more real. For me, this is as close to photorealism as you'll get. When you're making the images bigger and scaling them up, choose a specific tool; there's one tool I use called Magnific, which is amazing, so that you lose as little detail as possible in getting a bigger image. Are you gonna use this for a billboard ad? I don't know, maybe we'll get there at some point. But for social media, websites, banner ads, anything you need for online content, this is perfect. So yeah, that is Midjourney; that's done. Okay, let's go to ImageFX, and I'm gonna scroll through to the same prompt. And this did something very interesting, which I thought was just fascinating. So it's exactly the same prompt, the close-up with all the details, and the character looks almost exactly the same in all these images. Now, has it picked a character? Where did it draw this inspiration from? I have no idea, because I haven't specified the ethnicity. Yes, I've specified the hairstyle, but these could be sisters. It's just very...
Isar Meitis: Well, it's exactly the opposite of what we've seen so far, right? So far it gave us the biggest diversity in images, and when we became very specific, they all kind of look the same.
Ross Symons: Exactly. And the difference between this and Midjourney, for example, is that maybe in Gemini you could use an image reference: please reference this image and then build on it, or create an image based on the aesthetic features of the image I've uploaded. You can't do that directly in here. I just like to use this because it's quick and it's easier for me to use; I started here, and I don't use Gemini much, so that's the reason. So let's go to ChatGPT now. I've just kept building on here, so you can see the close-up portrait of a woman, and the realism that gets achieved here is pretty crazy. I think it adhered to the prompt pretty well. It is a very realistic image. I mean, the first one was pretty realistic already.
Isar Meitis: I think this one definitely, going back to both the character that is in the face as well as the level of detail, the smaller details, like the lines around her eyes, because she doesn't have makeup, because that's what you asked for, a little bit of wrinkles here and there. The last two images are just incredibly good.
Ross Symons: Detailed, and that's because I've added, you know, light freckles, realistic pores, clean minimal hairstyle. It's those details. We've never in our lives had to describe something physically on the level that a machine now has to understand, if that makes sense. So it's this new language we're developing.
Isar Meitis: I wanna add one more thing, which is something that is very interesting to me, especially with this specific example, and I love the way you're building this. All these women on Midjourney and on Google look like models; they're all extremely beautiful. The one on ChatGPT looks like a common person, like somebody who is your next-door neighbor. And you didn't ask for one or the other in any of the tools. You can obviously specify one look or the other for any of the tools if you want to. But it's interesting to me how the self-selection is very obvious between the different tools.
Ross Symons: It's amazing. It's crazy. Yeah, I agree with you. And the thing that I was alluding to earlier about being specific: if you say, create an image of an ugly person, ugly is relative. Create an image of a beautiful person: someone might look at it and go, I don't see that as a beautiful person at all, but someone else might look at it and go, ooh, you know, not quite my tempo. It's all about describing physically what's in your mind. Maybe someone with massive scars on their face is, to someone, unattractive, but someone else might find that quite kinky. It all comes down to taste; it comes down to context. So, yeah, I don't wanna harp on too much about this. Now, I've done the same thing, and this is just to show that you can add text into images. It was something that, up until recently, was almost impossible to do. If you ever tried to put text in any of these tools, it would very seldom come out right, unless it was a single word, like hello, or hi, or welcome. Whether you've used these tools to try that or not, hear it from me: it was bad. The text was just all over the place. And that's because I think these tools are image generation tools, not text and design tools, which you'll see in the examples going forward. So yeah, this is just a basic example, and I've built on the prompt, which is a cozy living room. Now, cozy living room: we've got, well, it is a cozy living room. This one is animated, looks like something from a story of some kind. Again, because contextually we haven't told it to be specific about what we're looking for. Then I've gone on to add a dog on the couch. This one looks like a painting or an illustration of some kind. Same here; that looks like a cat.
Isar Meitis: And it's not on the couch, it's on the floor. And the next one's not on the couch either.
Ross Symons: So where I got to here — I started with Midjourney, because Midjourney is the worst at putting text into images. The prompt was: a cozy living room in the late afternoon, warm natural light, soft shadows, a golden retriever sitting on the couch — which it got right — books and plants in the background, realistic textures, framed quote on the wall that says "Welcome home, friend" in handwritten text. Now, can you tell me what that says?
Isar Meitis: It's got the "welcome" —
Ross Symons: It says welcome.
Isar Meitis: Other than that, it's not exactly —
Ross Symons: What it needs to be, exactly. And it tried on all four images. That's one thing with these tools — they usually generate four images, because maybe one of them is the right direction you wanna go in, but chances are it's probably not. And this one kind of got it — I think it's not a bad job — but now the dog is not on the couch, and it's not a
Isar Meitis: golden retriever,
Ross Symons: It's not a golden retriever, it's got a weird head, there's too many plants in there. Anyway, you see where I'm going with this. So that's what Midjourney was able to do. If we move forward now, let's go to ImageFX. Very safe options — very realistic images, because I assume, if somebody types that in, they want a photo of what they're looking for. So: cozy living room in the late afternoon, there's a dog on the couch, golden retriever on the couch. Again, very similar. And it's interesting how, in both cases, the more granular we became, the more similar the images became — which I've only noticed as we're speaking now, which is just interesting.
Isar Meitis: But that's specifically on Gemini — on ImageFX. The four images became almost identical.
Ross Symons: Almost identical — from the same angle, the same dog, and the "Welcome home, friend" sign on the back wall, which it got right. I think it did a pretty good job there, on all of them. All of them are spelled
Isar Meitis: Correctly, and all of them look like handwritten text. Yeah.
Ross Symons: Yeah. So let's start thinking about this from a branding perspective. If you wanna sell a specific couch, or a specific dog for that matter, or you wanna sell your art on a wall, this is how you start thinking about it. Maybe you don't want "Welcome home, friend" — maybe you wanna place your image, or your art piece, or a pot plant you're trying to sell, or a cushion of some kind. It's impossible to explain for a single use case, but it's very useful across the board. So that's from an interior perspective. The last thing I wanna go through — I'm conscious of time — is an anthropomorphic fox. Try and say that ten times in a row. Anthropomorphic just means human-style: a humanesque fox. We've got an illustration, which has horns, which is kind of weird. He's got a top hat, or some sort of hat, on, and a coat — cool-looking fox. Again, we've asked it for that, and to get more specific we've asked a second time. This time he's wearing a cloak. So now it's the fox wearing a cloak, standing in a misty forest. This is starting to get a little bit storybook-ish. We're still in Midjourney here. Now we've got him — and it didn't listen to the prompt at all — looking toward the viewer with a calm expression. None of these images are looking toward the camera, or toward the viewer. And this is where I think it gets frustrating for people who are using AI for the first time, particularly in Midjourney, because it does require iteration. It requires an understanding that you're gonna be sitting there for a while, especially at the beginning.
But the structure I'm using, where I'm building on the prompt, makes it so much easier to go back at a specific point and go: okay, change the fox to a squirrel, change the misty forest to a beach. At least there you're sticking to some sort of format. Also, if you want things to be more prominent — for example, if you want a wide-angle shot — put that at the beginning of the prompt as opposed to the end. This works particularly well with Midjourney, and I use it with angles and composition. So if you want a wide-angle shot of a person on a beach or a fox in a forest, put "wide angle" at the beginning, not the end. Just a nice little hack there. So now we've added more details, and this is getting a bit more real. We've asked for the same fox wearing a green coat, standing in the forest, holding a glowing crystal in one hand, cinematic lighting, 85mm lens. Cinematic lighting automatically implies that it's gonna be realistic, so you don't have to put in things like "realism" and explain the details that make it look real. So this fox — not quite Fantastic Mr. Fox, but he's there. He looks real-ish, he's quite charming. He's still not looking at the camera, although we didn't specify that we want him looking at the camera. Getting to the last section of this, we've asked it to look directly at the camera, and I made sure I put that at the beginning, so —
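The layered, build-as-you-go prompt structure Ross describes — most important element first, then subject, scene, and technical details — can be sketched as a small helper. This is an illustrative sketch in Python, not any tool's official API; the function and field names are my own.

```python
def build_prompt(priority, subject, scene, style):
    """Assemble a layered image prompt, placing the highest-weight
    element (e.g. camera angle or composition) first, as Ross
    recommends for Midjourney, where earlier phrases carry more weight."""
    parts = [priority, subject, scene] + list(style)
    # Drop empty layers so a partially built prompt still reads cleanly.
    return ", ".join(p for p in parts if p)

prompt = build_prompt(
    priority="wide angle shot",
    subject="an anthropomorphic fox wearing a green coat",
    scene="standing in a misty forest",
    style=["cinematic lighting", "85mm lens"],
)
print(prompt)
```

Because each layer is a separate argument, going "back to a specific point" (fox to squirrel, forest to beach) means changing one argument and regenerating, rather than rewriting the whole sentence.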
Isar Meitis: Let's read the entire prompt, because it's actually very different, and the outcome's very different.
Ross Symons: So I said: looking directly at the camera, an anthropomorphic fox with detailed fur texture, wearing a green coat with gold embroidery, standing in a misty forest, holding a glowing gold crystal in one hand, cinematic lighting with an 85mm lens. It didn't get it right with all of them, but he's a lot closer to looking toward the camera. Cinematically — I'm getting deep here — you're breaking what they refer to as the fourth wall when a character in a movie looks directly at the camera, and that very seldom happens. So if you bring cinematic lighting in, it's automatically gonna make the subject look away from the camera.
Isar Meitis: Yeah. One more thing I wanna say to people. Here we're going for the look of the fox, right? We said that it has a calm look. If you're trying to capture detail in faces and expressions, the best way to do it in Midjourney — and I think it's the only tool of the four that has this out of the box — is actually to do a close-up of the fox's face and then outpaint from there. You can zoom out on a picture in Midjourney and ask it to add whatever you then describe. That's an awesome way to start with a highly detailed fox face, because the whole prompt is describing his face and his look and so on. Then you zoom out and add the forest and the mist and all the other stuff. So that's another little trick when you're trying to create expressions, or a very detailed face of a person — or an animal, in this particular case — and then zoom out from there.
Ross Symons: Yeah. Outpainting — I'm just doing a quick example here. Well, this one is essentially inpainting: you're painting out the section you want to change, and whatever you put in the prompt box will be filled into that section. Very fancy little trick that people have used. So I don't really wanna get deep into the subject stuff, but just quickly: I have an image here whose style I like. It's pink clouds in the background, there's an ocean, there's a Narnia-style door leading away from a chair. I like this image. I want to take the same fox I had, use the exact same prompt, but add that style into it. So you simply add the prompt, you drop in the URL of the image you're using — which is the image I had there — and, man, you've got options: you can put it in as a style reference, or use it as an image prompt. This is where you start getting real control over what your image is gonna be, and also consistency. You're probably not gonna be doing a fox in a green jacket for your brand — maybe you are — but the idea I'm trying to get across is that this is how you cross-pollinate: you grab images from anywhere on the internet, upload them, and use them as style references to guide your final image. You can already see here that simply dropping that in has changed the style of the image entirely. Midjourney also has something called sref codes, which are super handy and help you guide the style. So here it just looks very cool, and it's definitely adhered to that. And you can crank up how strong the style is — how much you want it to change.
Isar Meitis: For those of you not watching, just to give you a little understanding of what we're doing: in Midjourney specifically, you can upload an image, or use a previous image you created with AI, and say, I want you to capture either the character — the subject — or the style of the image, and then apply that to the next image you're generating. That's basically what Ross did. He took a beautiful sunset with pink clouds over the ocean and said, I want to use this style with the fox from the other image. So you're basically mixing the subject from one image with the style of a different image to get a new image that adheres to both concepts, which is a very powerful tool to get exactly what you want. Because instead of trying to explain a style with a million words — which is something very amorphous and sometimes hard to explain — you can just bring an image and say, okay, this is the style I'm looking for, but I want this character or this product in this style, and then it's gonna do the thing for you.
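As a concrete sketch of what this looks like in Midjourney's prompt syntax: style-reference images are passed with the `--sref` parameter and their strength with `--sw` (style weight). The URL below is a placeholder, and this helper is my own illustration of the general parameter pattern, not output from the session.

```python
def midjourney_prompt(text, style_ref=None, style_weight=None, aspect="1:1"):
    """Append Midjourney-style parameters to a text prompt.
    --sref takes an image URL used as a style reference;
    --sw controls how strongly that style is applied."""
    parts = [text, f"--ar {aspect}"]
    if style_ref:
        parts.append(f"--sref {style_ref}")
    if style_weight is not None:
        parts.append(f"--sw {style_weight}")
    return " ".join(parts)

p = midjourney_prompt(
    "an anthropomorphic fox in a green cloak, misty forest",
    style_ref="https://example.com/pink-clouds.png",  # placeholder URL
    style_weight=250,
)
print(p)
```

This is the "crank up how strong the style is" dial Ross mentions: raising the style weight pulls the output closer to the reference image's look.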
Ross Symons: Exactly. And off the back of that, you mentioned style and character consistency. So I picked this character — the fox in his little green cloak — and with the image I had, I basically wanted the fox in a different scene, but as a consistent character. What I'm scrolling through now is the same fox in different positions, with the same style used across, and this is how you start bringing consistency into elements and objects, but particularly characters. When you're bringing a subject in — I didn't do it exactly the right way here — but if it's got a clean background, so you just have the character: get rid of the background, Photoshop it out, make it white, and then it focuses entirely on the character themselves. And the more detail you have in the image itself, the more it's gonna place that detail into the final image of the character. That's just a quick rundown of what's going on there with Midjourney. Then the example we have here — an anthropomorphic fox — has done exactly the same thing, but here we have no control. These images are kind of cool; we've just built on that. There's something I wanted to point out here: the difference between this image and the other one is purely the aspect ratio — the orientation — and that has a massive influence. This is where understanding a little bit about film, and a little bit about how to take photographs, helps. If you say you want a portrait of a fox or a portrait of a person, place it into a portrait orientation; don't make it a wide shot. It understands better. That's a neat little hack that often helps you get the results you want. So we're just building on our fox here; he's getting a little bit more detailed.
He's getting a bit more real — definitely adhering to the prompt, but very similar. You can see it's just picked a character and kept him across the images; it's very safe. Like, this is the character we're going with. So, you know, do with it as you please. If we go and have a look at what ChatGPT has done — oh, we actually missed ChatGPT with the welcome home. You can see the "welcome home" here is perfect. That's one thing ChatGPT is —
Isar Meitis: I think, from a text perspective, ChatGPT right now is the best tool for adding text to images. Number two — which was number one for a very long time — is Ideogram, and then number three is Gemini. So if you want to get text right in the image itself, go to ChatGPT. Nineteen out of twenty times it's actually gonna get it right, even if it's a lot of text.
Ross Symons: Yeah, exactly. So here we've got our fox, and it's made it illustrative. We've added more details, and I think it's come out with an image that's pretty cool. Again, very similar — a little more moody than the one we got in ImageFX, with that yellow sepia tone, which we'll get to changing. Anyway, if there are any questions, you're very welcome to speak up, but — okay, so, oh yeah,
Isar Meitis: The next one I think is awesome. Now we're gonna dive into product photography, right?
Ross Symons: Yeah. So look, this is not a real brand — it's a fictitious brand I created. I came up with the word Fleece. Why that popped into my head, I have no idea. But it's Fleece, and it's a fragrance brand. Cool. So what I did is I took the image, cut it out, and in ChatGPT I said — and this is just one workflow; from here there are a million directions to go. Is there a best way to do it? I'm not gonna say there is, but this is one way. Getting your product into a photo is very difficult to get perfect. If you're looking for perfection, hire a designer. But if you want usable content for social media, quick-hit content, this is one process. So you cut the image out, make sure there's no background. Then I asked — because now you can just speak in normal English, or any language — place this fragrance bottle on a matte stone pedestal, soft lighting, editorial-style shot. Cool. The difference here — you can clearly see it's just squashed the bottle. For some reason it's made the bottle too small. It still has the branding, which is great; all the text is very legible and it's a nice image. But the bottle is not right. If you're gonna be selling this on Amazon or wherever, and the product doesn't look the same, people are gonna get it and be like, what's going on here? This is not what I asked for. You get where I'm going.
Isar Meitis: Two things about this that I've learned work really well in ChatGPT specifically, when it comes to placing an actual product in an image. One is: ask it to pull the text off the product and write it out as plain text. Tell me, row by row, what does it say, what's the style of the font, how big is the font? Then you get a description in English of what is written on the bottle. In this case it's just two things, but usually a bottle has the size, how much liquid it holds, the percentage of alcohol — or if it's a sunscreen, all its different details. There's a lot going on on a product, and once it writes all of that down on its own, you tell it: okay, now use everything you've written and write it back, in the same style, on the product. It usually gets the text exactly right — again, about eight out of ten times. The other thing I've learned, along the same lines, is to ask it to describe the product in detail, including color palettes, the specific color scheme, aspect ratios — I have a whole list of things I ask it to describe — and that description basically becomes part of the next prompt. That works really well. It doesn't always come out perfect, but it increases the chances of getting very close to the real thing.
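The two-step workflow Isar describes — first have the model transcribe and describe the product, then feed that description back into the generation prompt — can be expressed as a pair of prompt templates. The wording below is my own illustrative phrasing, not the exact prompts from the session, and the model call itself is omitted.

```python
# Step 1: ask the model to transcribe the label and describe the product.
EXTRACT_PROMPT = (
    "Look at the attached product photo. List, row by row, every piece "
    "of text on the product, and describe the font style and relative "
    "size of each row. Then describe the product in detail: color "
    "palette, materials, and the aspect ratios of the bottle and cap."
)

def regenerate_prompt(label_description, scene):
    """Step 2: fold the model's own description of the product back
    into the image-generation prompt, so the label text and the
    proportions are reproduced faithfully in the new scene."""
    return (
        f"Place this product in {scene}. Reproduce the label exactly "
        f"as described: {label_description}"
    )

p = regenerate_prompt(
    "row 1: 'FLEECE' in bold black uppercase; row 2: 'eau de parfum' in small serif",
    "a matte stone pedestal with soft editorial lighting",
)
print(p)
```

The point of the round trip is that the model's own words about the label become explicit constraints in the next prompt, instead of relying on it to carry visual details across silently.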
Ross Symons: It's faster than doing a photo shoot, that's for sure. So you see where I'm going with this. I said to it, make the bottle look exactly the same as the image above — it still didn't adhere to that. Make the bottle taller — then it just messed the whole thing up. Now it's way taller. Okay. So — and this is my sort of creative mind, and something you need to think about when using these tools — I don't think it's gonna be like this forever, but for now we have to switch between tools. So I went into Midjourney. I took this image, which is one of the images I created, cut it out, no background. I asked ChatGPT to write a prompt for me, then put the image in using the omni reference, which is the subject reference. You can see the text is not great, but the images look amazing — in terms of trying to sell a product, this is great. But you can't keep the text as-is. Now, one option is to take the image into Photoshop and do a bit of Photoshopping. I'm not trying to get rid of designers' jobs here — I have a lot of friends who are designers, and I know a lot of people get upset about this — but the reality is, a designer should know how to do this without having to go into Photoshop. That's the truth. So anyway, I'm gonna rush through this. We've got a whole bunch of these images; I tried a whole bunch of different weights, but it wasn't working — it screwed the bottle up a little, it started getting a little strange. I changed the background, then went with an ocean background. And then, to Isar's point, I took the image back into ChatGPT and said, explain in detail what this bottle looks like. That's when I started getting a bit of a breakthrough.
Now it started looking a little more like the bottle. Eventually it was like, hey, now we're getting somewhere. Again, the text is not perfect — but which tool does well with text? ChatGPT. So now I've got this image: go back to ChatGPT, drop it in, and say, make this image more realistic. This is the image I dropped in, and boom, we've got it. It's still got that sepia tone, which I hate, but you get where I'm going. To me, it looks like the bottle. This is how you have to think about using these tools: you have to mix and match. You have to know that this one's better at that, that one's faster at this — as opposed to asking the same question over and over and bumping your head. You're in the same context window; ChatGPT doesn't understand that you want something different. I then dropped in another image — another one created in Midjourney — and asked it to make it look more realistic. And now it's not loading the image.
Isar Meitis: Of course. But yeah, we can see it when it's not zoomed in. So I wanna pause for a second — oh, you know what, let's go to your next thing, because I see it's important.
Ross Symons: The last one is: turn it into a banner-style ad that could run in a magazine, on social media, whatever. And I just gave it "Stillness as a scent. Fleece." To me, this looks like something you'd find in a magazine. It's not loading, but you get it. It's used the same image, hasn't changed anything — well, it's made it a bit more sepia — and there you go, you have something you can use. Now swap out the product, swap out the background. Go play. This is what's available, this is happening right now, and you can do it at scale. If you've got four people running this, the amount of content you can generate in a day is probably what you could do in a month two years ago.
Isar Meitis:I agree. And I'll say two more things. first of all, do a quick recap of everything we talked about, because I think it's important for people because you, we shared a lot of things. First of all, gotta learn how to use these tools. And you can start very simple. Start with a very simple prompt, but then what you add in is a structure that includes describing the subject, the fine details of the subject, describing the lighting, describing the fine tones of the lighting. Is it soft lighting? Is it bright lighting? Is it daylight? Is it a studio? Like what kind of lighting do you have describing the background. In more and more detail, in me journey, specifically, as you mentioned, it's important to start, like, think about it as the highest weight in the image is gonna be what you mentioned first and the way it goes down as you go further into the sentence. So the start with a thing that you care about the most in the image, and then work your way in the sentence over there. and what you'll see people are sharing cool pictures of you, Ross as a fox in the chat, you can check that out. So build the prompt with the details of the different components of things you wanna do. Provide it as much context as possible. In Chachi pt, it's easier to provide context because you can have an entire conversation. You can give it a story about your brand and your target audience and previous images that actually worked on social media or on an ad campaign. Like you can give it all of that and it will take all of this into account where a midjourney, you have to explain that in a single prompt because that's the way Midjourney works. On the flip side, on the single image, midjourney provides you a lot more control. You can bring reference images, which you can also do in Chacha piti, but not in the same. Level of understanding of what a reference is. 
They now introduced an actual better way to do character reference, which allows you to keep even more consistent characters in Midjourney. So advanced Midjourney usage, with Midjourney's actual parameters, lets you get more granular control over different components of the image in a way that's not as easy to reach in ChatGPT. But the bottom line — and now I come back full circle to what we said at the beginning — is that you can get decent results on all of these tools today. It's probably easier to get to an eight out of ten on ChatGPT, just because you can explain yourself without being an expert. And you can probably get better image quality and a more sophisticated look on Midjourney than on either Gemini or ChatGPT. I can tell you about myself, and then I'll ask Ross the same question. Until two months ago, I created probably 70% of my images with Midjourney, another 20% with Ideogram, and the rest with Flux. Now I'm creating probably 65 to 70% of my images with ChatGPT, just because I can get a good-enough image there, and the rest are split between Flux, Midjourney, and Ideogram, depending on what I need — and I only go there when I fail to get the output I want from ChatGPT. So I'm curious how you see things and how your work looks right now.
Ross Symons: Yeah — if I have to break it into percentages, I'm still majority Midjourney, because I just enjoy the tool. I find myself on the explore page all the time, and I draw a lot of inspiration from that. I'd say it's probably 40% Midjourney, and the rest split roughly evenly between ImageFX and ChatGPT. But it depends entirely on what I'm doing. Now — we spoke about this earlier — that JSON structure, a coding-language structure you can now use to build prompts for your images: that changes everything for me, and maybe for anyone who's more technically minded, because I can see, in an almost codified format, what the image might look like. I can read it almost like sheet music. I'm like, okay, cool, I can see where to change that, change that. The developer in me just appreciates it.
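As a sketch of the kind of JSON-structured prompt Ross means — the field names here are my own illustration, not a schema any of these tools requires:

```python
import json

# A structured image prompt: each field is one "layer" Ross and Isar
# build up verbally (subject, scene, lighting, camera, style), made
# explicit so individual parts can be edited and re-run.
prompt = {
    "subject": "anthropomorphic fox with detailed fur texture",
    "wardrobe": "green coat with gold embroidery",
    "scene": "misty forest, late afternoon",
    "lighting": "cinematic, soft shadows",
    "camera": {"lens": "85mm", "framing": "looking directly at the camera"},
    "style": "photorealistic",
}

# Serialize and paste the result into the tool's prompt box.
print(json.dumps(prompt, indent=2))
```

The appeal is exactly the "sheet music" quality Ross describes: the structure shows at a glance which field to edit (swap the scene, change the lens) without touching anything else.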
Isar Meitis: It provides a lot more structure that is very visible — versus just a sentence with commas in between, there's a very clear structure to everything. There's an interesting question in the chat: how long do you invest in a, quote-unquote, good image?
Ross Symons: Well, good is relative — but a relevant image. It depends on what I'm using it for, personally. If it's for a brand campaign: on one image, I spent three weeks. That was for one key visual, going backwards and forwards with the client using multiple tools. That was four months ago, so I could probably do it in a week and a half now — that's the reality of it, because I know the tools better. But that's what it can really take. Once I have an idea, though, and I can ask ChatGPT to spit out different versions of that idea, it's quite simple — I can get the image I'm looking for within a couple of minutes.
Isar Meitis: Yeah, I agree. And by the way, those two answers are awesome, because they serve two different purposes, right? One is: I need something quick for an ad or social media. The other is a client with a very specific output in mind who wants it to be perfect, and that's just gonna take more iterations — trying to understand what they want, showing them samples, understanding their style, and a lot of other stuff. So those three weeks, or that week and a half, are not three weeks of working in Midjourney. They're three weeks of back and forth with a client: getting their feedback, meeting with them, getting their comments. The actual time spent in the tool is probably a few hours — I don't know, two, four.
Ross Symons: Exactly. Yeah. Oh — I forgot that I actually created this as well. Let me just do this quickly — a quick
Isar Meitis: video. Oh, so this is the next level of all of it — beyond the scope of what we planned, but yeah. So again, for those of you listening: we took the ad that was created for social media — the one with the square frame that sells "Stillness as a scent" and says Fleece, the brand Ross invented, so don't go looking to buy it — took the image next to the ocean, on a rock by the beach, and used one of the video generation tools — I don't know which one — to make it come alive. And it looks awesome: a slow shot that zooms in and pans to the side. The background is blurry, but you can see the waves coming in. It's just a beautiful scene.
Ross Symons: Also, just so you know, I took this into After Effects and separated the two images from the video and the outline itself. This is another thing I think people forget: you don't have to be fully AI. The reality is, whatever you understand about the traditional tools you've used — if you're a designer, or whatever tools you've worked with — use whatever gets the work done fastest and most effectively. For me, I'm an After Effects motion-graphics artist, an animator, so it's logical for me to pull all these assets in there — I can get way better results, in half the time as well. Just a little note on that.
Isar Meitis: There's a great question from Stan on LinkedIn, and I think it's gonna be the last thing, because we're running out of time: can you show the prompt you're using in ChatGPT to have it create a better prompt that you can then use in Midjourney?
Ross Symons: Ah, yes — that's a great question, and I hope I have it here somewhere. Let me see if I can find it.
Isar Meitis: Yeah. And by the way, I do a lot of that as well — crossing between creating a prompt in one tool to use in a different tool, then looking at the output and going back and forth.
Ross Symons: Ah, here we go. Can you see my screen? Cool. So I said, create an image — I didn't realize, but ChatGPT has this "create an image" button, which saves you from having to type "please create an image of" — create an image of a mock fragrance bottle, the brand's name is Fleece, in a studio environment. And then I created this, which I clipped the images out of — a product shot from multiple angles. I then said, describe this bottle in detail for Midjourney. It came back with, here's a detailed Midjourney prompt for the Fleece bottle — but it described the entire image as opposed to just the bottle. So I said, describe just the bottle and nothing else. And it said: square glass perfume bottle with rounded edges, filled with pale blue liquid. It has a matte black cylindrical cap, with a white rectangular label with the word "Fleece" — this part is very important, "with the word Fleece," because that's what told Midjourney: okay, cool, I need to put whatever's in the inverted commas onto this bottle. On the label itself, "Fleece" in bold black uppercase letters. The bottle has clean lines, thick walls, and a flat base. Now that is a very descriptive prompt you can use in Midjourney along with the image reference, which I used as a style and object reference. That's how you get as close to the product shot as possible in Midjourney. Then take it out of Midjourney, put it into ChatGPT, and it can create the final product for you.
Isar Meitis: I love it. I'll say one more thing that, again, I've learned works well. When you start doing products, ask it for aspect ratios and the ratios between things: what's the aspect ratio of the bottle, what's the aspect ratio of the cap, and what's the ratio between the bottle and the cap — or whatever it is. This could just as well be a dog on a couch. It gives the description an understanding of what this thing is, and it dramatically reduces the chance of squishing the stuff you don't want squished. It doesn't guarantee anything, but it improves the odds. Ross, this was fantastic. I think we covered a lot and gave people a lot of examples, and I really appreciate you coming, spending time with us, and sharing your experience. If people wanna follow you, learn from you, take your course, connect with you — what are the best ways to do that?
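Isar's aspect-ratio tip can be made concrete with a small helper that turns measured dimensions into a prompt sentence. The dimensions below are made-up example values, and the helper is my own sketch of the idea, not a prompt from the session.

```python
from fractions import Fraction

def proportions_line(bottle_wh, cap_wh):
    """Turn (width, height) measurements into a prompt sentence stating
    each part's aspect ratio and their relative heights, so the image
    generator is less likely to squash or shrink the product."""
    bw, bh = bottle_wh
    cw, ch = cap_wh
    bottle_ar = Fraction(bw, bh)  # Fraction reduces 60:90 to 2:3
    cap_ar = Fraction(cw, ch)
    return (
        f"The bottle's aspect ratio is {bottle_ar.numerator}:{bottle_ar.denominator}, "
        f"the cap's is {cap_ar.numerator}:{cap_ar.denominator}, "
        f"and the cap is {ch}/{bh} the height of the bottle."
    )

# Example: a bottle 60mm wide by 90mm tall, with a 30mm x 30mm cap.
line = proportions_line((60, 90), (30, 30))
print(line)
```

Appending a line like this to the product prompt gives the model explicit proportions to honor, which is exactly the squished-bottle failure Ross hit earlier.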
Ross Symons: Yeah, man, thank you — it's been an absolute honor. Thanks for having me. I'm very active on LinkedIn — I'm Ross Symons on LinkedIn — and I run a company called Zen Robot, which you can find on LinkedIn as well. We're running a four-week masterclass in gen AI for content creation: from the beginnings of image generation all the way through to a final portfolio piece — or multiple portfolio pieces — that you'll have at the end of it. You'll see the links on my pages, but otherwise you can go to zenrobot.ai and that'll take you to the course page as well.
Isar Meitis: Awesome. Thank you so much. Thanks, everybody, for joining us live — great chat, and people posting cool images, which has never happened before. You inspired people to do stuff, which is amazing: I've literally been doing this once or twice a week for a very long time, and this is the first time people are actually posting images. So that was really cool. Thanks again, everybody, for joining us, and thank you so much, Ross, for being here and sharing your amazing experience. The last thing I'll say: go experiment yourself. The tools are now production-ready — all of them, regardless of which one you pick, and especially if you try all of them. Not for everything — if you need a building-sized printout, maybe that's not the tool — but you can still use them in the inspiration phases and the approval phases with a client, and then go to upscaling, or actual photo shoots, and so on. Awesome stuff. Thanks, everyone. Have a great rest of your day. Thank you.