Leveraging AI
Dive into the world of artificial intelligence with 'Leveraging AI,' a podcast tailored for forward-thinking business professionals. Each episode brings insightful discussions on how AI can ethically transform business practices, offering practical solutions to day-to-day business challenges.
Join our host Isar Meitis (4-time CEO), and expert guests as they turn AI's complexities into actionable insights, and explore its ethical implications in the business world. Whether you are an AI novice or a seasoned professional, 'Leveraging AI' equips you with the knowledge and tools to harness AI's power responsibly and effectively. Tune in weekly for inspiring conversations and real-world applications. Subscribe now and unlock the potential of AI in your business.
Leveraging AI
236 | How to create amazing visual content, at scale, in minutes with Isar Meitis
Can one image really power an entire visual campaign — from product mockups to video ads — in minutes?
Absolutely. In this episode of Leveraging AI, host Isar Meitis breaks down how a single visual asset can be transformed into high-converting static images, promotional content, and even multi-scene videos using one powerful tool: Weavy.
Forget hours with Photoshop or coordinating creative teams — Isar reveals how to build a full visual content machine with no technical skills, no coding, and no graphic design background. Just drop your image, and let the AI do the rest.
Whether you're in e-commerce, branding, or just want to playfully test new hairstyles on yourself (spoiler alert: Isar does), this is a must-listen masterclass on creative AI automation.
In this session, you'll discover:
- How to turn one image into 10+ ad-ready visuals in minutes
- The secrets behind automated, scalable visual workflows
- How to create full product mockups using only a logo and product photo
- The AI tools Isar stacks to generate multi-scene videos from scratch
- A surprisingly easy way to generate photorealistic interior design edits
- The fastest route from input to ideation to final visual asset
- How to leverage AI templating + community knowledge to save hours
- Bonus: Isar tests AI hairstyles on himself — and it's glorious
Try Weavy today: https://weavy.ai?ref=isar12
About Leveraging AI
- The Ultimate AI Course for Business People: https://multiplai.ai/ai-course/
- YouTube Full Episodes: https://www.youtube.com/@Multiplai_AI/
- Connect with Isar Meitis: https://www.linkedin.com/in/isarmeitis/
- Join our Live Sessions, AI Hangouts and newsletter: https://services.multiplai.ai/events
If you’ve enjoyed or benefited from some of the insights of this episode, leave us a five-star review on your favorite podcast platform, and let us know what you learned, found helpful, or liked most about this show!
Hello and welcome to the Leveraging AI Podcast, a podcast that shares practical, ethical ways to leverage AI to improve efficiency, grow your business, and advance your career. This is Isar Meitis, your host, and we have a really fun episode today. I very rarely do episodes that focus on a specific tool rather than specific use cases, because I believe we all start with the use cases. So yes, we are going to focus on use cases. However, I decided to share this tool because it is extremely valuable, extremely powerful, extremely easy to use, and really, really fun. I couldn't resist telling you about it, how I'm using it, and how other people are using it. So this episode is going to be about how to create visual assets at scale for your business and/or your personal life. These visual assets could be anything from static images to ads, with or without text, or any kind of combination of things that you want. I promise you that you will find this absolutely fascinating, and once you see how much fun it is, you'll start creating stuff just for the fun of it, which is by itself worth learning. So let's jump right in.

The tool that I'm talking about is called Weavy. There's a link in the show notes that will take you directly to the tool. The idea behind Weavy is that it combines multiple different tools, basically any image generator, any video generator, and prompting with any LLM, into one very easy to use interface, plus some additional really cool tools that enable you to build these processes, or use cases, in which you can automate the entire flow beginning to end. I picked several different examples that I played with in the past few weeks, and I'm gonna walk you step by step through exactly how it works and what the different components are. The reason I picked these specific use cases, or built them specifically for this demo, is that while each and every one of them represents a relevant real-life use case, each one will also show you at least one new capability that I did not show in the other use cases. So it's gonna give you a very broad idea of how this actually works. Now, since this is a very visual tool, feel free, later on when you're not driving or doing your laundry or walking your dog, to switch to our YouTube channel and watch it over there. There's gonna be a link in the show notes to our YouTube channel as well. So let's dive right in.

First of all, let me explain what Weavy is before we dive into this initial use case. Weavy is a canvas where you can add multiple steps and string them together into a process that becomes a reusable machine that generates consistent visual outputs. These could be images or videos, with or without text, et cetera. You can choose any image generation tool, any large language model, any video generation tool, and combine them together to generate the outputs that you want, together with several different internal tools that I'm going to show you as we review the different processes. This particular process is called baby clothing, but in reality it's a product promotion process, and I will show you what it does. The input it gets is just an image of a onesie in this particular case (it could have been a shirt, could have been anything else) that I found on the internet and dropped in. You can see that the type of the box is file, so I just uploaded the file.
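Before walking through the individual boxes, here is the shape of that "reusable machine" idea as a minimal Python sketch. The helper functions are hypothetical stand-ins for Weavy's boxes, not a real Weavy API; only the structure of chained, swappable steps driven by one input is the point.

```python
# A minimal sketch of the "reusable machine" idea described above. The helpers
# are hypothetical stand-ins for Weavy's boxes, not a real Weavy API; the point
# is only the structure: chained, swappable steps driven by a single input.

def describe_image(image_path: str) -> str:
    """Stand-in for the image-understanding box (would call a vision model)."""
    return f"<description of {image_path}>"

def ideate_ads(description: str) -> list[str]:
    """Stand-in for the LLM box that turns a description into ad ideas."""
    return [f"<ad idea {i} based on {description}>" for i in range(10)]

def generate_image(prompt: str) -> str:
    """Stand-in for whichever image model is selected for this box."""
    return f"<image for: {prompt}>"

def run_pipeline(image_path: str) -> list[str]:
    """Swap the input image and the whole chain reruns unchanged."""
    ideas = ideate_ads(describe_image(image_path))
    return [generate_image(idea) for idea in ideas]

print(len(run_pipeline("onesie.png")), "images generated")  # hypothetical file name
```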
The next box tries to understand what's in the image. It reads the actual image, understands what's in it, and then I use that understanding in the following steps. This is where I add my own prompt. So this is a prompt box, as you can see, and I wrote the following prompt: describe 10 distinct 16:9 new scenarios for the scene, each with a different environment. Each should be a description of a potential ad. The baby must wear the same exact onesie in all images. And I ask it to separate all of them with an asterisk between the different ideas. What I connected to it is the original image plus the prompt that I just read to you, and I connected that into a large language model box, with whichever large language model I want. Right now it was done by GPT-4o, but in the dropdown menu you can see that I can choose from almost any large language model out there. So what it is actually doing is generating 10 ideas for ads based on the image of the onesie that I uploaded. I'll give you several examples of what it created. Because the onesie says "Thankful for Family," it understood on its own, because it's a large language model and it saw the image and its description, that this is a family-oriented campaign. I'm gonna read a few of the examples, but not all of them. The first one: the image depicts a baby wearing a green onesie that reads "Thankful for Family," lying between two adults on a bed. The second one: in a sunlit meadow, a baby giggles in the onesie as butterflies flutter around, celebrating family joy. The next one, and this is my favorite one as far as the output, as you'll see in a minute: on a cozy couch by a crackling fire, the baby yawns in the onesie, surrounded by cuddly pets. And so on and so forth. Now, you see that the prompts it generated are really, really short, because that's what I requested in my prompt. I said, keep every description under 30 words. I could have made this significantly more detailed, but as you'll see in a minute, that is not required. So the output that I got up to this step is 10 different prompts of ideas for ads for a baby onesie, based on just the image of the onesie that I uploaded, nothing else.

The next tool is called Array. What Array does is it literally just breaks the 10 different prompts that showed up in one box into separate boxes, and then there's a tool called List that really breaks it down. So now I have 10 separate boxes, each and every one of them with a prompt that I did not write, and that will change dynamically if I change the image of the original onesie. This is the magic of all of this: the dominoes can fall all the way through with me changing just the one input image. So now I have 10 different prompts. What did I do with them? I connected a box for Nano Banana, Gemini 2.5 Flash Image, which is an image generation tool. What other image generation tools could I have picked? Here in the menu, you can see all the different ones that exist: three different options from Google, one from ChatGPT, Reve, Higgsfield, Flux, many open source ones, SD3, Ideogram, literally any model that exists out there. And if a model doesn't exist there yet, they will add it sometime in the near future. So I can pick from, I dunno, 15 or 20 different image generation models.
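The Array/List step is essentially "split one text output into many prompt boxes." Here is a minimal sketch of that fan-out, assuming the asterisk separator from the prompt above; the generation helper is a hypothetical placeholder, not Weavy's or any model's real API.

```python
# A minimal sketch of the Array/List fan-out described above: one LLM response
# with asterisk-separated ideas becomes a list, and each idea drives one image.
# generate_ad_image() is a hypothetical placeholder.

llm_response = (
    "A baby in the onesie lying between two adults on a bed * "
    "In a sunlit meadow, a baby giggles in the onesie as butterflies flutter around * "
    "On a cozy couch by a crackling fire, the baby yawns in the onesie, surrounded by pets"
)

# The Array step: split the single text box into separate prompt boxes.
prompts = [p.strip() for p in llm_response.split("*") if p.strip()]

def generate_ad_image(prompt: str) -> str:
    """Placeholder for the selected image model box (prompt in, image out)."""
    return f"<image: {prompt[:40]}...>"

images = [generate_ad_image(p) for p in prompts]
print(f"{len(images)} ad-ready images generated")
```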
I picked Nano Banana in this particular case for a reason: Nano Banana allows me to provide several different image inputs in addition to the prompt. So how are these images created? Each and every one of these images (and again, those of you who are watching the screen can see that there are 10 different images created from the 10 different prompts) gets three different inputs. One input is the prompt that was automatically generated. The other two are the original image of the onesie and a reference image I created separately, to give it some additional idea of how this looks on a baby. So every image gets the onesie as a flat item, the onesie on a baby, and then a prompt that I did not write, but that was generated on its own. And you can see how cute this is. You can see the baby lying on the blanket, the baby in the park, the baby next to an old car at a picnic, the baby with the family at an outdoor dinner, everybody smiling, kind of a sunset with beautiful lighting on everybody, the baby in the market, and so on and so forth. My favorite one, as I said, is the yawning baby on the couch. It's just the cutest thing ever, and you literally want to get teleported right into this room to look at the yawning baby with the pets lying next to him.

That could have been the end of it. I could have added text on top of it, and there's a way to do that, but instead I wanna show you how to take this and create a video. Again, I don't wanna be a blocker in the process. I wanted to be able to create videos regardless of what is in the image, because I don't know what's going to be in the image; the prompts are generated dynamically based on the item I uploaded in the beginning. So what I'm adding now is a box called Image Describer, and what it does, as the name suggests, is describe in detail what's in the image. You can use it with any image, not just the generated ones. It says: a cozy image featuring a yawning baby in an olive colored "Thankful for Family" onesie, seated next to an orange tabby kitten, both on a beige couch with a tan blanket on it. A sleeping bulldog rests against the kitten, and a golden retriever lies in the foreground. In the background is a lit gas fireplace with a wood mantle and decorative objects on it. A very good description. Those of you who are not watching can picture the image from that alone; this is how good it is. So now I have a description of what's happening in the image. Then I wrote a prompt that, again, I built as a generic prompt on purpose: you are an expert video ad script writer. Please create a script for a short video that starts with the image that I attached and continues for about eight seconds. The goal is to highlight the baby's onesie and provide a cozy family feeling. Include instructions for the background music and sound effects. The reason I ask for that is because if I wanna use Veo 3, or another video generation tool like Sora that knows how to create sound as well, I want that to be included. So now I have the image of the baby on the couch, I have the description in words of exactly what's in the image, and I have my prompt on what I want the tool to do with it. And I connected all three of them into, again, a large language model tool.
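If you wanted to reproduce that multi-image Nano Banana step outside of Weavy, one option (my assumption here, not something the episode uses) is Google's google-genai Python SDK, which accepts several images plus a text prompt in one request. The exact model name and response handling may differ across SDK versions, and the file names are hypothetical.

```python
# A hedged sketch of calling Nano Banana (Gemini 2.5 Flash Image) directly with
# one prompt plus two reference images, assuming the google-genai SDK. Model
# name and response handling may vary by version; file names are hypothetical.

from google import genai
from PIL import Image

client = genai.Client()  # expects an API key in the environment (e.g. GEMINI_API_KEY)

prompt = ("On a cozy couch by a crackling fire, the baby yawns in the onesie, "
          "surrounded by cuddly pets. The baby must wear the exact onesie shown "
          "in the reference images.")

response = client.models.generate_content(
    model="gemini-2.5-flash-image",        # "Nano Banana"; exact name may differ
    contents=[
        Image.open("onesie_flat.png"),     # reference 1: the onesie as a flat item
        Image.open("onesie_on_baby.png"),  # reference 2: the onesie worn by a baby
        prompt,
    ],
)

# Generated images come back as inline data parts alongside any text parts.
for part in response.candidates[0].content.parts:
    if getattr(part, "inline_data", None) is not None:
        with open("ad_scene.png", "wb") as f:
            f.write(part.inline_data.data)
```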
For that large language model box, if I click on it, you can see on the right that I selected Anthropic's Claude 3.7 Sonnet, because I think it is more creative than the other ones. And it created a prompt that is a script for an ad, and then all I have to do is connect it to a video generation tool. One of the cool things here is that because I have all the video generation tools available, I can test all of them in parallel and see the different outputs. On the top you can see one from Higgsfield that is really, really cute. All we're seeing is the yawning baby, the fireplace moving, and the pets moving around a little bit. The second one has a lot less motion and is more boring, and that's from Runway. The third one is from MiniMax, which is also really cool, with the baby moving more, but it distorts a little bit as it zooms in and out, which I don't like. By the way, for each and every one of them, I could have run it any number of times until I got an output that I liked. So if I now wanna get a completely different output, all I need to do is change the onesie in the very first step, and it will automatically come up with 10 new ideas related to what's on the onesie, break them down into 10 different prompts, create 10 different images, pick the best image, and generate three different cute videos in three different video generation tools, when all the work that I have to do is change one picture.

That is one aspect of this. I want to go to a separate use case that will repeat some of the same ideas but use some other capabilities. Many of my clients are in the branded merchandise universe, where you want to purchase a product, put your logo on it, and see how it is going to look. So what I created here is a machine that does exactly that and creates promotional assets to promote the result. The first input is a product that you can upload. In this particular case, I took a white metal water bottle that I found on the internet. The second input is my company's logo, my second company, Data Breeze, which is a software company that automates reconciliation processes with AI agents. That has nothing to do with what we're doing right now, but it's a logo and a brand name, as a PNG with no background. And I wrote a prompt, and again, you'll see that the prompt is very generic: you're an expert in promotional product logo placement. I would like you to look at the product in the image I'm providing you and consider multiple aspects of where the logo should be placed in order to make the product appealing while the logo still appears on the product in a clear way, and then apply your decision and place the logo in that location on the product. Make sure that the logo is accurate and that it is laid on the surface of the product in a photorealistic way. The entire product and the entire logo must be visible in the image. So again, I don't know which product and I don't know which logo, but because I wrote the prompt to be very generic, it will work. Then I took Gemini's Nano Banana, which again is the same image generation tool, but I could have tried other ones as well, and I'm giving it the three inputs: the generic prompt, the product, and the logo. And I get an output that is absolutely amazing.
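To make the "generic prompt, swappable inputs" idea concrete, here is a minimal sketch of the same structure in plain Python. The prompt text is paraphrased from the episode, and the helper functions are hypothetical stand-ins for Weavy's nodes, not a real API.

```python
# A minimal sketch of the product-agnostic workflow described above. The helper
# is a hypothetical stand-in for the image model node; only the structure
# (a generic prompt plus swappable inputs) mirrors the episode.

LOGO_PLACEMENT_PROMPT = (
    "You're an expert in promotional product logo placement. Look at the product "
    "in the image, decide where the logo should be placed so the product is "
    "appealing and the logo is clearly visible, then place it photorealistically. "
    "The entire product and the entire logo must be visible."
)

def place_logo(product_image: str, logo_image: str, prompt: str) -> str:
    """Stand-in for the image model node (prompt + product + logo in, image out)."""
    return f"<branded image of {product_image} with {logo_image}>"

def run_branded_merch_pipeline(product_image: str, logo_image: str) -> str:
    """The reusable machine: swap the two inputs and everything downstream reruns."""
    return place_logo(product_image, logo_image, LOGO_PLACEMENT_PROMPT)

# Same machine, different products, no prompt edits needed (hypothetical file names).
print(run_branded_merch_pipeline("water_bottle.png", "databreeze_logo.png"))
print(run_branded_merch_pipeline("baseball_cap.png", "databreeze_logo.png"))
```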
The output looks completely photorealistic, and it has my logo, which looks curved on the water bottle, placed in a logical location, which in this particular case is mid-height on the bottle. But then I use the same trick that I used before: I wanna show the product with the logo in different scenarios. So I wrote a prompt that, again, is generic: you are an expert marketer in the promotional product industry. Your goal is to come up with 10 different detailed descriptions of scenarios that can highlight how to use this specific branded product. Again, I don't name the product, because I don't know what somebody will upload to this process. Then I gave it several different examples, told it to make at least two of the ideas funny, and said that if the product requires manipulation for some of the images, it should mention that in its description. As an example, if it is a water bottle and somebody is drinking from it or filling it up, the cap should be off. The reason I added that is because I did not do it in the beginning, and then it didn't work well, so I added that instruction at the end. And just like in the previous steps, I'm attaching it to a large language model. What is the input of the large language model? The prompt I just read to you, plus the image of the product with the logo on it that was auto-generated by this process. In a very similar way, it creates 10 separate prompts. Then I put them into an Array, which creates a List. So now I have 10 separate prompts, each and every one of them in a separate box, and in a very similar way to what I did before, I'm connecting them to Nano Banana. Nano Banana in this case has two inputs: one of the 10 prompts that were generated, and the image of the water bottle with the logo on it.

You can see how cool this is, and for those of you who are not watching, I will describe it. In the first one, there is a businesswoman dressed up in business attire in an office. The background is an office space with two people in a meeting room, and she's standing next to a fancy water fountain filling up the water bottle. The logo is clear in the image, and the cap is off because she's filling the bottle, because I prompted for that before. Then there is another image of a marathon runner, again female, with a lot of people cheering on the side. It actually did something very, very cool: on the background and on the shirts of the people cheering her on, it also put Data Breeze, so the brand is everywhere in the image. It's not over the top, but it's there. And another lady on the side of the track is filling up her water bottle from a jug of water. The next one is a business person dressed in a suit. He's holding the bottle, sitting at a desk, and in the background there's something that seems like a Zoom meeting he is on, and he's showing off his fancy bottle. The next one is a marathon runner. The next one is a couple on a hike, and they have matching bottles, and they're standing happy, smiling on top of the mountain. And I didn't create the prompts for any of these. They were all auto-created. Here I did something similar to what I did before, but I wanted to show you other things that you can do with this tool when it comes to video generation. So I started with a similar process: I asked the Image Describer to describe one of the images. In this particular case, what we see is the bottle right in the middle of the frame.
In the background, there's beautiful scenery of a sunset next to a lake, and you can see the bottle standing on a yoga mat, with the legs of a woman who is leaning down to pick it up. We're not seeing her, because the shot is focused on the bottle. It's actually an awesome ad if we wanted a static image to promote this. So the Image Describer described this image, and then I wrote a similar prompt to what I wrote before: you're an expert video marketer who specializes in creating short videos that capture people's attention. Your goal is to write a detailed script for an eight second video, et cetera, et cetera. Again, I'm not telling it what to write about, because it's gonna learn that from the image and the description that I'm feeding it. Then I send it to a large language model, in this particular case Google Gemini 2.0 Flash. Why? Just to show you that I can. So I'm mixing and matching different tools, and the output is a detailed script of exactly what needs to happen, second by second, over this entire video. It's a very long description broken down by the second. I took the output of that auto-generated script, connected it to the image of the woman's feet on the yoga mat, and created a video. If you're watching, she's picking up the water bottle and drinking, and there are birds flying in the background, and the whole thing is really, really beautiful. Now, what I forgot to add here is the whole idea of opening the water bottle, so as you can see, she's trying to drink from the water bottle while the cap is still on, but that's an easy fix that I already showed you how to handle with prompting.

So this created this video, but here is where the next step of the magic happens. As you know, all these video generation tools have a limit on how many seconds of video they can create. But sometimes you want to create a longer video: you need additional scenes, and you want the second scene to be either a completely smooth transition from the first scene, or at least something that is connected to it. So they have a tool called Extract Video Frame, and what it does is let me select which frame I want from the video; I can scroll the video left and right until I find the frame that I want. In this particular case, I chose the very last frame, so now I know exactly what the last frame of the video is. What I did then is start with the same process, the Image Describer, so I have a detailed description of what happens in that last frame, and I have the last frame itself. I then added several different text boxes that I wrote. The first one says: you are working on the second scene of the video. The first scene of the video is: and then there's a colon and nothing after it. And the last text box says: you need to write a script for the scene starting with: and again, nothing after it. Then I'm using a really cool tool that makes this even more powerful, with the really weird name of Prompt Concatenator, which basically means it knows how to combine several different prompts together. So what did I combine? The very first prompt is my original prompt, literally just taken from there, the one that says you are an expert video marketer who specializes in creating short videos, blah, blah, blah. So I'm reusing the same prompt that I used in the beginning.
The second prompt in the list is: you are working on the second scene of the video. The first scene of the video is: and there I attached the script that was created in the original step, the one that generated the first video. The third thing I connected into this merger of prompts is: you need to write the script for the second scene, starting with: and there I connected the description of the last frame. I connected all of that into a large language model, and it created a script that is now the second scene, perfectly connected to the first scene, because it had the exact script of the first scene and the description of the frame it needs to start from. Then I added a new video generation tool, and into that tool I connected two things: this new script for scene two, and the last frame from the previous video as the first frame of this video. So if I merge them together later on with any kind of editing tool, whether CapCut or something else, they will seamlessly connect to one another, because the last frame of the first video is the first frame of the second video, and I can keep on adding more and more scenes. I can obviously provide any guidance that I want in the middle, but the cool thing about the way it is right now is that it's completely generic. Which means, going back to what I said in the beginning, I can now change the product from a water bottle to a baseball cap, and it will change the ideas for the images, it will change the images, and it will change the videos, including two different scenes showing off the hat, based on one of those images. This is absolute magic. What I just showed you would've taken a team of people a few days to create. It took me about an hour to build this process, and now regenerating it takes five seconds of my time, plus about 10 minutes for the different models to run one after the other, because they do need to run in sequence.
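To make that scene-chaining step concrete, here is a rough local sketch of the two ideas involved: pulling the last frame out of a clip (approximating Weavy's Extract Video Frame node with OpenCV) and concatenating the prompt pieces for the scene-two script. The file names and the describe_frame() helper are hypothetical placeholders, not Weavy's API.

```python
# A rough local sketch of the scene-chaining step above: grab the last frame of
# scene one with OpenCV, then concatenate the prompt pieces for scene two.
# File names and describe_frame() are hypothetical placeholders.

import cv2

def extract_last_frame(video_path: str, out_path: str) -> str:
    cap = cv2.VideoCapture(video_path)
    frame_count = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    cap.set(cv2.CAP_PROP_POS_FRAMES, max(frame_count - 1, 0))
    ok, frame = cap.read()
    if not ok:  # frame counts can be approximate for some codecs; back off a little
        cap.set(cv2.CAP_PROP_POS_FRAMES, max(frame_count - 5, 0))
        ok, frame = cap.read()
    cap.release()
    if not ok:
        raise RuntimeError(f"Could not read a frame from {video_path}")
    cv2.imwrite(out_path, frame)
    return out_path

def describe_frame(image_path: str) -> str:
    """Stand-in for the Image Describer node (would call a vision model)."""
    return f"<detailed description of {image_path}>"

BASE_PROMPT = ("You are an expert video marketer who specializes in short videos "
               "that capture people's attention. Write a detailed script for an "
               "eight second scene.")

scene_one_script = "<script generated for scene one>"  # output of the earlier LLM node
last_frame = extract_last_frame("scene_1.mp4", "scene_1_last_frame.png")

# The Prompt Concatenator step: three text pieces joined into one prompt.
scene_two_prompt = "\n\n".join([
    BASE_PROMPT,
    "You are working on the second scene of the video. "
    f"The first scene of the video is: {scene_one_script}",
    "You need to write the script for the second scene, "
    f"starting with: {describe_frame(last_frame)}",
])
print(scene_two_prompt)
```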
Let's go to a completely different kind of use case. This one is an interior design use case. On the left, I'm starting with an image of a room. It's a nicely designed entry foyer of a house: there are two doors, a green wall in the background, a mirror, and a mantle with several different things on it. And I've added a text box that says: remove the floor tiles and keep the floor a plain off-white. Then I used a different image generation tool called Flux Kontext, which is an open source model that is amazing at keeping consistency across different images. And as you can see in the second image, or as you can believe me if you're just listening to this, the image is the exact same image, only without the tiles on the floor. Then I created an image of an orange couch, and I combined the two of them together. So now I combined the room without the fancy floor with the couch, and it knows how to size the couch in a way that makes sense. I then took an image of a different kind of tile off the internet, and I wrote a prompt: cover the floor with the black and light floral pattern carpet and keep the rest of the interior view intact, maintaining architectural clarity. And as you can see, it did. So now I have the original room, but with a new kind of floor and with the orange couch in it. But you can go even further and add more steps. I used a mask technique, which I will show you in the next example, so I'm not gonna dive into it right now, to replace what was the mirror with a window looking out into the woods, and now I have a completely different view of the room. You can experiment with interior design, or any other kind of design, this way very, very quickly, and iterate across multiple options in just seconds, something that used to take hours in Photoshop.

So now to my final example, and I told you this can be fun, so I wanna show you something fun that I did. Several different people that I know, including my sister and several others, have used Nano Banana, once they've seen it, to try different hairstyles on themselves. You take an image of yourself, you take an image of a specific hairstyle, and you ask Nano Banana to combine the two. Now, as you know, because my image is on the cover of the podcast, I don't have any hair. I'm completely bald. So I found it really, really funny to play with different hairstyles on me, and I wanna show you how simple it can be to get different hairstyles. I started with an image from a conference I was at last week. It's a selfie of me and one of the fans of the podcast who wanted to take a selfie with me, so both of us are in the image, and you can see the background of the conference hall with all the different lighting and so on. So this was the input, just an actual live image of me plus another person. Now, there's a concept called masking when it comes to graphic manipulation, which basically means cutting things either into or out of the frame. Doing masking in Photoshop is still an art; you need to know what you're doing. What they have here in Weavy is a really cool tool called Mask by Text. Instead of selecting visually, you explain in words what you want the system to do. It has two inputs: one input is an image, and the other input is a prompt. So I took this selfie of me and the other guy, the one that has both of us in it, and I wrote a prompt: keep just the guy on the right. That's it. That's the prompt. The mask it produces is basically just a black and white representation of what it is going to do, so you can see a cutout of me without anything else. And then there is a tool called Merge Alpha. Again, you don't need to know what that means; it basically takes the original image and applies the mask to it, which means it's just going to show me and nothing else from the image. This is a process that would've been significantly more painful any other way, so just doing this is worth it on its own.
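For a sense of what that Merge Alpha step is doing, here is a rough local equivalent with Pillow. The file names are hypothetical, and the black-and-white mask itself would come from a text-driven segmentation step like Mask by Text; this sketch only covers the "apply a mask as transparency" part.

```python
# A rough local equivalent of the Merge Alpha step described above, using
# Pillow. File names are hypothetical; the mask would come from a text-driven
# segmentation step such as Weavy's Mask by Text.

from PIL import Image

selfie = Image.open("conference_selfie.jpg").convert("RGBA")
mask = Image.open("keep_guy_on_right_mask.png").convert("L")  # white = keep, black = drop
mask = mask.resize(selfie.size)

# Applying the mask as the alpha channel makes the masked-out area transparent.
selfie.putalpha(mask)
selfie.save("isar_cutout.png")  # PNG preserves the transparency
```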
But now is where the fun actually begins. I took pictures of really famous hairstyles from history. I've got Marilyn Monroe, I've got Amy Winehouse, I've got B.A. from The A-Team, I've got another Marilyn Monroe, Farrah Fawcett, a bronze statue of Caesar, another Amy Winehouse, and then a bunch of other famous people, Cleopatra and so on, even Gwen Stefani with her space buns on both sides of her head. And all I did is use a third-party tool that exists inside of Weavy and comes from an open source project called face swap. That's it. So all I had to do for each and every one of these images is connect the cutout of just me that I created using the really easy masking tool, and the other input is a known hairstyle. What you get, and again, if you're not watching, this is absolutely hilarious: in the first one I've got the great big hair of Amy Winehouse, in the same style, with the same smile and everything, and the same green dress, only it's me. Then me as Marilyn Monroe, in black and white, in the famous dress. Then me as Farrah Fawcett, then me as the bronze statue of Caesar with Caesar's hairstyle, and so on and so forth, Gwen Stefani, et cetera.

Okay, so now that we've seen multiple different options, I wanna do a quick summary of everything we have seen and why this is so powerful. Almost every one of the steps that I showed you would've taken a lot of time using traditional tools. Combining them all together, in many cases, required several different people with different expertise: graphics experts, prompting experts, image generation experts, video generation experts, and so on and so forth. And this replaces all of them with a very flexible workflow, with simple inputs and simple outputs, that a monkey like me can use. There are zero technical skills required to use all the stuff that I showed you, and once you start playing with it, you understand that by mixing and matching the different tools that they provide, you can go from any input to any visual output you want, in a way that is now scalable. You want to run the same process every single day, or every single week, or whenever? All you have to do, if you build this correctly, is replace the input, whether it's a prompt, an image, several images, or several different prompts, whatever the input is, and it will give you all the different outputs that the process knows how to generate. This is an insane time saver, and it is even more incredible when it comes to ideation and experimentation. By the way, they also have all the different upscalers in there. So while the original image that is generated is usually not high resolution, because it's generated by Nano Banana or SD3 or whatever, you can upscale it with really powerful upscalers and get significantly higher resolution images. You can also change the aspect ratio of any image and outpaint it so it's not blank: you can take a square image, change it into 16:9 or 9:16 or whatever you want, and use an open source model to outpaint the rest, so it still looks like a complete image, and so on and so forth. Literally anything you can imagine, you can do with no technical skills. Set it up once and then run it as many times as you want. This is the magic of AI at its best. So go to the show notes, click on the link to get Weavy, and start playing with this. I think you get a certain number of free tokens to start playing with it for free. One more thing that is really important to know: there's a growing community around this tool, and many of these people are sharing their templates. So if you don't know how to do something, just go to a large language model, or to Google if you're still old school, and say, hey, I'm looking for a Weavy template that does one, two, three, and you'll most likely find one. Then you can duplicate it, learn how it works, make whatever changes you want and make it your own, or just take components out of that template and combine them with components of other templates to create your entire workflow. The community aspect of this makes it even easier and even more fun.
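As a small illustration of the aspect-ratio change mentioned above, here is a minimal sketch that pads a square image onto a 16:9 canvas with Pillow; the blank margins are what an outpainting model would then fill in. File names are hypothetical, and the outpainting call itself is left out because it is model-specific.

```python
# A minimal sketch of the aspect-ratio change mentioned above: pad a square
# image onto a 16:9 canvas, leaving margins for an outpainting model to fill.
# File names are hypothetical; the outpainting step itself is model-specific.

from PIL import Image

src = Image.open("square_ad.png").convert("RGB")
target_w = round(src.height * 16 / 9)  # widen to 16:9 at the original height

canvas = Image.new("RGB", (target_w, src.height), (255, 255, 255))
canvas.paste(src, ((target_w - src.width) // 2, 0))  # center the original image
canvas.save("ad_16x9_padded.png")

# An outpainting model would then repaint the white margins so the result
# looks like one continuous 16:9 image.
```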
That is it for today. I really hope you found this helpful. As I mentioned, if you listened to the entire podcast, I highly recommend that when you have a little bit of time, you go watch the YouTube video as well. It will give you a much better understanding of how this works, though hopefully I was able to describe everything that's on the screen and how it works. And if you are enjoying this podcast, please hit the subscribe button so you don't miss any episode, whether it's myself, a guest, or the news that we share every single week. I do the very best I can to bring you the best, most practical content on the web right now, and on podcasts for sure. So subscribe, and share the podcast with other people. There's a share button on your podcast player; just click share and share it with a few people who can benefit from it. I would really appreciate that. And until next time, have an amazing rest of your week.