Leveraging AI

252 | How to Use Microsoft Copilot to Standardize AI Workflows Across Your Organization with Nate Amidon

• Isar Meitis, Nate Amidon • Season 1 • Episode 252

📢 Want to thrive in 2026?
Join the next AI Business Transformation cohort kicking off January 20th, 2026.
🎯 Practical, not theoretical. Tailored for business professionals. - https://multiplai.ai/ai-course/

Learn more about the Advance Course (Master the Art of End-to-End AI Automation): https://multiplai.ai/advance-course/


Is your AI strategy quietly creating more chaos than clarity?

As AI experimentation spreads across teams, what starts as innovation can spiral into disconnected tools, inconsistent outputs, and rogue workflows. Sound familiar?

In this episode, former Air Force pilot and enterprise transformation consultant Nate Amidon shows you how to bring order to the AI chaos, using Microsoft 365 Copilot agents to create smart, scalable, standardized automation across your business.

You’ll learn how to design simple, no-code agents that align teams, increase efficiency, and improve consistency, without ever writing a line of code or relying on IT.

Whether you’re just starting with Copilot or trying to make it actually useful at scale, this episode is your practical, tactical guide.

In this session, you’ll discover:

  • Why Copilot success depends more on structure than speed
  • How to build Copilot agents for project management, process documentation, and product planning
  • The difference between informational vs. process agents (and why you need both)
  • A sustainable way to manage agent updates without breaking your workflows
  • How to write “AI-first” documentation that trains agents better than prompts
  • What NOT to automate — and how to avoid creating new bottlenecks
  • How to extract value from meeting transcripts, emails, and docs using Copilot

About Leveraging AI

If you’ve enjoyed or benefited from some of the insights of this episode, leave us a five-star review on your favorite podcast platform, and let us know what you learned, found helpful, or liked most about this show!

Isar Meitis:

Hello, and welcome to another episode of the Leveraging AI Podcast, the podcast that shares practical, ethical ways to leverage AI to improve efficiency, grow your business, and advance your career. This is Isar Meitis, your host, and we're going to cover an incredibly important topic today: how to build tools that deliver standardized results across the organization. One of the big issues companies face as they start implementing AI is that they're basically generating lots of local chaos, because a few people who are excited about AI create local initiatives that they start using, which means many different people are doing many different things in many different ways. And the bigger the organization, the bigger the problem becomes, because as an organization, you need standardization. You need consistent results, otherwise you can't run your business, regardless of what you're doing. Whether you're writing project plans, generating reports, or writing proposals, you want them to always be standard. And you can actually use AI to increase standardization and get more consistent results than even humans did on their own before AI was ever created. So this is obviously music to the ears of any manager, whether middle management or senior management: how can we use AI to create consistency across the organization, so there are standards that everybody is actually following, and not just sitting in a folder somewhere?

To help us learn how to do this effectively, we have with us today Nate Amidon, who is an incredible person with a really unique history. He started his career as a C-17 pilot in the US Air Force. For those of you who don't know, the C-17 is one of the most impressive airplanes ever built. It is gigantic, and yet an incredibly powerful platform, and as an Air Force pilot myself, I can tell you that it's a big deal to know how to fly these things. After that, he spent 10 years leading large projects in companies such as Boeing, so he has the right experience to know what large organizations actually need, how they work, what the pitfalls are, and how to solve them. And for the last seven or eight years or so, he's been running his own consulting company, where he helps enterprises develop effective teams and processes, implementing what he learned in the Air Force. He's also hiring ex-military people, which I highly appreciate, again, as a veteran myself. That by itself says a lot about him as a person.

What he's going to share with us is two different automation tools he has built for actual enterprise clients, tools that help deliver clarity and consistency across multiple aspects of the organization. And he's going to show you exactly how you can develop these on your own, so you can do this as well. That by itself is a very important topic that I'm sure you're excited about, but to make it even more exciting, he's gonna show you how to do this in Microsoft 365. Why do I find this exciting? Because we very rarely do Microsoft 365 Copilot things on the show. We've touched on Gemini and Claude and ChatGPT and open source models, but we very rarely do anything with Copilot in Microsoft 365, and I know many of you are Copilot users because you work in large enterprises, and about 70% of large enterprises use Copilot. So there are two very good reasons why you should stick with us and listen to this episode.
One is that we're going to show you how to build consistency across processes and projects in the organization, and the other is that we're going to show you how to do this with Microsoft Copilot. You can obviously take this to any other tool you want, but if you are in the Copilot universe, you get another bonus, and this is why I'm really excited to welcome Nate to the show. Nate, welcome to Leveraging AI.

Nate Amidon:

Ah, thanks, Isar. Happy to be here. You know, one thing you missed in the intro was that the C-17 is the best-looking aircraft as well.

Isar Meitis:

Debatable. Debatable. Hence why I did not mention that, because I knew this was gonna go somewhere that might derail the podcast. The F-16 is pretty cool too. But the C-17 is an incredible, incredible platform, I will definitely give you that. Okay, so let's really dive right in and talk about the first example you want to share with us. Maybe start with what it is and how some of your clients are using it, and then we can dive into actually how to build it.

Nate Amidon:

Right. One thing you said in your intro that I think is really important is that AI can really just create chaos. It can in some ways be a chaos generator, especially in larger organizations: the larger your organization, the more people you have who can be off creating their own things. And as these tools become more readily available and user-friendly, pretty soon people are gonna start sharing those out, right? And if you think about the person in your office who you definitely don't want creating the standard AI, that person could be the one that carries it out. So I think organizations need to be thinking a lot about, hey, how are we actually going to handle this? How are we gonna build our own internal governance process to make sure the AI is being used in a way that we want, that fits our culture, that fits the objective of our team, of our program, of our organization? And so I think leaning into this early is gonna be a good idea.

Isar Meitis:

Agreed. I think the grassroots aspect of AI is awesome, in the sense that it allows more people to be innovative and do things for the organization, as long as you can control it. Once you lose that, then, like you're saying, it's just gonna create more chaos than you had before, which is definitely not the goal. The goal is to create more productivity, more efficiency, and better results, not more chaos, because we usually have enough chaos in organizations as it is.

Nate Amidon:

Yeah, absolutely. And it's really a balance, because you want to let people go off and do science projects. You want them to experiment and come up with things that are valuable. It's a lot like hackathons in the software development realm: those are valuable for software engineers because you come up with ideas that could be implemented and save the company huge amounts of money. But at the same time, you can't have a team spending six months building a science project that isn't validated. So it's a huge balance, and I think AI is gonna look a lot like that. I think we need to do AI with a lot of the agile principles we used in software development, and we should treat our AI agents as software products, 'cause that's what they are. So let me tell you what we've been doing with our clients. And let me just preface: I'm not a software engineer. I don't have really strong technical chops. I get scared by Excel macros sometimes, okay? So I don't go super deep in the hands-on-keyboard realm, but we think a lot about optimizing process and driving alignment in an organization. As Air Force pilots, it's really important to know what the mission is and who's doing what, and when you have multiple aircraft in the airspace, they all have to be coordinated and talking. That same principle applies to technology organizations, or really any organization that's in the complex domain. So we've always done automations, through Azure DevOps and Jira and through general processes. And what's really happening now is that Microsoft and Gemini and GPT are all doing the same thing: they're democratizing some of that technical ability for people who aren't hands-on-keyboard engineers.

Isar Meitis:

Yeah.

Nate Amidon:

So a lot of the things I'll show you, you could have built before AI. AI's not really unlocking it; it's just making it a lot easier for people to do it themselves without a ton of technical background.

Isar Meitis:

A hundred percent.

Nate Amidon:

Yeah. So let me just give a quick overview of our philosophy of how we do this. I'll do it really fast, but I think it's really important to set the stage, because like I said, these aren't really technically impressive. It's more about how they're applied inside the process. We have four key areas that we focus on.

The first one is that we should be building these AI solutions to make humans better, not replace humans. It should be enhancing human ability. That's important from a change management perspective, because a lot of times that concern can stop people from even adopting.

Second, anytime we're gonna build an automation, it has to be valuable, which is the biggest common-sense thing ever. We define valuable as: it saves time, or it increases quality. So you could create an automation that makes it take longer for someone to do something, but it's more valuable because the end product is a lot better, and that improves the systemic value. We need to think about big systems and value. And automations can't create bottlenecks. If you can automate your task and do it really fast, but it causes 10 other people's tasks to take a lot longer, you haven't added any value. You've just moved your work to somebody else, if that makes sense.

Third, we think about this incrementally. We don't always try to build the fully complete automated solution; we'll do things in an 80% stance. If you can automate parts of your task, you should do that. It's okay to roll out small pieces of functionality and automation instead of trying to build one big thing.

And the final thing is that we should always think about sustainability. Especially in these larger organizations, if you build automation that doesn't adapt and change with the underlying business processes or changing market conditions, and no one's managing it, it's gonna drift and become, best case, useless, and worst case, counterproductive.

Isar Meitis:

Yeah, great points. I think the interesting thing about what you're saying is that all of it makes perfect sense, and yet a lot of people and organizations do not follow these things. Human-centric and valuable are two really critical aspects. You want to give people superpowers; then they will be, A, happier employees, and B, more productive, which will drive the organization as a whole. And you wanna make sure that what you're developing is not just a cool project, but actually provides value to the organization.

And the interesting thing is, and I'll say my 2 cents and then I would love to hear yours: how do you measure value? Because value can be measured in several different ways. When I work with clients, I measure value in four ways, and three of them are very obvious. One is that it drives up the top line, so that's simple. Another is that it reduces cost, which improves the bottom line, which makes sense. The third one is that it removes tedious tasks and frees people to do more valuable jobs. That's still simple and logical, but there's no immediate, direct correlation to a dollar value. And the last one is an even further extension of that. I believe that having happy employees is one of the goals of an organization, because I believe the goal of life is to be happy, and that's just an extension of that. We spend so many hours at work; we should try to make the people around us happy. So whatever we can do to relieve employees of the things that make them unhappy is valuable, right? It may not be captured in a KPI or an OKR or whatever other acronym you want to put on it, but I find these things valuable. That's the way I do things, though. I'm very curious to hear how the organizations you work with capture or define valuable work.

Nate Amidon:

Yeah. So we work primarily with software development and technology organizations, usually mid-size to large enterprises, multiple-team type scenarios. And defining value, forget about AI for a second, is one of the biggest hurdles, especially in these really large enterprises, like Boeing for example. There are so many moving pieces, and when you get down to the team level, the question of what it is that you actually do, and why your team exists, is a hard question. The smaller your team, the simpler it can be. If you have a startup: hey, are we making more money or not making more money? But you can't correlate that to, say, a backend API team in a super-integrated system in a large enterprise. It becomes really hard. So defining value is a real problem. And what you brought up really made me think about back in the eighties, when manufacturing companies first got automated robots into their assembly lines. There's a book out there called The Goal, if you're familiar with that. Sorry, not The Game, The Goal.

Isar Meitis:

Yeah, that's way before the eighties.

Nate Amidon:

Yeah, it's back there, right? Before we were born. And what's crazy is they had these same problems.

Isar Meitis:

Yeah.

Nate Amidon:

Which is, they were automating things, and they're like, hey, this one task is way faster, we're getting a huge ROI. But in the end they weren't at all, because they were just creating a...

Isar Meitis:

A bottleneck somewhere else. And your ability to create additional capacity makes no difference.

Nate Amidon:

Yes. So putting on your lean hat, the Lean Six Sigma stuff, is gonna become way more important, and the process of implementing AI is gonna be more important than the actual AI capabilities themselves, especially in the larger enterprise.

Isar Meitis:

Agreed, a hundred percent. So let's dive in. What's this particular first automation task?

Nate Amidon:

Okay, so let me just say there are two kinds of areas that I focus on, and these are really lightweight automations. I'm not gonna wow your audience with my technical skills here. But there are really two ways I think some of these capabilities can help. I look at them right now as: you can have agents that are informational, and agents that are process improvement agents. I'm gonna show you one of each, and go through what they look like, what they are, and the use cases for them. I'm gonna start with what I'm calling a ways-of-working agent. The problem statement here is that there are a ton of different process steps in these bigger enterprise organizations. There are so many regulations. The PMO might have its own structure; you have an agile coach, or you have a director, and they all want certain things done a certain way. So following the process isn't hard, but knowing what the process is, is hard. What we've done is built an agent to make it easier to know how we're supposed to do things, and also to get information about the process. So that sets the stage. Let me share my screen and we can walk through it.

Isar Meitis:

Awesome. For those of you who are listening and not watching this on YouTube, we are going to explain everything that's on the screen so you can follow along with us. You can also always go and watch this on YouTube; there's a link in the show notes that can take you straight to the YouTube channel. But if you are walking your dog, doing the dishes, jogging on a treadmill, or driving, you can stick with us just listening, and we will tell you exactly what's on the screen.

Nate Amidon:

Perfect. I'll try to give progressive directions here as we go. Okay, so first, what we're talking about is Microsoft Copilot. If you have a Microsoft account, let's say in a larger enterprise organization, you probably have Copilot. You can get to it through your Teams instance (on the left-hand panel there's a Copilot tab), or you can get it through your M365 account, which is those nine boxes at the top left.

Isar Meitis:

Yep.

Nate Amidon:

And when you select those, you can find the Copilot tab, which I already have open here, and it'll pop into your M365 Copilot instance. Now, I'll talk a little bit about licensing. Essentially, the gist is that you may not automatically have access to agents in an M365 Copilot instance. The way you can check is to look on the left-hand side: if you see Agents, you have it. If you don't see Agents, you don't, and you should go talk to your system admin to request a license. They're not overly expensive, maybe $20 a month, somewhere around there. It depends on the contract your organization has with Microsoft.

Isar Meitis:

A question for you, since I'm not an expert in the Copilot universe: if you have the Agents tab on the left, does that mean you can create them and use them? Or if you just want to use them, do you not even need that kind of license?

Nate Amidon:

You can. It's all or nothing, from my understanding. But things are pretty dynamic in Microsoft's billing world.

Isar Meitis:

So if you wanna use agents, even if somebody else built them, you still need the license. Got it. Okay.

Nate Amidon:

Yes. And I know how big organizations are. Sometimes you have to create a full business case that costs more than $200 to build, for your $200 license.

Isar Meitis:

Okay, yeah. Yearly cost. I'm with you.

Nate Amidon:

So if you have Agents here, there's basically a button on the left that says New Agent. You can click on that, and it will help you create a new agent. Now, if you're familiar with GPTs in the ChatGPT ecosystem, the OpenAI ecosystem, they're very, very similar to a Copilot agent. And I use "agent" loosely here, but that's what they're calling them. When you open up a new agent, it'll give you two options. You can either describe what you want the agent to do, and it will build it for you, or you can hit the Configure button, which lets you fill in your own instructions. Technique only: I like to fill in my own instructions, and I do that with the help of another LLM.
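For listeners who want to try this, here is a minimal sketch of the kind of drafting prompt you might give another LLM to write the agent instructions for you. The agent's purpose and all the details here are hypothetical, just to show the shape of the request:

```
You are helping me write instructions for a Microsoft 365 Copilot agent.

Context: the agent answers questions about our internal software delivery
process, using only documents stored in a SharePoint folder.
Audience: project managers and engineers who are new to the process.
Constraints: the instructions must fit in roughly 8,000 characters, tell
the agent to cite the source document for every answer, and tell it to
decline questions outside its scope.

Draft the instructions, then ask me clarifying questions about anything
that is ambiguous.
```

You can then paste the draft into the Configure tab, test the agent, and iterate on the wording in the same LLM conversation.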

Isar Meitis:

We are exactly the same. I've used the describe option in custom GPTs exactly zero times, and I've created dozens of them between my company and client companies. And I teach people to use a regular conversation with whatever your tool of choice is to develop the instructions for the custom GPT. So we're a hundred percent aligned. And guys, we did not coordinate this in advance. So two separate people who do this for a living are telling you this is the right thing to do. Maybe that's the right thing to do.

Nate Amidon:

Fair. Well, we're both type-A former pilots, so that's probably why.

Isar Meitis:

Fair. Okay.

Nate Amidon:

So when you go through, to your point, I like to do that because then I can test it and update the instructions on the agent. Now, when you're creating a new agent, you have to give it a name. I like to give it something that sounds useful, 'cause remember, I'm not building these for myself. I'm building these for a team or an organization, and I'm using them to help the organization become more aligned. So I want it to be named something that makes sense.

Isar Meitis:

So beyond "makes sense," there's a marketing aspect to it, right? It needs to sound valuable.

Nate Amidon:

Right. Right.

Isar Meitis:

Yeah.

Nate Amidon:

And then there's a description of your agent below it. This is really informational; the agent doesn't use it. It's just there so that when someone opens the agent up, they can see what it does. And then the instructions are where you make your money. Microsoft gives you about 8,000 characters, which, from my experience, is enough. And remember when I said earlier that these need to be valuable and also incremental: when I'm building these, I'm scoping them to a specific business problem, and I want to keep them limited in scope. Now, that's my technique. I don't want an agent that does five things. I'd rather have five agents that each do one thing.
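To make that concrete, here is a hypothetical skeleton of the setup fields for one tightly scoped agent. The name, description, and wording are illustrative, not Nate's actual agent:

```
Name:         Ways of Working Guide

Description:  Answers questions about how our program runs: sprint
              cadence, ceremonies, reporting, and the release process.

Instructions (excerpt):
- You answer questions about ONE topic only: our internal ways of
  working. Politely decline anything else.
- Base every answer on the documents in your knowledge sources.
- Keep answers short, and always name the document you used.
```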

Isar Meitis:

I'll say two things about what you said. One is about the last thing, where I agree with you a hundred percent, for two different reasons. A, they're easier to build. B, they're a lot more consistent in the results. If you try to build one agent that does five things, you'll most likely get it to do roughly the five things, instead of actually doing each and every one of them perfectly. And you can always string them together with some other automation tool that knows how to transfer the data from one step to the other, or you can do it manually. You get a lot more control and much better results. That's one thing. The second thing is about the 8,000 characters. I know we're gonna talk about it in a minute, but there's also the knowledge base, which is a bunch of files, and this allows you to dramatically extend the instructions. If you wanna give the agent examples of what good looks like, you don't have to write the examples inside the instructions. You can say, look at example one and example two in your knowledge base, and those don't count toward your 8,000 characters. In one sentence, you've added 20 pages of examples. So this is another way to cut the amount of text inside the instructions themselves, if for whatever reason you hit the 8,000-character limit.
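In practice, that trick can be a single line in the instructions. A hypothetical example, with made-up file names:

```
When drafting a proposal, match the structure, tone, and section order
of Example_Proposal_1.docx and Example_Proposal_2.docx in your knowledge
sources. Do not copy their client-specific content.
```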

Nate Amidon:

No, that's a great point. And I'll talk about the knowledge base in a second. The final thing I'll mention is the other pillar we talk about: sustainability. The same reason engineers went to microservices is the same reason you should have one-task agents whenever possible. They're easier to update. You update one; you don't have to update a whole monolith of an agent. Okay, so you write your instructions, and then at the bottom you can add the information you want to train the agent on, and you can do this directly off of your SharePoint. One of the reasons I love this idea of building your own M365 agents is that it's all inside the Microsoft firewall ecosystem. If you can send it in an email, you can build it into an agent. So I don't deal much with security concerns, because I'm just using capabilities inside the fence. You can add different websites, URLs, documentation that's in SharePoint, and you can upload actual documents.

Isar Meitis:

Teams channels as well, right? Agents are now fully connected into Teams, so you can train one on a specific channel. Like you said, anything in the Microsoft universe can be a source of data for the agent, which is very powerful.

Nate Amidon:

Yes. And I'll say Microsoft is rapidly prototyping and updating this, so this could look totally different from what I said tomorrow. But generally, the capabilities allow you to upload, and there's a huge limit on files. If you're doing this in a Gemini space right now, they're limited to 10 source documents, but in Microsoft it's like 10,000 or something ridiculous. The other piece I want to mention, which I think is interesting, is this idea of only using specified sources. There's a toggle underneath the knowledge base, and if you switch it over to only use specified sources, you are telling the agent to use only the documentation you loaded. No guessing, no going out to the internet. For the agent I'm gonna show you, I think that's a valid option to select, because if I'm talking about agile processes and how we do business, I don't want every other agile coach out there giving me their slightly horrible idea of how to run sprint planning or whatever. When you select this, you've essentially turned it into a chatbot.

Isar Meitis:

Yeah, right. And to explain this to people in broader terms: you limit the universe of the agent to a very specific data source, and only that data source. Whether you're doing agile planning like Nate is talking about, or any kind of project management, or an employee handbook question like how do I apply for maternity leave, you want it to go to just those two HR documents, and that's it. This is how you force the AI to do that, which is extremely powerful, because in most cases, unless this is a research agent, that's what you want: you want it to go only to the sources you give it. And the combination of that, together with the fact that it's already in your Microsoft universe and you already have these documents, just makes it really, really easy to work with.

Nate Amidon:

Absolutely. And the use cases for that type of agent, which I'm calling an information agent, are really broad. You know, I was less excited about it at first, but when I started using the application with clients, it was like, oh, this is actually really powerful: HR processes; if you have a huge regulatory environment, getting all of the regulations in there so you don't have to spend hours hunting through them; even, in a software development space, specific user manuals for your software product.

Isar Meitis:

I'll give you another, broader example. I have several clients in the manufacturing space, in the product space, and they have a product catalog the size of the United States. You wanna know the part number for this or that SKU, from this or that year, for this or that product. To find that, historically, yes, you can open 17 different documents and run a search on each and every one of them until you find the correct thing. Or you can write one sentence in this, and it will just give you the result.

Nate Amidon:

Yeah, absolutely. So there is a huge use case for this. I'll show one briefly here, just to wrap up how to create them. You can also create suggested prompts. I think this is important if you're sharing the agent across a broad audience of people in an organization, because it lets them know what this agent can be used for. You just put in a title and enter a message.

Isar Meitis:

And by the way, these show up as buttons, so people basically see a suggestion on the screen. If they don't really know how to use the agent, it gives them ideas of what they can do with it. Your most common use cases can already be there, and people just click the button.
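A sketch of what those title-and-message pairs might look like for an agent like this one; the wording is hypothetical:

```
Title:   What sprint are we in?
Message: Tell me the current sprint number, its start and end dates, and
         any holidays that fall inside it.

Title:   How do we run sprint planning?
Message: Walk me through our sprint planning process step by step, and
         name the source document for each step.
```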

Nate Amidon:

Right. And then, once you feel like you have it, you can test your agent while you're building it, on the right-hand side, and when you're ready, you can create it. So that's essentially how you build one of these.

Okay, so I'm gonna jump in and show an example here. The first one I call the ways-of-working guide, and this is a demo replica of one I use with a client. The context: we built an entire program of how to do software development, all the way from how we get product requirements in, to how we do releases, how we do our ceremonies in the agile world, how we do reporting, how we do metrics, all of these things. But people are always changing in and out of an organization, especially if you have, say, a hundred-person organization. So we needed a way to keep everybody standard on the process, to know what the process is, and to use this agent to help democratize that across the organization. This agent is trained on process data that we built, and it's configured to the way we do things. So I can ask questions like, what sprint are we in? And this agent, and I'll show you the backend after the demo, will look through the documentation and give you what sprint we're in and what holidays are coming up. It's just a quick way to find the information you need. You can also...

Isar Meitis:

This one's actually interesting, so I have a question about it. We were talking about information on SharePoint, which is usually static data, meaning guides and instructions and stuff like that. And yet the sprint number is a dynamic parameter, right? For those of you who don't know sprints because you don't come from the software world: software companies work in sprints, which are usually one- or two-week segments, and you define the scope of work for the development team for that sprint. So it's a very short amount of time, not a six-month or two-year project, with a very clearly defined scope, down to the person and what they need to do. Knowing which sprint you're on, or what's inside each sprint, is something you know at the beginning of those two weeks and not before. So this is a dynamic question. Where does the data come from?

Nate Amidon:

Great point, 'cause you can sort of blur the lines a little bit between dynamic and static, and it really comes down to what documentation you expose and how you manage that documentation as an organization. One of the things I'm a big fan of in large enterprises is a standardized sprint cadence. I want team A and team B to be speaking the same language. It's like the comm card in our Air Force world: between different aircraft, we all know what frequency we're on. So if someone on team A asks, hey, when are you gonna be done with this dependency, and the answer is sprint 24, I want team A to know when sprint 24 is. So we have a standardized sprint cadence with a lot of our clients, and we'll load that sprint calendar ahead of time for the year, and we'll put in what holidays are gonna be on there. That's part of the process documentation the agent is exposed to. And if for some reason we had a big issue and had to reshuffle the sprint calendar, we can do that in the documentation, and the agent will automatically update with the source documentation.
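For illustration, a pre-loaded sprint calendar document could be as simple as this; the sprint numbers, dates, and holidays below are invented:

```
Standard Sprint Calendar (all teams, two-week cadence)

Sprint 23: Mon 2025-10-06 to Fri 2025-10-17
Sprint 24: Mon 2025-10-20 to Fri 2025-10-31
Sprint 25: Mon 2025-11-03 to Fri 2025-11-14
  Holiday: Veterans Day, Tue 2025-11-11 (US offices closed)
Sprint 26: Mon 2025-11-17 to Fri 2025-11-28
  Holiday: Thanksgiving, Thu 2025-11-27 and Fri 2025-11-28
```

Because the agent reads this file from its knowledge source, a question like "what sprint are we in?" can be answered straight from the document, with no system integration at all.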

Isar Meitis:

Cool.

Nate Amidon:

Now, what this doesn't do: I'm not connecting this one with other systems or backend APIs like Azure DevOps, so it won't tell me what is in the sprint.

Isar Meitis:

Got it.

Nate Amidon:

So there's a balance; it's a spectrum of what you can do. But if you have a reporting mechanism, let's say an executive stakeholder report that is dynamic, a PowerPoint or some sort of Word document that tracks things, then as long as that's a standard, one-source-of-truth document, you could expose the agent to it, and then it could start to actually give you more dynamic data.

Isar Meitis:

Yeah. So in theory you could take the dynamic output, in your case the content of the sprint, but in other cases it could be the weekly team meeting summary or the announcement from leadership, whatever it is, and replace the file in the relevant folder with the current data, and then the agent will have access to the current data as well.

Nate Amidon:

Yeah, absolutely. So I think we get what this does; you can do a lot of things. But let's look at the backend, 'cause I think that's important to what we're talking about right now. I built this with hardly any instructions, really, and I could do a much better job of beefing them out. The secret for this agent isn't the instructions. The secret for this agent is how and what data is exposed to it. As we're building these, we need to be thinking about documentation and process the way software engineers think about data. If you're in the engineering world, clean, standardized data is critical to a functioning software application. Everyone's heard trash in, trash out; it's the same thing here.

Now, one of the things I view as a technique, and I've found this to be helpful: if I can organize the data and the underlying processes, and expose a folder, that means I can use everything inside that folder as the source data. A lot of times you can upload individual documents as the source for an agent, but if you change one document, if you delete version one and add version two, the agent won't automatically pick that up unless you reload version two as source documentation. So my solution is to upload a folder. As process documents change from version one to version two, the agent automatically picks up the information. And that's part of the sustainability thing that we talked about.
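A sketch of how that exposed folder might be organized; the folder and file names are hypothetical:

```
/Ways-of-Working/                 <- the folder the agent points at
    01_Sprint_Calendar_2025.docx
    02_Sprint_Ceremonies.docx
    03_Reporting_and_Metrics.docx
    04_Release_Process.docx
```

Swap 01_Sprint_Calendar_2025.docx for a 2026 version inside the folder, and the agent picks up the change without anyone touching the agent itself.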

Isar Meitis:

Yeah. So you share at the folder level and not the document level, because that allows you to replace the documents without having to re-update the agent.

Nate Amidon:

Right. But the discipline of making sure we know what's in and out of that folder is pretty important. Like I said, a lot of process integrity is important.

Isar Meitis:

Let's look at at least a little bit of the instructions, so people understand the general idea of what the agent knows how to do, or assumes it should do.

Nate Amidon:

Right. What I put in this agent was very simple. It basically says: use only the SharePoint documentation, don't use any outside knowledge. Even though I've already clicked the toggle that stops it from doing that, I feel better about saying it. I like it to cite source documentation, and this is really important, especially if you have these huge regulations and lots of different source data, because a lot of times you may not get the exact answer you want, but then you can click through and quickly find the document the answer is in. I tell it to stay away from HR policies. Don't guess; have a canned response that says, hey, we don't cover this, go talk to somebody else. And if a user thinks something's wrong, and a lot of times you'll be interacting with an agent and go, hey, your answer's wrong, I know your answer's wrong, I want it to point them at whoever owns this agent: hey, reach out to this person and get it fixed.
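Written out as instruction text, those rules could look something like this sketch; the exact wording and the canned responses are placeholders, not Nate's actual agent:

```
- Answer ONLY from the documents in your knowledge sources. Do not use
  outside knowledge, even if you are confident.
- Cite the source of every answer: document name and section number.
- If a question touches HR policy, respond exactly: "This agent does not
  cover HR topics. Please contact HR directly."
- If you cannot find the answer, respond: "This is not covered in the
  process documentation."
- If a user says an answer is wrong, ask them to notify the agent owner
  so the source documents can be corrected.
```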

Isar Meitis:

Yeah.

Nate Amidon:

And then just some stuff around tone, formatting, and audience. And that's basically it.

Isar Meitis:

Perfect. I'll add my 2 cents on the citation part. When I teach people to engage with documents with AI, whether inside an automation like this or just uploading a document to a regular chat, I have four bullet points saved in a prompt library that I use again and again. The first one tells it to use only the information from the document. The second tells it not to use any other source of information. I know that sounds redundant, but it reinforces the first. The third bullet point tells it to say something specific when it doesn't find the information, such as "not available." The reason for that is that these tools are built to please, and if you ask a question, you're gonna get an answer. If it doesn't find the information in the document, you're risking that it will make up an answer, and if you tell it what to say when the answer isn't there, such as "information not available in this document," in quotation marks, whatever you want, it will tell you that in a much higher percentage of cases, so your chances of hallucinations are lower. And the fourth one is exactly what you said: I ask for very specific citations in a very specific format, and it's always the same thing: document name, page number, section name, and exact quote. I do this because, A, I can verify the information very quickly, and B, if I want to find additional information around that area, I know exactly where to go.

As far as verifying the information: Copilot used to be really bad at giving quotes until a few months ago. It would literally make up quotes. I've done this recently in Copilot, as of the last few weeks, and it's actually really, really good now. When it gives you an exact quote, you can copy that quote, go to the original document, hit Command-F, paste it into the search line, and it will jump straight to that segment in the document. And if it really is on page 73, in the section it cited, then you know it actually gave you the correct information. So, A, it gives you more context on what's in the actual document, and B, it gives you another great way to check the information.
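Isar's four bullets, written out as a reusable prompt snippet. This is a sketch of the pattern he describes, not his exact saved text:

```
- Use only the information in the attached document(s).
- Do not use any other source of information.
- If the answer is not in the document, reply exactly: "Information not
  available in this document." Do not guess.
- For every answer, cite: document name, page number, section name, and
  an exact quote.
```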

Nate Amidon:

Yeah. And that's very important when you're building these things: knowing your risk tolerance for wrong information. So, great. And like I said, I use only specified sources; I selected that toggle. From the documentation perspective, this one's pretty light. It's demo documentation. But a few things I've figured out, and I'd actually love your take on this, Isar, 'cause you've done a lot of these with other LLMs: bulleted Word documents, essentially documentation set up the way government documents are naturally set up, 3.1, 3.1.1, 3.1.1.1, 3.1.1.2, are really beneficial for helping the agent categorize and find things. So the way I think about it is that I'm building documentation with fewer pretty pictures, fewer process flow diagrams, and more just straight bulleted step one, step two, step three. I'm thinking about how organizations document their process as an AI-first documentation strategy that humans can also read. So, for example, I uploaded a sprint calendar here; that's how you saw that. Now it's on me, or on whoever owns the process, to update it in 2026. And then it's really just bulleted statements that are easy to categorize.
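A short sketch of that government-style numbering, applied to a hypothetical ceremony document:

```
3. Sprint Ceremonies
3.1 Sprint Planning
3.1.1 Sprint planning is held on the first Monday of each sprint,
      10:00-12:00 local time.
3.1.2 The product owner presents the prioritized backlog.
3.1.3 The team commits to a sprint goal and records it in Azure DevOps.
3.2 Daily Standup
3.2.1 Standup is 15 minutes, at the same time every day.
```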

Isar Meitis:

Yeah, I love that. I was actually about to ask you a question about this later on: what about all the visual aspects? When you document processes, flow charts are a big deal, or charts in general, org charts and stuff like that. How do you deal with those, when it comes to the AI's ability to, A, understand them (and they're very good at understanding them at this point), but more importantly, to show you what they are? Because as far as I know, it doesn't know how to pull an image out of a document. Do you upload the documents separately? Like, if you want the AI to be able to show you the flow chart, do you upload the flow chart as a separate document, or have you not handled that use case?

Nate Amidon:

No, I have handled that use case. I had a client where a well-known consulting firm came in and did a full transformation, and they came out with something like a 200-page PowerPoint with all the pictures and arrows. It was great, but nobody knew what was going on. So I essentially built one of these just to be a what's-going-on-in-the-aftermath-of-this-transformation agent. What I found beneficial was turning that PowerPoint into a process document. I would keep a lot of the pictures, but I would also take each picture into a separate LLM and say, hey, take this process, from what you can tell, and turn it into a bulleted process statement. And it was probably 70% accurate. From my perspective, it's hard for AI and LLMs to pull meaning out of visuals; they don't always know the natural flow. So I think it's best to go through and convert them into a more documented hierarchy.

Isar Meitis:

So you convert them into text and then include both in the actual document. But that means the agent cannot show you the flow chart; it can describe the process.

Nate Amidon:

Yes.

Isar Meitis:

Okay.

Nate Amidon:

Now, if you needed the flow chart to be part of it, you could play with that in the instructions and say: anytime there's a process with an associated flow chart, describe the process and then try to render the image. I haven't played with that.

Isar Meitis:

Yeah. Okay, cool.

Nate Amidon:

So anyway, that is essentially the documentation. One other thing I'll say: we talked a little bit about microservices and how we should have single-task agents. I think when you're creating documentation for your organization, you should do the same thing. All of these (let's say I had four documents in this demo) could all be one big how-we-work document. But by breaking them down, the LLMs are more effective, and more importantly, it's more sustainable, because it's easier to update when you have a change. So building the agent isn't hard. Actually knowing what your processes are, and having them documented in a way that's sound, that's the lift.

Isar Meitis:

I love it. Two things you said are really interesting to me, things I never thought of, and obviously your doing this with large enterprises brings immense value. One is to break down your documents just like you're breaking down the agents. If you have a document, don't make one document about 50 different things; make 50 documents, each about one thing. And naming conventions, I'm sure, play a big role in naming those documents, so the AI knows, oh, I should probably look at this document first to find this kind of information. So naming conventions would play a big role if you're breaking the documents down smaller.

The other thing you said is about the internal structure of the documents: writing them as if you're writing them for AI, making them as structured as possible, knowing that humans can also read them. That's the reverse of how things are right now, where all of our documents are built for humans and we hope or assume the AI will make sense of them. I think these two things are extremely important when it comes to preparing the data so the agent works very well. And you summed it up perfectly: if you do that, you can be very lean on the instructions, and it's still gonna work really well, because the AI will understand exactly what it needs to do and where the information is. The other way around, and I've had this happen multiple times, you start getting into very specific, intricate instructions, repeating things three times so the agent will actually do them. If you focus on the data preparation, you get better results with a lot less effort on the instructions.

Nate Amidon:

Absolutely. It's, you know, seven hours sharpening your axe for one hour of cutting. Those are all great points, and I think we're gonna be thinking about AI-first documentation for a long time.

Isar Meitis:

So let's jump quickly into the other example, now that people understand how this is built.

Nate Amidon:

And I think this other example is also interesting. This one is more what I would call a process agent. I know we're running short on time, so I'll do this fairly quickly. Essentially, in the product management world, if you're in a software development organization, you're building new features and functionality for a product, and the good-idea train is full, meaning everyone has a great idea for the next new feature. So if you're a product manager, the person deciding what to build, you're gonna get inundated with, hey, you should add this feature or that feature. What we do with organizations is build a process for how those product managers go from an idea to ready: how do we go from your good idea, to figuring out what you're actually asking for, to something that is consumable by a technical team to actually build? A lot of organizations don't handle this process very well, but that's what we do.

And we're big fans of product canvases. A product canvas, if you're not familiar, is a one-page document that is an alignment tool. It asks some very simple questions: Who's asking for it? What's the problem we're trying to solve with this feature? What does good look like? What are the data sources? What's in scope and out of scope? What are the risks and dependencies? It's a way to force product teams to actually flesh out ideas before they get to the engineering team. From a value perspective, for me this is the biggest ROI you can get in a software development program, because engineers will build something that's not valuable if you tell them to, and I think it's on the product team to make the best use of the engineering team's time. Okay, so that's the context.

Isar Meitis:

Before you dive in, just to broaden this for people who are not in the software world: this is the same for any project, any initiative you run in the company, from how we improve our services next year, to what kinds of products we want to manufacture, to how we improve our manufacturing process. Any initiative in the company is the same thing: you need a systematic process, a lens through which you analyze requirements, needs, and visions, in order to translate them into something practical that you're actually going to do, in an effective way, the following quarter, month, or year, whatever it is.

Nate Amidon:

Yep. And this type of agent is really just there to help that process along. To broaden it even further, it can be any process. If you're in the PMO and you have to produce gate documentation, or you always have to send a report to your CEO in a certain format, there are process steps that have to be followed, and you can build agents trained on that process that help you do it faster. That's essentially what we're doing here.

So for this one, what's cool is that I can say I wanna build a canvas, and it'll walk me through each one of the steps and each one of the questions. It's trained to know what a good answer and a bad answer look like, and it will just be my guide. Now, that's mildly valuable. What's more valuable is the ability to upload information into it and have it spit out the canvas. You can take a meeting transcript or an email, or, if you're in this space, a lot of times you'll get a 30-page BRD-type business requirements document that talks about all the things they could ever want in a new feature. So I'll just take a simple call transcript, drop it in, and the agent is trained to take that information and spit out the first draft of the product canvas. It can take a lot of data and information and organize it into the process step you want, to help you determine what to do. It goes through each one of these; I won't go into the details of each step, but you generally get the idea.

One thing I will say, though: a lot of times I like these on a one-page PowerPoint canvas, a specific document, and these agents aren't great at building those, because you want them in your company brand, you want a certain font, you want them to look the same. We tried to do it a lot, and we figured out how, but we'd have to do it either through Power Automate or through some other separate system. You start getting into more advanced capabilities of agent building, and like I said, I try to stay out of that swim lane whenever I get close to it. But you also may not want to do that anyway, and this goes back to the bottlenecks thing we talked about: if you built an agent that took information, automatically created the PowerPoint, and saved it to SharePoint, it would become really easy to put every horrible idea that gets emailed to you into one of these canvases. Now you have a thousand canvases, and you've just moved the bottleneck upstream. Hopefully that makes sense.
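A sketch of the one-page canvas, using the fields Nate lists; the headings are paraphrased from his description rather than copied from his template:

```
PRODUCT CANVAS: <feature name>

Who is asking for it:
Problem we are trying to solve:
What good looks like (success criteria):
Data sources involved:
In scope:
Out of scope:
Risks:
Dependencies:
```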

Isar Meitis:

Yeah, two things about this, and I think it'll be interesting, again, to dive in and see what's under the hood. But my 2 cents: this is something I do a lot, across several different things. My best example, which I use all the time: I don't write my own proposals. The proposals get written by a process exactly like this one. I take call recordings and transcriptions (I use Fathom to transcribe all my calls), plus my email communication with the prospect or client, and I upload them and click go. I don't have to give it any instructions, and it spits out amazingly well-structured, well-written proposals, way better than I can write them myself, in about 60 seconds. Then I spend another 10 minutes verifying the information and adding my little flair on top, which in many cases is just job security for myself, to feel like I'm doing something productive in the process, and it's ready to go. So it's a very similar kind of setup: once it knows how to read unstructured data and you teach it how to structure that data into specific buckets, it just does it amazingly well.

Nate Amidon:

Right. And that's super beneficial from a personal efficiency standpoint. Now, if you're a director in an organization and you have five to ten product owners who are all determining what to prioritize and build, you can't be in every meeting. So you can take your vision of what you want in your organization and embed it in the agent, because the agent can be in every meeting. At the larger organizations, in the enterprise world where you can scale this, there's a huge amount of value, not just from efficiency, but also from increased quality.

Isar Meitis:

Yeah.

Nate Amidon:

Yep. So, to look over these instructions briefly: I did what we talked about earlier. I went to a different LLM and said, here's what I wanna build; can you structure this, can you structure that? And one thing you said that I like, which I'm gonna steal: I put what's a good answer and what's a bad answer, for each one of these questions, into the instructions. But I could easily take that, put it into a document, and expose it that way, and reduce the instructions. I think everyone's gonna continue to get better at building these. But essentially, I put in what quality looks like for each of the questions.

Then I also wanted to put in the flow, 'cause I'm creating a guide here, something to hold your hand through the process. What does the customer journey look like, from when you submit a request to build a canvas to when you have a completed canvas at the end? So I use the instructions to say: okay, after this is done, then you wanna check this out; and after that is done, you wanna check this out. Thinking about this as a software product, as something that has a user, with a user-focused design, is important when you're building these instructions. I have it give check marks that are green if it's a good answer, and a triangle if it's not. And then a lot of the other boilerplate, the don't-say-offensive-things type of stuff. I'm using close to 7,000 characters, but if I needed more, to your point, I could extract some of this into the source data.

The final thing I'll mention before we wrap: I don't select only use specified sources on this one. I wanna guardrail it in the instructions, but also let it go out onto the internet and find innovative answers and solutions, 'cause that's really what we want from AI.
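A hypothetical excerpt of how that rubric and flow might read inside the instructions; the criteria and wording are illustrative, not Nate's actual text:

```
For each canvas question, judge the user's answer:
- Good answer: names a specific user or role, a measurable outcome, or
  a concrete constraint. Mark it with a green check mark.
- Weak answer: vague ("make it better"), has no owner, or restates the
  feature instead of the problem. Mark it with a warning triangle and
  ask ONE follow-up question to improve it.

Flow: work through the questions in order (requestor, problem, success
criteria, data sources, scope, risks, dependencies). After the last
question, summarize the full canvas and tell the user the next step:
share the draft with their engineering lead for review.
```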

Isar Meitis:

Awesome. Yeah, I think overall, Nate, this was a great journey through how to think about the process of creating these automations: what you need to consider when you come to build them. We talked about many useful things. We talked about breaking agents down into single tasks, and then building more of them for more tasks. We talked about prepping the documentation with a similar concept, making it easier for the agent to consume. We talked about thinking about who the audience is, what they would want to use the agent for, what value they're gonna get from it, and how to build it that way. So there are lots of really important aspects to consider when you start creating automations, because like you said, creating the automation is easy; with today's tools, you give instructions in simple English and it will follow them. Creating tools that actually provide real, consistent value to a large organization is trickier, and you gave a lot of great best practices on how to do that. If people wanna know more about you, work with you, follow you, et cetera, what are the best ways to do that?

Nate Amidon:

Yeah, I'm primarily on LinkedIn, so you can find me there, just Nate Amidon on LinkedIn; there are not a lot of us out there. There's also my website, and you can reach out anytime if you wanna talk more about this, 'cause I enjoy it.

Isar Meitis:

Awesome. Thanks so much. This was really, really great. I appreciate you, and I appreciate the time you took to prepare for this and to share it with us. Really valuable stuff, I think, both as a thought process and as a process in general.

Nate Amidon:

Great. Thanks for having me on. I appreciate it.

Isar Meitis:

Bye everyone.