Welcome to a special deep dive on AI Unraveled. Great to be here. Now this show, it's created and produced by Etienne Newman. He's a senior engineer and a passionate soccer dad up in Canada. That's right. And look, if you're finding value in these explorations we do, please take just a second to like and subscribe.
It really does help us bring more AI insights your way. Absolutely. Every bit helps. So today we're really zeroing in on something pretty fundamental. We're talking about the core design principles behind intelligent AI systems. Yeah, the kind that can actually perceive, reason, plan, and act autonomously. It's a big topic. It is. And our focus today, it draws heavily from this really insightful guide, Top 20 Agentic Patterns, a hands-on guide,
to building intelligent systems. Think of it like we're getting a look at the blueprints, the essential structures AI developers are using right now to build these sophisticated agents. Exactly. We're pulling back the curtain, so to speak, on some cutting edge AI development. And the goal here is really to extract those key ideas, explain why they matter, and show how they're actually being used in today's AI frameworks. Right. We want clarity.
those sort of aha moments you're looking for hopefully without getting too bogged down in overwhelming jargon. So core stuff. Exactly. Yeah. Oh, and before we properly dive in, if you are looking to really cement your AI knowledge, maybe give your career a bit of a boost. Well, definitely check out Etienne's AI certification prep books. They're really comprehensive. He's got quite a few now, hasn't he? He does. Study Guides for the Azure AI Engineer Associate, the Google Cloud Generative AI Leader Certification,
AWS Certified AI Practitioner, and also the Azure AI Fundamentals exam. Wow, covering the bases. Pretty much, right? You can find them all over at djamgatech.com. We'll put the links in the show notes, of course. Essential resources if you're serious about this field. Definitely. Okay, so agentic systems. This guide talks a lot about them. What exactly are we talking about here? Okay, so at its core, an agentic system...
It's really about software architecture that has agency. Agency. Meaning it can perceive its environment, it can process information, and then crucially it can take actions to achieve specific goals it's been given. Right, which sounds...
well, pretty different from how traditional software usually works. It is, yeah. Very different. Your typical software just follows a set list of instructions, right? Step A, step B, step C. Very rigid. Predefined paths. Exactly. Predefined paths. Agentic systems, though, they stand out because they can act autonomously. They proactively chase objectives. They can function effectively even without constant human hand-holding. So it's AI that can kind of
manage itself more, especially when things get complicated. Precisely. And the big game changer enabling this shift, it's really been the rise of large language models, LLMs. Okay. How have LLMs fueled this?
Well, these models have basically empowered AI agents to move beyond just following rules. They demonstrate what we call emergent capabilities. Emergent capabilities, meaning things they weren't explicitly programmed to do? Exactly. Things that just sort of arise from the sheer scale and the training data of the model, like sophisticated reasoning, planning out multiple steps to solve a problem, remembering past conversations. Having a memory. Right. Memory, awareness, and being able to flexibly use different tools to get things done.
This is a huge step up from older systems that mostly just reacted to whatever input you gave them. So less reactive, more proactive, goal driven. That's it. Proactive, goal driven entities that can make complex decisions and solve problems more independently. That really is a leap. So bringing it back to the guide, why are these patterns so important when you're trying to build these kinds of intelligent systems? Ah, good question.
These agentic design patterns, they've become really vital tools for actually harnessing the power of LLMs to get that autonomy we're talking about. So it's more structured than just throwing a huge prompt at the LLM and hoping for the best. Way more structured. Instead of one massive, complex prompt, developers use these patterns to guide the LLM's reasoning and actions through a series of smaller, more focused interactions.
It's about engineering the thought process in a way. OK, so it's a more organized, maybe more reliable way to build these AI minds. Exactly. The patterns give us a shared language for common problems, you know, and they offer solutions that are reusable, that have
been tested that pop up again and again when you're developing these agents. Take some of the guesswork out. Moves it towards engineering. Precisely. Less trial and error, more sound engineering practices. And what are the tangible benefits? Why use this pattern-based approach? Well, there are quite a few, actually. Patterns really encourage modularity. You build complex systems from smaller, easier to manage pieces. Which helps with scaling things up, I imagine. Definitely helps with scalability. It also makes the systems easier to change or add to later on.
And crucially, they help manage the complexity. Think about coordinating multiple agents or different tools or complicated workflows. Patterns provide structures for that. Sounds like it could prevent a lot of headaches down the line. Oh, absolutely. Plus, using established patterns means developers are on the same page. They adopt best practices. It leads to more consistent, generally higher quality AI. The guide makes a distinction between...
Structured workflows and these dynamic agentic patterns. What's the key difference there for us? Right. So structured workflows usually follow a fixed sequence. Step one, step two, step three, always the same. Very predictable. Great for tasks where you know exactly what needs to happen. Like an assembly line. Kind of. Yeah. Agentic patterns, though, they give the AI agent more freedom, more autonomy, in
in deciding what to do next. So they're better when things are a bit more uncertain. Exactly. Much better for tasks that are ambiguous or where the AI needs to make decisions based on the current situation and its own reasoning. More flexibility. Okay, that makes sense. But is there a downside? Any trade-offs with this flexibility? Yes, there can be. Agentic systems sometimes take a bit longer, you know, increased latency, and they can use more computational resources compared to a simpler direct approach.
So more overhead. Potentially, yes. The key principle really should be use the simplest thing that actually works for the job. Agentic patterns are most valuable when that flexibility and sophisticated thinking they offer really outweighs the potential extra cost. So these patterns aren't just code snippets. They're more like...
Blueprints for how the AI should think, how it solves problems. That's a perfect way to put it. Blueprints for the agent's cognitive architecture, how it reasons, plans, learns, interacts. Each pattern has its strengths, its weaknesses, and understanding those is key to making smart design choices. Okay, let's dig into some specifics. The guide covers several foundational ones. First up.
Prompt chaining. What's the core idea? Prompt chaining. Okay, think of it as breaking down a big complex task into a series of smaller linked steps or prompts. Linked how? The output from one step becomes the input for the next step. So you create this chain of operations all working towards the bigger goal. Like that assembly line idea again, but for AI tasks. And why do that instead of just one big prompt? Is it just too much for the LLM otherwise? Often, yes.
It helps tackle tasks that might just overwhelm an LLM in one go. Maybe the reasoning is too complex, maybe the LLM's context window isn't big enough. The amount of text it can handle at once. Right. Or maybe it's just that trying to do too much at once increases the chance of mistakes. By breaking it down into smaller focus prompts, each step is easier for the LLM to get right. Better accuracy overall. Generally, yes.
higher quality results. Can you give us an example? Make it a bit more concrete. Sure. Let's say you need marketing copy, but then you also need versions for Twitter, LinkedIn, Facebook. Different lengths, different tones. Exactly. So step one in the chain might be the LLM generating the main message.
Then, subsequent steps could take that output and specifically tailor it for each platform's constraints and audience. Okay, I can see that. Or maybe you like writing a report. Outline first, then draft sections. Perfect example. Outline generation in one step, drafting the introduction in the next, then the body paragraphs, and so on. Each step builds on the previous one.
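(A quick aside for readers who want to see the shape of that chain in code: here's a minimal Python sketch. The `call_llm` function is a stand-in stub for any real LLM API, and the prompt templates are invented for illustration; they're not from the guide.)

```python
def call_llm(prompt: str) -> str:
    # Stand-in for a real LLM API call; returns canned text for illustration.
    return f"[output for: {prompt[:40]}]"

def chain(steps: list[str], initial_input: str) -> str:
    """Run prompt templates in sequence, feeding each output into the next."""
    result = initial_input
    for template in steps:
        # Each step gets the previous step's output via the {prev} slot.
        result = call_llm(template.format(prev=result))
    return result

# Hypothetical two-step chain: generate the core message, then tailor it.
steps = [
    "Write the core marketing message for product: {prev}",
    "Condense this message into a 280-character tweet: {prev}",
]
final = chain(steps, "a smart water bottle")
```

The point of the sketch is just the data flow: each step is a small, focused prompt, and the chain carries the intermediate result forward.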
Makes sense. The guide also mentions gates in these chains. What are those for? Ah, gates. Those are basically like little checkpoints you can put between the steps in the chain. Checkpoints for what? For validation. You can programmatically check the output of, say, step two before it goes into step three. Is it accurate? Is it formatted correctly? Did it find the information it needed? So you can catch errors early?
Or even change course. Exactly. It adds a layer of control and reliability. If something's wrong, you can stop or maybe loop back or try a different approach. Got it. Okay, next pattern.
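(Before moving on, one more quick sketch for that gates idea: a checkpoint is just a validation function sitting between steps. The length check and the one-retry fallback here are illustrative assumptions, not anything prescribed by the guide.)

```python
def gate_max_length(text: str, limit: int = 280) -> bool:
    # Checkpoint: validate an intermediate output before the next step runs.
    return len(text) <= limit

def run_with_gate(step_output, gate, retry):
    """If the gate passes, hand the output onward; otherwise change course."""
    if gate(step_output):
        return step_output
    # Here the fallback is a simple retry/repair; a real system might
    # loop back to an earlier step or stop the chain entirely.
    return retry(step_output)

too_long = "x" * 300
fixed = run_with_gate(too_long, gate_max_length, retry=lambda t: t[:280])
```

A real gate might check formatting, factual grounding, or schema validity instead of length; the structure is the same.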
Tool use and function calling. Sounds like giving the AI superpowers beyond just text. That's pretty much it. This pattern is all about letting LLM agents interact with external tools. Things like APIs, databases, maybe running some code. Why do they need that? Aren't LLMs trained on, like, the whole internet? Well, yes and no. They're trained on a massive data set. But that data gets old, right? There's a knowledge cutoff date. Ah, so they don't know current events or real-time stock prices. Yeah.
Exactly. Plus, they can't do things in the real world directly. They can't send an email, book a flight, query a live database. Or even do complex math reliably sometimes. Right. They have computational limits too. So tool use lets the LLM figure out when an external tool could help.
generate a call to that specific tool with the right information, and then take the tool's output and use it in its thinking. So the LLM could say, hmm, I need the weather forecast for tomorrow, then call a weather API tool, get the data back, and use that in its response. Precisely that. The process usually goes: the LLM identifies the need for a tool, generates a structured function call, like telling the system which tool and what info to send, the system runs the tool, the
LLM gets the result, the observation, and uses that observation to continue or to give the final answer.
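(That identify-call-observe loop can be sketched in a few lines. Everything here, the `fake_llm` stub, the `get_weather` tool, and the JSON call format, is a simplified stand-in for illustration, not any specific framework's API.)

```python
import json

def get_weather(city: str) -> str:
    # Stand-in for a real weather API.
    return f"Sunny, 21C in {city}"

TOOLS = {"get_weather": get_weather}

def fake_llm(prompt: str) -> str:
    # Stub LLM: emits a structured function call when it needs live data,
    # otherwise answers using the observation included in the prompt.
    if "Observation:" not in prompt:
        return json.dumps({"tool": "get_weather", "args": {"city": "Toronto"}})
    return "Tomorrow looks " + prompt.split("Observation:")[1].strip()

def agent(question: str) -> str:
    reply = fake_llm(question)                      # 1. LLM decides it needs a tool
    call = json.loads(reply)                        # 2. ...and emits a structured call
    result = TOOLS[call["tool"]](**call["args"])    # 3. the system runs the tool
    # 4. the observation is fed back so the LLM can finish its answer
    return fake_llm(f"{question}\nObservation: {result}")

answer = agent("What's the weather tomorrow?")
```

Real frameworks add schema validation, multiple tools, and multi-turn loops, but the core round trip is exactly this.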
That's incredibly powerful. Opens up a lot of possibilities. What other kinds of tools are we seeing? Oh, loads. Web search is a common one. Code interpreters, so the AI can write and run code to solve problems. Interfaces to databases, like you said. Customer service bots accessing CRM systems. Financial tools pulling live market data. Exactly. AI assistants controlling your smart home devices. It really bridges the gap between the LLM's static knowledge and
the dynamic real-time world. It makes them much more capable and well-informed. And speaking of being well-informed, if you're listening and want to really deepen your own understanding, maybe even get certified in this stuff, those AI certification prep books by Etienne Newman we mentioned earlier over at djamgatech.com, they're perfect for this. They cover things like the Azure AI Engineer Associate, Google Cloud Generative AI Leader,
Definitely worth checking out. Links are in the show notes. Great reminder, foundational knowledge is key here. Okay, next pattern, basic planning.
This feels like we're moving towards AI that can actually strategize. Yeah, that's a good way to think about it. Basic planning is about the agent's ability to take a high-level goal, something that probably needs multiple steps. Like plan my vacation or write a research paper. Exactly, those kinds of things. And the agent autonomously breaks that big goal down into a sequence of smaller, achievable subtasks or actions.
So it's not just reacting anymore. It's figuring out the roadmap itself. That's the core idea. Moving beyond just immediate responses towards thinking ahead, towards strategy. The agent gets the goal, thinks about the steps needed, makes a plan, could be a simple list, could be more complex, and then starts executing that plan. So for that vacation example...
It might break it down into research destinations, check flight prices, find hotels, create daily itinerary. Precisely. Each becomes a subtask within the overall plan. Or for software development, an AI might plan, design database schema, write API endpoints, build front-end components, write tests. It figures out the logical sequence. That definitely feels like a big step towards more intelligent behavior. Not just automation, but thinking.
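(As a quick sketch of that decompose-then-execute flow: the canned plan below mirrors the vacation example from the conversation, but the function names and the lookup-table "planner" are illustrative stand-ins; a real agent would ask an LLM to do the decomposition.)

```python
def make_plan(goal: str) -> list[str]:
    # Stand-in planner: a real agent would prompt an LLM to break the
    # goal into subtasks instead of using a canned lookup.
    canned = {
        "plan a vacation": [
            "research destinations",
            "check flight prices",
            "find hotels",
            "create daily itinerary",
        ],
    }
    return canned.get(goal, [f"work on: {goal}"])

def execute(plan: list[str]) -> list[str]:
    # Execute subtasks in order; each one could itself be a prompt chain
    # or a tool call in a fuller implementation.
    return [f"done: {task}" for task in plan]

log = execute(make_plan("plan a vacation"))
```

The key structural idea is the separation: one phase produces the roadmap, another walks it step by step.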
It is strategic thinking. All right. The last foundational pattern we're going to tackle today is reflection and self-correction.
This sounds fascinating. AI learning from itself. That's exactly it. Reflection and self-correction gives an AI agent the ability to look at its own outputs, its own decisions, even its own reasoning, and critically evaluate them. Evaluate them for what? Errors? Errors, yes. But also inconsistencies, maybe biases, or just areas where the quality could be better. And then crucially, it uses that self-assessment to improve its approach or even redo the output.
So the AI can act like its own quality checker, its own editor. Kind of, yeah. The typical flow is the agent produces something first, an answer, a plan, a piece of code. Then a reflection step happens. The agent itself, or maybe a dedicated critic agent, analyzes that output.
How does it analyze? It might compare it against known facts, check for logical flaws, see if it meets certain quality standards. Based on that critique, the agent can then revise its work. And this generate-reflect cycle might even happen a few times. And the benefit of that in practice? Better results. Significantly better. Often. It catches mistakes the AI might otherwise make. It lets the agent improve without needing a human to point out every single flaw. It's especially good for complex tasks where the first try might not be perfect. Like
Like writing a really complex document, it could review its own arguments for clarity or check its facts. Exactly that. Or a coding agent could review its code for bugs or efficiency improvements.
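(That generate-reflect-revise cycle can be sketched as a simple loop. The stub generator and the toy repetition check below are stand-ins for real LLM calls and real critiques; they're invented for illustration.)

```python
def generate(prompt: str, feedback: str = "") -> str:
    # Stand-in generator; a real agent would call an LLM, passing the
    # critique back in so the revision can address it.
    if "avoid repetition" in feedback:
        return "the quick brown fox"
    return "the quick brown fox the quick brown fox"

def critique(text: str) -> str:
    # Reflection step: return a criticism, or "" if the output passes.
    # A real critic might check facts, logic, or quality standards.
    words = text.split()
    return "avoid repetition" if len(set(words)) < len(words) else ""

def reflect_loop(prompt: str, max_rounds: int = 3) -> str:
    draft = generate(prompt)
    for _ in range(max_rounds):
        feedback = critique(draft)
        if not feedback:                     # passed its own review
            break
        draft = generate(prompt, feedback)   # revise using the critique
    return draft

result = reflect_loop("describe a fox")
```

Capping the rounds matters in practice: without `max_rounds`, an agent that never satisfies its own critic would loop forever.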
This ability to self-critique and refine, it's a really key step towards making AI more robust and reliable. That's really something. Metacognition for AI. Okay, so we've talked about these patterns. The guide also mentions frameworks briefly. How do these patterns actually get built into real AI applications? Right. Well, putting these patterns into practice is made a lot easier by the growing number of AI development frameworks out there.
Frameworks like libraries or toolkits? Exactly. They give developers pre-built components, tools, ways to structure things. Basically, they simplify the process of designing, building, and deploying agents that use these patterns. Can you name a few of the big ones people might hear about? Sure. There's the Google Agent Development Kit, often called ADK. CrewAI is getting popular. LangChain and its newer graph-based sibling, LangGraph, are very widely used. Microsoft has AutoGen and also Semantic Kernel. Okay. These frameworks...
They have specific features for implementing things like tool use or planning. Oh, absolutely. They often have modules designed specifically for these patterns. Built-in ways to handle prompt chaining sequences, ways to easily integrate APIs for tool use, structures for creating and managing plans, loops for reflection.
They provide the scaffolding. So they handle the plumbing, letting developers focus more on the agent's logic. That's a great way to put it. They streamline building these complex agentic systems by giving you the core building blocks ready to go. Which again, if you're aiming to get hands-on and maybe get certified, understanding these frameworks is key.
And those prep books from Etienne at djamgatech.com cover the underlying technologies used in things like the Azure AI Fundamentals or the AWS AI Practitioner exams. Really helps connect the dots. Definitely. Theory is one thing. Practical implementation is another. So let's try and wrap this up. As we've kind of unpacked today, this whole world of agentic AI, it's really being built on these foundational design patterns, isn't it? It really is. They're the fundamental ways we're designing AI systems to think, to act,
and even to learn for themselves now. - From that step-by-step logic of prompt chaining. - To the dynamic problem solving you get with tool use and planning. - And that critical ability to self-improve through reflection. These really feel like the core concepts driving what's next in AI. - Yeah, hopefully this deep dive has given you, the learner, a clearer picture of the why and the how behind agentic AI, those key nuggets of insight. - Here's the thought to leave you with though. As these patterns get more sophisticated,
As we learn to weave them together in more complex ways, how's our relationship with AI going to change? Interesting question. Are we going to see AI systems less as just tools and more as...
Well, true collaborators. Maybe tackling problems we haven't even thought of yet. It's definitely a fascinating frontier to watch, isn't it? The potential is huge. It really is. And if this discussion sparked your curiosity, I definitely recommend looking into that Top 20 Agentic Patterns guide we based this on. Yeah, it's a real treasure trove if you want to go deeper into understanding and actually building these intelligent systems. And one last time, if you're ready to really level up your AI knowledge and get certified, check out Etienne Newman's AI prep books.
Azure AI Engineer Associate, Google Cloud Generative AI Leader, AWS Certified AI Practitioner, Azure AI Fundamentals. They're all at djamgatech.com. Links in the show notes. Seriously, fantastic resources for your AI journey. Couldn't agree more. Solid preparation. Well, thank you for joining us on this special deep dive from AI Unraveled. We hope you found it insightful. Yeah. Thanks for tuning in. Until next time, keep exploring the fascinating world of artificial intelligence.