WORKSHOP

From Patterns to Production: Applying Design Pattern Principles to Agentic AI Systems

Date: 13th March 2026 | Time: 10:45 AM to 01:30 PM

Venue: Workshop Room 2, NIMHANS Convention Centre, Bangalore

FEES:
• Rs.299 for Leaders Pass holders
• Rs.2,009 for Professionals Pass holders
• Rs.2,249 for Knowledge Pass holders
• Rs.2,699 for Community Pass holders
• Rs.2,999 for Expo Pass holders
(Limited seats available)

Manjunath Janardhan

AI/ML Computational Science Senior Manager, Accenture

Theory Session: From Patterns to Production (Quick Overview)

The Pattern Evolution Journey:

  • How design patterns evolved: Gang of Four (1994) → Microservices (2010s) → Agentic AI (2024+)
  • Why understanding this lineage matters for building robust AI systems
  • The patterns that carry forward vs. the new patterns AI demands


The Reality Check:

  • Why 95% of GenAI pilots fail to reach production (MIT, 2025)
  • The Demo → Production gap: What works in notebooks breaks in production
  • Infrastructure Reliability vs. Cognitive Reliability — the two pillars your agents need


Core Agentic Patterns (Andrew Ng’s Framework):

  • Reflection: Self-critiquing agents that improve their own output
  • Tool Use: Agents that take action in the real world
  • Planning: Breaking complex problems into executable steps
  • Multi-Agent: Specialized agents collaborating on complex tasks


Why Agent Systems Fail at Scale:

  • Context Failure Modes: Poisoning, Distraction, Confusion, Clash, and Rot
  • The Lethal Trifecta: Security risks when private data meets untrusted content
  • Cognitive failures that traditional retry/circuit-breaker patterns can’t solve

What You’ll Build: A research assistant agent that can search the web, perform calculations, and synthesize information to answer complex questions.

What You’ll Learn:

  • Understanding the ReAct (Reason + Act) loop — how agents think before they act
  • Implementing tool calling: Giving your agent superpowers beyond text generation
  • Defining tools with clear schemas and descriptions
  • Handling tool responses and feeding them back into agent reasoning
  • Debugging with Google ADK’s tracing: Watch your agent’s reasoning unfold step-by-step
  • Understanding why your agent chose Tool A over Tool B
  • Common pitfalls: Tool misuse, infinite loops, and how to prevent them
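
The ReAct loop described above can be sketched in plain Python. This is a minimal, framework-free illustration with a stubbed reasoning step: all names here are hypothetical, not Google ADK APIs, and in the lab the decision would come from a real model.

```python
# Minimal, framework-free sketch of the ReAct (Reason + Act) loop.
# fake_llm stands in for the model's reasoning step; in the workshop
# this decision would come from a real LLM via Google ADK.

def calculator(expression: str) -> str:
    """Toy tool: evaluate a basic arithmetic expression."""
    return str(eval(expression, {"__builtins__": {}}))

TOOLS = {"calculator": calculator}

def fake_llm(question: str, observations: list) -> dict:
    """Stubbed reasoning: decide to act (call a tool) or answer directly."""
    if not observations:
        return {"thought": "I should compute this.",
                "action": "calculator", "input": "6 * 7"}
    return {"thought": "I have what I need.",
            "answer": "The answer is " + observations[-1] + "."}

def react_loop(question: str, max_steps: int = 5) -> str:
    observations = []
    for _ in range(max_steps):            # step cap prevents infinite loops
        step = fake_llm(question, observations)
        if "answer" in step:              # agent chose to respond directly
            return step["answer"]
        result = TOOLS[step["action"]](step["input"])  # agent chose a tool
        observations.append(result)       # feed the observation back in
    return "Stopped after max_steps without an answer."

print(react_loop("What is 6 times 7?"))   # prints "The answer is 42."
```

Note how the step cap and the explicit tool registry address two of the pitfalls above: infinite loops and tool misuse.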


Key Takeaway:
You’ll see exactly how an agent decides when to use a tool vs. when to respond directly — and how to fix it when it makes wrong choices.

What You’ll Build: A content generation agent with built-in quality control — it generates, critiques its own work, and refines until the output meets quality standards.

What You’ll Learn:

  • The Generator → Evaluator → Refiner loop architecture
  • Implementing an inner critic: Teaching your agent to spot its own mistakes
  • Defining evaluation criteria: What does “good enough” look like?
  • Setting iteration limits: Preventing infinite refinement loops
  • Comparing output quality: Before reflection vs. after reflection
  • When to use reflection vs. when it’s overkill (latency and cost trade-offs)
  • Tracing the reflection loop: Watching quality improve iteration by iteration
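
The Generator → Evaluator → Refiner loop above can be sketched with stub functions. All names are illustrative, not ADK APIs; a real version would call an LLM for each role, but the control flow and iteration limit look the same.

```python
# Sketch of the Generator -> Evaluator -> Refiner loop with stubbed parts.
# The stubs only demonstrate the control flow and the iteration limit.

def generate(topic: str) -> str:
    return "draft about " + topic

def evaluate(text: str) -> bool:
    # Toy "good enough" criterion: the draft must contain a conclusion.
    return "conclusion" in text

def refine(text: str) -> str:
    return text + ", now with a conclusion"

def reflect(topic: str, max_iterations: int = 3) -> str:
    draft = generate(topic)
    for _ in range(max_iterations):   # iteration limit stops endless refining
        if evaluate(draft):
            break                     # quality bar met, stop early
        draft = refine(draft)
    return draft

print(reflect("AI agents"))  # prints "draft about AI agents, now with a conclusion"
```

The early `break` is the latency/cost lever: reflection only spends extra iterations when the evaluator says the output is not yet good enough.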


Key Takeaway:
You’ll transform a mediocre single-shot agent into a quality-focused agent that consistently produces better outputs.

What You’ll Build: A content creation pipeline with three specialized agents working together:

  • Research Agent — Gathers information and facts on a topic
  • Writer Agent — Transforms research into compelling content
  • Editor Agent — Reviews, refines, and polishes the final output


What You’ll Learn:

  • Coordinator-Worker pattern: How a manager agent delegates to specialists
  • Designing agent specialization: Why focused agents outperform generalist agents
  • Inter-agent communication: Passing context and results between agents
  • State management: Tracking what each agent knows and has produced
  • Sequential vs. parallel execution: When to chain vs. when to fan-out
  • Debugging multi-agent systems: Tracing the flow across agent boundaries
  • Handling agent disagreements: What happens when the editor rejects the writer’s work?
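
The Coordinator-Worker pattern above can be sketched as a coordinator running three specialist stubs in sequence over shared state (all function names here are illustrative, not ADK APIs):

```python
# Sketch of the Coordinator-Worker pattern: a coordinator runs three
# specialist agents (stub functions here) in sequence over shared state.

def research_agent(state: dict) -> dict:
    state["facts"] = ["fact A", "fact B"]          # gather information
    return state

def writer_agent(state: dict) -> dict:
    state["draft"] = "Article on " + state["topic"] + ": " + ", ".join(state["facts"])
    return state

def editor_agent(state: dict) -> dict:
    state["final"] = state["draft"].replace("Article", "Polished article")
    state["approved"] = True                       # editor signs off
    return state

def coordinator(topic: str) -> dict:
    state = {"topic": topic}
    # Sequential chain; independent workers could instead fan out in parallel.
    for agent in (research_agent, writer_agent, editor_agent):
        state = agent(state)
    return state

result = coordinator("agent patterns")
print(result["final"])
```

The shared `state` dict is the inter-agent communication channel: each specialist reads what earlier agents produced and adds its own contribution.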


Key Takeaway:
You’ll experience firsthand why “divide and conquer” produces better results than one mega-agent trying to do everything.

What You’ll Build: Add production-ready guardrails to your multi-agent system, transforming it from a demo into something you’d trust in production.

What You’ll Learn:

  • Implementing evaluation loops: Automated quality checks before output is returned
  • Error handling strategies: Graceful degradation when agents fail
  • Retry with variation: When simple retries aren’t enough
  • Fallback patterns: What to do when your primary agent fails
  • Output validation: Ensuring agent responses meet your schema/format requirements
  • Observability deep-dive:
    • Reading traces to understand agent behavior
    • Identifying bottlenecks and failure points
    • Measuring success rates, latency, and token usage
  • Setting up guardrails for:
    • Hallucination detection
    • Off-topic response prevention
    • Sensitive content filtering
    • Cost and token limit enforcement
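
The retry, fallback, and output-validation strategies above can be combined in one small wrapper. This is a framework-free sketch with stub agents; `validate` is a toy schema check, where a production system might use a proper schema library instead.

```python
# Sketch of retry plus fallback with output validation, framework-free.

def validate(output) -> bool:
    """Toy schema: output must be a dict containing an 'answer' key."""
    return isinstance(output, dict) and "answer" in output

def call_with_guardrails(primary, fallback, question, max_retries=2):
    for attempt in range(max_retries):
        result = primary(question, attempt)
        if validate(result):
            return result              # passed the output check
    return fallback(question)          # graceful degradation

# Stub agents: primary returns malformed output on its first attempt.
def primary(question, attempt):
    return {"answer": "42"} if attempt > 0 else "malformed text"

def fallback(question):
    return {"answer": "Sorry, using a simpler fallback response."}

print(call_with_guardrails(primary, fallback, "q"))  # {'answer': '42'}
```

Passing `attempt` to the primary agent is the hook for "retry with variation": a real agent could raise temperature or rephrase the prompt on later attempts instead of repeating the identical call.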


Key Takeaway:
You’ll learn the difference between an agent that works in demos and one that survives real-world traffic and edge cases.

Pattern Selection Framework:

  • Start simple → earn complexity: The decision tree for choosing patterns
  • When Single Agent is enough (and when it’s not)
  • When to add Reflection, Planning, or Multi-Agent
  • The anti-pattern: Why starting with multi-agent is usually wrong
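
One toy way to encode the "start simple, earn complexity" decision tree is a function that only escalates when a simpler pattern falls short. The criteria here are deliberately simplified for illustration:

```python
# Toy encoding of the "start simple, earn complexity" decision tree.
# Real selection also weighs the latency/cost/quality trade-offs below.

def choose_pattern(needs_external_actions: bool,
                   needs_quality_control: bool,
                   needs_specialists: bool) -> str:
    if needs_specialists:          # last resort: earn this complexity
        return "multi-agent"
    if needs_quality_control:
        return "reflection"
    if needs_external_actions:
        return "tool use"
    return "single agent"          # the right starting point

print(choose_pattern(False, False, False))  # single agent
```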


The Trade-off Triangle:

  • Latency vs. Cost vs. Quality: You can optimize for two, not three
  • Real-world scenarios and which trade-off to make


Evolving Your Architecture (Mastra Framework):

  • List tasks → Solve one problem → Build it well → Notice what users ask → Split if unwieldy → Repeat
  • How production feedback shapes your agent architecture


Resources & Next Steps:

  • Recommended reading: “Patterns for Building AI Agents” by Sam Bhagwat (free ebook)
  • Andrew Ng’s Agentic AI course
  • Google Cloud AI Patterns whitepaper
  • Taking your workshop code to production

Required Software (install before the workshop):

  • Ollama — Install from https://ollama.com and run: ollama run minimax-m2.5:cloud
  • VS Code — Install from https://code.visualstudio.com with Python extension
  • Python 3.10+ — With pip package manager
  • Google ADK — Install with pip install google-adk (detailed setup instructions will be shared prior to the workshop)


Hardware:

  • Any laptop (Windows/Mac/Linux) with at least 8GB RAM
  • Stable internet connection (for cloud model access)


Knowledge Prerequisites:

  • Basic Python programming (functions, classes, loops)
  • Familiarity with REST APIs and JSON
  • Basic understanding of LLMs (what prompts are, how completions work)


Nice to Have (not required):

  • Experience with any LLM API (OpenAI, Anthropic, Google)
  • Familiarity with async Python

Benefits/Takeaways for Attendees (what participants will be able to do after this workshop that they could not before)

After attending this workshop, participants will be able to:

  1. Build production-ready agents — Implement the four core agentic patterns (Reflection, Tool Use, Planning, Multi-Agent) with working code
  2. Debug agent behavior — Use Google ADK’s tracing to understand exactly why an agent made specific decisions
  3. Design multi-agent systems — Create coordinator-worker architectures where specialized agents collaborate
  4. Implement guardrails — Add evaluation loops, error handling, and cognitive reliability patterns to prevent agent failures
  5. Make informed architecture decisions — Apply the pattern selection framework to choose the right level of complexity for their use cases
  6. Avoid common pitfalls — Recognize and prevent context failure modes before they reach production

In short: Attendees will leave with a GitHub-ready codebase and the confidence to build agentic AI systems that actually work in production.

About Speakers

Manjunath Janardhan is a Senior Manager in AI/ML Computational Science at Accenture, specializing in enterprise-scale AI architecture, Generative AI, and intelligent automation for global operations. With more than 21 years of experience across Healthcare, Finance, and Cloud platforms, Manjunath designs and delivers high-impact AI systems that combine deep engineering with advanced machine learning and LLM-based intelligence.

A recognized speaker and educator, Manjunath has delivered technical sessions at IISc, PyData Global, AWS Community Day, Google AI Community, Microsoft, NVIDIA, Atlassian EngFest, and Open Source India, covering Agentic AI, RAG architectures, Knowledge Graphs, LLM evaluation, MCP systems, and enterprise adoption of Generative AI. He also mentors aspiring technologists through Microsoft Code Without Barriers and serves as a technical reviewer for industry publications.