# 🔧 Agent Control & Debugging in LangChain

## The Detective Story of AI Agents
Imagine you have a robot helper that goes on missions for you. Sometimes the robot gets lost, makes mistakes, or takes weird paths. Wouldn’t it be great if you could:
- Catch mistakes before they become big problems?
- Listen in on what the robot is doing?
- Follow its footsteps to see where it went?
- Fix problems when things go wrong?
That’s exactly what Agent Control and Debugging is all about! Let’s become detectives and learn how to watch over our AI agents.
## 🚨 Agent Error Handling

### What is Error Handling?

Think of error handling like having a safety net under a tightrope walker. When something goes wrong, instead of crashing, your agent lands safely and tells you what happened.

### Why Do Agents Make Errors?

```mermaid
graph TD
    A["Agent Starts Task"] --> B{Problem Occurs?}
    B -->|Tool fails| C["Tool Error"]
    B -->|Bad response| D["Parse Error"]
    B -->|Too many tries| E["Timeout Error"]
    B -->|No problem| F["Success!"]
    C --> G["Error Handler"]
    D --> G
    E --> G
    G --> H["Graceful Recovery"]
```
### Simple Example: Catching Errors

```python
from langchain.agents import AgentExecutor

# Create agent with error handling
agent_executor = AgentExecutor(
    agent=my_agent,
    tools=my_tools,
    handle_parsing_errors=True,
    max_iterations=5,
)

# The agent won't crash now!
try:
    result = agent_executor.invoke({"input": "Find the weather"})
except Exception as e:
    print(f"Oops! Error: {e}")
```
### Key Error Handling Options

| Option | What It Does |
|---|---|
| `handle_parsing_errors=True` | Fixes format mistakes automatically |
| `max_iterations=5` | Stops after 5 tries (no infinite loops!) |
| `early_stopping_method` | How to stop when stuck |
### Real Life Example

**Without error handling:** Your agent retries forever, burns through your API credits, and crashes.

**With error handling:** Your agent tries 5 times, then says “I couldn’t figure this out, here’s what I tried.”
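You can picture that safety net without any framework at all. Here is a minimal, framework-free sketch of the idea behind `max_iterations` — the `run_step` callable and the fallback message are made up for illustration, not LangChain internals:

```python
def run_with_limit(run_step, max_iterations=5):
    """Run an agent-style loop, but give up gracefully after a fixed
    number of steps instead of spinning forever."""
    attempts = []
    for _ in range(max_iterations):
        outcome = run_step()        # one "thought -> action" cycle
        attempts.append(outcome)
        if outcome.get("done"):     # the agent found its answer
            return outcome["answer"]
    # Hit the cap: report what was tried instead of crashing
    return f"I couldn't figure this out. Steps tried: {len(attempts)}"

# A fake step that never finishes, to show the safety net in action
result = run_with_limit(lambda: {"done": False}, max_iterations=5)
print(result)  # -> I couldn't figure this out. Steps tried: 5
```

The real `AgentExecutor` does essentially this, plus the `early_stopping_method` logic that decides what to return when the cap is reached.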
## 📞 Agent Callbacks

### What are Callbacks?

Callbacks are like phone calls your agent makes to tell you what’s happening. Every time something important happens, your agent calls you!

```mermaid
graph TD
    A["Agent Starts"] -->|Callback: Started!| B["Picks Tool"]
    B -->|Callback: Using calculator| C["Runs Tool"]
    C -->|Callback: Got result| D["Thinks"]
    D -->|Callback: Finished!| E["Done"]
```
### Why Use Callbacks?
- See progress in real-time
- Log everything for later review
- React when certain things happen
- Measure how long things take
### Simple Callback Example

```python
from langchain.callbacks import StdOutCallbackHandler

# This prints everything the agent does
handler = StdOutCallbackHandler()

result = agent_executor.invoke(
    {"input": "What is 25 x 4?"},
    callbacks=[handler],
)
```
Output you’ll see:

```
> Entering new chain...
Thought: I need to multiply
Action: Calculator
Action Input: 25 * 4
Observation: 100
Final Answer: 100
> Finished chain.
```
### Custom Callback Example

```python
from langchain.callbacks.base import BaseCallbackHandler

class MyCallback(BaseCallbackHandler):
    def on_agent_action(self, action, **kwargs):
        print(f"🔧 Using tool: {action.tool}")

    def on_agent_finish(self, finish, **kwargs):
        print(f"✅ Done! Answer: {finish.return_values}")

# Use your custom callback
my_callback = MyCallback()
result = agent_executor.invoke(
    {"input": "Search for cats"},
    callbacks=[my_callback],
)
```
### Common Callback Events

| Event | When It Fires |
|---|---|
| `on_chain_start` | Agent begins working |
| `on_agent_action` | Agent picks a tool |
| `on_tool_start` | Tool begins running |
| `on_tool_end` | Tool finishes |
| `on_agent_finish` | Agent completes task |
| `on_chain_error` | Something went wrong |
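The events in this table map naturally onto handler methods: one method per event, fired in order. Here is a framework-free sketch of the same pattern — a handler that times each tool by pairing `on_tool_start` with `on_tool_end`. The class and the simulated event calls are illustrative; in real code you would subclass `BaseCallbackHandler` and let the agent fire the events:

```python
import time

class TimingHandler:
    """Mirrors the callback-event pattern: one method per event."""
    def __init__(self):
        self.timings = {}    # tool name -> seconds taken
        self._started = {}   # tools currently running

    def on_tool_start(self, tool_name):
        self._started[tool_name] = time.perf_counter()

    def on_tool_end(self, tool_name):
        elapsed = time.perf_counter() - self._started.pop(tool_name)
        self.timings[tool_name] = elapsed
        print(f"⏱️ {tool_name} took {elapsed * 1000:.1f}ms")

# Simulate an agent firing events at the handler
handler = TimingHandler()
handler.on_tool_start("Calculator")
time.sleep(0.05)                 # pretend the tool is working
handler.on_tool_end("Calculator")
```

This start/end pairing is exactly why the events come in matched couples (`on_chain_start`/`on_chain_error`, `on_tool_start`/`on_tool_end`).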
## 🔍 Tracing Agent Execution

### What is Tracing?

Tracing is like having a GPS tracker on your agent. You can see exactly where it went, what decisions it made, and how long each step took.

```mermaid
graph TD
    A["Step 1: Parse Input<br/>⏱️ 50ms"] --> B["Step 2: Pick Tool<br/>⏱️ 100ms"]
    B --> C["Step 3: Run Search<br/>⏱️ 2000ms"]
    C --> D["Step 4: Format Answer<br/>⏱️ 75ms"]
    D --> E["Total: 2225ms"]
```
### Using LangSmith for Tracing

LangSmith is like a security camera system for your agents. It records everything!

```python
import os

# Turn on tracing (like flipping a switch)
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_API_KEY"] = "your-key"
os.environ["LANGCHAIN_PROJECT"] = "my-agent-project"

# Now every agent run is recorded!
result = agent_executor.invoke({"input": "Find me a recipe"})
```
### What Tracing Shows You
The Trace View reveals:
- ⏱️ Timing - How long each step took
- 🔗 Flow - The path from start to finish
- 💰 Tokens - How many tokens were used
- 📝 Inputs/Outputs - Exactly what went in and out
- ❌ Errors - Where and why things failed
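Under the hood, a trace is just a list of named, timed steps. Here is a toy version of the idea — the `MiniTracer` class and its `step` method are invented for illustration; LangSmith's real data model is far richer (nested runs, token counts, inputs/outputs):

```python
import time

class MiniTracer:
    """Record (step_name, duration_seconds) pairs, like a tiny trace view."""
    def __init__(self):
        self.spans = []

    def step(self, name, fn, *args):
        start = time.perf_counter()
        result = fn(*args)                 # run the step
        self.spans.append((name, time.perf_counter() - start))
        return result

tracer = MiniTracer()
tracer.step("Parse Input", str.strip, "  find a recipe  ")
tracer.step("Format Answer", "Here is a recipe for {}".format, "soup")

for name, secs in tracer.spans:
    print(f"{name}: {secs * 1000:.2f}ms")
total = sum(secs for _, secs in tracer.spans)
print(f"Total: {total * 1000:.2f}ms")
```

The trace view in the diagram above is this same structure, rendered nicely: each span gets a name and a duration, and the total is just their sum.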
### Simple Tracing Without LangSmith

```python
from langchain.callbacks import get_openai_callback

with get_openai_callback() as cb:
    result = agent_executor.invoke({"input": "What's the capital of France?"})

print(f"Tokens used: {cb.total_tokens}")
print(f"Cost: ${cb.total_cost:.4f}")
print(f"Requests: {cb.successful_requests}")
```
## 🐛 Debugging Agent Behavior

### What is Debugging?

Debugging is like being a detective solving a mystery. Your agent did something unexpected, and you need to find out why!

### Turn On Verbose Mode

The easiest way to debug is to make your agent talk more:
```python
from langchain.globals import set_debug, set_verbose

# Option 1: Verbose mode (summary)
set_verbose(True)

# Option 2: Debug mode (everything!)
set_debug(True)

# Now run your agent
result = agent_executor.invoke({"input": "Search for dogs"})
```
### What Verbose Shows

```
> Entering new AgentExecutor chain...
Thought: I should search for dogs
Action: web_search
Action Input: dogs
Observation: Dogs are loyal pets...
Thought: I now have the answer
Final Answer: Dogs are loyal pets that...
> Finished chain.
```
### Debug vs Verbose
| Mode | What You See |
|---|---|
| Verbose | Simple summary of steps |
| Debug | EVERYTHING (prompts, raw responses, timings) |
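Because debug output is so noisy, it helps to scope it to a single call and switch it back off afterwards. Here is a small framework-free sketch of that pattern — with LangChain you would flip the real flag via `set_debug`/`get_debug` in the same shape; the `DEBUG` dict below is just a stand-in:

```python
from contextlib import contextmanager

DEBUG = {"on": False}  # stand-in for a global debug flag

@contextmanager
def debug_mode():
    """Turn debugging on for one block, then restore the previous
    setting even if the code inside raises."""
    previous = DEBUG["on"]
    DEBUG["on"] = True
    try:
        yield
    finally:
        DEBUG["on"] = previous

with debug_mode():
    print("inside:", DEBUG["on"])   # noisy output only in here
print("outside:", DEBUG["on"])      # quiet again afterwards
```

The `try`/`finally` matters: even if the agent call inside the block blows up, you don't get stuck with debug spam on every later run.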
### Common Debugging Scenarios

**Problem 1: Agent loops forever**

```python
# Solution: Limit iterations
agent_executor = AgentExecutor(
    agent=my_agent,
    tools=my_tools,
    max_iterations=10,
    max_execution_time=60,  # 60 seconds max
)
```

**Problem 2: Agent picks wrong tool**

```python
# Solution: Better tool descriptions
from langchain.tools import Tool

calculator = Tool(
    name="Calculator",
    description="ONLY use for math. Input: equation like '2+2'",
    func=do_math,
)
```

**Problem 3: Agent gives weird answers**

```python
# Solution: Check the prompt
print(agent.agent.llm_chain.prompt.template)
# Look at what instructions the agent received
```
### The Debugging Checklist

```mermaid
graph TD
    A["Agent Acting Weird?"] --> B{Turn on verbose}
    B --> C["Check the thoughts"]
    C --> D{Wrong tool?}
    D -->|Yes| E["Fix tool descriptions"]
    D -->|No| F{Bad reasoning?}
    F -->|Yes| G["Improve the prompt"]
    F -->|No| H{Timing out?}
    H -->|Yes| I["Increase limits"]
    H -->|No| J["Check tool outputs"]
```
## 🎯 Putting It All Together

Here’s a complete example using all four concepts:

```python
from langchain.agents import AgentExecutor
from langchain.callbacks.base import BaseCallbackHandler
from langchain.globals import set_verbose
import os

# 1. TRACING: Enable it
os.environ["LANGCHAIN_TRACING_V2"] = "true"

# 2. DEBUGGING: Turn on verbose
set_verbose(True)

# 3. CALLBACKS: Custom logging
class DebugCallback(BaseCallbackHandler):
    def on_agent_action(self, action, **kwargs):
        print(f"🔧 Tool: {action.tool}")

    def on_chain_error(self, error, **kwargs):
        print(f"❌ Error: {error}")

# 4. ERROR HANDLING: Safe configuration
agent_executor = AgentExecutor(
    agent=my_agent,
    tools=my_tools,
    handle_parsing_errors=True,
    max_iterations=10,
    max_execution_time=120,
    callbacks=[DebugCallback()],
)

# Run with full visibility!
try:
    result = agent_executor.invoke({"input": "Find today's weather"})
    print(f"✅ Success: {result}")
except Exception as e:
    print(f"🚨 Caught error: {e}")
```
## 📝 Quick Summary

| Concept | What It Does | Key Tool |
|---|---|---|
| Error Handling | Catches mistakes | `handle_parsing_errors=True` |
| Callbacks | Reports events | `BaseCallbackHandler` |
| Tracing | Records everything | LangSmith |
| Debugging | Shows agent thinking | `set_verbose(True)` |
## 🚀 You Did It!
Now you know how to:
- ✅ Catch errors so your agent doesn’t crash
- ✅ Listen to callbacks to see what’s happening
- ✅ Trace execution to measure performance
- ✅ Debug problems when things go wrong
You’re ready to build agents you can trust and understand! 🎉
