
LangChain Agents: Your AI’s Smart Helpers

Imagine you have a super smart robot friend. But this robot can do more than just answer questions—it can actually use tools to help you! It can search the internet, do math, check the weather, and so much more.

That’s what LangChain Agents are. They’re AI assistants that can think, plan, and act using different tools to solve problems for you.


The Magic Analogy: The Detective Agent

Think of an Agent like a detective solving a mystery.

  • The detective gets a case (your question)
  • They think about what tools they need (magnifying glass, phone, computer)
  • They use those tools step by step
  • They write down their thoughts in a notebook (scratchpad)
  • They keep investigating until they solve the case!

We’ll use this detective story throughout our journey.


1. Agents Overview

What is an Agent?

An Agent is an AI that can decide what to do on its own. It doesn’t just give you answers—it takes actions to find answers.

Regular AI (Chatbot):

You: "What's the weather in Paris?"
AI: "I don't know the current weather."

Agent AI (Detective):

You: "What's the weather in Paris?"
Agent thinks: "I need to check weather tool"
Agent uses: WeatherTool("Paris")
Agent: "It's 15°C and sunny in Paris!"

Why Agents Are Special

graph TD
    A["You Ask Question"] --> B{Agent Thinks}
    B --> C["Choose Tool"]
    C --> D["Use Tool"]
    D --> E["Get Result"]
    E --> F{Need More Info?}
    F -->|Yes| B
    F -->|No| G["Give Final Answer"]

Real Example:

# Simple agent that can search and calculate
# (simplified sketch; full setup shown in later sections)
agent = create_agent(
    llm=ChatOpenAI(),
    tools=[search_tool, calculator]
)

# Agent decides which tool to use
result = agent.run(
    "How old is the Eiffel Tower?"
)
# Agent uses SearchTool automatically!

The agent is like a detective who picks the right tool for each job!


2. ReAct Agent Pattern

What is ReAct?

ReAct stands for Reasoning + Acting.

It’s like the detective talking to themselves:

  • “Let me think about this…” (Reasoning)
  • “Now I’ll do this…” (Acting)
  • “I got this result, so…” (Observing)

The ReAct Loop

graph TD
    A["🤔 THOUGHT"] --> B["⚡ ACTION"]
    B --> C["👀 OBSERVATION"]
    C --> D{Done?}
    D -->|No| A
    D -->|Yes| E["✅ FINAL ANSWER"]

Think of it like this:

| Step | Detective Example | Agent Example |
| --- | --- | --- |
| THOUGHT | “I need to find when the Eiffel Tower was built” | “I should search for Eiffel Tower construction date” |
| ACTION | Opens encyclopedia | search("Eiffel Tower built") |
| OBSERVATION | Reads: “Built in 1889” | Result: “Completed in 1889” |
| THOUGHT | “Now I can calculate the age” | “Current year minus 1889” |
| ACTION | Does math | calculate(2024 - 1889) |
| OBSERVATION | Gets: 135 | Result: 135 |
| ANSWER | “135 years old!” | “The Eiffel Tower is 135 years old” |
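Before reaching for the library, the loop in the table above can be sketched in plain Python. This is an illustration only: the `search` and `calculate` tools are hypothetical stubs, and a scripted list of steps stands in for the LLM's reasoning.

```python
# Minimal hand-rolled ReAct loop with stubbed tools (illustration only).
def search(query: str) -> str:
    # Pretend web search; a real tool would call an API.
    return "The Eiffel Tower was completed in 1889."

def calculate(expression: str) -> str:
    # Demo only: eval is unsafe on untrusted input.
    return str(eval(expression))

TOOLS = {"search": search, "calculate": calculate}

# A scripted stand-in for the LLM: (thought, action, action_input) per step.
SCRIPT = [
    ("Find when the Eiffel Tower was built", "search", "Eiffel Tower built"),
    ("Subtract 1889 from the current year", "calculate", "2024 - 1889"),
]

def react_loop() -> str:
    observations = []
    for thought, action, action_input in SCRIPT:
        print(f"Thought: {thought}")               # THOUGHT
        observation = TOOLS[action](action_input)  # ACTION
        print(f"Observation: {observation}")       # OBSERVATION
        observations.append(observation)
    return f"The Eiffel Tower is {observations[-1]} years old"

print(react_loop())  # The Eiffel Tower is 135 years old
```

A real agent replaces the scripted steps with an LLM call that reads the observations so far and decides the next thought and action.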

Code Example

from langchain.agents import AgentType
from langchain.agents import initialize_agent

# ReAct agent setup
agent = initialize_agent(
    tools=[search, calculator],
    llm=llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True  # See the thinking!
)

# Watch the ReAct pattern in action
agent.run(
    "What's 2024 minus the year Python was created?"
)

Output shows the pattern:

Thought: I need to find Python's creation year
Action: search
Action Input: "Python programming created year"
Observation: Python was created in 1991
Thought: Now I can subtract
Action: calculator
Action Input: 2024 - 1991
Observation: 33
Thought: I have my answer!
Final Answer: 33 years

3. Tool Calling Agent

What is Tool Calling?

This is a smarter way for agents to use tools. Instead of the agent writing free text like “I’ll use the calculator”, it sends a structured message (a function call) directly to the tool.

Think of it like this:

| Old Way (ReAct) | New Way (Tool Calling) |
| --- | --- |
| Detective writes a note: “Please call the weather station” | Detective presses a button that calls the weather station directly |

Why Tool Calling is Better

graph LR
    A["Agent"] -->|Direct Signal| B["Tool"]
    B -->|Clean Result| A
  • Faster: No extra text to process
  • Cleaner: Tools get exact inputs
  • Reliable: Less chance of errors

Code Example

from langchain.agents import create_tool_calling_agent
from langchain.tools import tool

# Define a simple tool
@tool
def get_weather(city: str) -> str:
    """Get weather for a city."""
    return f"Sunny, 22°C in {city}"

# Create tool-calling agent
agent = create_tool_calling_agent(
    llm=ChatOpenAI(model="gpt-4"),
    tools=[get_weather],
    prompt=prompt_template
)

# The agent calls tools directly!

The agent sends a structured message like:

{
  "tool": "get_weather",
  "arguments": {"city": "Tokyo"}
}

Clean and precise!
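Under the hood, acting on that structured message is essentially a dictionary lookup: map the tool name to a function, unpack the arguments, call it. Here is a plain-Python sketch of that dispatch step; `get_weather` mirrors the stub above, and the dispatcher itself is hypothetical, not a LangChain API.

```python
# Hypothetical dispatcher for structured tool calls (not a LangChain API).
def get_weather(city: str) -> str:
    """Get weather for a city (stubbed)."""
    return f"Sunny, 22°C in {city}"

TOOL_REGISTRY = {"get_weather": get_weather}

def dispatch(tool_call: dict) -> str:
    # Look up the tool by name and unpack its arguments.
    func = TOOL_REGISTRY[tool_call["tool"]]
    return func(**tool_call["arguments"])

result = dispatch({"tool": "get_weather", "arguments": {"city": "Tokyo"}})
print(result)  # Sunny, 22°C in Tokyo
```

Because the arguments arrive as structured data rather than free text, there is nothing to parse and nothing to misread.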


4. Prebuilt Agent Functions

Ready-Made Detectives!

LangChain gives you pre-made agents so you don’t have to build everything from scratch.

It’s like buying a detective kit instead of making your own magnifying glass!

Popular Prebuilt Agents

graph TD
    A["Prebuilt Agents"] --> B["OpenAI Functions Agent"]
    A --> C["ReAct Agent"]
    A --> D["Structured Chat Agent"]
    A --> E["Self-Ask Agent"]
| Agent Type | Best For | Like a Detective Who… |
| --- | --- | --- |
| create_openai_functions_agent | Tool calling with GPT | Uses a walkie-talkie |
| create_react_agent | Step-by-step thinking | Talks through everything |
| create_structured_chat_agent | Complex conversations | Takes detailed notes |

Code Example

from langchain.agents import create_openai_functions_agent
from langchain import hub

# Get a ready-made prompt
prompt = hub.pull("hwchase17/openai-functions-agent")

# Create agent in one line!
agent = create_openai_functions_agent(
    llm=ChatOpenAI(),
    tools=[search, calculator, weather],
    prompt=prompt
)

# That's it! Agent ready to work.

Easy as 1-2-3:

  1. Pick your tools
  2. Pick your agent type
  3. Start asking questions!

5. Agent Executor

The Manager That Runs Everything

The Agent Executor is like the detective’s boss. It:

  • Gives the case to the detective
  • Makes sure they follow the rules
  • Stops them if they take too long
  • Reports back with the answer
graph TD
    A["Your Question"] --> B["Agent Executor"]
    B --> C["Agent"]
    C --> D["Tool 1"]
    C --> E["Tool 2"]
    C --> F["Tool 3"]
    D --> B
    E --> B
    F --> B
    B --> G["Final Answer"]

What Agent Executor Does

| Task | Without Executor | With Executor |
| --- | --- | --- |
| Run agent | You do it manually | Automatic |
| Handle errors | App crashes | Graceful recovery |
| Manage loops | Could run forever | Has limits |
| Track history | Lost | Saved |

Code Example

from langchain.agents import AgentExecutor

# Create the executor (the boss)
executor = AgentExecutor(
    agent=my_agent,
    tools=my_tools,
    verbose=True,
    max_iterations=10,  # Stop after 10 tries
    handle_parsing_errors=True
)

# Run with proper management
result = executor.invoke({
    "input": "What's the population of the largest city in Japan?"
})

print(result["output"])

The executor makes sure your agent doesn’t go crazy!


6. Agent Scratchpad and Reasoning

The Detective’s Notebook

The scratchpad is where the agent writes down:

  • What it’s thinking
  • What tools it used
  • What results it got
  • What to do next
graph TD
    A["Question"] --> B["Scratchpad"]
    B --> C["Thought 1"]
    C --> D["Action 1"]
    D --> E["Result 1"]
    E --> F["Thought 2"]
    F --> G["Action 2"]
    G --> H["Result 2"]
    H --> I["Final Answer"]

What’s in the Scratchpad?

Example Scratchpad:

📝 SCRATCHPAD
═══════════════════════════════
Question: "How far is the moon?"

Step 1:
  Thought: I should search for this
  Action: search("distance to moon")
  Result: "384,400 km on average"

Step 2:
  Thought: I have the answer now!
  Action: None needed
  Result: Ready to respond

═══════════════════════════════
Final: The moon is 384,400 km away

Code Example

from langchain.agents.format_scratchpad import (
    format_to_openai_function_messages
)

# The scratchpad tracks everything
def create_scratchpad(steps):
    """Convert agent steps to messages."""
    return format_to_openai_function_messages(
        steps
    )

# In your agent prompt, the scratchpad is a message placeholder
from langchain_core.prompts import (
    ChatPromptTemplate, MessagesPlaceholder
)

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful agent"),
    ("user", "{input}"),
    MessagesPlaceholder("agent_scratchpad")
])

The scratchpad lets the agent remember what it already tried!
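The idea is simple enough to sketch without the library: each completed step gets rendered back into text that is fed into the next LLM call. The formatter below is illustrative only, and the step layout is made up for this example, not LangChain's actual message format.

```python
# Illustrative scratchpad formatter (not LangChain's actual format).
def format_scratchpad(steps: list) -> str:
    """Render (thought, action, result) steps as notebook-style text."""
    lines = []
    for i, (thought, action, result) in enumerate(steps, start=1):
        lines.append(f"Step {i}:")
        lines.append(f"  Thought: {thought}")
        lines.append(f"  Action: {action}")
        lines.append(f"  Result: {result}")
    return "\n".join(lines)

steps = [
    ("I should search for this",
     'search("distance to moon")',
     "384,400 km on average"),
]
print(format_scratchpad(steps))
```

On every loop iteration the growing scratchpad text is inserted where `{agent_scratchpad}` sits in the prompt, so the model always sees what it has already tried.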


7. Agent Iteration Control

Don’t Let the Detective Work Forever!

Sometimes an agent can get stuck in a loop, trying the same thing over and over. Iteration control sets limits to keep things running smoothly.

graph TD
    A["Start"] --> B{Iteration 1}
    B --> C{Iteration 2}
    C --> D{Iteration 3}
    D --> E{Max Reached?}
    E -->|No| F{Iteration 4...}
    E -->|Yes| G["STOP - Return Best Answer"]

Control Options

| Setting | What It Does | Detective Example |
| --- | --- | --- |
| max_iterations | Max steps allowed | “Only search 5 places” |
| max_execution_time | Time limit | “Only 30 seconds” |
| early_stopping_method | How to stop | “Stop when confident” |

Code Example

from langchain.agents import AgentExecutor

executor = AgentExecutor(
    agent=my_agent,
    tools=my_tools,

    # Iteration controls
    max_iterations=5,        # Max 5 steps
    max_execution_time=30,   # Max 30 seconds

    # What to do when limit hit
    early_stopping_method="generate",

    # Return partial results if stopped
    return_intermediate_steps=True
)

# Safe execution!
result = executor.invoke({"input": query})

Early Stopping Methods

| Method | Behavior |
| --- | --- |
| "force" | Stop immediately and return a fixed “stopped due to limit” message |
| "generate" | Ask the LLM for its best answer so far |
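The two stopping behaviors can be sketched as a plain loop. Everything here is hypothetical stand-in code, not the AgentExecutor implementation: `step_fn` plays the role of one agent step and returns whether it is done plus its latest observation.

```python
import time

# Illustrative iteration control (not the AgentExecutor implementation).
def run_agent(step_fn, max_iterations=5, max_execution_time=30,
              early_stopping_method="generate"):
    start = time.monotonic()
    last_observation = None
    for i in range(max_iterations):
        if time.monotonic() - start > max_execution_time:
            break  # time limit hit
        done, last_observation = step_fn(i)
        if done:
            return last_observation  # finished normally
    # A limit was hit: stop according to the chosen method.
    if early_stopping_method == "force":
        return "Agent stopped due to iteration limit or time limit."
    return f"Best answer so far: {last_observation}"  # "generate"

# A step function that never finishes, to show the limit kicking in.
result = run_agent(lambda i: (False, f"observation {i}"), max_iterations=3)
print(result)  # Best answer so far: observation 2
```

With `"force"` the caller gets only the canned stopped message; with `"generate"` it still gets something useful built from the last observation.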

Best Practice:

# Always set limits!
executor = AgentExecutor(
    agent=agent,
    tools=tools,
    max_iterations=10,      # Never infinite!
    max_execution_time=60,  # 1 minute max
    early_stopping_method="generate"
)

Putting It All Together

Here’s a complete example using everything we learned:

from langchain.agents import (
    create_openai_functions_agent,
    AgentExecutor
)
from langchain.tools import tool
from langchain_openai import ChatOpenAI
from langchain import hub

# 1. Define tools
@tool
def search(query: str) -> str:
    """Search the web."""
    return f"Results for: {query}"

@tool
def calculator(expression: str) -> str:
    """Calculate math."""
    # Demo only: eval is unsafe on untrusted input
    return str(eval(expression))

# 2. Get LLM
llm = ChatOpenAI(model="gpt-4")

# 3. Get prompt (with scratchpad)
prompt = hub.pull(
    "hwchase17/openai-functions-agent"
)

# 4. Create tool-calling agent
agent = create_openai_functions_agent(
    llm=llm,
    tools=[search, calculator],
    prompt=prompt
)

# 5. Wrap in executor with controls
executor = AgentExecutor(
    agent=agent,
    tools=[search, calculator],
    verbose=True,
    max_iterations=5,
    max_execution_time=30
)

# 6. Run it!
result = executor.invoke({
    "input": "What year was Python created, and how old is it?"
})

print(result["output"])

Quick Summary

| Concept | One-Line Explanation |
| --- | --- |
| Agent | AI that uses tools to solve problems |
| ReAct | Think → Act → Observe → Repeat |
| Tool Calling | Direct, structured tool usage |
| Prebuilt Agents | Ready-made agent templates |
| Agent Executor | Manager that runs the agent safely |
| Scratchpad | Agent’s memory of what it tried |
| Iteration Control | Limits to prevent infinite loops |

You Did It! 🎉

You now understand how LangChain Agents work:

  1. Agents are AI helpers that can use tools
  2. ReAct is the think-act-observe pattern
  3. Tool Calling makes tool use clean and direct
  4. Prebuilt functions give you ready-made agents
  5. Agent Executor manages everything safely
  6. Scratchpad tracks the agent’s reasoning
  7. Iteration control prevents runaway agents

You’re ready to build your own AI detective! 🕵️
