🔗 LangChain Core: LCEL Fundamentals
The Magic Assembly Line
Imagine you’re building with LEGO blocks. Each block does one special thing. Now imagine you could snap those blocks together and they’d work as ONE super-block! That’s exactly what LCEL (LangChain Expression Language) does with AI pieces.
🎯 What is LCEL?
LCEL stands for LangChain Expression Language. It’s a special way to connect AI building blocks together.
Think of it Like This:
You have a toy factory with different machines:
- One machine shapes the toy
- Another machine paints it
- A third machine packages it
Instead of carrying the toy between machines yourself, LCEL creates a conveyor belt that moves things automatically!
```python
# Without LCEL (carrying toys between machines yourself)
shaped = shape.invoke(toy)
painted = paint.invoke(shaped)
packaged = package.invoke(painted)

# With LCEL (magic conveyor belt!)
factory = shape | paint | package
result = factory.invoke(toy)
```
The | symbol is your conveyor belt! It connects one step to the next.
🧱 What are Runnables?
A Runnable is any building block that can:
- Take something in (input)
- Do work on it (process)
- Give something back (output)
Real World Example:
Think of a coffee machine. It’s a Runnable!
- Input: Coffee beans + water
- Process: Grinding, heating, brewing
- Output: Delicious coffee ☕
In LangChain, these are Runnables:
- Prompts - Format your question
- LLMs - Think and respond
- Parsers - Clean up the answer
```python
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_openai import ChatOpenAI

# Each of these is a Runnable
prompt = ChatPromptTemplate.from_template(
    "Tell me a joke about {topic}"
)
llm = ChatOpenAI()
parser = StrOutputParser()
```
The Runnable Superpower
Every Runnable has these magic powers:
| Power | What it Does |
|---|---|
| `.invoke()` | Run once, get answer |
| `.stream()` | Get answer piece by piece |
| `.batch()` | Run many at once |
```python
# Using invoke - one joke
chain.invoke({"topic": "cats"})

# Using batch - many jokes at once!
chain.batch([
    {"topic": "cats"},
    {"topic": "dogs"},
    {"topic": "fish"}
])
```
🔄 Runnable Composition Patterns
Composition means putting blocks together. Like building a sandwich!
Pattern 1: The Pipe (Sequence)
The simplest pattern. One thing after another.
```mermaid
graph TD
    A[Input] --> B[Step 1]
    B --> C[Step 2]
    C --> D[Step 3]
    D --> E[Output]
```

```python
# The pipe operator: |
chain = prompt | llm | parser
```
Think: Toast → Butter → Jam → Eat!
Pattern 2: The Dictionary (Parallel)
Run multiple things at the same time!
```mermaid
graph TD
    A[Input] --> B[Task 1]
    A --> C[Task 2]
    B --> D[Combine]
    C --> D
```

```python
# Using RunnableParallel
from langchain_core.runnables import RunnableParallel

parallel = RunnableParallel(
    joke=joke_chain,
    fact=fact_chain
)

# One input, two outputs!
result = parallel.invoke({"topic": "cats"})
# result["joke"] = "Why do cats..."
# result["fact"] = "Cats sleep 16 hours..."
```
Pattern 3: The Passthrough
Keep original data while adding new stuff!
```python
from langchain_core.runnables import (
    RunnableParallel,
    RunnablePassthrough,
)

chain = RunnableParallel(
    original=RunnablePassthrough(),
    answer=prompt | llm | parser
)

# Input stays, answer added!
result = chain.invoke({"topic": "space"})
# result["original"] = {"topic": "space"}
# result["answer"] = "Space is..."
```
Pattern 4: The Lambda (Quick Transform)
Need a quick change? Use a tiny function!
```python
from langchain_core.runnables import RunnableLambda

# Make everything UPPERCASE
uppercase = RunnableLambda(
    lambda x: x.upper()
)

chain = prompt | llm | parser | uppercase
```
⚙️ Runnable Binding and Config
Sometimes your Runnables need extra instructions. That’s where binding comes in!
What is Binding?
Binding = Attaching extra settings to a Runnable.
Example: Your coffee machine (LLM) can make coffee differently:
- Regular or strong?
- Hot or iced?
- With cream or without?
```python
# Basic LLM
llm = ChatOpenAI()

# LLM with special settings "bound"
creative_llm = llm.bind(
    temperature=0.9  # More creative!
)
serious_llm = llm.bind(
    temperature=0.1  # More focused!
)
```
Binding Tools
You can bind tools (special abilities) to your LLM!
```python
# Give the LLM a calculator
llm_with_calc = llm.bind_tools([
    calculator_tool
])
# Now it can do math!
```
Using Config
Config lets you control things while running:
```python
from langchain_core.runnables import ConfigurableField

# Make temperature changeable
flex_llm = llm.configurable_fields(
    temperature=ConfigurableField(
        id="temp",
        name="Temperature"
    )
)

# Use different temperatures!
# Configurable values go under the "configurable" key
flex_llm.invoke(
    "Tell a joke",
    config={"configurable": {"temp": 0.9}}
)
```
RunnableConfig Options
| Option | What it Does |
|---|---|
| `max_concurrency` | How many at once |
| `recursion_limit` | Stop infinite loops |
| `callbacks` | Watch what happens |
| `tags` | Label for tracking |
```python
config = {
    "max_concurrency": 5,
    "tags": ["my-app", "jokes"]
}
chain.invoke(input, config=config)
```
🎉 Putting It All Together
Here’s a complete example that uses EVERYTHING:
```python
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import (
    RunnableParallel,
    RunnablePassthrough,
)

# Create our Runnables
prompt = ChatPromptTemplate.from_template(
    "Explain {topic} to a 5 year old"
)
llm = ChatOpenAI(temperature=0.7)
parser = StrOutputParser()

# Compose them together!
# The dict maps the plain string input into the
# {"topic": ...} shape the prompt expects.
chain = RunnableParallel(
    topic=RunnablePassthrough(),
    explanation={"topic": RunnablePassthrough()} | prompt | llm | parser
)

# Run it!
result = chain.invoke("gravity")
print(result["explanation"])
# "Gravity is like a big invisible
#  magnet in the Earth..."
```
🌟 Key Takeaways
- LCEL = A way to snap AI blocks together
- Runnables = Blocks that take input and give output
- Pipe `|` = The conveyor belt connecting blocks
- Composition = Building bigger things from smaller things
- Binding = Attaching extra settings to blocks
- Config = Control settings when running
Remember: Just like LEGO, start simple and build up! Every complex chain is just simple pieces connected together. 🧱✨