LCEL is LangChain's core chain-composition syntax: it uses the pipe operator | to chain Runnable objects into LLM applications. This guide covers LCEL basics, common Runnable components, composition patterns, and frequently asked questions.
LangChain Expression Language (LCEL) is a declarative composition syntax: any two Runnable components can be joined with |, and the resulting chain is itself a Runnable, which is how complex workflows are built up from simple pieces. LCEL is the recommended way to compose chains in LangChain v0.1+.
from langchain_openai import ChatOpenAI
from langchain_core.output_parsers import StrOutputParser

model = ChatOpenAI(model="gpt-4")
parser = StrOutputParser()

# `|` feeds the model's output into the parser; a plain string input is treated as a human message
chain = model | parser
result = chain.invoke("What is LCEL?")
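The pipe operator works because Runnable overloads Python's __or__. The following is a minimal pure-Python sketch of that idea, not LangChain's actual implementation; all class and variable names here are illustrative:

```python
class MiniRunnable:
    """Toy stand-in for LangChain's Runnable (illustrative only)."""

    def __init__(self, fn):
        self.fn = fn

    def invoke(self, value):
        return self.fn(value)

    def __or__(self, other):
        # `a | b` builds a new runnable that feeds a's output into b
        return MiniRunnable(lambda value: other.invoke(self.invoke(value)))


fake_model = MiniRunnable(lambda prompt: {"content": f"Echo: {prompt}"})
str_parser = MiniRunnable(lambda message: message["content"])

chain = fake_model | str_parser
chain.invoke("What is LCEL?")  # 'Echo: What is LCEL?'
```

Because the composed object exposes the same invoke interface, chains nest arbitrarily: a pipeline of pipelines is still just one Runnable.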
All LCEL-compatible components implement Runnable:
chain.invoke(input)            # Sync: one input, one output
chain.batch([input1, input2])  # Batch: a list of inputs, a list of outputs
chain.stream(input)            # Stream: yields output chunks as they arrive
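In LangChain, batch and stream have default implementations derived from invoke: batch maps invoke over the inputs (run concurrently in the real library), and stream falls back to yielding the full result as a single chunk when a component cannot stream natively. A rough stdlib sketch of those defaults, with illustrative names:

```python
class SketchRunnable:
    """Toy runnable showing how batch/stream can derive from invoke (illustrative)."""

    def __init__(self, fn):
        self.fn = fn

    def invoke(self, value):
        return self.fn(value)

    def batch(self, values):
        # Default batch: map invoke over the inputs (LangChain runs these concurrently)
        return [self.invoke(v) for v in values]

    def stream(self, value):
        # Fallback stream: a single chunk holding the full result
        yield self.invoke(value)


upper = SketchRunnable(str.upper)
upper.batch(["a", "b"])      # ['A', 'B']
list(upper.stream("hello"))  # ['HELLO']
```

This is why every LCEL chain gets all three call styles for free: implementing invoke is enough.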
from langchain_core.prompts import ChatPromptTemplate
prompt = ChatPromptTemplate.from_messages([
("system", "You are a {language} assistant"),
("human", "{question}")
])
chain = prompt | model | parser
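Conceptually, the prompt step just substitutes the input variables into each message template before handing the messages to the model. A simplified stdlib sketch of that substitution (illustrative, not LangChain's API):

```python
def format_messages(template, variables):
    """Fill {placeholders} in each (role, content) pair with the given variables."""
    return [(role, content.format(**variables)) for role, content in template]


template = [
    ("system", "You are a {language} assistant"),
    ("human", "{question}"),
]
messages = format_messages(template, {"language": "French", "question": "What is LCEL?"})
# [('system', 'You are a French assistant'), ('human', 'What is LCEL?')]
```

This is also why the full chain is invoked with a dict, e.g. chain.invoke({"language": "French", "question": "..."}): the dict supplies the template variables.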
from langchain_core.output_parsers import JsonOutputParser
parser = JsonOutputParser()
chain = prompt | model | parser
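JsonOutputParser's job is essentially to pull a JSON object out of the model's text output, tolerating surrounding prose or markdown fences. A simplified stdlib sketch of that behavior (illustrative, not the library's actual code):

```python
import json
import re


def parse_json_output(text):
    """Decode a JSON object from model text, ignoring surrounding prose or fences."""
    match = re.search(r"\{.*\}", text, re.DOTALL)  # grab the outermost {...} span
    if match:
        text = match.group(0)
    return json.loads(text)


parse_json_output('Here you go: {"answer": 42}')  # {'answer': 42}
```

The real parser is more robust (it also supports streaming partial JSON), but the principle is the same: the chain's output becomes a Python dict rather than a string.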
from langchain_core.runnables import RunnableParallel
combined = RunnableParallel(
    detail=chain1,   # both sub-chains receive the same input
    summary=chain2
)
# combined.invoke(x) returns {"detail": ..., "summary": ...}
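The shape of RunnableParallel is easy to see in a pure-Python sketch: run every step on the same input and collect the results under their keys. (The real component runs the steps concurrently; this sequential version and its names are illustrative.)

```python
def run_parallel(steps, value):
    """Run every step on the same input; collect results under their keys."""
    return {name: step(value) for name, step in steps.items()}


combined = run_parallel(
    {
        "detail": lambda q: f"Detailed answer to: {q}",
        "summary": lambda q: f"Summary of: {q}",
    },
    "What is LCEL?",
)
# {'detail': 'Detailed answer to: What is LCEL?', 'summary': 'Summary of: What is LCEL?'}
```

The resulting dict is often piped into a prompt whose template variables match the keys.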
from langchain_core.runnables import RunnableBranch
branch = RunnableBranch(
    (lambda x: "simple" in x["query"], simple_chain),  # (condition, runnable) pairs
    default_chain  # used when no condition matches
)
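The selection logic behind RunnableBranch can be sketched in a few lines of plain Python: try each (condition, chain) pair in order and fall back to the default. (Names below are illustrative, not the library's internals.)

```python
def run_branch(branches, default, value):
    """Return the result of the first chain whose condition matches, else the default."""
    for condition, chain in branches:
        if condition(value):
            return chain(value)
    return default(value)


result = run_branch(
    [(lambda x: "simple" in x["query"], lambda x: "simple path")],
    lambda x: "default path",
    {"query": "a simple question"},
)
# 'simple path'
```

Because conditions are checked in order, put the most specific branches first.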
Q1: Why use | instead of .pipe()?
A: The two are equivalent; | is more concise and follows Unix pipe intuition.
Q2: Does LCEL support async?
A: Yes. .ainvoke() / .abatch() / .astream() run the same chain asynchronously.
Q3: How to debug LCEL chains?
A: Use .with_config({"run_name": "StepName"}) to name steps so they are easy to identify in traces.