LCEL is the core chaining syntax of LangChain: the pipe operator | composes Runnable objects into concise, efficient LLM applications. This article walks through LCEL's basic syntax, common Runnable components, composition patterns, and common pitfalls.
LangChain Expression Language (LCEL) is a declarative syntax for building chains: the pipe operator | links multiple Runnable components into a workflow. LCEL is the recommended way to write chains since LangChain v0.1.
from langchain_openai import ChatOpenAI
from langchain_core.output_parsers import StrOutputParser

model = ChatOpenAI(model="gpt-4")
parser = StrOutputParser()

# Compose with the pipe operator
chain = model | parser

# Run the chain
result = chain.invoke("What is LCEL?")
Every LCEL-compatible component implements the Runnable interface:
# Core Runnable methods
chain.invoke(input)              # synchronous call
chain.batch([input1, input2])    # batched call over multiple inputs
chain.stream(input)              # streaming call, yields output chunks
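The pipe operator works because Runnable objects overload `__or__`. The following is a minimal stdlib-only sketch of that mechanism, not LangChain's actual implementation; `MiniRunnable` and all names in it are illustrative:

```python
from typing import Any, Callable, Iterator

class MiniRunnable:
    """Illustrative stand-in for LangChain's Runnable (not the real class)."""
    def __init__(self, fn: Callable[[Any], Any]):
        self.fn = fn

    def invoke(self, value: Any) -> Any:
        return self.fn(value)

    def batch(self, values: list) -> list:
        # The real implementation can parallelize; this sketch stays sequential
        return [self.invoke(v) for v in values]

    def stream(self, value: Any) -> Iterator[str]:
        # Yield the result piecewise to mimic token streaming
        for ch in str(self.invoke(value)):
            yield ch

    def __or__(self, other: "MiniRunnable") -> "MiniRunnable":
        # a | b -> a new runnable that feeds a's output into b
        return MiniRunnable(lambda v: other.invoke(self.invoke(v)))

upper = MiniRunnable(str.upper)
exclaim = MiniRunnable(lambda s: s + "!")
chain = upper | exclaim
print(chain.invoke("lcel"))     # LCEL!
print(chain.batch(["a", "b"]))  # ['A!', 'B!']
```

This is why any two Runnable objects compose: each `|` just wraps the pair into a new Runnable whose input flows left to right.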
from langchain_core.prompts import ChatPromptTemplate
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a {language} assistant"),
    ("human", "{question}")
])

# Compose the full chain
chain = prompt | model | parser
result = chain.invoke({
    "language": "Python",
    "question": "Explain what a generator is"
})
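Before the model sees anything, the prompt template substitutes the input dict's keys into each message. A rough stdlib sketch of that substitution step (illustrative only; the real ChatPromptTemplate returns message objects, not tuples of strings):

```python
# Message templates as (role, text) pairs, mirroring from_messages above
messages = [
    ("system", "You are a {language} assistant"),
    ("human", "{question}"),
]
variables = {"language": "Python", "question": "Explain what a generator is"}

# Each template is filled from the same input dict
formatted = [(role, tmpl.format(**variables)) for role, tmpl in messages]
print(formatted[0])  # ('system', 'You are a Python assistant')
```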
from langchain_core.output_parsers import JsonOutputParser
from pydantic import BaseModel
class Response(BaseModel):
    answer: str
    confidence: float
parser = JsonOutputParser(pydantic_object=Response)
chain = prompt | model | parser
result = chain.invoke({...})
# result is a dict: {"answer": "...", "confidence": 0.9}
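Conceptually, the JSON parser extracts the model's text and deserializes it into a dict matching the pydantic model above. A stdlib sketch of that step, using a dataclass as a stand-in for the pydantic model and a hard-coded string in place of real model output:

```python
import json
from dataclasses import dataclass

@dataclass
class Response:  # stand-in for the pydantic model in the example above
    answer: str
    confidence: float

# Simulated raw model output (in practice this comes from the LLM)
raw = '{"answer": "LCEL is a chaining syntax", "confidence": 0.9}'

data = json.loads(raw)     # a plain dict, like JsonOutputParser returns
result = Response(**data)  # optionally lift it into the typed model
print(result.confidence)   # 0.9
```

Note that the model only emits valid JSON reliably if the prompt asks for it, which is why parsers expose format instructions to embed in the prompt.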
from langchain_core.runnables import RunnableParallel
chain1 = prompt | model | parser
chain2 = summary_prompt | model | summary_parser
combined = RunnableParallel(
    detail=chain1,
    summary=chain2
)
result = combined.invoke({"topic": "LangChain"})
# result = {"detail": "...", "summary": "..."}
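RunnableParallel fans one input out to every named sub-chain and gathers the outputs into a dict under the same keys. A sequential stdlib sketch of that behavior (the real implementation runs branches concurrently; all names here are illustrative):

```python
def run_parallel(chains: dict, value):
    # Feed the same input to every branch; collect results by name.
    # Real RunnableParallel executes branches concurrently.
    return {name: chain(value) for name, chain in chains.items()}

branches = {
    "detail": lambda x: f"Details about {x['topic']}",
    "summary": lambda x: f"Summary of {x['topic']}",
}
result = run_parallel(branches, {"topic": "LangChain"})
print(result)  # {'detail': 'Details about LangChain', 'summary': 'Summary of LangChain'}
```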
from langchain_core.runnables import RunnableBranch
branch = RunnableBranch(
    (lambda x: "simple" in x["query"], simple_chain),
    (lambda x: "complex" in x["query"], complex_chain),
    default_chain
)
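RunnableBranch evaluates its (condition, chain) pairs in order and runs the first chain whose condition matches, falling back to the default. A stdlib sketch of that routing logic (illustrative, not the library's code):

```python
def route(conditions, default, value):
    # First matching predicate wins; otherwise run the default chain
    for predicate, chain in conditions:
        if predicate(value):
            return chain(value)
    return default(value)

simple_chain = lambda x: "simple path"
complex_chain = lambda x: "complex path"
default_chain = lambda x: "default path"

conditions = [
    (lambda x: "simple" in x["query"], simple_chain),
    (lambda x: "complex" in x["query"], complex_chain),
]
print(route(conditions, default_chain, {"query": "a simple question"}))  # simple path
```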
from langchain_openai import OpenAIEmbeddings, ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnablePassthrough
from langchain_chroma import Chroma

# Vector store and retriever
vectorstore = Chroma("docs", OpenAIEmbeddings(), "...")
retriever = vectorstore.as_retriever(search_kwargs={"k": 3})

def format_docs(docs):
    # Join the retrieved documents into one context string
    return "\n\n".join(doc.page_content for doc in docs)
# RAG chain
template = """Answer the question based on the following context:

Context:
{context}

Question: {question}"""
prompt = ChatPromptTemplate.from_template(template)
model = ChatOpenAI(model="gpt-4")
parser = StrOutputParser()
rag_chain = (
    {"context": retriever | format_docs, "question": RunnablePassthrough()}
    | prompt
    | model
    | parser
)
result = rag_chain.invoke("What are the core concepts of LangChain?")
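The dict at the head of rag_chain is itself coerced into a Runnable: each key's value receives the original input, and the collected results become the prompt variables. A stdlib sketch of that mapping step, with a hypothetical fake_retriever standing in for the vector store:

```python
def fake_retriever(question: str) -> list[str]:
    # Hypothetical stand-in for the real vector-store retriever
    return ["Doc about LCEL", "Doc about Runnables"]

def format_docs(docs: list[str]) -> str:
    # Join retrieved documents into one context string
    return "\n\n".join(docs)

def passthrough(x):
    return x  # what RunnablePassthrough does: forward the input unchanged

question = "What are the core concepts of LangChain?"
prompt_vars = {
    "context": format_docs(fake_retriever(question)),   # retriever | format_docs
    "question": passthrough(question),                  # RunnablePassthrough()
}
print(prompt_vars["question"])  # What are the core concepts of LangChain?
```

Both branches see the same raw question; only the context branch transforms it, which is exactly why RunnablePassthrough is needed for the other key.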
Q1: Why | instead of .pipe()?
A: | is more concise and matches the Unix pipe intuition; Runnable also provides an equivalent .pipe() method.
Q2: Does LCEL support async?
A: Yes. Every Runnable also exposes .ainvoke(), .abatch(), and .astream().
Q3: How do I debug an LCEL chain?
A: Attach a name to a step with .with_config({"run_name": "StepName"}) so it is identifiable in traces.