LangChain Quick Start: From Installation to Your First Chain
This article covers LangChain framework installation, core concepts (Model/Chain/Prompt), and building your first LLM Chain. Designed for developers who want to quickly get started with LangChain, it provides complete code examples and step-by-step explanations to help readers run their first conversational chain within minutes.
Overview
LangChain is one of the most popular LLM application development frameworks, providing a complete toolchain including model integration, prompt management, chain composition, and agent orchestration. This guide walks you through installing LangChain and building your first conversational chain from scratch.
Prerequisites
- Python 3.9+
- pip package manager
- API Key from OpenAI or another LLM provider
Core Content
Step 1: Install LangChain
# Install LangChain core packages
pip install langchain langchain-openai
# Or using conda
conda install -c conda-forge langchain
Verify installation:
import langchain
print(langchain.__version__) # Output: 0.3.x
Step 2: Configure API Key
import os
from dotenv import load_dotenv

# Load environment variables (including OPENAI_API_KEY) from a .env file
load_dotenv()

assert os.environ.get("OPENAI_API_KEY"), "OPENAI_API_KEY is not set"
Store the API key in a .env file in your project root rather than hardcoding it in source:
# .env
OPENAI_API_KEY=sk-...
Note that python-dotenv is a separate package: pip install python-dotenv.
Step 3: Create Your First ChatModel
from langchain_openai import ChatOpenAI
# Initialize ChatGPT model
llm = ChatOpenAI(
    model="gpt-4o-mini",
    temperature=0.7,
    max_tokens=1000,
)
# Direct invocation
response = llm.invoke("Hello, briefly introduce LangChain")
print(response.content)
Step 4: Use PromptTemplate
from langchain_core.prompts import ChatPromptTemplate
# Define prompt template
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a professional {role}, answer concisely"),
    ("human", "{question}"),
])
# Format and invoke
chain = prompt | llm
response = chain.invoke({
    "role": "Python engineer",
    "question": "What is a decorator?",
})
print(response.content)
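The `|` operator here is LangChain's LCEL composition: each component is a "runnable" whose output becomes the next component's input. As a rough stdlib-only sketch of the idea (these are illustrative classes, not LangChain's actual implementation):

```python
# Minimal illustration of pipe-style composition (hypothetical, not LangChain's real classes)
class MiniRunnable:
    def __init__(self, fn):
        self.fn = fn

    def invoke(self, value):
        return self.fn(value)

    def __or__(self, other):
        # a | b -> a runnable that invokes a, then feeds the result into b
        return MiniRunnable(lambda value: other.invoke(self.invoke(value)))

fmt = MiniRunnable(lambda d: f"Q: {d['question']}")
shout = MiniRunnable(str.upper)
toy_chain = fmt | shout
print(toy_chain.invoke({"question": "what is a decorator?"}))  # Q: WHAT IS A DECORATOR?
```

In real LangChain, `prompt | llm` works the same way: the prompt's formatted messages flow into the model, and (in Step 5) the model's message flows into the output parser.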
Step 5: Build a Complete Chain
from langchain_core.output_parsers import StrOutputParser
# Add output parser to return plain string
chain = prompt | llm | StrOutputParser()
result = chain.invoke({
    "role": "code reviewer",
    "question": "How to write maintainable code?",
})
print(result) # Returns string directly
print(type(result)) # <class 'str'>
Complete Code Example
import os
from dotenv import load_dotenv
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

load_dotenv()  # expects OPENAI_API_KEY in .env
# 1. Initialize model
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0.7)
# 2. Define Prompt
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a professional tech assistant; be concise and accurate"),
    ("human", "{input}"),
])
# 3. Create Chain (LCEL pipe syntax)
chain = prompt | llm | StrOutputParser()
# 4. Execute Chain
response = chain.invoke({"input": "Explain what RAG is"})
print(response)
# 5. Batch processing
responses = chain.batch([
    {"input": "What is LangChain?"},
    {"input": "What is LangGraph?"},
])
for r in responses:
    print(r[:50])
Verification
try:
    result = chain.invoke({"input": "hello"})
    assert isinstance(result, str)
    assert len(result) > 0
    print("✅ Chain working correctly")
except Exception as e:
    print(f"❌ Error: {e}")
Common Issues
Q: Dependency conflicts during installation?
Use a virtual environment: python -m venv venv && source venv/bin/activate
Q: AuthenticationError when calling?
Check that OPENAI_API_KEY is set correctly and has remaining credits.
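A quick sanity check can rule out a missing or malformed key before blaming the API (this only inspects the local environment; credit balance must be checked in the provider dashboard):

```python
import os

# Inspect the key the client will actually pick up from the environment
key = os.environ.get("OPENAI_API_KEY", "")
if not key:
    print("OPENAI_API_KEY is not set")
elif not key.startswith("sk-"):
    print("OPENAI_API_KEY looks malformed (OpenAI keys start with 'sk-')")
else:
    print("Key is present; if errors persist, check credits in the dashboard")
```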
Q: How to switch to Anthropic Claude?
Install langchain-anthropic and replace ChatOpenAI with ChatAnthropic.
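A sketch of the swap, assuming langchain-anthropic is installed and ANTHROPIC_API_KEY is set (the model name here is illustrative; check Anthropic's documentation for current model IDs):

```python
# pip install langchain-anthropic
try:
    from langchain_anthropic import ChatAnthropic

    # Exposes the same .invoke()/.batch() interface as ChatOpenAI,
    # so the rest of the chain code is unchanged
    llm = ChatAnthropic(model="claude-3-5-sonnet-latest", temperature=0.7)
except ImportError:
    llm = None  # langchain-anthropic is not installed in this environment
```

Because every chat model in LangChain implements the same runnable interface, `prompt | llm | StrOutputParser()` works identically after the swap.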