LangChain Crash Course — Chains, Models, and Output Parsers
Follow this hands-on LangChain tutorial to master chat models, prompt templates, output parsers, and LCEL chains with runnable Python examples.
LangChain is a Python framework that wraps LLM API calls into reusable, snap-together parts — chat models, prompt templates, output parsers, and chains — so you can build structured AI pipelines instead of writing raw API boilerplate every time.
You’ve called the OpenAI API by hand before. You’ve built prompt strings, parsed JSON yourself, and written error-handling code from scratch. That works — until you need to swap models, chain several steps together, or get clean structured output without a custom parser each time.
LangChain solves that problem. It gives you four parts that snap together: models, prompts, parsers, and chains. You connect them with a single pipe operator (|). In this tutorial, I’ll walk you through each part. By the end, you’ll go from raw API calls to clean LLM pipelines in under 25 minutes.
Before we write any code, let me show you how these parts fit together.
You start with a chat model. Think of it as a clean wrapper around an LLM API like OpenAI. Instead of building raw HTTP requests, you call .invoke() and get back a neat response object. But you don’t want to hardcode your prompts. So you make a prompt template with slots for dynamic values. The template fills in those slots and sends ready-to-go messages to the model. The model sends back text — but your app needs real data structures. That’s where an output parser steps in. It takes that raw text and turns it into a Python dict or a typed Pydantic object. Last, the pipe operator (|) links all three into one pipeline. Data flows left to right — template to model to parser — in a single line.
We’ll build each part from scratch. Then we’ll chain them together to pull structured meeting summaries out of raw text.
What Is LangChain and Why Should You Use It?
LangChain is an open-source Python framework for apps powered by large language models. It turns raw API calls into parts you can mix, match, and reuse.
Why not just use the OpenAI SDK on its own? Three real pain points push you toward LangChain.
Model lock-in. The raw SDK glues your code to one vendor. LangChain abstracts the provider away — ChatOpenAI, ChatAnthropic, and ChatGoogle all share a single interface. Switch the import line and your entire pipeline runs on a different model.
Messy prompts. When prompts have dynamic values, string formatting gets ugly fast. ChatPromptTemplate handles variable injection, role setup, and message formatting in one spot.
Painful parsing. The raw API hands you plain text. Want JSON? You call json.loads() and hope the model played along. LangChain’s output parsers take care of format instructions, parsing, and type checks for you.
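To feel that third pain point concretely, here's a hedged sketch of the manual route — `json.loads()` plus hand-written key and type checks. `model_reply` is a stand-in for text an LLM might return; LangChain's parsers automate every step below.

```python
import json

# Stand-in for raw text an LLM might return (assumption: well-formed JSON)
model_reply = '{"name": "Japan", "population": 125700000, "continent": "Asia"}'

def parse_country(text: str) -> dict:
    """Manual parsing: everything an output parser would handle for you."""
    try:
        data = json.loads(text)  # hope the model emitted valid JSON
    except json.JSONDecodeError as err:
        raise ValueError(f"Model did not return JSON: {err}")
    for key in ("name", "population", "continent"):
        if key not in data:  # hand-written schema check
            raise ValueError(f"Missing key: {key}")
    if not isinstance(data["population"], int):  # hand-written type check
        raise ValueError("population must be an int")
    return data

print(parse_country(model_reply)["name"])  # Japan
```

Every chain you build from here on replaces this boilerplate with a parser object at the end of a pipe.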
python
import os
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
# Set your OpenAI API key (or load from .env)
# os.environ["OPENAI_API_KEY"] = "your-key-here"
print("LangChain imports loaded successfully")
python
LangChain imports loaded successfully
Tip: You need an OpenAI API key to run this tutorial. Create one at platform.openai.com/api-keys. Store it as the `OPENAI_API_KEY` environment variable. Never hardcode keys in your scripts.
Prerequisites
- Python version: 3.10+
- Required libraries: langchain (0.3+), langchain-openai (0.3+), langchain-core (0.3+), pydantic (2.0+)
- Install: pip install langchain langchain-openai langchain-core pydantic
- API key: OpenAI API key stored as the OPENAI_API_KEY environment variable
- Time to complete: ~25 minutes
How Do Chat Models Work in LangChain?
What does it look like to call an LLM through LangChain instead of the raw API? Here’s the simplest case. We create a ChatOpenAI object and pass a string to .invoke().
python
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
response = llm.invoke("What is LangChain in one sentence?")
print(type(response))
print(response.content)
python
<class 'langchain_core.messages.ai.AIMessage'>
LangChain is an open-source framework that simplifies building applications powered by large language models by providing tools for prompt management, chaining, memory, and integrations.
Notice two things. The return value isn’t a plain string — you get an AIMessage object back. To read the actual text, grab .content. Also, I set temperature=0 so the model’s answers stay as consistent as possible across runs (OpenAI doesn’t guarantee bit-for-bit determinism, but this keeps results easy to follow). When you want more creative replies, try values between 0.3 and 0.7.
Quick check: Print response on its own (without .content) and see what shows up. You’ll see the full AIMessage with extra info — how many tokens it used, which model ran, and more.
Key Insight: One method rules them all: `.invoke()`. Whether you’re calling a model, a prompt, a parser, or a full chain, you always use `.invoke()`. That shared design is what lets you plug any piece into any other.
How Do Prompt Templates Help You Reuse Prompts?
Say you’re making a chatbot that breaks down ideas across many fields. A user asks about gradient descent. The next one wants to know about organic chemistry. The skeleton of your prompt stays the same — just the subject and question swap out.
Rather than gluing strings together each time, you write one template and fill in the blanks later. ChatPromptTemplate.from_messages() accepts a list of (role, message) tuples. Wrap each variable in curly braces.
python
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant that explains {topic} concepts simply."),
    ("human", "{question}")
])
formatted = prompt.invoke({
    "topic": "machine learning",
    "question": "What is overfitting?"
})
print(formatted.to_string())
python
System: You are a helpful assistant that explains machine learning concepts simply.
Human: What is overfitting?
When you call .invoke(), you get back a ChatPromptValue — a set of messages the model can read right away. You never have to build dicts by hand or worry about missing role keys.
Don’t need a system message? There’s a shorter form: from_template() for quick one-shot prompts.
python
simple_prompt = ChatPromptTemplate.from_template(
    "Translate '{text}' to {language}."
)
result = simple_prompt.invoke({"text": "Hello world", "language": "French"})
print(result.to_string())
python
Human: Translate 'Hello world' to French.
The decision is short: one message, use from_template(). Need to control the model’s behavior with a system message? Go with from_messages(). Done.
How Do Output Parsers Turn LLM Text into Structured Data?
Models talk in text. Your app thinks in objects. That mismatch is where output parsers come in — they sit between the model and your code and reshape raw text into usable Python data. LangChain ships three parsers that cover the vast majority of use cases.
StrOutputParser — How Do You Get Clean Text?
This one sounds odd — why do you need a parser just to get text? Because a chain gives you an AIMessage object by default, not a bare string. Here’s how that looks in practice:
python
from langchain_core.output_parsers import StrOutputParser
parser = StrOutputParser()
raw_response = llm.invoke("Say hello")
print(f"Without parser: {type(raw_response).__name__}")
parsed = parser.invoke(raw_response)
print(f"With parser: {type(parsed).__name__} -> {parsed}")
python
Without parser: AIMessage
With parser: str -> Hello! How can I assist you today?
On its own, StrOutputParser feels almost too basic. But once you plug it at the tail of a chain, it becomes the cap that gives you a neat string instead of a wrapped object.
JsonOutputParser — How Do You Get Python Dicts?
Need real data, not text? JsonOutputParser pulls double duty. First, it writes a format instruction you add to your prompt — a note saying “reply in JSON.” Then it reads the model’s answer and hands you a native Python dict.
How does it work? Call .get_format_instructions() on the parser to get an instruction string. Inject that string into your template with .partial(). The model sees the instruction and shapes its reply to match.
python
from langchain_core.output_parsers import JsonOutputParser
json_parser = JsonOutputParser()
prompt = ChatPromptTemplate.from_messages([
    ("system", "Extract the requested information. {format_instructions}"),
    ("human", "Give me the name, population, and continent of Japan.")
])
chain = prompt.partial(
    format_instructions=json_parser.get_format_instructions()
) | llm | json_parser
result = chain.invoke({})
print(type(result))
print(result)
This produces:
python
<class 'dict'>
{'name': 'Japan', 'population': 125700000, 'continent': 'Asia'}
See that | in the chain line? That’s the pipe operator — I’ll break it down in the chains section. The takeaway right now: your output is a real Python dict. You can write result["name"] and get "Japan" back.
PydanticOutputParser — How Do You Get Validated Output?
Dicts are nice, but they have a blind spot: no shape or type checks. What if the model uses "pop" as a key instead of "population"? Your code won’t crash right away — it’ll blow up later with a puzzling KeyError.
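The blind spot is easy to reproduce in plain Python — no LLM required. Suppose the model keyed the population as "pop" (a simulated parser result, not real model output):

```python
# Simulated parser output where the model picked its own key name
country = {"name": "Japan", "pop": 125700000, "continent": "Asia"}

# Nothing complains at parse time -- the dict is perfectly valid JSON.
# The KeyError only surfaces later, wherever the field is first read:
try:
    density = country["population"] / 377_975  # people per km^2, say
except KeyError as err:
    print(f"Blew up far from the real cause: missing {err}")
```

The failure happens at the point of use, possibly far from the chain call that produced the bad dict — exactly the debugging pain a validated schema avoids.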
PydanticOutputParser fixes this up front. You write a Pydantic model — a small Python class where each field has a name and a type — and the parser holds the LLM’s response up to that blueprint.
Below, I define a CountryInfo class with three fields. Each Field(description=...) tells both the parser and the LLM what to put in that slot.
python
from langchain_core.output_parsers import PydanticOutputParser
from pydantic import BaseModel, Field
class CountryInfo(BaseModel):
    name: str = Field(description="Name of the country")
    population: int = Field(description="Approximate population")
    continent: str = Field(description="Continent the country is in")
pydantic_parser = PydanticOutputParser(pydantic_object=CountryInfo)
prompt = ChatPromptTemplate.from_messages([
    ("system", "Extract country information. {format_instructions}"),
    ("human", "Tell me about {country}.")
])
chain = prompt.partial(
    format_instructions=pydantic_parser.get_format_instructions()
) | llm | pydantic_parser
result = chain.invoke({"country": "Brazil"})
print(type(result))
print(f"Name: {result.name}")
print(f"Population: {result.population}")
print(f"Continent: {result.continent}")
python
<class '__main__.CountryInfo'>
Name: Brazil
Population: 214000000
Continent: South America
Look at the type: CountryInfo, not dict. You access fields with dot notation — result.name, result.population. Pydantic runs type checks behind the scenes. If the model put a string where an int should be, you’d see a ValidationError right away instead of a hidden bug.
Think about it: Change population to float in the class. What happens? Pydantic quietly accepts both 214000000 and 214000000.0 because the types are close enough. But send "two hundred million" as a string and it rightfully fails.
Warning: Models don’t always follow your schema. Even with clear format instructions, an LLM can still hand back broken JSON. For any real app, wrap your chain calls in try/except and catch `OutputParserException`. Then either retry with a tighter prompt or fall back to plain text.
My advice: reach for PydanticOutputParser instead of JsonOutputParser any time you’re past the prototyping stage. Catching type errors early is worth the tiny extra setup.
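The retry-then-fallback advice from the warning above can be sketched generically. This is a hedged, library-free illustration: `ParseError` stands in for LangChain's `OutputParserException`, and `flaky_invoke` simulates a chain that fails once before succeeding.

```python
# Hypothetical stand-ins: swap in your real chain and OutputParserException
class ParseError(Exception):
    pass

def invoke_with_retry(invoke, inputs, retries=2, fallback="(unparsed text)"):
    """Call a chain-like function, retry on parse failures, then fall back."""
    for _ in range(retries + 1):
        try:
            return invoke(inputs)
        except ParseError:
            continue  # optionally tighten the prompt here before retrying
    return fallback

# Demo: fails on the first call, succeeds on the retry
calls = {"n": 0}
def flaky_invoke(inputs):
    calls["n"] += 1
    if calls["n"] == 1:
        raise ParseError("bad JSON")
    return {"ok": True}

print(invoke_with_retry(flaky_invoke, {}))  # {'ok': True}
```

In a real app you would pass `chain.invoke` as the `invoke` argument and catch `OutputParserException` instead of the stand-in class.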
How Do Chains Connect Everything with LCEL?
This is where it all clicks.
A chain is a pipeline built from LangChain parts. Each part feeds its output to the next one in line. The prompt injects your variables. The model writes a reply. The parser shapes it into the data you asked for.
The glue is called LCEL — LangChain Expression Language. It borrows the pipe operator | from Unix. Think cat file.txt | grep "error" — the result on the left flows into the step on the right.
The smallest chain worth building has three pieces: prompt, model, parser.
python
prompt = ChatPromptTemplate.from_template(
    "Explain {concept} in 2 sentences for a beginner."
)
chain = prompt | llm | StrOutputParser()
result = chain.invoke({"concept": "neural networks"})
print(result)
python
Neural networks are computational models inspired by the human brain, consisting of layers of interconnected nodes that process and learn patterns from data. They are widely used in tasks like image recognition, language processing, and prediction by adjusting the strength of connections during training.
Three parts. One pipe expression. Clean string output.
What Happens Under the Hood?
Writing prompt | llm | parser doesn’t trigger any API calls. Behind the scenes, Python calls the __or__ method on each piece and assembles a RunnableSequence — basically a recipe that lists the steps in order. The actual work only starts when you call .invoke().
Let’s prove it by peeking inside the chain object.
python
from langchain_core.runnables import RunnableSequence
chain = prompt | llm | StrOutputParser()
print(f"Chain type: {type(chain).__name__}")
print(f"Is RunnableSequence: {isinstance(chain, RunnableSequence)}")
print(f"Steps: {len(chain.steps)}")
for i, step in enumerate(chain.steps):
    print(f"  Step {i}: {type(step).__name__}")
Prints:
python
Chain type: RunnableSequence
Is RunnableSequence: True
Steps: 3
Step 0: ChatPromptTemplate
Step 1: ChatOpenAI
Step 2: StrOutputParser
Key Insight: Pipes build plans, not results. The `|` operator assembles a `RunnableSequence` — think of it as a recipe card. No LLM call fires until you explicitly run `.invoke()`, `.batch()`, or `.stream()`.
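You can reproduce this mechanic in a few lines of plain Python. `ToyRunnable` below is my own toy class, not LangChain's real implementation — it just shows how overloading `__or__` lets `|` compose callables into a plan that only runs on `.invoke()`:

```python
class ToyRunnable:
    """Toy sketch of LCEL-style composition -- not LangChain's real classes."""
    def __init__(self, fn):
        self.fn = fn

    def __or__(self, other):
        # Building the pipe does no work; it just composes two callables.
        return ToyRunnable(lambda x: other.fn(self.fn(x)))

    def invoke(self, x):
        return self.fn(x)  # the actual work happens only here

prompt = ToyRunnable(lambda d: f"Explain {d['concept']} simply.")
model = ToyRunnable(lambda text: text.upper())       # pretend LLM
parser = ToyRunnable(lambda text: text.rstrip("."))  # pretend parser

chain = prompt | model | parser        # no "API calls" fire on this line
print(chain.invoke({"concept": "pipes"}))  # EXPLAIN PIPES SIMPLY
```

The real RunnableSequence adds batching, streaming, and tracing on top, but the core idea is the same: `|` builds the plan, `.invoke()` executes it.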
What Other Ways Can You Run a Chain?
Every chain ships with three ways to run — no extra code needed on your part.
.batch() processes several inputs at once. Hand it a list of dicts and you get a list of results.
python
results = chain.batch([
    {"concept": "API"},
    {"concept": "REST"},
])
for r in results:
    print(r[:80] + "...")
    print()
python
An API (Application Programming Interface) is a set of rules and protocols that...
REST (Representational State Transfer) is an architectural style for designing n...
.stream() yields tokens as they arrive — ideal for a chat UI where you want words to appear live on screen.
python
for chunk in chain.stream({"concept": "machine learning"}):
    print(chunk, end="", flush=True)
print()
python
Machine learning is a subset of artificial intelligence that enables computers to learn patterns from data and make predictions or decisions without being explicitly programmed. It works by training algorithms on large datasets, allowing them to improve their performance over time as they are exposed to more data.
Those three methods — .invoke(), .batch(), .stream() — are baked into every single Runnable. Models, parsers, full chains — they all speak the same language. That’s what makes the whole framework click.
How Do You Build a Real-World Chain?
Toy code shows syntax. Let’s build something you’d ship at work.
The scenario: your team dumps messy meeting notes into a shared doc. You want a tool that reads those notes and spits out a clean, typed summary with action items — ready to push to Jira or Asana through their API.
To make that happen, you need three things: a Pydantic schema that describes the output shape, a prompt that guides the model, and a chain that wires them together.
Start with the schema. A meeting summary holds a title, a list of key decisions, and action items. Each action item pairs a task with the person who owns it.
python
from typing import List

class ActionItem(BaseModel):
    task: str = Field(description="Description of the action item")
    assignee: str = Field(description="Person responsible")

class MeetingSummary(BaseModel):
    title: str = Field(description="Brief meeting title")
    key_decisions: List[str] = Field(description="Main decisions made")
    action_items: List[ActionItem] = Field(description="Tasks assigned")
Now build the chain. Drop the Pydantic format instructions into the prompt so the model knows what JSON shape you expect. Then connect prompt → model → parser with pipes.
python
summary_parser = PydanticOutputParser(pydantic_object=MeetingSummary)
summary_prompt = ChatPromptTemplate.from_messages([
    ("system",
     "You are a meeting notes assistant. Extract structured "
     "information from the meeting transcript.\n"
     "{format_instructions}"),
    ("human", "{transcript}")
])
summary_chain = summary_prompt.partial(
    format_instructions=summary_parser.get_format_instructions()
) | llm | summary_parser
Let’s feed it some sample notes and see what we get back.
python
meeting_notes = """
Team standup March 10. Present: Alice, Bob, Carol.
Alice said the data pipeline is done and ready for review.
Bob mentioned the API rate limits are causing issues in production.
We decided to implement exponential backoff for API calls.
Carol will write the retry logic by Friday.
Bob will set up monitoring dashboards by next Tuesday.
Alice will review Carol's PR once it's ready.
"""
summary = summary_chain.invoke({"transcript": meeting_notes})
print(f"Title: {summary.title}")
print(f"\nKey Decisions:")
for decision in summary.key_decisions:
    print(f"  - {decision}")
print(f"\nAction Items:")
for item in summary.action_items:
    print(f"  - {item.task} (Assigned to: {item.assignee})")
python
Title: Team Standup March 10
Key Decisions:
- Implement exponential backoff for API calls
Action Items:
- Write the retry logic by Friday (Assigned to: Carol)
- Set up monitoring dashboards by next Tuesday (Assigned to: Bob)
- Review Carol's PR once it's ready (Assigned to: Alice)
A single chain call took a wall of meeting text and gave back a typed Python object with named fields. From here, you could loop over summary.action_items and push each one to Jira or Asana — zero manual parsing.
Tip: Nest Pydantic models for rich output. `ActionItem` sitting inside `MeetingSummary` proves you can go as deep as you like. The parser auto-generates format instructions for every layer of the tree.
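The "push to Jira or Asana" step is mostly payload shaping. Here's a hedged sketch using plain dataclasses as stand-ins for the Pydantic objects above — the payload field names are illustrative, not a real Jira or Asana schema:

```python
from dataclasses import dataclass

# Plain-Python stand-in for the Pydantic ActionItem defined earlier
@dataclass
class ActionItem:
    task: str
    assignee: str

items = [
    ActionItem("Write the retry logic by Friday", "Carol"),
    ActionItem("Set up monitoring dashboards by next Tuesday", "Bob"),
]

def to_ticket_payload(item: ActionItem) -> dict:
    """Shape one action item into a generic issue-tracker payload.
    Keys here are illustrative; match your tracker's actual API schema."""
    return {"summary": item.task, "assignee": item.assignee, "labels": ["meeting"]}

payloads = [to_ticket_payload(i) for i in items]
print(payloads[0]["assignee"])  # Carol
```

From there, each payload would go into whatever HTTP client or SDK your tracker provides.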
How Does LangChain Compare to the Raw OpenAI API?
Is adding another package to your stack worth it? Let me lay both options side by side so you can decide.
Take the same “extract country info” job. Here’s how you’d do it with the raw OpenAI SDK:
python
# Raw OpenAI approach (pseudocode for comparison)
# client = openai.OpenAI()
# response = client.chat.completions.create(
#     model="gpt-4o-mini",
#     messages=[
#         {"role": "system", "content": "Return JSON with: name, population, continent"},
#         {"role": "user", "content": "Tell me about France"}
#     ],
#     response_format={"type": "json_object"}
# )
# data = json.loads(response.choices[0].message.content)
# No type validation -- data["population"] could be a string
Now contrast that with LangChain and PydanticOutputParser. You get type checks, reusable templates, and composable chains — in roughly the same line count.
| Feature | Raw OpenAI SDK | LangChain |
|---|---|---|
| Swap to Claude/Gemini | Rewrite all API calls | Change one import |
| Reusable prompts | Copy-paste strings | Template objects |
| Output validation | Write custom code | Built-in parsers |
| Chain multiple steps | Nested function calls | Pipe operator |
| Streaming | Manual chunk handling | .stream() built-in |
| Batch processing | Write a loop | .batch() built-in |
If all you need is one API call inside a script, the raw SDK does the job. But the moment you start stringing steps together, testing other providers, or needing typed output, LangChain pays back the setup cost fast.
What Are the Most Common Mistakes (and How Do You Fix Them)?
Mistake 1: Forgetting the provider package
The framework keeps its core separate from provider-specific code. If you only run pip install langchain, you still can’t import ChatOpenAI.
❌ Wrong:
python
# pip install langchain
# from langchain_openai import ChatOpenAI # ImportError!
✅ Correct:
python
# pip install langchain langchain-openai
from langchain_openai import ChatOpenAI
print("Import successful")
python
Import successful
Mistake 2: Passing a string when the chain needs a dict
Your chain expects a dict whose keys line up with the template placeholders. Feed it a raw string and you’ll hit a TypeError.
❌ Wrong:
python
# chain.invoke("What is Python?")
# TypeError: Expected mapping type as input
✅ Correct:
python
result = chain.invoke({"concept": "Python"})
print(result[:100])
python
Python is a high-level, versatile programming language known for its readability and simplicity, mak
Mistake 3: Not handling parser failures
Models sometimes spit out broken JSON — missing a comma, extra trailing text, you name it. Without a try/except, your app dies on the first bad reply.
❌ Wrong:
python
# result = pydantic_chain.invoke({"country": "..."})
# Crashes if model returns invalid JSON
✅ Correct:
python
from langchain_core.exceptions import OutputParserException
try:
    result = chain.invoke({"concept": "error handling in Python"})
    print(result[:100])
except OutputParserException as e:
    print(f"Parser failed: {e}")
python
Error handling in Python involves using try-except blocks to catch and manage exceptions that may oc
Mistake 4: Using the old LLMChain class
Tutorials written before mid-2024 often import LLMChain from langchain.chains. That class has been deprecated since version 0.1.17 and is gone entirely in 1.0. Stick with the pipe operator instead.
❌ Wrong:
python
# from langchain.chains import LLMChain # Deprecated!
# chain = LLMChain(llm=llm, prompt=prompt)
✅ Correct:
python
chain = prompt | llm | StrOutputParser()
print("LCEL chain created successfully")
python
LCEL chain created successfully
Note: Version 1.0 landed in November 2025 and shipped all legacy chain classes off to `langchain-classic`. LCEL pipes are the only supported style going forward. Spot `LLMChain` or `SequentialChain` in a blog post? Replace them with pipe chains.
Practice Exercises
Exercise 1: Build a Translation Chain
python
ExerciseBlock:
  type: 'exercise'
  id: 'langchain-translation-chain'
  title: 'Build a Translation Chain'
  difficulty: 'beginner'
  exerciseType: 'write'
  instructions: |
    Create a LangChain chain that translates text between languages.
    1. Create a ChatPromptTemplate with variables {text} and {target_language}
    2. Chain it with the llm and StrOutputParser using the pipe operator
    3. Invoke the chain to translate "Good morning" to Spanish
    4. Print the result
  starterCode: |
    from langchain_core.prompts import ChatPromptTemplate
    from langchain_core.output_parsers import StrOutputParser

    # Step 1: Create the prompt template
    translate_prompt = ChatPromptTemplate.from_template(
        # Your template here -- include {text} and {target_language}
    )

    # Step 2: Build the chain with |
    translate_chain = # prompt | llm | parser

    # Step 3: Invoke and print
    result = translate_chain.invoke({
        "text": "Good morning",
        "target_language": "Spanish"
    })
    print(result)
  testCases:
    - id: 'tc1'
      input: 'print("DONE")'
      expectedOutput: 'DONE'
      description: 'Chain executes without errors'
    - id: 'tc2'
      input: 'print(type(result).__name__)'
      expectedOutput: 'str'
      description: 'Output should be a string'
  hints:
    - 'Template: "Translate the following text to {target_language}: {text}"'
    - 'Full chain: translate_prompt | llm | StrOutputParser()'
  solution: |
    translate_prompt = ChatPromptTemplate.from_template(
        "Translate the following text to {target_language}: {text}"
    )
    translate_chain = translate_prompt | llm | StrOutputParser()
    result = translate_chain.invoke({
        "text": "Good morning",
        "target_language": "Spanish"
    })
    print(result)
  solutionExplanation: |
    The prompt template uses {text} for the input and {target_language} for the target. The pipe operator chains template to model to parser. StrOutputParser extracts the plain text from the AIMessage.
  xpReward: 15
Exercise 2: Extract Structured Data with PydanticOutputParser
python
ExerciseBlock:
  type: 'exercise'
  id: 'langchain-pydantic-parser'
  title: 'Extract Structured Data with PydanticOutputParser'
  difficulty: 'beginner'
  exerciseType: 'write'
  instructions: |
    Create a chain that extracts book information into a Pydantic model.
    1. Complete the BookInfo model by adding author (str) and year (int) fields
    2. The parser and prompt are already set up
    3. Build the chain and invoke it with the given text
    4. Print the title, author, and year
  starterCode: |
    from pydantic import BaseModel, Field
    from langchain_core.output_parsers import PydanticOutputParser
    from langchain_core.prompts import ChatPromptTemplate

    # Step 1: Complete the Pydantic model
    class BookInfo(BaseModel):
        title: str = Field(description="Title of the book")
        # Add author and year fields here

    # Step 2: Parser and prompt (already done)
    book_parser = PydanticOutputParser(pydantic_object=BookInfo)
    book_prompt = ChatPromptTemplate.from_messages([
        ("system", "Extract book information. {format_instructions}"),
        ("human", "{text}")
    ])

    # Step 3: Build and invoke the chain
    book_chain = book_prompt.partial(
        format_instructions=book_parser.get_format_instructions()
    ) | llm | book_parser
    result = book_chain.invoke({
        "text": "The Great Gatsby was written by F. Scott Fitzgerald in 1925"
    })
    print(f"Title: {result.title}")
    print(f"Author: {result.author}")
    print(f"Year: {result.year}")
  testCases:
    - id: 'tc1'
      input: 'print(type(result).__name__)'
      expectedOutput: 'BookInfo'
      description: 'Result should be a BookInfo object'
    - id: 'tc2'
      input: 'print(result.year)'
      expectedOutput: '1925'
      description: 'Year should be 1925'
    - id: 'tc3'
      input: 'print("DONE")'
      expectedOutput: 'DONE'
      description: 'Code runs without errors'
      hidden: true
  hints:
    - 'Add: author: str = Field(description="Author of the book")'
    - 'Add: year: int = Field(description="Publication year")'
  solution: |
    class BookInfo(BaseModel):
        title: str = Field(description="Title of the book")
        author: str = Field(description="Author of the book")
        year: int = Field(description="Publication year")

    book_parser = PydanticOutputParser(pydantic_object=BookInfo)
    book_prompt = ChatPromptTemplate.from_messages([
        ("system", "Extract book information. {format_instructions}"),
        ("human", "{text}")
    ])
    book_chain = book_prompt.partial(
        format_instructions=book_parser.get_format_instructions()
    ) | llm | book_parser
    result = book_chain.invoke({
        "text": "The Great Gatsby was written by F. Scott Fitzgerald in 1925"
    })
    print(f"Title: {result.title}")
    print(f"Author: {result.author}")
    print(f"Year: {result.year}")
  solutionExplanation: |
    The Pydantic model defines typed fields for structured extraction. PydanticOutputParser generates format instructions telling the LLM what JSON shape to produce. The parser then validates and converts the response into a BookInfo object with proper type coercion.
  xpReward: 20
Summary
Let’s recap the four core pieces you just learned:
- Chat models (ChatOpenAI) — a clean wrapper over any LLM API, accessed through .invoke()
- Prompt templates (ChatPromptTemplate) — reusable message skeletons with fill-in-the-blank slots
- Output parsers (StrOutputParser, JsonOutputParser, PydanticOutputParser) — converters that reshape raw text into typed Python data
- Chains (LCEL pipe |) — pipelines that wire the above parts together, flowing data left to right
These four pieces form the base layer for LangGraph. In upcoming posts, you’ll combine them into agents, state machines, and multi-step workflows.
Practice exercise: Build a chain that takes a product blurb and pulls out a ProductInfo object with fields: name (str), category (str), price_range (str — “low”, “medium”, or “high”), and tagline (str — one-sentence marketing copy). Use PydanticOutputParser for type safety. Test it with: “The AirPods Pro 2 are premium wireless earbuds by Apple with active noise cancellation, priced at $249.”
Frequently Asked Questions
Can You Use LangChain with Models Other Than OpenAI?
Absolutely. The framework supports dozens of backends. Grab langchain-anthropic for Claude, langchain-google-genai for Gemini, or pull in langchain-community to run open-source models via Ollama. Because every provider shares the .invoke() contract, you only change the import line and your chains keep working.
python
# Example: switching to Anthropic (not runnable without the package)
# from langchain_anthropic import ChatAnthropic
# llm = ChatAnthropic(model="claude-sonnet-4-20250514")
# Same .invoke() interface -- chains work identically
What’s the Difference Between .invoke(), .batch(), and .stream()?
.invoke() takes a single input and returns the complete answer. .batch() fires off a list of inputs in parallel and gives you back a list. .stream() pushes tokens out one by one so you can show them live. Every Runnable in the library — models, parsers, chains — supports all three out of the box.
Do You Still Need LangChain if OpenAI Has Native Structured Outputs?
OpenAI’s response_format flag gives you JSON mode, but only for OpenAI models. LangChain layers on provider switching, prompt templates, and chain building. If your whole project lives on OpenAI and you never chain steps, the raw SDK is enough. The moment you branch out or compose pipelines, the extra package earns its keep.
Is LCEL the Only Way to Build Chains?
Those legacy classes — LLMChain, SequentialChain, SimpleSequentialChain — were moved to langchain-classic and are no longer maintained. LCEL pipes are the only actively supported path. Every new doc page, example, and feature targets LCEL.