The LangChain Provider Switching Pattern

How to Switch Between LLM Providers in 20 Lines of Code


🎯 The Core Pattern

The essential truth about switching LLM providers in LangChain:

# It's just this simple:
from langchain_X import ChatX
model = ChatX(model="model-name")

That's it. Everything else in your application stays the same.


πŸ“ What Actually Changes (The Minimal Set)

1. Import Statement

# From OpenAI
from langchain_openai import ChatOpenAI

# To Together AI
from langchain_together import ChatTogether

# To Anthropic
from langchain_anthropic import ChatAnthropic

# To any provider X
from langchain_X import ChatX

2. Model Instantiation

# From OpenAI
model = ChatOpenAI(model="gpt-4")

# To Together AI
model = ChatTogether(model="openai/gpt-oss-20b")

# To Anthropic
model = ChatAnthropic(model="claude-3-opus-20240229")

3. API Key (Environment Variable)

# From
OPENAI_API_KEY=sk-...

# To
TOGETHER_API_KEY=tlk-...
# or
ANTHROPIC_API_KEY=sk-ant-...
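
In practice you rarely pass the key explicitly: each provider class falls back to its own environment variable when no key argument is given. A minimal sketch (the placeholder key is illustrative):

import os

# Normally exported in your shell or loaded from a .env file
os.environ["TOGETHER_API_KEY"] = "tlk-..."

from langchain_together import ChatTogether

# No key argument needed: the constructor reads TOGETHER_API_KEY itself
model = ChatTogether(model="openai/gpt-oss-20b")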

4. Model Names (Provider-Specific)

Provider    Example Model Names
OpenAI      gpt-4, gpt-3.5-turbo
Together    openai/gpt-oss-20b, meta-llama/Llama-3-70b
Anthropic   claude-3-opus-20240229, claude-3-sonnet-20240229
Google      gemini-pro, gemini-1.5-pro

✅ What Stays Exactly the Same

Everything else in your codebase remains untouched:

# All these patterns work identically across providers:

# 1. Basic invocation
response = model.invoke("What is the capital of France?")

# 2. Tool binding
model_with_tools = model.bind_tools(tools)

# 3. Streaming
for chunk in model.stream("Tell me a story"):
    print(chunk.content, end="")

# 4. Message formats
from langchain_core.messages import SystemMessage, HumanMessage, AIMessage

messages = [
    SystemMessage(content="You are helpful"),
    HumanMessage(content="Hello!"),
    AIMessage(content="Hi there!")
]
response = model.invoke(messages)

# 5. Chains and graphs
chain = prompt | model | output_parser
graph.add_node("agent", lambda x: model.invoke(x))

# 6. All your business logic
# 7. All your prompts
# 8. All your error handling
# 9. All your tests (mostly)
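
Because every adapter exposes the same call surface, chains can be written against a model parameter and the provider decided elsewhere. A minimal sketch (build_chain and the prompt are illustrative, not from this gist):

from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_template("Translate to French: {text}")

def build_chain(model):
    # The chain is identical no matter which provider backs `model`
    return prompt | model | StrOutputParser()

# Same chain, two different providers:
# build_chain(ChatOpenAI(model="gpt-4")).invoke({"text": "Hello"})
# build_chain(ChatTogether(model="openai/gpt-oss-20b")).invoke({"text": "Hello"})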

🔄 Complete Switching Examples

Chat Models: OpenAI → Together AI

Before:

# app/models.py
import os

from langchain_openai import ChatOpenAI

def get_chat_model(temperature=0):
    return ChatOpenAI(
        model="gpt-4",
        temperature=temperature,
        api_key=os.environ.get("OPENAI_API_KEY")
    )

After:

# app/models.py
import os

from langchain_together import ChatTogether

def get_chat_model(temperature=0):
    return ChatTogether(
        model="openai/gpt-oss-20b",
        temperature=temperature,
        together_api_key=os.environ.get("TOGETHER_API_KEY")
    )

Lines changed: 4 ✨

Embeddings: OpenAI → Together AI

Before:

from langchain_openai import OpenAIEmbeddings

embeddings = OpenAIEmbeddings(
    model="text-embedding-3-small"
)

After:

from langchain_together import TogetherEmbeddings

embeddings = TogetherEmbeddings(
    model="BAAI/bge-large-en-v1.5"
)

Lines changed: 3 ✨
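
Downstream calls are identical either way, because both classes implement LangChain's shared Embeddings interface (the query strings below are illustrative):

# Same interface for either provider:
vector = embeddings.embed_query("What is LangChain?")         # one text -> list[float]
vectors = embeddings.embed_documents(["doc one", "doc two"])  # batch -> list[list[float]]

One caveat the line count hides: the two models emit vectors of different dimensions, so any vector store built with the old embeddings must be re-indexed after the switch.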


🧠 The Mental Model

Your Application Code
        │
        ▼
┌──────────────────┐
│ LangChain        │  ← This interface never changes
│ Abstract Layer   │
└──────────────────┘
        │
        ▼
┌──────────────────┐
│ Provider Adapter │  ← Only this changes (import + constructor)
│ (ChatX class)    │
└──────────────────┘
        │
        ▼
┌──────────────────┐
│ Provider API     │  ← OpenAI, Together, Anthropic, etc.
└──────────────────┘
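
The abstract layer in the diagram has a concrete name: every ChatX class subclasses BaseChatModel from langchain_core, so provider-agnostic code can be typed against it. A minimal sketch (answer is an illustrative helper, not from this gist):

from langchain_core.language_models.chat_models import BaseChatModel

def answer(model: BaseChatModel, question: str) -> str:
    # Any provider adapter satisfies this annotation; the body never changes
    return model.invoke(question).content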

🚀 Multi-Provider Pattern (Advanced)

Want to support multiple providers with runtime switching?

# app/models.py
import importlib
import os
def get_chat_model(provider="together", temperature=0):
    """Factory pattern for multiple providers."""

    providers = {
        "openai": {
            "class": "langchain_openai.ChatOpenAI",
            "model": "gpt-4",
            "key_env": "OPENAI_API_KEY"
        },
        "together": {
            "class": "langchain_together.ChatTogether",
            "model": "openai/gpt-oss-20b",
            "key_env": "TOGETHER_API_KEY"
        },
        "anthropic": {
            "class": "langchain_anthropic.ChatAnthropic",
            "model": "claude-3-opus",
            "key_env": "ANTHROPIC_API_KEY"
        }
    }

    config = providers[provider]
    module_name, class_name = config["class"].rsplit(".", 1)
    model_class = getattr(importlib.import_module(module_name), class_name)

    return model_class(
        model=config["model"],
        temperature=temperature,
        api_key=os.environ.get(config["key_env"])
    )

# Usage
model = get_chat_model(provider="together")  # or "openai" or "anthropic"
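
Recent LangChain releases also ship a built-in version of this factory, init_chat_model, which handles the dynamic import for you. A minimal sketch, assuming the matching provider packages are installed:

from langchain.chat_models import init_chat_model

# The matching provider package (e.g. langchain-openai) must be installed
model = init_chat_model("gpt-4", model_provider="openai", temperature=0)
model = init_chat_model("openai/gpt-oss-20b", model_provider="together", temperature=0)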

📚 Common Provider Packages

Provider      Package                   Install Command
OpenAI        langchain-openai          pip install langchain-openai
Together      langchain-together        pip install langchain-together
Anthropic     langchain-anthropic       pip install langchain-anthropic
Google        langchain-google-genai    pip install langchain-google-genai
AWS Bedrock   langchain-aws             pip install langchain-aws
Cohere        langchain-cohere          pip install langchain-cohere
HuggingFace   langchain-huggingface     pip install langchain-huggingface

🎓 The Teaching Moment

When teaching provider switching, emphasize:

"LangChain's power is in the abstraction. You're not rewriting your application for each provider - you're just swapping the adapter."

This means:

  • Prototype with OpenAI (familiar, well-documented)
  • Deploy with open-source (privacy, cost control)
  • Switch anytime based on requirements (performance, cost, compliance)

The switching cost is essentially zero - it's just changing an import and a constructor.


⚡ Quick Reference Card

# The entire pattern in 4 lines:

# 1. Import
from langchain_X import ChatX

# 2. Instantiate
model = ChatX(model="model-name", x_api_key="...")

# 3. Use (identical for all providers)
response = model.invoke("Your prompt")

# 4. That's it. Really.

🔑 Key Takeaway

Switching LLM providers in LangChain is not a migration - it's a configuration change.

The actual code changes are so minimal that you could:

  • A/B test providers in production
  • Switch providers based on load
  • Use different providers for different tasks
  • Failover between providers (see the sketch below)

All without touching your business logic.
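
Failover in particular is nearly free, because every chat model is a Runnable. A minimal sketch using with_fallbacks (the model choices are illustrative):

from langchain_openai import ChatOpenAI
from langchain_anthropic import ChatAnthropic

primary = ChatOpenAI(model="gpt-4")
fallback = ChatAnthropic(model="claude-3-opus-20240229")

# If the primary call raises, the same request is retried against the fallback
model_with_failover = primary.with_fallbacks([fallback])
response = model_with_failover.invoke("Your prompt")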


💡 Why This Matters

  1. No Vendor Lock-in: Switch providers in minutes, not months
  2. Cost Optimization: Use expensive models only when needed
  3. Privacy Control: Use local/open-source models for sensitive data
  4. Performance Tuning: Choose the best model for each task
  5. Future-Proof: New providers just need a new adapter

The from langchain_X import ChatX pattern is your guarantee of flexibility.
