A unified interface that simplifies working with multiple LangChain chat model providers.
- Unified API: Single interface for multiple LangChain chat model providers
- Lazy Loading: Providers are only imported when needed
- Graceful Fallbacks: Missing dependencies don't break the entire package
- Provider Discovery: Easily list available providers
- Standardized Parameters: Consistent parameter handling across providers
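The lazy-loading and graceful-fallback behavior described above can be sketched roughly like this (an illustrative `importlib` pattern, not the package's actual internals; the registry contents are hypothetical):

```python
import importlib

# Hypothetical registry mapping provider names to (module, class) pairs.
_PROVIDERS = {
    "openai": ("langchain_openai", "ChatOpenAI"),
    "anthropic": ("langchain_anthropic", "ChatAnthropic"),
}

def get_provider_list():
    # Listing providers never imports them, so a missing extra
    # doesn't break discovery.
    return sorted(_PROVIDERS)

def load_provider(name):
    # The provider module is imported only when actually requested.
    module_name, class_name = _PROVIDERS[name]
    try:
        module = importlib.import_module(module_name)
    except ImportError as exc:
        raise ImportError(
            f"Install the '{name}' extra to use this provider"
        ) from exc
    return getattr(module, class_name)
```

With this shape, discovery stays cheap and dependency errors surface only at the point a provider is actually used.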
- OpenAI (`openai`) - `ChatOpenAI`
- Azure OpenAI (`azure_openai`) - `AzureChatOpenAI`
- AWS Bedrock (`bedrock`) - `ChatBedrock`
- Google Vertex AI (`vertex`) - `ChatVertexAI`
- Google Gemini (`gemini`) - `ChatGoogleGenerativeAI`
- Anthropic (`anthropic`) - `ChatAnthropic`
- Ollama (`ollama`) - `ChatOllama`
Install the base package:

```bash
uv add langchaingang
```

Install with specific provider support:

```bash
# OpenAI (and Azure OpenAI) support
uv add "langchaingang[openai]"

# AWS Bedrock support
uv add "langchaingang[aws]"

# Google Gemini and Vertex AI support
uv add "langchaingang[google]"

# Anthropic support
uv add "langchaingang[anthropic]"

# Ollama support
uv add "langchaingang[ollama]"

# Multiple providers
uv add "langchaingang[openai,anthropic,aws]"

# All providers
uv add "langchaingang[all]"
```

```python
import langchaingang

# List available providers
providers = langchaingang.get_provider_list()
print(providers)  # ['openai', 'anthropic', 'ollama', ...]

# Get a chat model
model = langchaingang.get_chat_model(
    provider_name="openai",
    model="gpt-4o-mini",
    api_key="your-api-key"
)

# Use the model
response = model.invoke("Hello, world!")
print(response.content)
```

OpenAI:

```python
model = langchaingang.get_chat_model(
    provider_name="openai",
    model="gpt-4o-mini",
    api_key="your-openai-key"
)
```

Azure OpenAI:

```python
model = langchaingang.get_chat_model(
    provider_name="azure_openai",
    model="gpt-4o-mini",
    azure_endpoint="https://your-resource.openai.azure.com/",
    api_key="your-azure-key",
    api_version="2024-02-01"
)
```

AWS Bedrock:

```python
model = langchaingang.get_chat_model(
    provider_name="bedrock",
    model="meta.llama3-2-70b-instruct-v1:0",  # Will be converted to model_id
    region_name="us-east-1"
)
```

Google Vertex AI:

```python
model = langchaingang.get_chat_model(
    provider_name="vertex",
    model="gemini-2.0-flash-001",  # Will be converted to model_name
    project="your-gcp-project"
)
```

Anthropic:

```python
model = langchaingang.get_chat_model(
    provider_name="anthropic",
    model="claude-sonnet-4-0",
    api_key="your-anthropic-key"
)
```

Ollama:

```python
model = langchaingang.get_chat_model(
    provider_name="ollama",
    model="llama3",
    base_url="http://localhost:11434"  # Optional, defaults to localhost:11434
)
```

LangChainGang automatically handles provider-specific parameter differences:
- Bedrock: the `model` parameter is converted to `model_id`
- Vertex AI: the `model` parameter is converted to `model_name`
- All others: the `model` parameter is passed through unchanged
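Conceptually, this conversion amounts to renaming one keyword argument per provider. A minimal sketch with a hypothetical mapping (not langchaingang's actual implementation):

```python
# Hypothetical mapping from provider name to the keyword that provider
# expects for the model identifier; providers not listed use "model" as-is.
MODEL_PARAM = {
    "bedrock": "model_id",
    "vertex": "model_name",
}

def normalize_kwargs(provider_name, **kwargs):
    """Rename the generic `model` kwarg to the provider-specific name."""
    target = MODEL_PARAM.get(provider_name, "model")
    if target != "model" and "model" in kwargs:
        kwargs[target] = kwargs.pop("model")
    return kwargs
```

So `normalize_kwargs("bedrock", model="...")` yields a `model_id` keyword, while OpenAI-style providers receive `model` unchanged.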
The package gracefully handles missing dependencies:

```python
# This won't fail even if langchain-openai isn't installed
providers = langchaingang.get_provider_list()

# This will raise ImportError if langchain-openai isn't installed
model = langchaingang.get_chat_model("openai", model="gpt-4o-mini")
```

Install development dependencies:
```bash
uv add "langchaingang[dev]"
```

Run tests:

```bash
pytest
```

Format code:

```bash
black langchaingang/
isort langchaingang/
```

Type checking:

```bash
mypy langchaingang/
```

MIT License - see LICENSE file for details.