LangChain IBM Integration
langchain-ibm is an integration package that connects LangChain with IBM watsonx.ai via the ibm-watsonx-ai SDK. It provides wrappers for chat models (ChatWatsonx), large language models (WatsonxLLM), embedding models (WatsonxEmbeddings), and rerankers (WatsonxRerank), letting developers build AI applications on IBM's foundation models within the LangChain framework. The package is actively maintained and released frequently.
Warnings
- breaking LangChain v1 introduced significant changes to package namespaces and import paths within the core LangChain library. If you are migrating from an older LangChain version, ensure your `langchain-core` dependency is compatible and update imports accordingly, as older tutorials may break.
- gotcha IBM watsonx.ai models require an API key and a project ID. Set these as environment variables (`WATSONX_APIKEY` and `WATSONX_PROJECT_ID`, the names the wrappers read) rather than hardcoding them. The `model_id` and `url` must also match your watsonx.ai service instance and region.
- gotcha Different watsonx.ai model types (chat, LLM, reranker) require specific parameter schemas (e.g., `TextChatParameters` for `ChatWatsonx`, `TextGenParameters` for `WatsonxLLM`, `RerankParameters` for `WatsonxRerank`). Using the wrong schema will lead to errors.
- breaking A serialization injection vulnerability (CVE-2025-68664) was found in LangChain's `dumps()` and `dumpd()` functions, allowing untrusted data with 'lc' keys to be deserialized as objects. This directly affects `langchain-core` versions and, by extension, applications using `langchain-ibm` that rely on these serialization methods.
- gotcha There is an open bug where `asyncio` calls may fail due to synchronous, connection-pooled HTTP clients in the underlying `ibm-watsonx-ai` SDK. This can lead to unexpected behavior or deadlocks in asynchronous applications.
Install
pip install -U langchain-ibm
Imports
- ChatWatsonx
from langchain_ibm import ChatWatsonx
- WatsonxLLM
from langchain_ibm import WatsonxLLM
- WatsonxEmbeddings
from langchain_ibm import WatsonxEmbeddings
- WatsonxRerank
from langchain_ibm import WatsonxRerank
- TextChatParameters
from ibm_watsonx_ai.foundation_models.schema import TextChatParameters
- TextGenParameters
from ibm_watsonx_ai.foundation_models.schema import TextGenParameters
Quickstart
import os
from getpass import getpass
from langchain_ibm import ChatWatsonx
from ibm_watsonx_ai.foundation_models.schema import TextChatParameters
# Set environment variables (replace with your actual values or retrieve from a secure source)
if not os.environ.get("WATSONX_APIKEY"): os.environ["WATSONX_APIKEY"] = getpass("Enter your IBM Cloud API Key: ")
if not os.environ.get("WATSONX_PROJECT_ID"): os.environ["WATSONX_PROJECT_ID"] = getpass("Enter your IBM watsonx.ai Project ID: ")
# Define model parameters
parameters = TextChatParameters(
temperature=0.7,
max_completion_tokens=500
)
# Initialize the ChatWatsonx model
# Replace with your actual model_id and url
model = ChatWatsonx(
model_id="ibm/granite-13b-chat-v2",
url="https://us-south.ml.cloud.ibm.com", # Example URL, choose based on your region
project_id=os.environ["WATSONX_PROJECT_ID"],
params=parameters,
)
# Invoke the model
response = model.invoke("What is the capital of France?")
print(response.content)
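Given the asyncio gotcha noted in the warnings (the underlying ibm-watsonx-ai HTTP client is synchronous), one conservative workaround is to push blocking `invoke` calls onto a worker thread with `asyncio.to_thread` instead of relying on `ainvoke`. A sketch of the pattern, shown with a stand-in callable so it runs without credentials; substitute `model.invoke` in real code:

```python
import asyncio

def blocking_invoke(prompt: str) -> str:
    # Stand-in for model.invoke(prompt): a synchronous, blocking call.
    return f"echo: {prompt}"

async def main() -> None:
    # asyncio.to_thread runs each blocking call in a worker thread,
    # keeping the event loop responsive despite the synchronous client.
    results = await asyncio.gather(
        asyncio.to_thread(blocking_invoke, "What is watsonx.ai?"),
        asyncio.to_thread(blocking_invoke, "What is LangChain?"),
    )
    for r in results:
        print(r)

asyncio.run(main())
```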