LlamaIndex Embeddings IBM
LlamaIndex integration for IBM watsonx.ai embeddings. Provides wrapper classes that generate embeddings using IBM foundation models via the watsonx.ai API. Current version 0.6.0.post1; supports Python >=3.11, <4.0. Packages are released frequently alongside llama-index core updates.
pip install llama-index-embeddings-ibm

Common errors

error ModuleNotFoundError: No module named 'ibm_watsonx_ai'
cause The required dependency `ibm-watsonx-ai` is not installed.
fix Run pip install ibm-watsonx-ai.

error ImportError: cannot import name 'WatsonxEmbeddings' from 'llama_index.embeddings.watsonx'
cause The import path changed after version 0.5.0.
fix Use from llama_index.embeddings.ibm import WatsonxEmbeddings.

error AuthenticationError: Bearer token is invalid.
cause Missing or incorrect API key or project ID.
fix Ensure WATSONX_APIKEY and WATSONX_PROJECT_ID are set correctly.

Warnings
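A pre-flight check can surface missing credentials before the client is constructed, so a typo fails fast instead of raising AuthenticationError at request time. The helper below is a hypothetical sketch (not part of the package), using only the environment variable names mentioned above:

```python
import os


def check_watsonx_env(env=os.environ):
    """Return the names of required watsonx credentials that are unset.

    Hypothetical helper: checks the two variables the Quickstart reads.
    An empty return value means both are present.
    """
    required = ("WATSONX_APIKEY", "WATSONX_PROJECT_ID")
    return [name for name in required if not env.get(name)]


# With only the API key set, the project ID is reported as missing:
print(check_watsonx_env({"WATSONX_APIKEY": "xxx"}))  # ['WATSONX_PROJECT_ID']
```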
breaking In version 0.5.0, the import path changed from `llama_index.embeddings.watsonx` to `llama_index.embeddings.ibm`. Update all imports.
fix Use `from llama_index.embeddings.ibm import WatsonxEmbeddings`.

deprecated Passing `credentials` as a dict is deprecated. Use explicit parameters (`url`, `apikey`, `project_id`) instead.
fix Remove the `credentials` argument and set individual parameters.

gotcha The `model_id` must include the provider prefix (e.g., 'ibm/slate-125m-english-rtrvr'). Omitting 'ibm/' may cause a silent fallback or an error.
fix Always prefix the model name with 'ibm/'.
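The prefix gotcha can be caught early with a small guard. This is a hypothetical helper (not part of the package) that rejects a `model_id` lacking the 'ibm/' provider prefix before it is passed to the constructor:

```python
def ensure_provider_prefix(model_id: str, provider: str = "ibm") -> str:
    """Raise if model_id lacks the expected provider prefix; else return it.

    Hypothetical guard for the gotcha above: watsonx model IDs carry a
    provider prefix such as 'ibm/'.
    """
    prefix = provider + "/"
    if not model_id.startswith(prefix):
        raise ValueError(
            f"model_id {model_id!r} is missing the {prefix!r} provider prefix"
        )
    return model_id


ensure_provider_prefix("ibm/slate-125m-english-rtrvr")  # passes unchanged
```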
Imports

- WatsonxEmbeddings
  wrong: from llama_index.embeddings.watsonx import WatsonxEmbeddings
  correct: from llama_index.embeddings.ibm import WatsonxEmbeddings
Quickstart
import os

from llama_index.embeddings.ibm import WatsonxEmbeddings
embedding = WatsonxEmbeddings(
model_id="ibm/slate-125m-english-rtrvr",
url=os.environ.get("WATSONX_URL", "https://us-south.ml.cloud.ibm.com"),
apikey=os.environ.get("WATSONX_APIKEY", ""),
project_id=os.environ.get("WATSONX_PROJECT_ID", ""),
)
embeddings = embedding.get_text_embedding("Hello World!")
print(embeddings[:5])
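The value returned by get_text_embedding is a plain list of floats, so standard vector math applies directly. A minimal stdlib-only cosine similarity for comparing two such embedding vectors (a common follow-up step, not specific to this package):

```python
import math


def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors of floats."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)


# Identical directions score 1.0; orthogonal directions score 0.0.
print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))  # 1.0
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))  # 0.0
```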