{"id":8287,"library":"llama-index-embeddings-bedrock","title":"LlamaIndex Bedrock Embeddings Integration","description":"The `llama-index-embeddings-bedrock` library integrates Amazon Bedrock embedding models with the LlamaIndex framework. It allows developers to leverage AWS Bedrock models such as Amazon Titan and Cohere for generating text embeddings. The library is actively maintained, with version 0.8.0 released on March 12, 2026, and receives frequent updates to align with LlamaIndex core and Bedrock API changes.","status":"active","version":"0.8.0","language":"en","source_language":"en","source_url":"https://llamahub.ai/l/llama-index-embeddings-bedrock","tags":["LlamaIndex","embeddings","AWS Bedrock","LLM","AI","cloud","RAG"],"install":[{"cmd":"pip install llama-index-embeddings-bedrock","lang":"bash","label":"Install latest version"}],"dependencies":[{"reason":"Required for core LlamaIndex functionalities and types.","package":"llama-index-core","optional":false},{"reason":"AWS SDK for Python, necessary to interact with Amazon Bedrock service.","package":"boto3","optional":false}],"imports":[{"note":"This is the primary class for Bedrock embedding models.","symbol":"BedrockEmbedding","correct":"from llama_index.embeddings.bedrock import BedrockEmbedding"}],"quickstart":{"code":"import os\nfrom llama_index.embeddings.bedrock import BedrockEmbedding\n\n# Configure AWS credentials and region via environment variables or explicitly\n# os.environ['AWS_ACCESS_KEY_ID'] = 'YOUR_ACCESS_KEY'\n# os.environ['AWS_SECRET_ACCESS_KEY'] = 'YOUR_SECRET_KEY'\n# os.environ['AWS_DEFAULT_REGION'] = 'us-east-1'\n\n# Initialize the embedding model\nembed_model = BedrockEmbedding(\n    model_name=\"cohere.embed-english-v3\", # Example model, choose from supported models\n    region_name=os.environ.get('AWS_DEFAULT_REGION', 'us-east-1'),\n    # Optionally, specify credentials directly or via profile_name\n    
aws_access_key_id=os.environ.get('AWS_ACCESS_KEY_ID'),\n    aws_secret_access_key=os.environ.get('AWS_SECRET_ACCESS_KEY'),\n    # profile_name='my-aws-profile'\n)\n\n# Get a single embedding\ntext = \"Hello, world! This is a test document.\"\nembedding = embed_model.get_text_embedding(text)\nprint(f\"Embedding length: {len(embedding)}\")\nprint(f\"First 5 embedding values: {embedding[:5]}\")\n\n# List supported models\n# supported_models = BedrockEmbedding.list_supported_models()\n# print(\"Supported models:\", supported_models)\n","lang":"python","description":"This quickstart demonstrates how to initialize the `BedrockEmbedding` class, configure AWS credentials and region, and generate an embedding for a given text. It reads credentials from environment variables rather than hard-coding them."},"warnings":[{"fix":"Ensure that text chunks passed to the embedding model are within the model's maximum context length. Adjust LlamaIndex's chunking strategy (e.g., `chunk_size` and `chunk_overlap`) and consider model-specific limitations.","message":"Cohere embedding models on Bedrock have strict input token limits (e.g., 512 tokens or ~2048 characters for `cohere.embed-english-v3`). Exceeding this limit results in a `ValidationException` ('Input is too long for requested model'). A small LlamaIndex `chunk_size` does not guarantee compliance: LlamaIndex counts tokens with its own tokenizer, which may not match the model's, so an individual chunk can still exceed the limit.","severity":"gotcha","affected_versions":"All versions"},{"fix":"Always ensure the `model_name` passed to `BedrockEmbedding` accurately reflects the model configured in your AWS Application Inference Profile.","message":"When using an `application_inference_profile_arn` with `BedrockEmbedding`, the `model_name` argument *must* still match the underlying model referenced by the profile. The integration does not validate this, and mismatched values lead to undefined behavior or errors.","severity":"gotcha","affected_versions":"All versions"},{"fix":"Set AWS credentials as environment variables, provide them directly to `BedrockEmbedding` during initialization, or configure an AWS profile, and always specify the region (via `region_name` or the `AWS_DEFAULT_REGION` environment variable).","message":"Failing to configure AWS credentials (e.g., `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`) or a region (the `region_name` argument, or the `AWS_DEFAULT_REGION` environment variable read by boto3) during `BedrockEmbedding` initialization will lead to `botocore.exceptions.NoRegionError` or authentication failures.","severity":"gotcha","affected_versions":"All versions"},{"fix":"Ensure that the embedding model used for generating vectors matches the expected dimension of your vector store index. Re-index your data if you switch embedding models.","message":"Mixing embedding models with different output vector dimensions (e.g., default OpenAI `text-embedding-ada-002` (1536 dims) with AWS Titan (1024 dims)) when using a vector store can lead to `Vector dimension does not match the dimension of the index` errors.","severity":"gotcha","affected_versions":"All versions (when integrating with vector stores)"}],"env_vars":null,"last_verified":"2026-04-16T00:00:00.000Z","next_check":"2026-07-15T00:00:00.000Z","problems":[{"fix":"Ensure `region_name` is provided to `BedrockEmbedding` (e.g., `region_name=\"us-east-1\"`) or set the `AWS_DEFAULT_REGION` environment variable. Also check other AWS credential configurations.","cause":"The AWS region was not specified in the `BedrockEmbedding` constructor or via environment variables (`AWS_DEFAULT_REGION`).","error":"botocore.exceptions.NoRegionError: You must specify a region."},{"fix":"Reduce the size of the text chunks being embedded. For LlamaIndex, adjust the `chunk_size` and `chunk_overlap` settings on `Settings` (or `ServiceContext` in older versions) to ensure chunks adhere to the model's limits.","cause":"The input text provided to the Bedrock embedding model (especially Cohere models) exceeded its maximum token or character limit.","error":"An error occurred (ValidationException) when calling the InvokeModel operation: Input is too long for requested model."},{"fix":"Ensure consistency in embedding model dimensions. If you intend to use AWS Bedrock embeddings, configure LlamaIndex to use `BedrockEmbedding` for all indexing and querying operations, and re-index your data if the dimensions are mismatched. Explicitly set the embedding model in LlamaIndex's global settings or `ServiceContext`.","cause":"The embedding model used to generate vectors (e.g., OpenAI's ADA with 1536 dimensions by default in LlamaIndex) does not match the dimensionality of the vector store index (e.g., 1024 for AWS Titan).","error":"Vector dimension 1536 does not match the dimension of the index 1024"},{"fix":"Upgrade `llama-index-embeddings-bedrock` to the latest version, which typically has broader `llama-index-core` compatibility. If issues persist, try upgrading `llama-index-core` to its latest version or, as a last resort, downgrade `llama-index-core` to a version compatible with your `llama-index-embeddings-bedrock` (e.g., `pip install llama-index-core==0.10.0`).","cause":"Specific versions of `llama-index-embeddings-bedrock` might have strict dependencies on `llama-index-core`, leading to conflicts if `llama-index-core` is already installed at an incompatible version.","error":"Unable to install llama-index-embeddings-bedrock (due to `llama-index-core` version conflict)"}]}