{"id":9083,"library":"llama-index-llms-bedrock","title":"LlamaIndex LLMs Bedrock Integration","description":"This library provides an integration for connecting LlamaIndex with AWS Bedrock Large Language Models (LLMs). It allows users to leverage various Bedrock models for text completion and chat functionalities within their LlamaIndex applications, including streaming responses. The library is part of the broader LlamaIndex ecosystem, which maintains an active development pace with frequent updates and a move towards modular integrations.","status":"active","version":"0.5.0","language":"en","source_language":"en","source_url":"https://github.com/run-llama/llama_index/tree/main/llama-index-integrations/llms/llama-index-llms-bedrock","tags":["LlamaIndex","LLM","AWS","Bedrock","AI","NLP"],"install":[{"cmd":"pip install llama-index-llms-bedrock","lang":"bash","label":"Install only Bedrock LLM integration"},{"cmd":"pip install llama-index llama-index-llms-bedrock boto3","lang":"bash","label":"Install with LlamaIndex core and AWS SDK"}],"dependencies":[{"reason":"Provides core LlamaIndex abstractions and functionalities required by the LLM integration.","package":"llama-index-core","optional":false},{"reason":"AWS SDK for Python, essential for interacting with AWS Bedrock services.","package":"boto3","optional":false},{"reason":"Asynchronous AWS SDK client, likely used internally for async operations (often a dependency of llama-index-llms-bedrock-converse).","package":"aioboto3","optional":true}],"imports":[{"symbol":"Bedrock","correct":"from llama_index.llms.bedrock import Bedrock"}],"quickstart":{"code":"import os\nfrom llama_index.llms.bedrock import Bedrock\nfrom llama_index.core.llms import ChatMessage\n\n# Configure AWS credentials via environment variables for security\n# Ensure AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_REGION are set\n# Or configure an AWS CLI profile and use the profile_name parameter\n\n# Initialize the Bedrock LLM\nllm = Bedrock(\n    model=os.environ.get('BEDROCK_MODEL_ID', 'amazon.titan-text-express-v1'),\n    region_name=os.environ.get('AWS_REGION', 'us-east-1'),\n    temperature=0.7,\n    max_tokens=512\n)\n\n# Example 1: Simple completion\nprint(\"\\n--- Completion Example ---\")\nresponse_complete = llm.complete(\"What is the capital of France?\")\nprint(response_complete)\n\n# Example 2: Chat with messages\nprint(\"\\n--- Chat Example ---\")\nmessages = [\n    ChatMessage(role=\"system\", content=\"You are a helpful AI assistant.\"),\n    ChatMessage(role=\"user\", content=\"Tell me a short story about a brave knight.\"),\n]\nresponse_chat = llm.chat(messages)\nprint(response_chat.message.content)\n\n# Example 3: Streaming completion\nprint(\"\\n--- Streaming Completion Example ---\")\nprint(\"Streaming response: \", end=\"\")\nstream_resp = llm.stream_complete(\"Explain the concept of AI in simple terms.\")\nfor r in stream_resp:\n    print(r.delta, end=\"\")\nprint(\"\\n\")","lang":"python","description":"This quickstart demonstrates how to initialize the `Bedrock` LLM, perform a simple text completion, engage in a chat conversation using `ChatMessage` objects, and stream a completion response. Authentication is handled via environment variables for AWS credentials (or a configured AWS CLI profile)."},"warnings":[{"fix":"Migrate your code to use `llama-index-llms-bedrock-converse`. Install it with `pip install llama-index-llms-bedrock-converse` and import `BedrockConverse` from `llama_index.llms.bedrock_converse`.","message":"The `llama-index-llms-bedrock` package is officially deprecated in favor of `llama-index-llms-bedrock-converse`. The `bedrock-converse` package utilizes the newer Bedrock Converse API, which is the recommended way to interact with Bedrock LLMs and provides enhanced capabilities like native function calling.","severity":"deprecated","affected_versions":"0.5.0 and earlier"},{"fix":"Explicitly set `region_name` when initializing `Bedrock` (e.g., `region_name='us-east-1'`) or ensure the `AWS_REGION` environment variable is correctly set.","message":"AWS region configuration is critical. Ensure the `region_name` parameter passed to `Bedrock` matches the AWS region where your Bedrock model access is granted and where you want to invoke the model. Mismatched regions or an unset region will lead to `NoRegionError` or invocation failures.","severity":"gotcha","affected_versions":"All versions"},{"fix":"Verify that the IAM role or user credentials used have the necessary `bedrock:InvokeModel` permissions for the specific foundation model and region you are trying to access. Check for any explicit deny policies.","message":"Incorrect IAM permissions on your AWS role or user can lead to a `BedrockException` indicating 'User is not authorized to perform: bedrock:InvokeModel'.","severity":"gotcha","affected_versions":"All versions"},{"fix":"Use `llm.complete(prompt_string)` for single-turn text completions and `llm.chat([ChatMessage(role='user', content='...')])` for multi-turn conversations or when expecting a chat-like interface. Check model-specific documentation for any unique prompt requirements.","message":"For Anthropic Claude models (and potentially others) within `llama-index-llms-bedrock`, there can be prompt formatting differences or type mismatches when using `llm.complete()` versus `llm.chat()`. Specifically, `complete` expects a string prompt, while `chat` expects a list of `ChatMessage` objects. Passing a list of messages to `complete` (or a bare string to `chat` without proper wrapping) can result in a `TypeError` or unexpected behavior.","severity":"gotcha","affected_versions":"0.5.0 and earlier"},{"fix":"This is a known issue. Consider using `llama-index-llms-bedrock-converse`, which has seen fixes for similar parameter-passing issues. Alternatively, implement guardrails at the AWS service level, or verify that your `guardrail_config` is being correctly translated and applied in your specific `llama-index` version (check the relevant GitHub issues for updates).","message":"Passing `guardrail_config` parameters to `Bedrock.complete` might not be correctly processed due to an underlying issue where `kwargs` can be lost. This means Bedrock Guardrails configured this way might not apply to your model invocations.","severity":"gotcha","affected_versions":"0.5.0 and earlier"}],"env_vars":null,"last_verified":"2026-04-16T00:00:00.000Z","next_check":"2026-07-15T00:00:00.000Z","problems":[{"fix":"When initializing `Bedrock`, provide the `region_name` argument (e.g., `Bedrock(..., region_name='us-west-2')`) or set the `AWS_REGION` environment variable.","cause":"The AWS region was not specified during Bedrock LLM initialization or via environment variables (AWS_REGION).","error":"botocore.exceptions.NoRegionError: You must specify a region."},{"fix":"Verify the exact model ID string against the AWS Bedrock documentation (e.g., 'mistral.mixtral-8x7b-instruct-v0:1' or 'amazon.titan-text-express-v1'). Also, ensure the model is enabled and available in your AWS account and the specified `region_name`.","cause":"The model ID string format or the model itself is not recognized by the underlying Bedrock integration layer (e.g., LiteLLM if used indirectly), or the model is not accessible in the specified region.","error":"ValueError: Provider bedrock/mistral for model bedrock/mistral.mixtral-8x7b-instruct-v0:1 is not supported"},{"fix":"Review your AWS IAM policies and ensure the user/role has permission to invoke Bedrock models, specifically `bedrock:InvokeModel` on the target resource (e.g., `arn:aws:bedrock:us-west-2::foundation-model/mistral.mixtral-8x7b-instruct-v0:1`). Also, check for any Service Control Policies (SCPs) that might be explicitly denying access.","cause":"The AWS IAM user or role attempting to invoke the Bedrock model lacks the necessary `bedrock:InvokeModel` permission for the specific model and region.","error":"litellm.APIConnectionError: BedrockException - {\"Message\":\"User: arn:aws:sts::xxxxxxxxxxxxxxxx is not authorized to perform: bedrock:InvokeModel on resource: arn:aws:bedrock:us-west-2::foundation-model/mistral.mixtral-8x7b-instruct-v0:1 with an explicit deny in a service control policy\"}"},{"fix":"For `llm.complete(prompt_string)`, `prompt_string` must be a `str`. For `llm.chat(messages)`, `messages` must be a `list` of `ChatMessage` objects. Convert your input to the type expected by the method being called.","cause":"Attempting to use `llm.complete()` with a list of `ChatMessage` objects, or `llm.chat()` with a raw string prompt, when the underlying model or `llama-index` wrapper expects a different type.","error":"TypeError: 'prompt' parameter must be a string for completion, not a list of messages (or vice-versa for chat)"}]}