LlamaIndex LLMs Bedrock Integration

0.5.0 · active · verified Thu Apr 16

This library integrates LlamaIndex with large language models (LLMs) hosted on Amazon Bedrock. It lets users call various Bedrock models for text completion and chat, including streaming responses, from within their LlamaIndex applications. The library is part of the broader LlamaIndex ecosystem, which is actively developed and has moved toward modular, per-provider integration packages.

Install

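The integration ships as its own modular package. Assuming installation from PyPI (package name follows the modular integration naming scheme; verify against PyPI if your environment differs):

```shell
# Install the Bedrock LLM integration; this pulls in llama-index-core
# and boto3 as dependencies.
pip install llama-index-llms-bedrock
```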
Quickstart

This quickstart demonstrates how to initialize the `Bedrock` LLM, perform a simple text completion, engage in a chat conversation using `ChatMessage` objects, and stream a completion response. Authentication is handled via environment variables for AWS credentials (or a configured AWS CLI profile).
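Before running the examples, it can help to confirm that the expected AWS environment variables are actually set. The helper below is a hypothetical sketch (not part of the library; skip it if you authenticate via an AWS CLI profile instead) that reports which of the credentials this quickstart assumes are missing:

```python
import os

# Variables the quickstart assumes are set (hypothetical check,
# not part of llama-index; adjust for your credential setup).
REQUIRED_VARS = ("AWS_ACCESS_KEY_ID", "AWS_SECRET_ACCESS_KEY", "AWS_REGION")

def missing_aws_vars(env=None):
    """Return the names of required AWS variables that are unset or empty."""
    env = os.environ if env is None else env
    return [name for name in REQUIRED_VARS if not env.get(name)]

if __name__ == "__main__":
    missing = missing_aws_vars()
    if missing:
        print("Missing AWS settings:", ", ".join(missing))
    else:
        print("AWS environment looks complete.")
```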

import os
from llama_index.llms.bedrock import Bedrock
from llama_index.core.llms import ChatMessage

# Configure AWS credentials via environment variables for security
# Ensure AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_REGION are set
# Or configure AWS CLI profile and use profile_name parameter

# Initialize the Bedrock LLM
llm = Bedrock(
    model=os.environ.get("BEDROCK_MODEL_ID", "amazon.titan-text-express-v1"),
    region_name=os.environ.get("AWS_REGION", "us-east-1"),
    temperature=0.7,
    max_tokens=512,
)

# Example 1: Simple completion
print("\n--- Completion Example ---")
response_complete = llm.complete("What is the capital of France?")
print(response_complete)

# Example 2: Chat with messages
print("\n--- Chat Example ---")
messages = [
    ChatMessage(role="system", content="You are a helpful AI assistant."),
    ChatMessage(role="user", content="Tell me a short story about a brave knight."),
]
response_chat = llm.chat(messages)
print(response_chat.message.content)

# Example 3: Streaming completion
print("\n--- Streaming Completion Example ---")
print("Streaming response: ", end="")
stream_resp = llm.stream_complete("Explain the concept of AI in simple terms.")
for r in stream_resp:
    print(r.delta, end="")
print("\n")
