OpenInference Bedrock Instrumentation
The `openinference-instrumentation-bedrock` library provides automatic instrumentation for AWS Bedrock calls made through the `boto3` client, enabling OpenTelemetry-compliant observability for applications that use Bedrock foundation models. It captures detailed traces of LLM invocations, which can be exported to any OpenTelemetry-compatible backend, such as Arize AI's Phoenix platform. The library is actively maintained by Arize AI.
Common errors
- Spans are created, but LLM attributes (prompts, responses, model info, token usage) are missing for Bedrock calls.
  Cause: The calls are made with `aioboto3` (the async client). The current instrumentation monkey-patches `boto3` internals and does not affect `aioboto3` clients.
  Fix: Switch to synchronous `boto3` client calls, or manually create OpenTelemetry spans around `aioboto3` calls and explicitly set OpenInference semantic attributes such as `input.value`, `output.value`, and `llm.model_name`.
- No traces are generated for Bedrock `invoke_model` or `converse` calls.
  Cause: `BedrockInstrumentor().instrument()` was called *after* the `boto3` Bedrock client was initialized, or the OpenTelemetry `TracerProvider` was not properly configured and set globally.
  Fix: Call `BedrockInstrumentor().instrument()` before creating any `boto3.client('bedrock-runtime')` instances, and ensure `opentelemetry.trace.set_tracer_provider(your_provider)` is called with a correctly configured `TracerProvider` and `SpanProcessor`.
- Traces are generated, but they never reach the OpenTelemetry collector or observability platform (e.g., Phoenix, Grafana).
  Cause: The `TracerProvider` is configured without an appropriate `SpanExporter` (e.g., `OTLPSpanExporter` for remote collectors), or the exporter is misconfigured (e.g., wrong endpoint).
  Fix: Ensure your `TracerProvider` has a `SpanProcessor` that wraps an `OTLPSpanExporter` (or another exporter suited to your backend), and that its configuration (e.g., `endpoint`, `headers`) matches your observability platform.
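For the last case, a minimal provider wired to an OTLP collector might look like the following sketch; the endpoint and authorization header are placeholder values for whatever your backend actually requires:

```python
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

# Placeholder endpoint and header values -- substitute your collector's
# actual address and credentials.
exporter = OTLPSpanExporter(
    endpoint="http://localhost:4318/v1/traces",
    headers={"authorization": "Bearer YOUR_API_KEY"},
)

provider = TracerProvider(resource=Resource.create({"service.name": "my-bedrock-app"}))
# BatchSpanProcessor buffers spans and exports them in the background.
provider.add_span_processor(BatchSpanProcessor(exporter))
trace.set_tracer_provider(provider)
```

If spans still do not arrive, a `ConsoleSpanExporter` alongside the OTLP exporter is a quick way to confirm whether spans are being generated at all.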
Warnings
- Tracing of LLM-specific metadata (prompts, responses, token usage) is not fully supported for asynchronous Bedrock calls made with `aioboto3`. Spans may still be generated, but they often lack rich LLM attributes.
- When using Meta models (e.g., Llama 3) on Amazon Bedrock, outputs might not be traced through the `invoke_model` API. Use the `converse` API with these models to ensure full tracing.
- The `BedrockInstrumentor().instrument()` call must occur *before* any `boto3.client('bedrock-runtime')` instances are created. Clients initialized prior to instrumentation will not be traced.
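As an illustrative sketch of the `converse` route for Meta models (the model ID below is a placeholder for whichever Llama variant is enabled in your account), note that `converse` takes a structured message list rather than a raw JSON body:

```python
# Placeholder model ID -- substitute the Llama model enabled in your account.
MODEL_ID = "meta.llama3-8b-instruct-v1:0"

# converse() accepts role/content messages instead of a serialized JSON body.
messages = [
    {"role": "user", "content": [{"text": "What is the capital of France?"}]}
]

def ask(client, prompt_messages):
    # client is a boto3 'bedrock-runtime' client created after instrumentation.
    response = client.converse(
        modelId=MODEL_ID,
        messages=prompt_messages,
        inferenceConfig={"maxTokens": 100, "temperature": 0.5},
    )
    return response["output"]["message"]["content"][0]["text"]

# Usage (requires AWS credentials and Bedrock model access):
# import boto3
# client = boto3.client("bedrock-runtime", region_name="us-east-1")
# print(ask(client, messages))
```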
Install
pip install openinference-instrumentation-bedrock boto3
Imports
- BedrockInstrumentor
from openinference.instrumentation.bedrock import BedrockInstrumentor
Quickstart
import json
import os

import boto3
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import SimpleSpanProcessor, ConsoleSpanExporter
from openinference.instrumentation.bedrock import BedrockInstrumentor

# 1. Configure the OpenTelemetry tracer provider
resource = Resource.create({"service.name": "my-bedrock-app"})
tracer_provider = TracerProvider(resource=resource)
tracer_provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(tracer_provider)

# 2. Instrument the Bedrock client
BedrockInstrumentor().instrument()

# 3. Create a boto3 client (must be after instrumentation)
# Ensure AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_REGION_NAME are set
# in the environment, or configure boto3 credentials through any standard source.
bedrock_runtime = boto3.client(
    service_name='bedrock-runtime',
    region_name=os.environ.get('AWS_REGION_NAME', 'us-east-1'),
    aws_access_key_id=os.environ.get('AWS_ACCESS_KEY_ID'),
    aws_secret_access_key=os.environ.get('AWS_SECRET_ACCESS_KEY'),
)

# 4. Invoke a Bedrock model
model_id = 'anthropic.claude-instant-v1'
body = {
    'prompt': '\n\nHuman: What is the capital of France?\n\nAssistant:',
    'max_tokens_to_sample': 100,
    'temperature': 0.5,
}

try:
    # The request body must be a JSON string: use json.dumps, not str(body),
    # which would produce a single-quoted Python repr that Bedrock rejects.
    response = bedrock_runtime.invoke_model(
        body=json.dumps(body),
        modelId=model_id,
        contentType='application/json',
        accept='application/json',
    )
    response_body = response['body'].read().decode('utf-8')
    print(f'Model Response: {response_body}')
except Exception as e:
    print(f'Error invoking model: {e}')
    print('Ensure your AWS credentials are configured and Bedrock access is granted.')

# Spans are printed to the console by ConsoleSpanExporter
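A note on the request body: `invoke_model` expects a JSON string, so the dict must be serialized with `json.dumps` rather than `str`. A quick stdlib check shows why `str(body)` does not work:

```python
import json

body = {"prompt": "Human: Hi Assistant:", "max_tokens_to_sample": 100}

# str() yields a Python repr with single quotes -- not valid JSON.
as_repr = str(body)
# json.dumps() yields standards-compliant JSON with double quotes.
as_json = json.dumps(body)

# Only the json.dumps output survives a round trip through a JSON parser.
json.loads(as_json)
try:
    json.loads(as_repr)
except json.JSONDecodeError:
    print("str(body) is not valid JSON")
```

Bedrock parses the body server-side as JSON, so the single-quoted repr is rejected as malformed input.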