{"id":6750,"library":"openinference-instrumentation-google-genai","title":"OpenInference Google GenAI Instrumentation","description":"A Python auto-instrumentation library that traces calls made through the Google GenAI SDK. It emits OpenTelemetry-compatible traces, providing observability for generative AI applications. These traces can be sent to any OpenTelemetry collector, such as Arize Phoenix, for analysis and visualization. The library is actively maintained with frequent updates.","status":"active","version":"0.1.15","language":"en","source_language":"en","source_url":"https://github.com/Arize-ai/openinference/tree/main/python/instrumentation/openinference-instrumentation-google-genai","tags":["observability","opentelemetry","google-genai","ai","llm","instrumentation","tracing"],"install":[{"cmd":"pip install openinference-instrumentation-google-genai google-genai arize-otel","lang":"bash","label":"With Arize OpenTelemetry Helper"},{"cmd":"pip install openinference-instrumentation-google-genai google-genai","lang":"bash","label":"Minimal Install"}],"dependencies":[{"reason":"This instrumentation library provides tracing for the Google GenAI SDK.","package":"google-genai","optional":false},{"reason":"Provides OpenTelemetry setup utilities for sending traces to Arize/Phoenix, commonly used with OpenInference instrumentations.","package":"arize-otel","optional":true},{"reason":"A local AI observability and evaluation platform that can receive and display OpenTelemetry traces.","package":"arize-phoenix","optional":true}],"imports":[{"symbol":"GoogleGenAIInstrumentor","correct":"from openinference.instrumentation.google_genai import GoogleGenAIInstrumentor"}],"quickstart":{"code":"import os\n\nfrom arize.otel import register\nfrom google import genai\nfrom openinference.instrumentation.google_genai import GoogleGenAIInstrumentor\n\n# Set your Google GenAI API key as an environment variable before running, e.g.:\n# os.environ[\"GEMINI_API_KEY\"] = \"your-gemini-api-key\"\nif not os.environ.get(\"GEMINI_API_KEY\"):\n    raise SystemExit(\"GEMINI_API_KEY environment variable is not set.\")\n\n# Set up OpenTelemetry via Arize's convenience function.\n# Replace space_id, api_key, and project_name with your own values if using Arize.\ntracer_provider = register(\n    space_id=os.environ.get(\"ARIZE_SPACE_ID\", \"YOUR_ARIZE_SPACE_ID\"),\n    api_key=os.environ.get(\"ARIZE_API_KEY\", \"YOUR_ARIZE_API_KEY\"),\n    project_name=\"my-genai-app\",\n)\n\n# Instrument the Google GenAI SDK before creating any clients\nGoogleGenAIInstrumentor().instrument(tracer_provider=tracer_provider)\n\ndef send_message_multi_turn() -> tuple[str, str]:\n    client = genai.Client(api_key=os.environ[\"GEMINI_API_KEY\"])\n    chat = client.chats.create(model=\"gemini-1.5-flash-001\")\n    response1 = chat.send_message(\"What is the capital of France?\")\n    response2 = chat.send_message(\"Why is the sky blue?\")\n    return response1.text or \"\", response2.text or \"\"\n\nif __name__ == \"__main__\":\n    print(\"Starting GenAI chat...\")\n    resp1_text, resp2_text = send_message_multi_turn()\n    print(f\"Response 1: {resp1_text}\")\n    print(f\"Response 2: {resp2_text}\")\n    print(\"Traces should now be available in your configured OpenTelemetry collector (e.g., Arize Phoenix).\")","lang":"python","description":"This quickstart installs and configures `openinference-instrumentation-google-genai` to trace a simple multi-turn chat with the Google GenAI SDK. It uses `arize-otel` to set up the OpenTelemetry tracer provider, which sends traces to a collector such as Arize Phoenix. 
Remember to set `GEMINI_API_KEY` and your Arize credentials as environment variables."},"warnings":[{"fix":"Manage chat history length manually to stay within API limits: summarize or truncate older messages before sending new ones, or start new chat sessions for long conversations.","message":"The Google GenAI SDK, which this library instruments, can raise `google.api_core.exceptions.InternalServerError` when chat history grows too large, because the API does not always handle internal quota limits gracefully. This is a limitation of the underlying Google GenAI API itself, not of the instrumentation, but it will surface in your traced applications.","severity":"gotcha","affected_versions":"All versions"},{"fix":"Check the official OpenInference GitHub repository for newer releases with improved support. For critical agentic workflows, manual instrumentation may be necessary to capture full detail.","message":"As of version 0.1.15, the instrumentation may not fully trace advanced features such as complex tool-calling scenarios (e.g., `tool_use_prompt_token_count` is not included in `llm.token_count.prompt`) or the newer Google GenAI Interactions API. Support for these features is under active development.","severity":"gotcha","affected_versions":"0.1.15 and earlier"},{"fix":"Review the official OpenInference documentation and GitHub changelog for breaking changes before upgrading to a new major or minor version.","message":"The OpenInference project and its instrumentations, including the one for Google GenAI, are evolving rapidly. While stable for core use cases, updates to the underlying Google GenAI SDK or to the OpenInference semantic conventions could introduce breaking changes or require adjustments to your tracing configuration.","severity":"gotcha","affected_versions":"All versions, due to active development"}],"env_vars":null,"last_verified":"2026-04-15T00:00:00.000Z","next_check":"2026-07-14T00:00:00.000Z","problems":[]}