{"id":7356,"library":"langwatch","title":"LangWatch Python SDK","description":"LangWatch is a Python SDK for monitoring, evaluating, and testing LLM-powered applications and AI agents. It provides end-to-end observability by capturing traces and spans for LLM calls, RAG retrievals, and other pipeline steps, helping developers debug, prevent regressions, and optimize their AI systems. The current version is 0.18.0, and the library is under active development with regular releases.","status":"active","version":"0.18.0","language":"en","source_language":"en","source_url":"https://github.com/langwatch/langwatch","tags":["LLM","observability","monitoring","AI","agent testing","evaluations","OpenTelemetry","LLMops"],"install":[{"cmd":"pip install langwatch","lang":"bash","label":"Install latest version"}],"dependencies":[{"reason":"Commonly used with LangWatch for LLM instrumentation, as shown in quickstart examples.","package":"openai","optional":true},{"reason":"Used for automatic instrumentation of LangChain applications.","package":"langchain","optional":true},{"reason":"A separate but related library for agent testing simulations.","package":"langwatch-scenario","optional":true}],"imports":[{"symbol":"langwatch","correct":"import langwatch"},{"note":"Initializes the LangWatch client.","symbol":"setup","correct":"langwatch.setup()"},{"note":"Decorator to capture an end-to-end operation as a trace.","symbol":"trace","correct":"@langwatch.trace()"},{"note":"Decorator to instrument specific parts of a pipeline within a trace.","symbol":"span","correct":"@langwatch.span()"},{"note":"Instrumentors are typically found in the `langwatch.instrumentors` submodule.","wrong":"from langwatch import OpenAIInstrumentor","symbol":"OpenAIInstrumentor","correct":"from langwatch.instrumentors import OpenAIInstrumentor"},{"note":"LangChain integration is via the `langwatch.langchain` submodule and uses a context manager or callback.","wrong":"from langwatch import LangChainTracer","symbol":"LangChainTracer","correct":"import langwatch.langchain\nlangwatch_callback = langwatch.langchain.LangChainTracer()"}],"quickstart":{"code":"import asyncio\nimport os\n\nimport langwatch\nfrom langwatch.instrumentors import OpenAIInstrumentor\nfrom openai import OpenAI\n\n# Ensure your API key is set as an environment variable or pass it directly:\n# os.environ[\"LANGWATCH_API_KEY\"] = \"YOUR_LANGWATCH_API_KEY\"\napi_key = os.environ.get(\"LANGWATCH_API_KEY\", \"\")\n\nif not api_key:\n    print(\"Warning: LANGWATCH_API_KEY environment variable not set. Traces will not be sent.\")\n    # For demonstration, we'll proceed, but you should set your key.\n    # In a real application, you might raise an error or configure LangWatch to only log locally.\n\n# Initialize LangWatch early in your application.\n# This automatically instruments OpenAI calls within the decorated functions.\nlangwatch.setup(\n    api_key=api_key,\n    instrumentors=[OpenAIInstrumentor()]\n)\n\nclient = OpenAI(api_key=os.environ.get(\"OPENAI_API_KEY\", \"\"))  # Assumes the OpenAI API key is also set\n\n@langwatch.trace(name=\"UserInteraction\")\nasync def handle_user_query(query: str):\n    print(f\"Processing query: {query}\")\n    try:\n        response = client.chat.completions.create(\n            model=\"gpt-3.5-turbo\",\n            messages=[\n                {\"role\": \"system\", \"content\": \"You are a helpful assistant.\"},\n                {\"role\": \"user\", \"content\": query}\n            ]\n        )\n        result = response.choices[0].message.content\n        print(f\"Assistant's response: {result}\")\n        return result\n    except Exception as e:\n        langwatch.get_current_trace().error(str(e))  # Record the error on the current trace\n        print(f\"An error occurred: {e}\")\n        raise\n\nasync def main():\n    await handle_user_query(\"Tell me a fun fact about Python.\")\n    # Ensure traces are flushed before the application exits\n    await langwatch.shutdown()\n\nasyncio.run(main())\n","lang":"python","description":"This quickstart demonstrates how to initialize LangWatch, enable automatic OpenAI instrumentation, and trace an asynchronous function that interacts with the OpenAI API. It highlights the use of `langwatch.setup()` and the `@langwatch.trace()` decorator, along with handling the `LANGWATCH_API_KEY`."},"warnings":[{"fix":"Set `LANGWATCH_API_KEY=your_api_key` in your environment variables or provide it as `api_key='your_api_key'` to `langwatch.setup()`.","message":"Traces might not appear in the LangWatch dashboard if the `LANGWATCH_API_KEY` environment variable is not set or passed explicitly during `langwatch.setup()`.","severity":"gotcha","affected_versions":"All versions"},{"fix":"Ensure `await langwatch.shutdown()` is called before your application exits to flush any pending traces.","message":"In applications that exit quickly (e.g., short scripts), traces might not be fully sent to the LangWatch backend before termination.","severity":"gotcha","affected_versions":"All versions"},{"fix":"Consult the LangWatch documentation on OpenTelemetry integration if you have an existing OpenTelemetry setup. Ensure compatible versions and proper exporter configuration.","message":"LangWatch uses OpenTelemetry under the hood. Incorrect or conflicting OpenTelemetry configurations can interfere with LangWatch's tracing.","severity":"gotcha","affected_versions":"All versions"},{"fix":"Install `langwatch-scenario` separately if you intend to use the simulation framework (`pip install langwatch-scenario`). Be aware that their functionalities are complementary but distinct.","message":"The `langwatch` Python SDK is distinct from the `langwatch-scenario` library, which focuses on agent simulations.","severity":"gotcha","affected_versions":"All versions"}],"env_vars":null,"last_verified":"2026-04-16T00:00:00.000Z","next_check":"2026-07-15T00:00:00.000Z","problems":[{"fix":"Set your LangWatch API key as an environment variable: `export LANGWATCH_API_KEY=\"your_api_key\"` (Linux/macOS) or `$Env:LANGWATCH_API_KEY=\"your_api_key\"` (PowerShell), or pass it directly: `langwatch.setup(api_key=\"your_api_key\")`.","cause":"The `LANGWATCH_API_KEY` environment variable is likely not set, or the API key passed to `langwatch.setup()` is incorrect or missing.","error":"No traces appearing in LangWatch dashboard."},{"fix":"Import the symbol from its correct submodule path, e.g., `from langwatch.instrumentors import OpenAIInstrumentor`.","cause":"Specific modules like instrumentors are located within subpackages, not directly under the top-level `langwatch` package.","error":"ImportError: cannot import name 'OpenAIInstrumentor' from 'langwatch'"},{"fix":"Ensure all operations that interact with the current trace are performed within a function or block decorated with `@langwatch.trace()` or `@langwatch.span()`.","cause":"Attempting to call `langwatch.get_current_trace().error()` or similar trace-specific methods outside of an active trace context (i.e., not within a `@langwatch.trace()` or `@langwatch.span()` decorated function/context block).","error":"Error: Trace not found for current context."}]}