{"id":5647,"library":"langchainplus-sdk","title":"LangSmith Python Client SDK","description":"The `langchainplus-sdk` is the official Python client library designed to connect to the LangSmith LLM Tracing and Evaluation Platform. LangSmith is a unified developer platform that helps teams debug, evaluate, and monitor language models and intelligent agents. This SDK facilitates logging traces, creating datasets, and evaluating runs, offering seamless integration with the LangChain framework while also supporting standalone use with other LLM applications. The current version is 0.0.20, with updates to the underlying LangSmith platform being more frequent.","status":"active","version":"0.0.20","language":"en","source_language":"en","source_url":"https://github.com/langchain-ai/langsmith-sdk","tags":["LLM","AI","tracing","monitoring","LangChain","LangSmith","observability"],"install":[{"cmd":"pip install langchainplus-sdk","lang":"bash","label":"Install stable version"}],"dependencies":[],"imports":[{"symbol":"LangChainPlusClient","correct":"from langchainplus_sdk import LangChainPlusClient"}],"quickstart":{"code":"import os\nfrom langchainplus_sdk import LangChainPlusClient\n\n# Set up LangSmith environment variables\n# Replace with your actual API key and optionally project name\nos.environ[\"LANGCHAIN_TRACING_V2\"] = \"true\"\nos.environ[\"LANGCHAIN_ENDPOINT\"] = os.environ.get(\"LANGCHAIN_ENDPOINT\", \"https://api.langchain.plus\")\nos.environ[\"LANGCHAIN_API_KEY\"] = os.environ.get(\"LANGCHAIN_API_KEY\", \"YOUR_LANGCHAINPLUS_API_KEY\")\n# os.environ[\"LANGCHAIN_PROJECT\"] = os.environ.get(\"LANGCHAIN_PROJECT\", \"My Default Project\")\n\n# Initialize the client (optional, tracing often works via environment variables directly)\nclient = LangChainPlusClient()\n\n# Example: Create a simple dataset entry\ndataset_name = \"My Example Dataset\"\ndescription = \"A dataset for demonstrating langchainplus-sdk usage.\"\n\n# In a real application, you would log traces from 
your LLM calls\n# or manually create runs/examples. For this quickstart, we interact with the client directly.\n\ntry:\n    # This part would typically be driven by actual LLM runs being traced.\n    # For direct client interaction, you can create datasets and examples.\n    # Materialize the dataset listing once (it may be a generator) and filter\n    # client-side; create the dataset only if it does not already exist.\n    matching = [d for d in client.list_datasets() if d.name == dataset_name]\n    if not matching:\n        dataset = client.create_dataset(dataset_name, description=description)\n        print(f\"Created dataset: {dataset.name} (ID: {dataset.id})\")\n    else:\n        dataset = matching[0]\n        print(f\"Using existing dataset: {dataset.name} (ID: {dataset.id})\")\n\n    # Create a simple example within the dataset\n    example_inputs = {\"question\": \"What is the capital of France?\"}\n    example_outputs = {\"answer\": \"Paris\"}\n\n    # Check whether the example exists to avoid duplicates on quickstart re-runs\n    existing_examples = client.list_examples(dataset_id=dataset.id)\n    example_found = any(\n        ex.inputs == example_inputs and ex.outputs == example_outputs\n        for ex in existing_examples\n    )\n\n    if not example_found:\n        example = client.create_example(\n            inputs=example_inputs,\n            outputs=example_outputs,\n            dataset_id=dataset.id,\n        )\n        print(f\"Created example (ID: {example.id})\")\n    else:\n        print(\"Example already exists in dataset.\")\n\n    print(\"LangSmith client setup and basic interaction successful.\")\n    print(\"Check your LangSmith UI for traces and datasets.\")\nexcept Exception as e:\n    print(f\"An error occurred: {e}\")\n    print(\"Please ensure your LANGCHAIN_API_KEY is correct and the LangSmith service is accessible.\")","lang":"python","description":"This 
quickstart demonstrates how to initialize the `LangChainPlusClient` and interact with the LangSmith platform by creating a dataset and an example. It highlights configuration via environment variables, which is the recommended approach for tracing. Replace 'YOUR_LANGCHAINPLUS_API_KEY' with your actual LangSmith API key."},"warnings":[{"fix":"Install `langchainplus-sdk` from PyPI for the LangSmith Python client, and refer to the LangSmith documentation for the most up-to-date usage patterns.","message":"The PyPI package for the LangSmith Python client is `langchainplus-sdk`, not `langsmith-sdk`; users searching PyPI by the platform name will not find a package under that exact name. Active development and more frequent releases happen in the `langchain-ai/langsmith-sdk` GitHub repository, so the `langchainplus-sdk` package on PyPI may lag behind the client in that repository.","severity":"gotcha","affected_versions":"All versions"},{"fix":"Monitor official LangSmith documentation for updates to the `RunTree` API when logging traces outside of LangChain, and decouple your core logic from `RunTree` where possible to minimize refactoring.","message":"The `RunTree` API, used for logging traces outside of the main LangChain framework, is marked experimental and subject to change; code that relies heavily on it may require adjustments in newer versions.","severity":"breaking","affected_versions":"All versions, especially future major releases"},{"fix":"Ensure that required environment variables are set and accessible to your application, and double-check API keys and endpoint URLs. 
Use a `.env` file for local development and secure secrets management in production.","message":"Tracing and platform connection rely heavily on environment variables (e.g., `LANGCHAIN_TRACING_V2`, `LANGCHAIN_ENDPOINT`, `LANGCHAIN_API_KEY`). Incorrectly set or missing environment variables are a common source of connection and tracing failures.","severity":"gotcha","affected_versions":"All versions"},{"fix":"Be aware of potential inaccuracies in token cost reporting, especially for non-streaming LLM calls with caching enabled. Cross-reference LangSmith reports with usage metrics from your LLM provider if cost accuracy is critical, and monitor `langsmith-sdk` GitHub issues for fixes related to token parsing.","message":"There have been reports of the `LangchainCallbackHandler` (which the SDK uses for tracing LangChain operations) dropping cache token metrics and inflating input token costs on non-streaming paths for certain models (e.g., Anthropic, OpenAI), leading to inaccurate cost estimates in LangSmith.","severity":"gotcha","affected_versions":"Reported against 0.0.20 and potentially newer client versions used with LangChain, per GitHub issues."},"env_vars":null,"last_verified":"2026-04-13T00:00:00.000Z","next_check":"2026-07-12T00:00:00.000Z"}