{"id":9584,"library":"cleanlab-tlm","title":"Cleanlab Trustworthy Language Model (TLM) Client","description":"The `cleanlab-tlm` library provides a Python client for the Cleanlab Trustworthy Language Model, enabling users to augment LLM interactions with trust scores and explanations. It wraps existing LLM APIs (like OpenAI) to provide a layer of trustworthiness analysis. The current version is 1.1.39, and the library is actively developed with frequent minor releases adding new features and improvements.","status":"active","version":"1.1.39","language":"en","source_language":"en","source_url":"https://github.com/cleanlab/cleanlab-tlm","tags":["AI","LLM","trustworthiness","machine learning","NLP","evaluation"],"install":[{"cmd":"pip install cleanlab-tlm","lang":"bash","label":"Install latest version"}],"dependencies":[{"reason":"Required for integrating with OpenAI models and APIs, as TLM wraps OpenAI's functionality.","package":"openai","optional":false},{"reason":"Used for data validation and settings management, common in modern Python libraries.","package":"pydantic","optional":false},{"reason":"Likely used for configuration loading or data serialization.","package":"pyyaml","optional":false},{"reason":"Widely used third-party HTTP library, used for internal API communication.","package":"requests","optional":false}],"imports":[{"symbol":"TLMChatCompletion","correct":"from cleanlab_tlm.tlm import TLMChatCompletion"},{"note":"The `TLMResponses` class is typically found within the `tlm` submodule, not directly under the top-level `cleanlab_tlm` package.","wrong":"from cleanlab_tlm import TLMResponses","symbol":"TLMResponses","correct":"from cleanlab_tlm.tlm import TLMResponses"}],"quickstart":{"code":"import os\nfrom cleanlab_tlm.tlm import TLMChatCompletion\nfrom openai import OpenAI\n\n# Initialize the TLM ChatCompletion client\n# Ensure OPENAI_API_KEY environment variable is set\ntlm_client = TLMChatCompletion(\n    
client=OpenAI(api_key=os.environ.get(\"OPENAI_API_KEY\", \"\")), # Replace '' with your key if not using env var\n    trust_scores=True, # enables trust scores\n    explanation_metadata=True # enables explanation metadata\n)\n\n# Example chat interaction\nmessages = [\n    {\"role\": \"system\", \"content\": \"You are a helpful assistant.\"},\n    {\"role\": \"user\", \"content\": \"What is the capital of France?\"}\n]\n\ntry:\n    response = tlm_client.chat.completions.create(\n        model=\"gpt-4o\", # Use an available OpenAI model\n        messages=messages\n    )\n\n    print(f\"LLM Response: {response.choices[0].message.content}\")\n\n    # Get the trust score\n    trust_scores = response.get_trust_scores()\n    print(f\"Trust scores: {trust_scores}\")\n\n    # Get explanation metadata\n    explanation = response.get_explanation()\n    print(f\"Explanation: {explanation}\")\n\nexcept Exception as e:\n    print(f\"An error occurred: {e}\")\n    print(\"Please ensure your OPENAI_API_KEY is set and valid, and the model exists.\")","lang":"python","description":"This quickstart demonstrates how to initialize `TLMChatCompletion` with an OpenAI client, send a chat request, and retrieve trust scores and explanation metadata. It highlights the importance of explicitly enabling `trust_scores` and `explanation_metadata` during client initialization."},"warnings":[{"fix":"Initialize `TLMChatCompletion` like: `TLMChatCompletion(client=..., trust_scores=True, explanation_metadata=True)`.","message":"Trust scores and explanation metadata are not enabled by default. 
You must explicitly set `trust_scores=True` and `explanation_metadata=True` during `TLMChatCompletion` initialization to access these features.","severity":"gotcha","affected_versions":"All versions"},{"fix":"Upgrade to `cleanlab-tlm>=1.1.33` to ensure accurate trust score calculations, especially when `call_id` is involved in responses.","message":"Prior to v1.1.33, the presence of `call_id` in formatted responses could lead to incorrectly low trust scores. While fixed, users analyzing historical data or on older client versions should be aware of this potential inaccuracy.","severity":"gotcha","affected_versions":"<1.1.33"},{"fix":"Consult the official documentation for the correct way to implement per-field scoring for structured outputs, ensuring your model and response types are supported. Always test thoroughly.","message":"Structured output per-field scoring (added in v1.1.32 and refined in later versions like 1.1.37) requires specific usage patterns and might not be compatible with all models or response structures; on versions before 1.1.32 the feature is not available at all. Misuse can lead to errors or incorrect scores.","severity":"gotcha","affected_versions":">=1.1.32"}],"env_vars":null,"last_verified":"2026-04-17T00:00:00.000Z","next_check":"2026-07-16T00:00:00.000Z","problems":[{"fix":"Ensure the package is installed in your active environment: `pip install cleanlab-tlm`.","cause":"The `cleanlab-tlm` package is not installed or the Python environment is incorrect.","error":"ModuleNotFoundError: No module named 'cleanlab_tlm'"},{"fix":"Set the `OPENAI_API_KEY` environment variable or pass a valid key directly to `OpenAI(api_key=\"your-key-here\")`. 
Verify your key on the OpenAI platform.","cause":"The OpenAI API key provided to the `OpenAI` client (and subsequently `TLMChatCompletion`) is either missing, invalid, or expired.","error":"openai.AuthenticationError: Incorrect API key provided:"},{"fix":"Initialize `TLMChatCompletion` with `trust_scores=True` and/or `explanation_metadata=True` as needed to enable these features.","cause":"You attempted to call `response.get_trust_scores()` or `response.get_explanation()` but `trust_scores=False` or `explanation_metadata=False` was set (or defaulted) during `TLMChatCompletion` initialization.","error":"AttributeError: 'TLMResponse' object has no attribute 'get_trust_scores'"},{"fix":"Provide an initialized client object, for example: `tlm_client = TLMChatCompletion(client=OpenAI(api_key=os.environ.get('OPENAI_API_KEY')), ...)`.","cause":"You instantiated `TLMChatCompletion` without passing an LLM client object (e.g., `OpenAI()` instance).","error":"TypeError: TLMChatCompletion.__init__() missing 1 required positional argument: 'client'"}]}