Cleanlab Trustworthy Language Model (TLM) Client
The `cleanlab-tlm` library provides a Python client for the Cleanlab Trustworthy Language Model, enabling users to augment LLM interactions with trust scores and explanations. It wraps existing LLM APIs (like OpenAI) to provide a layer of trustworthiness analysis. The current version is 1.1.39, and the library is actively developed with frequent minor releases adding new features and improvements.
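The wrapping idea can be illustrated with a minimal, library-agnostic sketch. The class and method names below (`TrustWrapper`, `ScoredResponse`, `score_fn`) are hypothetical illustrations of the pattern, not the cleanlab-tlm API:

```python
# Illustration of the wrapper pattern: a hypothetical scorer intercepts
# each chat call, forwards it to the underlying client, and attaches
# extra trust metadata to the result. Not the actual cleanlab-tlm API.

class ScoredResponse:
    def __init__(self, raw, trust_score):
        self.raw = raw                  # the wrapped client's original response
        self.trust_score = trust_score  # added trustworthiness metadata

class TrustWrapper:
    def __init__(self, client, score_fn):
        self._client = client      # any object exposing .chat(messages)
        self._score_fn = score_fn  # callable scoring (messages, response)

    def chat(self, messages):
        raw = self._client.chat(messages)  # delegate to the wrapped client
        return ScoredResponse(raw, self._score_fn(messages, raw))

# Usage with a stub client standing in for a real LLM API:
class StubClient:
    def chat(self, messages):
        return "Paris"

wrapper = TrustWrapper(StubClient(), score_fn=lambda m, r: 0.95)
resp = wrapper.chat([{"role": "user", "content": "Capital of France?"}])
print(resp.raw, resp.trust_score)  # Paris 0.95
```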
Common errors
- `ModuleNotFoundError: No module named 'cleanlab_tlm'`
  - cause: The `cleanlab-tlm` package is not installed, or you are running a different Python environment from the one it was installed into.
  - fix: Install the package in your active environment: `pip install cleanlab-tlm`.
- `openai.AuthenticationError: Incorrect API key provided`
  - cause: The OpenAI API key passed to the `OpenAI` client (and subsequently to `TLMChatCompletion`) is missing, invalid, or expired.
  - fix: Set the `OPENAI_API_KEY` environment variable or pass a valid key directly via `OpenAI(api_key="your-key-here")`. Verify your key on the OpenAI platform.
- `AttributeError: 'TLMResponse' object has no attribute 'get_trust_scores'`
  - cause: You called `response.get_trust_scores()` or `response.get_explanation()`, but `trust_scores=False` or `explanation_metadata=False` was set (or defaulted) when `TLMChatCompletion` was initialized.
  - fix: Initialize `TLMChatCompletion` with `trust_scores=True` and/or `explanation_metadata=True` as needed to enable these features.
- `TypeError: TLMChatCompletion.__init__() missing 1 required positional argument: 'client'`
  - cause: You instantiated `TLMChatCompletion` without passing an LLM client object (e.g., an `OpenAI()` instance).
  - fix: Provide an initialized client object, for example: `tlm_client = TLMChatCompletion(client=OpenAI(api_key=os.environ.get('OPENAI_API_KEY')), ...)`.
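For the authentication case, failing loudly at key-resolution time is easier to debug than letting the client raise later. A generic stdlib sketch (`resolve_api_key` is a hypothetical helper, not part of cleanlab-tlm or openai):

```python
import os

def resolve_api_key(explicit_key=None, env_var="OPENAI_API_KEY"):
    """Return an API key from an explicit argument or the environment,
    raising a clear error up front instead of a later AuthenticationError."""
    key = explicit_key or os.environ.get(env_var)
    if not key:
        raise RuntimeError(
            f"No API key found: pass one explicitly or set {env_var}."
        )
    return key

# Example: an explicit key takes precedence over the environment variable.
os.environ["OPENAI_API_KEY"] = "sk-example"  # placeholder, not a real key
print(resolve_api_key())                     # sk-example
print(resolve_api_key(explicit_key="sk-other"))  # sk-other
```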
Warnings
- gotcha Trust scores and explanation metadata are not enabled by default. You must explicitly set `trust_scores=True` and `explanation_metadata=True` during `TLMChatCompletion` initialization to access these features.
- gotcha Prior to v1.1.33, the presence of `call_id` in formatted responses could lead to incorrectly low trust scores. While fixed, users analyzing historical data or on older client versions should be aware of this potential inaccuracy.
- gotcha Structured output per-field scoring (added in v1.1.32 and refined in later versions like 1.1.37) requires specific usage patterns and might not be compatible with all models or response structures. Misuse can lead to errors or incorrect scores.
Install
pip install cleanlab-tlm
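To confirm the package is visible to the interpreter you are actually running (the usual cause of the `ModuleNotFoundError` above), a quick stdlib check:

```python
import importlib.util

def is_importable(name):
    """True if the module can be found on the current interpreter's path."""
    return importlib.util.find_spec(name) is not None

if is_importable("cleanlab_tlm"):
    print("cleanlab_tlm found")
else:
    print("cleanlab_tlm missing - run: pip install cleanlab-tlm")
```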
Imports
- TLMChatCompletion
  from cleanlab_tlm.tlm import TLMChatCompletion
- TLMResponses
  from cleanlab_tlm import TLMResponses
  # alternative import path:
  from cleanlab_tlm.tlm import TLMResponses
Quickstart
import os
from openai import OpenAI
from cleanlab_tlm.tlm import TLMChatCompletion

# Initialize the TLM ChatCompletion client.
# Ensure the OPENAI_API_KEY environment variable is set.
tlm_client = TLMChatCompletion(
    client=OpenAI(api_key=os.environ.get("OPENAI_API_KEY", "")),  # or pass your key directly
    trust_scores=True,          # enable trust scores
    explanation_metadata=True,  # enable explanation metadata
)

# Example chat interaction
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is the capital of France?"},
]

try:
    response = tlm_client.chat.completions.create(
        model="gpt-4o",  # use any available OpenAI model
        messages=messages,
    )
    print(f"LLM Response: {response.choices[0].message.content}")

    # Trust score(s) for the response
    trust_scores = response.get_trust_scores()
    print(f"Trust scores: {trust_scores}")

    # Explanation metadata for the trust assessment
    explanation = response.get_explanation()
    print(f"Explanation: {explanation}")
except Exception as e:
    print(f"An error occurred: {e}")
    print("Ensure OPENAI_API_KEY is set and valid, and that the model name exists.")
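Once trust scores are available, a common next step is routing low-trust answers to a fallback or human review. A minimal sketch, assuming a single float score in [0, 1] (the exact shape returned by `get_trust_scores()` may differ; `route_by_trust` and the threshold are illustrative, not part of the library):

```python
# Hypothetical post-processing: escalate responses whose trust score
# falls below a chosen threshold. Assumes a float score in [0, 1].

def route_by_trust(answer, trust_score, threshold=0.8):
    """Return the answer if trusted, otherwise flag it for human review."""
    if trust_score >= threshold:
        return {"status": "accepted", "answer": answer}
    return {
        "status": "needs_review",
        "answer": answer,
        "reason": f"trust {trust_score:.2f} below threshold {threshold:.2f}",
    }

print(route_by_trust("Paris", 0.97))  # accepted
print(route_by_trust("Lyon", 0.41))   # needs_review
```

The threshold is a product decision: higher values route more answers to review, trading throughput for safety.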