Databricks Agents SDK
The Databricks Agents SDK provides tools and interfaces for building AI agents on the Databricks platform. It supports the creation, deployment, and management of agents that can leverage various tools and LLMs, and it integrates with MLflow for logging and tracing. The current version is 1.9.4, with frequent updates that track the evolving AI and MLflow ecosystems.
Common errors
- `ModuleNotFoundError: No module named 'databricks.sdk'`
  - Cause: The required Python package (e.g., `databricks-sdk`, `databricks-agents` itself, or a dependency) is not installed in the environment where the code runs, or the environment's Python path is not correctly configured.
  - Fix: Install the package with `pip install databricks-agents` (which often pulls in `databricks-sdk` as a dependency), or `pip install databricks-sdk` if it is needed on its own. Ensure the package is installed in the correct environment (e.g., Databricks cluster libraries or a local virtual environment) and that your Python path includes any necessary local directories, especially after a `%restart_python` on serverless compute.
- `AttributeError: 'str' object has no attribute 'items'`
  - Cause: The `environment_vars` parameter in `agents.deploy()` was provided as a plain string instead of a dictionary of key-value pairs.
  - Fix: Pass `environment_vars` as a dictionary, for example: `agents.deploy(model_name, model_version, environment_vars={'VAR_NAME': 'value'})`.
- `Error: failed to create app - For input string: ""`
  - Cause: The `experiment_id` field in the `databricks.yml` configuration file (used for MLflow experiment resources) is blank or contains an invalid value.
  - Fix: Update `databricks.yml` to reference a bundle-managed MLflow experiment ID dynamically. For instance, define an experiment resource and refer to its ID: `experiment_id: ${resources.experiments.app_experiment.id}`.
- `PERMISSION_DENIED`
  - Cause: The user or service principal deploying an agent or accessing resources (such as Lakebase, Unity Catalog tables, or LLM serving endpoints) lacks the necessary permissions. Workspace tier limitations or missing feature enablement can also trigger this.
  - Fix: Verify that the deploying entity has `CAN_MANAGE` permission on the agent, `USE CATALOG` and `SELECT` on Unity Catalog objects, and appropriate permissions on any external resources such as Lakebase. Ensure the workspace is on a Premium or Enterprise tier and that the relevant AI Agent Framework features are enabled in the workspace or account settings. For Lakebase dependencies, the endpoint creator may require workspace admin privileges because of automatic authentication passthrough.
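For the `failed to create app` error, the fix can be made concrete with a bundle-managed experiment. The fragment below is a hypothetical `databricks.yml` excerpt: the resource key `app_experiment` and the experiment path are placeholders to adapt to your bundle.

```yaml
# Hypothetical databricks.yml fragment: define the MLflow experiment as a
# bundle resource instead of leaving experiment_id blank or hard-coded.
resources:
  experiments:
    app_experiment:
      name: /Workspace/Users/someone@example.com/my-agent-experiment

# Elsewhere in the bundle, reference the managed experiment's ID:
# experiment_id: ${resources.experiments.app_experiment.id}
```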
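The `AttributeError` above occurs because `agents.deploy()` iterates over `environment_vars.items()`, which a plain string does not support. The sketch below illustrates the wrong and right shapes; the `check_environment_vars` helper is illustrative (not part of the SDK), and the model name and version in the commented call are placeholders.

```python
def check_environment_vars(environment_vars):
    """Mimic the dict-style access that deploy code performs internally.

    Illustrative helper, not part of the SDK: it fails fast with a clearer
    message than the AttributeError raised deep inside deployment code.
    """
    if not hasattr(environment_vars, "items"):
        raise TypeError(
            "environment_vars must be a dict of str -> str, "
            f"got {type(environment_vars).__name__}"
        )
    return dict(environment_vars.items())

# Wrong: a plain string has no .items(), which surfaces as the AttributeError
try:
    check_environment_vars("VAR_NAME=value")
except TypeError as exc:
    print(exc)

# Right: a dictionary of key-value pairs
env_vars = check_environment_vars({"VAR_NAME": "value", "LOG_LEVEL": "INFO"})

# With a real workspace, this dict is passed straight through, e.g.:
# from databricks import agents
# agents.deploy("catalog.schema.my_agent", 1, environment_vars=env_vars)
```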
Warnings
- gotcha The `databricks-agents` library requires a Databricks workspace and a deployed LLM serving endpoint to function correctly, even for basic usage. It is not designed for standalone, local-only operation without a Databricks backend.
- gotcha The library heavily integrates with the LangChain ecosystem. Breaking changes or significant API shifts in underlying LangChain components (e.g., `langchain_core`, `langchain-community`) can indirectly impact usage patterns or require code adjustments in `databricks-agents` applications.
- gotcha For optimal logging, tracing, and experiment tracking, `databricks-agents` is designed to work seamlessly with MLflow. While not strictly mandatory, users who skip MLflow integration might miss out on valuable debugging and performance monitoring capabilities.
Install
- `pip install databricks-agents`
Imports
- ChatAgent: `from databricks.agents.chat.agent import ChatAgent`
- DatabricksLLM: `from databricks.agents.providers.databricks import DatabricksLLM`
- DollyTool: `from databricks.agents.tools.dolly.tool import DollyTool`
Quickstart
```python
import os

from databricks.agents.chat.agent import ChatAgent
from databricks.agents.providers.databricks import DatabricksLLM

# Ensure these environment variables are set for Databricks connectivity:
# export DATABRICKS_HOST="https://<your-workspace-url>"
# export DATABRICKS_TOKEN="dapi<your-token>"
# export DATABRICKS_LLM_ENDPOINT="<your-model-serving-endpoint>"  # e.g., 'databricks-mixtral-8x7b-instruct'
databricks_host = os.environ.get('DATABRICKS_HOST', 'https://your-workspace.cloud.databricks.com')
databricks_token = os.environ.get('DATABRICKS_TOKEN', 'YOUR_DATABRICKS_TOKEN')
llm_endpoint = os.environ.get('DATABRICKS_LLM_ENDPOINT', 'databricks-mixtral-8x7b-instruct')

# Initialize a Databricks-backed LLM provider
llm = DatabricksLLM(
    endpoint=llm_endpoint,
    host=databricks_host,
    api_token=databricks_token,
    temperature=0.1,
)

# Create a ChatAgent (add tools as needed, e.g., tools=[DollyTool()])
agent = ChatAgent(
    llm=llm,
    verbose=True,
)

# Interact with the agent
response = agent.chat("What is the capital of France?")
print(f"Agent response: {response}")

# Example with a follow-up question
response_follow_up = agent.chat("And what is its most famous landmark?")
print(f"Agent follow-up response: {response_follow_up}")
```
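Because the quickstart falls back to placeholder credentials when its environment variables are unset, a fail-fast check can surface misconfiguration before it shows up as an opaque authentication or connection error. The helper below is an illustration, not part of the SDK:

```python
import os

# The three variables the quickstart above reads for connectivity.
REQUIRED_VARS = ("DATABRICKS_HOST", "DATABRICKS_TOKEN", "DATABRICKS_LLM_ENDPOINT")

def missing_databricks_config(env=os.environ):
    """Return the names of required Databricks variables that are unset or blank."""
    return [name for name in REQUIRED_VARS if not env.get(name, "").strip()]

# Example: with only the host set, the other two are reported as missing.
example_env = {"DATABRICKS_HOST": "https://my-workspace.cloud.databricks.com"}
print(missing_databricks_config(example_env))
# → ['DATABRICKS_TOKEN', 'DATABRICKS_LLM_ENDPOINT']
```

Calling this at the top of a script (and raising if the returned list is non-empty) keeps failures local and self-explanatory instead of deferring them to the first LLM call.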