LiveKit Plugins xAI
Version 1.5.7 · verified Fri May 01 · python
Plugin for the LiveKit Agents framework integrating xAI's Grok models for LLM and TTS inference. Current version 1.5.7, released as part of the livekit-agents monorepo with weekly releases.
pip install livekit-plugins-xai

Common errors
error ModuleNotFoundError: No module named 'livekit.plugins.xai'
cause The xai plugin package is not installed.
fix Run pip install livekit-plugins-xai

error openai.NotFoundError: 404 (request ID: ...) - The model `grok-1` does not exist or you do not have access to it.
cause Using a model name that xAI does not support, or one your API key cannot access.
fix Set the model explicitly via LLM(model='grok-2-latest'), or check the list of available models in the xAI docs.

error AttributeError: module 'livekit.plugins.xai' has no attribute 'xAILLM'
cause Incorrect import name from earlier documentation.
fix Use from livekit.plugins.xai import LLM instead.

Warnings
gotcha The xAI plugin inherits from the OpenAI plugin's LLM and TTS classes, but xAI does not support all OpenAI features (e.g., structured output, function calling). Errors may occur if those are used.
fix Use only basic chat completion features; avoid function calling and response_format with JSON schema.
breaking The xAI plugin was split out of the main livekit-agents package into a separate plugin; imports now come from `livekit.plugins.xai` only after `livekit-plugins-xai` is installed. Users migrating from older livekit-agents releases must install it explicitly.
fix Run `pip install livekit-plugins-xai` and update imports to `from livekit.plugins.xai import ...`
deprecated The `livekit.plugins.xai.LLM.with_groq` alias from the earlier beta is removed. Use `LLM()` directly.
fix Replace `LLM.with_groq(...)` with `LLM(...)`.
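The quickstart further down assumes XAI_API_KEY is set in the environment. A minimal pre-flight check, plain Python with no plugin dependency (the helper name below is our own, not part of the plugin), fails fast with a clear message instead of surfacing as an auth error mid-call:

```python
import os

# Hypothetical helper (not part of livekit-plugins-xai): verify the xAI
# credential is present before constructing LLM()/TTS().
def require_xai_key(env: str = "XAI_API_KEY") -> str:
    key = os.environ.get(env, "").strip()
    if not key:
        raise RuntimeError(f"{env} is not set; export it before using the xAI plugin")
    return key
```

Call require_xai_key() once at startup, before building any LLM or TTS instances.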
Imports
- LLM
  wrong: from livekit.plugins.xai import xAILLM
  correct: from livekit.plugins.xai import LLM
- TTS
  from livekit.plugins.xai import TTS
Quickstart
import os

from livekit.plugins.xai import LLM, TTS

# Ensure XAI_API_KEY is set in the environment before constructing the clients
assert os.environ.get("XAI_API_KEY"), "XAI_API_KEY is not set"

llm = LLM()
tts = TTS()

result = llm.chat([{"role": "user", "content": "Hello"}])
print(result)

# TTS example
audio = tts.synthesize("Hello from xAI")
print(audio)