DeepSeek API
API only (no SDK) · version verified Tue May 12 · auth: no · python install: verified · quickstart: verified
DeepSeek has NO official Python SDK. The correct integration pattern is using the openai package with base_url='https://api.deepseek.com'. The deepseek PyPI package (1.0.0) is a stub placeholder — do not use it. DeepSeek API is fully OpenAI-compatible. Two primary models: deepseek-chat (V3.2, general purpose) and deepseek-reasoner (R1, chain-of-thought reasoning).
pip install openai

Common errors
error: ModuleNotFoundError: No module named 'deepseek'
cause: Developers attempt to import from a non-existent DeepSeek Python SDK; the DeepSeek API is designed to be used with the OpenAI Python client.
fix: Use the openai Python package and configure it with DeepSeek's API base URL. Do not install or import 'deepseek' directly.

error: openai.AuthenticationError: Error code: 401 - {'error': {'message': 'Invalid API Key' ...}}
cause: The provided DeepSeek API key is missing, incorrect, expired, or has insufficient permissions.
fix: Verify your DeepSeek API key in the DeepSeek dashboard, ensure it is correctly set as an environment variable or passed directly to the OpenAI client, and check your account balance.

error: Error code: 400 - {'error': {'message': "Failed to deserialize the JSON body into the target type: messages[X]: invalid type: sequence, expected a string at line 1 column YYY", 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_request_error'}}
cause: The DeepSeek API expects message content to be a plain string, but the request sent an array or a complex object as the message content. This is often caused by advanced OpenAI SDK features, such as structured outputs or vision-style content parts, that DeepSeek may not fully support.
fix: Ensure the content field within your messages array is always a plain string, never an array of parts or a complex JSON object.

error: Error: OpenAI inference failed: Error code: 400 - {'error': {'message': 'This response_format type is unavailable now', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_request_error'}}
cause: The DeepSeek API does not support the response_format parameter for structured outputs as implemented in some OpenAI SDK versions or wrappers.
fix: Remove the response_format parameter from your chat.completions.create call. Instead, prompt the model to generate JSON and parse the string response manually.

error: AttributeError: 'OpenAI' object has no attribute 'ChatCompletion'
cause: This error occurs when using an older version of the openai Python package (v0.28.1 or earlier) together with code written for the breaking changes introduced in openai v1.0.0+.
fix: Upgrade the openai package to the latest version (pip install --upgrade openai) and use the client.chat.completions.create syntax.

Warnings
breaking: No official DeepSeek Python SDK exists. The 'deepseek' PyPI package (1.0.0) is an empty stub with no functionality. LLMs hallucinate 'import deepseek' or 'from deepseek import Client' — neither works.
fix: pip install openai; use OpenAI(api_key=DEEPSEEK_API_KEY, base_url='https://api.deepseek.com')
breaking: The R1 reasoning model name is 'deepseek-reasoner', not 'deepseek-r1'. A wrong model name returns a 404 or "model not found" error.
fix: model='deepseek-reasoner' for the R1 thinking model; model='deepseek-chat' for the V3.2 general model.
gotcha: deepseek-reasoner returns an extra reasoning_content field on the message object. Standard OpenAI response parsing misses the chain-of-thought output.
fix: Access response.choices[0].message.reasoning_content for the thinking output and response.choices[0].message.content for the final answer.
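The two fields above can be pulled apart with a small helper. This is a sketch, not part of any SDK: the function name is mine, and it assumes only the message shape described above, falling back to None when reasoning_content is absent (as on deepseek-chat responses).

```python
from types import SimpleNamespace

def split_reasoner_message(message):
    """Return (reasoning, answer) from a chat completion message.

    deepseek-reasoner puts chain-of-thought in `reasoning_content`;
    other models have no such field, so we fall back to None.
    """
    reasoning = getattr(message, 'reasoning_content', None)
    return reasoning, message.content

# Works with any object shaped like an OpenAI message, e.g. a stub:
msg = SimpleNamespace(reasoning_content='2x = 8, so x = 4', content='x = 4')
print(split_reasoner_message(msg))  # ('2x = 8, so x = 4', 'x = 4')
```

Using getattr rather than attribute access means the same parsing code works for both deepseek-chat and deepseek-reasoner responses.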
gotcha: base_url must be 'https://api.deepseek.com' — no trailing slash, and no /v1 suffix is required. A /v1 path exists for OpenAI compatibility but has no relationship to the model version.
fix: base_url='https://api.deepseek.com' — both with and without /v1 work, but the form without /v1 is canonical.
gotcha: The API key environment variable is DEEPSEEK_API_KEY, not OPENAI_API_KEY. The OpenAI SDK will silently fall back to OPENAI_API_KEY if api_key is not explicitly passed.
fix: Always pass api_key=os.environ['DEEPSEEK_API_KEY'] explicitly to the OpenAI() constructor.
gotcha: deepseek-reasoner does not handle system prompts the same way as deepseek-chat: they are accepted but may be converted to user messages internally, so behavior differs between the two models.
fix: For deepseek-reasoner, put instructions in user messages rather than system prompts.
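One way to apply that fix consistently is to fold instructions into the user turn when building the messages list. A minimal sketch; the helper name and separator are my own, not a DeepSeek convention:

```python
def reasoner_messages(instructions, question):
    """Build a messages list for deepseek-reasoner.

    Instead of a system prompt (which the reasoner may convert to a
    user message internally), fold instructions into the user turn.
    """
    return [{'role': 'user', 'content': f'{instructions}\n\n{question}'}]

msgs = reasoner_messages('Answer in one sentence.', 'Why is the sky blue?')
# Then: client.chat.completions.create(model='deepseek-reasoner', messages=msgs)
```

For deepseek-chat, a conventional system message remains fine; this shaping is only needed for the reasoner.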
breaking: The DEEPSEEK_API_KEY environment variable must be set for authentication; a script that reads os.environ['DEEPSEEK_API_KEY'] fails with a KeyError when the variable is not found.
fix: Ensure the DEEPSEEK_API_KEY environment variable is set with your DeepSeek API key before running the script (e.g., export DEEPSEEK_API_KEY='YOUR_API_KEY', or by configuring your execution environment).
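A fail-fast check turns the bare KeyError into an actionable message. This is a sketch of one possible guard; the function name is mine, and the env parameter exists only so the check is easy to exercise without touching the real environment:

```python
import os

def require_deepseek_key(env=os.environ):
    """Return the DeepSeek API key, or fail with a clear message."""
    key = env.get('DEEPSEEK_API_KEY')
    if not key:
        raise RuntimeError(
            "DEEPSEEK_API_KEY is not set. Export it first, e.g. "
            "export DEEPSEEK_API_KEY='YOUR_API_KEY'"
        )
    return key

# client = OpenAI(api_key=require_deepseek_key(),
#                 base_url='https://api.deepseek.com')
```

Calling this once at startup also prevents the OpenAI SDK from silently falling back to OPENAI_API_KEY.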
Install compatibility (verified; last tested 2026-05-12)
python  os / libc      status  wheel  install  import  disk
3.9     alpine (musl)   -  -   1.67s  101.3M
3.9     slim (glibc)    -  -   1.49s  171M
3.10    alpine (musl)   -  -   1.69s  102.3M
3.10    slim (glibc)    -  -   1.21s  172M
3.11    alpine (musl)   -  -   2.34s  111.1M
3.11    slim (glibc)    -  -   1.99s  180M
3.12    alpine (musl)   -  -   2.29s  101.6M
3.12    slim (glibc)    -  -   2.27s  171M
3.13    alpine (musl)   -  -   2.04s  100.9M
3.13    slim (glibc)    -  -   2.07s  170M
Imports

- DeepSeek via OpenAI SDK

wrong:

import deepseek
client = deepseek.Client(api_key='...')

correct:

from openai import OpenAI
import os

client = OpenAI(
    api_key=os.environ['DEEPSEEK_API_KEY'],
    base_url='https://api.deepseek.com'
)
response = client.chat.completions.create(
    model='deepseek-chat',
    messages=[
        {'role': 'system', 'content': 'You are a helpful assistant'},
        {'role': 'user', 'content': 'Hello'}
    ],
    stream=False
)
print(response.choices[0].message.content)

- deepseek-reasoner (R1 thinking model)

wrong:

response = client.chat.completions.create(
    model='deepseek-r1',  # wrong model name
    messages=[...]
)

correct:

from openai import OpenAI
import os

client = OpenAI(
    api_key=os.environ['DEEPSEEK_API_KEY'],
    base_url='https://api.deepseek.com'
)
response = client.chat.completions.create(
    model='deepseek-reasoner',  # R1 thinking model
    messages=[{'role': 'user', 'content': 'Solve: 2x + 5 = 13'}]
)
# Access reasoning content separately
reasoning = response.choices[0].message.reasoning_content
answer = response.choices[0].message.content
print(reasoning)
print(answer)
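Since response_format is rejected with the 400 error covered under Common errors, the workaround is to ask for JSON in the prompt and parse the reply string yourself. A sketch of one such parser; the helper name is mine, and the fence-stripping regex is an assumption about how models commonly wrap JSON output:

```python
import json
import re

def parse_json_reply(text):
    """Parse a model reply that should contain JSON.

    Models often wrap JSON in ```json ... ``` fences; strip them first,
    then hand the remainder to json.loads.
    """
    fenced = re.search(r'```(?:json)?\s*(.*?)\s*```', text, re.DOTALL)
    if fenced:
        text = fenced.group(1)
    return json.loads(text)

# Hypothetical usage with a deepseek-chat response:
# data = parse_json_reply(response.choices[0].message.content)
print(parse_json_reply('```json\n{"capital": "Paris"}\n```'))
```

Pair this with an explicit instruction in the prompt ("Reply with a single JSON object, no prose") to keep the parse reliable.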
Quickstart (verified; last tested 2026-04-23)
# pip install openai
from openai import OpenAI
import os

client = OpenAI(
    api_key=os.environ['DEEPSEEK_API_KEY'],
    base_url='https://api.deepseek.com'
)
response = client.chat.completions.create(
    model='deepseek-chat',
    messages=[
        {'role': 'system', 'content': 'You are a helpful assistant'},
        {'role': 'user', 'content': 'What is the capital of France?'}
    ]
)
print(response.choices[0].message.content)
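The quickstart extends naturally to streaming: pass stream=True and the OpenAI-compatible endpoint yields chunks whose choices[0].delta.content carries incremental text. A sketch with a small accumulator; the helper name is mine, and the commented-out call assumes the same client as the quickstart:

```python
def collect_stream(chunks):
    """Join the incremental text deltas from a streaming response."""
    parts = []
    for chunk in chunks:
        delta = chunk.choices[0].delta
        # Some chunks (e.g. role headers, finish markers) carry no text.
        if getattr(delta, 'content', None):
            parts.append(delta.content)
    return ''.join(parts)

# Hypothetical streaming call, reusing the quickstart client:
# stream = client.chat.completions.create(
#     model='deepseek-chat',
#     messages=[{'role': 'user', 'content': 'Count to three'}],
#     stream=True,
# )
# print(collect_stream(stream))
```

For deepseek-reasoner, note that streamed chunks may also carry a reasoning_content delta alongside content, so a reasoner-aware accumulator would collect both fields separately.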