asyncpg
High-performance async PostgreSQL driver for Python/asyncio. Implements PostgreSQL binary protocol natively — ~5x faster than psycopg3 in benchmarks. Current version: 0.31.0 (Nov 2025). Still pre-1.0. NOT DB-API 2.0 compliant — uses $1/$2 placeholders not %s. No dict row support out of the box — returns Record objects. Major footgun: prepared statements break with pgbouncer in transaction/statement mode (Supabase, Neon poolers).
Common errors
- ModuleNotFoundError: No module named 'asyncpg'
  cause: The 'asyncpg' module is not installed in the current Python environment.
  fix: Install it with 'pip install asyncpg' in the appropriate environment.
- SyntaxError: invalid syntax
  cause: Using 'async with' outside of an asynchronous function.
  fix: Wrap the 'async with' statement inside an 'async def' function and run it with an event loop (e.g. asyncio.run).
- asyncpg.InterfaceError: cannot perform operation: another operation is in progress
  cause: Executing multiple operations concurrently on the same connection.
  fix: Await each operation before starting the next, or use a pool so concurrent tasks get separate connections.
- ImportError: cannot import name 'exceptions' from 'asyncpg'
  cause: PyInstaller is not including all necessary asyncpg modules in the build.
  fix: Add '--hidden-import=asyncpg.pgproto.pgproto' to the PyInstaller command to include the required modules.
- asyncpg.exceptions.ForeignKeyViolationError: insert or update on table "table_name" violates foreign key constraint
  cause: Inserting or updating a row whose foreign key value does not exist in the referenced table.
  fix: Ensure the referenced row exists before performing the insert or update.
Warnings
- breaking Placeholders are $1/$2/$3 (PostgreSQL native) NOT %s (psycopg2) or ? (sqlite3). LLMs trained on psycopg2 code consistently generate %s placeholders which fail with asyncpg.
- breaking asyncpg is NOT DB-API 2.0 compliant. Code written for psycopg2/sqlite3 will not work without changes. No cursor objects, different method names (fetch/fetchrow/fetchval not execute/fetchone/fetchall).
- breaking Prepared statements break with pgbouncer in transaction/statement pool mode. Error: 'prepared statement asyncpg_stmt_X does not exist'. Affects Supabase transaction pooler (port 6543), Neon, and any pgbouncer setup.
- gotcha fetch() returns list of asyncpg.Record objects, not dicts. Record supports dict-style access (row['name']) but isinstance(row, dict) is False. Code that expects dicts breaks silently.
- gotcha Still pre-1.0 (0.31.x). API stability not guaranteed across minor versions.
- gotcha Prepared statements and cursors from Connection.prepare() become invalid once a connection is released back to the pool. Must re-prepare on next acquisition.
- breaking 'OSError: [Errno 111] Connect call failed' from asyncpg.connect or asyncpg.create_pool means the client could not establish a network connection to the database server: the server is not running, not listening on the specified host/port, or unreachable (e.g. blocked by a firewall). This is a general network/database availability error, not specific to asyncpg's API or usage patterns.
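Since the %s habit is the most common breakage, one way to audit or migrate psycopg2-style SQL is to rewrite positional %s markers to numbered $n placeholders. This helper is a hypothetical migration aid, not part of asyncpg:

```python
import re

def to_asyncpg_placeholders(sql: str) -> str:
    """Rewrite psycopg2-style %s markers to asyncpg's $1, $2, ... form.

    Naive sketch: does not skip %s inside string literals or comments.
    """
    counter = 0

    def repl(match: re.Match) -> str:
        nonlocal counter
        counter += 1
        return f'${counter}'

    return re.sub(r'%s', repl, sql)

print(to_asyncpg_placeholders(
    'SELECT * FROM users WHERE id = %s AND name = %s'
))
# SELECT * FROM users WHERE id = $1 AND name = $2
```

The numbering is purely positional, matching how psycopg2 binds %s parameters in order.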
Install
-
pip install asyncpg
Imports
- connect
# WRONG — %s placeholders fail with asyncpg; must use $1:
# row = await conn.fetchrow('SELECT * FROM users WHERE id = %s', 42)
import asyncpg
import asyncio

async def main():
    conn = await asyncpg.connect('postgresql://user:pass@localhost/mydb')
    # $1, $2 placeholders — NOT %s
    row = await conn.fetchrow(
        'SELECT id, name FROM users WHERE id = $1', 42
    )
    print(row['name'])  # Record supports dict-style access
    await conn.close()

asyncio.run(main())
- create_pool
# WRONG — creating a new connection per request: expensive, no pooling
# async def handle_request():
#     conn = await asyncpg.connect(...)
import asyncpg
import asyncio

async def main():
    pool = await asyncpg.create_pool(
        'postgresql://user:pass@localhost/mydb',
        min_size=2,
        max_size=10
    )
    async with pool.acquire() as conn:
        rows = await conn.fetch('SELECT * FROM users')
        for row in rows:
            print(dict(row))  # convert Record to dict
    await pool.close()

asyncio.run(main())
Quickstart
# pip install asyncpg
import asyncpg
import asyncio
async def main():
    # Connection pool for production
    pool = await asyncpg.create_pool(
        'postgresql://user:pass@localhost/mydb',
        min_size=2,
        max_size=10
    )
    async with pool.acquire() as conn:
        # $1, $2 — not %s
        await conn.execute(
            'INSERT INTO users(name, email) VALUES($1, $2)',
            'Alice', 'alice@example.com'
        )
        # fetchrow returns asyncpg.Record — dict-like
        row = await conn.fetchrow(
            'SELECT * FROM users WHERE name = $1', 'Alice'
        )
        print(row['name'])  # 'Alice'
        print(dict(row))    # convert to plain dict
        # fetch returns list of Records
        rows = await conn.fetch('SELECT id, name FROM users')
    await pool.close()

asyncio.run(main())