SQLAlchemy BigQuery Dialect
Version 1.16.0 · verified Tue May 12 · auth: no · python install: verified · quickstart: stale
SQLAlchemy BigQuery Dialect (sqlalchemy-bigquery) is a Python library that provides a SQLAlchemy dialect for connecting to and interacting with Google BigQuery. It enables users to leverage SQLAlchemy's Core and ORM functionalities to query, manipulate, and manage BigQuery data, treating it like a traditional relational database. The library is actively maintained by Google Cloud, with regular releases, and its current version is 1.16.0.
pip install sqlalchemy-bigquery

Common errors
error sqlalchemy.exc.NoSuchModuleError: Can't load plugin: sqlalchemy.dialects:bigquery ↓
cause The `sqlalchemy-bigquery` library is not installed or not accessible in the current Python environment, preventing SQLAlchemy from loading the BigQuery dialect.
fix
pip install sqlalchemy-bigquery
error ModuleNotFoundError: No module named 'google.cloud.bigquery' ↓
cause The `sqlalchemy-bigquery` dialect relies on the `google-cloud-bigquery` client library, which is a required dependency but is not installed.
fix
pip install google-cloud-bigquery
error google.auth.exceptions.DefaultCredentialsError: Could not automatically determine credentials. Please set GOOGLE_APPLICATION_CREDENTIALS or explicitly create credentials and attach them to the client. ↓
cause The BigQuery client failed to find valid Google Cloud authentication credentials in the environment or through standard discovery mechanisms.
fix
Set the GOOGLE_APPLICATION_CREDENTIALS environment variable to a service account key file path, or authenticate via gcloud auth application-default login.
error google.api_core.exceptions.NotFound: 404 Not found: Table <project-id>:<dataset-id>.<table-name> ↓
cause The BigQuery dataset or table referenced in your SQLAlchemy query does not exist, or the authenticated user lacks sufficient permissions to access it.
fix
Verify the dataset and table names are correct and match BigQuery's case-sensitivity, and ensure the service account or user has BigQuery Data Viewer role or similar permissions.
error sqlalchemy.exc.CompileError: (sqlalchemy_bigquery.BigQueryDialect) Cannot compile type <type_name> ↓
cause The SQLAlchemy data type used in a model or schema definition (e.g., UUID, ENUM) does not have a direct or supported mapping to a native BigQuery data type by the `sqlalchemy-bigquery` dialect.
fix
Use a compatible SQLAlchemy type that maps correctly to BigQuery (e.g., String for UUID or ENUM), or implement a custom type with BigQuery-specific compilation rules.

Warnings
breaking Version 1.16.0 removed support for Python 3.8. Users on Python 3.8 or older must upgrade their Python environment to at least 3.9 to use `sqlalchemy-bigquery` 1.16.0 and later. Python 3.7 support was removed in earlier versions. ↓
fix Upgrade Python to version 3.9 or newer. The current PyPI metadata indicates support up to `<3.15`.
breaking Version 1.11.0 raised the minimum required SQLAlchemy version to 1.4.x and introduced full support for SQLAlchemy 2.0. Applications using older SQLAlchemy versions (pre-1.4) will need to upgrade SQLAlchemy when updating to `sqlalchemy-bigquery` 1.11.0 or newer. ↓
fix Ensure your SQLAlchemy installation is version 1.4.x or 2.0.x.
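A quick runtime guard for this version floor can be sketched as follows (it only inspects the installed SQLAlchemy version):

```python
import sqlalchemy

# sqlalchemy-bigquery 1.11+ requires SQLAlchemy 1.4.x or 2.0.x.
major, minor = (int(part) for part in sqlalchemy.__version__.split(".")[:2])
print(sqlalchemy.__version__)
assert (major, minor) >= (1, 4), "upgrade SQLAlchemy before upgrading sqlalchemy-bigquery"
```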
gotcha When querying tables from non-default BigQuery projects or datasets, specify the full `project.dataset.table` format for table names or `project.dataset` in the `schema` parameter for `Table` objects. ↓
fix Use the full path in your connection string or `Table` definition, e.g., `Table('my_table', metadata, schema='project_id.dataset_id')` or `engine = create_engine('bigquery://project_id/dataset_id')`.
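For example, a cross-project table reference can be sketched like this (`other-project` and `other_dataset` are placeholder names; no connection is made):

```python
from sqlalchemy import Column, Integer, MetaData, Table

metadata = MetaData()
# The `project.dataset` pair goes into the `schema` argument.
events = Table(
    "events",
    metadata,
    Column("id", Integer),
    schema="other-project.other_dataset",
)
print(events.fullname)  # other-project.other_dataset.events
```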
gotcha For optimal performance when handling large datasets, especially when reading results, install `sqlalchemy-bigquery` with the `bqstorage` extra (`pip install sqlalchemy-bigquery[bqstorage]`). This leverages the BigQuery Storage API. ↓
fix Install with `pip install sqlalchemy-bigquery[bqstorage]`.
gotcha The `sqlalchemy.JSON()` column type is not natively supported for DDL (Data Definition Language) statements in `sqlalchemy-bigquery` as of version 1.16.0. Attempting to use it in `Table` definitions or Alembic migrations may fail with a compilation error such as `sqlalchemy.exc.UnsupportedCompilationError`. ↓
fix Currently, a workaround is necessary, such as defining JSON columns as `String` or `LargeBinary` and handling serialization/deserialization at the application level, or awaiting future dialect updates for native JSON type compilation.
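A sketch of the String-based workaround using a `TypeDecorator` (names like `JSONAsString` and `Event` are illustrative, and in-memory SQLite stands in for BigQuery to demonstrate the round-trip):

```python
import json

from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base
from sqlalchemy.types import TypeDecorator

class JSONAsString(TypeDecorator):
    """Store JSON payloads in a plain STRING column; (de)serialize in the app."""
    impl = String
    cache_ok = True

    def process_bind_param(self, value, dialect):
        # Serialize Python objects to a JSON string before writing.
        return None if value is None else json.dumps(value)

    def process_result_value(self, value, dialect):
        # Parse the stored string back into Python objects when reading.
        return None if value is None else json.loads(value)

Base = declarative_base()

class Event(Base):
    __tablename__ = "events"
    id = Column(Integer, primary_key=True)
    payload = Column(JSONAsString(1024))

# Round-trip demo against in-memory SQLite (a stand-in for BigQuery here).
engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
with Session(engine) as session:
    session.add(Event(id=1, payload={"kind": "click", "n": 3}))
    session.commit()
    print(session.get(Event, 1).payload)
```

The same model definition works unchanged against a `bigquery://` engine, with the column emitted as STRING in DDL.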
gotcha The `sqlalchemy-bigquery` dialect relies on `google-auth` for authentication. By default, it attempts to use Application Default Credentials (ADC). If ADC are not properly configured in the execution environment, a `google.auth.exceptions.DefaultCredentialsError` will be raised when attempting to create an engine or connect to BigQuery. ↓
fix Ensure Application Default Credentials are configured. This typically involves setting the `GOOGLE_APPLICATION_CREDENTIALS` environment variable to the path of a service account key file, running `gcloud auth application-default login` for local development, or deploying to a Google Cloud environment where service accounts are automatically available. Consult the Google Cloud documentation on ADC for detailed setup instructions.
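Besides ADC, the dialect also accepts an explicit key file through the `credentials_path` argument to `create_engine`; the project, dataset, and key path below are placeholders, and the engine line is left commented out since it needs a real key file:

```python
from sqlalchemy import create_engine
from sqlalchemy.engine import make_url

# The URL names the billing project (and optionally a default dataset).
url = make_url("bigquery://my-project/my_dataset")
print(url.get_backend_name())  # bigquery

# With a service account key file instead of ADC (path is a placeholder):
# engine = create_engine(url, credentials_path="/path/to/keyfile.json")
```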
Install

pip install sqlalchemy-bigquery[bqstorage]

Install compatibility, verified, last tested: 2026-05-12
| python | os / libc | variant | status | wheel install | import | disk |
|---|---|---|---|---|---|---|
| 3.9 | alpine (musl) | sqlalchemy-bigquery | sdist | - | 0.50s | 96.4M |
| 3.9 | alpine (musl) | bqstorage | sdist | - | 0.45s | 255.3M |
| 3.9 | alpine (musl) | sqlalchemy-bigquery | - | - | 0.46s | 95.1M |
| 3.9 | alpine (musl) | bqstorage | - | - | 0.46s | 254.0M |
| 3.9 | slim (glibc) | sqlalchemy-bigquery | wheel | 10.1s | 0.44s | 93M |
| 3.9 | slim (glibc) | bqstorage | wheel | 12.2s | 0.42s | 234M |
| 3.9 | slim (glibc) | sqlalchemy-bigquery | - | - | 0.42s | 92M |
| 3.9 | slim (glibc) | bqstorage | - | - | 0.41s | 233M |
| 3.10 | alpine (musl) | sqlalchemy-bigquery | sdist | - | 0.49s | 97.2M |
| 3.10 | alpine (musl) | bqstorage | sdist | - | 0.48s | 274.9M |
| 3.10 | alpine (musl) | sqlalchemy-bigquery | - | - | 0.49s | 95.9M |
| 3.10 | alpine (musl) | bqstorage | - | - | 0.48s | 268.3M |
| 3.10 | slim (glibc) | sqlalchemy-bigquery | wheel | 8.4s | 0.38s | 93M |
| 3.10 | slim (glibc) | bqstorage | wheel | 10.5s | 0.40s | 246M |
| 3.10 | slim (glibc) | sqlalchemy-bigquery | - | - | 0.37s | 92M |
| 3.10 | slim (glibc) | bqstorage | - | - | 0.37s | 241M |
| 3.11 | alpine (musl) | sqlalchemy-bigquery | sdist | - | 0.77s | 105.6M |
| 3.11 | alpine (musl) | bqstorage | sdist | - | 0.75s | 285.8M |
| 3.11 | alpine (musl) | sqlalchemy-bigquery | - | - | 0.82s | 104.2M |
| 3.11 | alpine (musl) | bqstorage | - | - | 0.82s | 279.1M |
| 3.11 | slim (glibc) | sqlalchemy-bigquery | wheel | 7.2s | 0.67s | 101M |
| 3.11 | slim (glibc) | bqstorage | wheel | 9.5s | 0.68s | 257M |
| 3.11 | slim (glibc) | sqlalchemy-bigquery | - | - | 0.63s | 100M |
| 3.11 | slim (glibc) | bqstorage | - | - | 0.64s | 252M |
| 3.12 | alpine (musl) | sqlalchemy-bigquery | sdist | - | 0.85s | 96.4M |
| 3.12 | alpine (musl) | bqstorage | sdist | - | 0.88s | 276.2M |
| 3.12 | alpine (musl) | sqlalchemy-bigquery | - | - | 0.87s | 95.0M |
| 3.12 | alpine (musl) | bqstorage | - | - | 0.88s | 269.6M |
| 3.12 | slim (glibc) | sqlalchemy-bigquery | wheel | 6.2s | 0.90s | 92M |
| 3.12 | slim (glibc) | bqstorage | wheel | 8.3s | 0.84s | 248M |
| 3.12 | slim (glibc) | sqlalchemy-bigquery | - | - | 0.84s | 91M |
| 3.12 | slim (glibc) | bqstorage | - | - | 0.83s | 242M |
| 3.13 | alpine (musl) | sqlalchemy-bigquery | sdist | - | 0.81s | 95.8M |
| 3.13 | alpine (musl) | bqstorage | sdist | - | 0.79s | 275.6M |
| 3.13 | alpine (musl) | sqlalchemy-bigquery | - | - | 0.85s | 94.4M |
| 3.13 | alpine (musl) | bqstorage | - | - | 0.88s | 268.9M |
| 3.13 | slim (glibc) | sqlalchemy-bigquery | wheel | 6.7s | 0.77s | 92M |
| 3.13 | slim (glibc) | bqstorage | wheel | 8.4s | 0.76s | 247M |
| 3.13 | slim (glibc) | sqlalchemy-bigquery | - | - | 0.82s | 90M |
| 3.13 | slim (glibc) | bqstorage | - | - | 0.81s | 242M |
Imports

- `create_engine`: from sqlalchemy.engine import create_engine
- `Table, Column, Integer, String, MetaData`: from sqlalchemy import Table, Column, Integer, String, MetaData
- `text`: from sqlalchemy import text
Quickstart (stale, last tested: 2026-04-23)
import os
from sqlalchemy import create_engine, text, Table, MetaData, Column, String, Integer

# Configure your Google Cloud Project ID. This can also be inferred from the environment.
# For local development, ensure GOOGLE_APPLICATION_CREDENTIALS points to your service account key file.
PROJECT_ID = os.environ.get('GCP_PROJECT_ID', 'your-gcp-project-id')
DATASET_ID = os.environ.get('BIGQUERY_DATASET_ID', 'your_dataset_id')
TABLE_NAME = os.environ.get('BIGQUERY_TABLE_NAME', 'your_table_name')

# Connection string format: 'bigquery://[PROJECT_ID]/[DATASET_ID]'
# If PROJECT_ID is omitted, it will try to infer from the environment.
# Authentication typically happens via GOOGLE_APPLICATION_CREDENTIALS env var or gcloud CLI.
engine = create_engine(f'bigquery://{PROJECT_ID}/{DATASET_ID}')

try:
    # Example 1: Execute a raw SQL query
    with engine.connect() as connection:
        result = connection.execute(text(f"SELECT 1 FROM `{PROJECT_ID}.{DATASET_ID}.{TABLE_NAME}` LIMIT 1"))
        print(f"Raw SQL query result: {result.scalar()}")

    # Example 2: Reflect table metadata and query using SQLAlchemy Core
    metadata = MetaData()
    my_table = Table(TABLE_NAME, metadata, autoload_with=engine)
    with engine.connect() as connection:
        # Select all columns from the table, limit to 5 rows
        select_stmt = my_table.select().limit(5)
        result = connection.execute(select_stmt)
        print("\nQuerying table via SQLAlchemy Core (first 5 rows):")
        for row in result:
            print(row)
except Exception as e:
    print(f"An error occurred: {e}")
    print("Please ensure GOOGLE_APPLICATION_CREDENTIALS is set, your project/dataset/table exist, and you have permissions.")
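Since the dialect also supports SQLAlchemy's ORM, here is a minimal ORM sketch; the model and dataset names are illustrative, and the statement is only rendered rather than executed, so no BigQuery credentials are needed (against BigQuery you would run it on a `bigquery://` engine):

```python
from sqlalchemy import Column, Integer, String, select
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class User(Base):
    __tablename__ = "users"
    __table_args__ = {"schema": "my_dataset"}  # BigQuery dataset maps to SQLAlchemy's schema
    id = Column(Integer, primary_key=True)
    name = Column(String)

# Build a query with the ORM; str() renders generic SQL for inspection.
stmt = select(User).where(User.name == "alice").limit(10)
print(stmt)
```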