Google Cloud Bigtable Python Client Library
The `google-cloud-bigtable` library is the official Python client for Google Cloud Bigtable, a fully managed, scalable NoSQL database service designed for high-performance and low-latency applications. It powers many core Google services like Search, Analytics, Maps, and Gmail. Currently at version 2.35.0, the library is actively maintained with frequent updates, adding new features and ensuring compatibility with the Bigtable service.
Install

pip install google-cloud-bigtable

Common errors
error ModuleNotFoundError: No module named 'google.cloud.bigtable'
cause The `google-cloud-bigtable` library is not installed in the Python environment, or the environment where the code is being run does not have access to the installed package.
fix
Install the library using pip: `pip install google-cloud-bigtable`. If using a virtual environment, ensure it is activated.

error google.api_core.exceptions.PermissionDenied: 7 PERMISSION_DENIED: Request had insufficient authentication scopes.
cause The service account or user attempting to access Bigtable does not have the necessary Identity and Access Management (IAM) permissions for the requested operation or resource. This could also manifest as 'Missing IAM permission: bigtable.instances.ping'.
fix
Ensure the service account or user has the appropriate Bigtable IAM roles (e.g., roles/bigtable.user, roles/bigtable.reader, roles/bigtable.admin) on the Bigtable instance or project. Verify Application Default Credentials are correctly configured if running locally.

error google.api_core.exceptions.DeadlineExceeded: The deadline expired before the operation could complete.
cause A Bigtable operation, such as reading or writing a large amount of data or performing a complex scan, took longer than the default or configured timeout. This can be caused by network latency, an overloaded Bigtable cluster, or inefficient schema design leading to hotspotting.
fix
Optimize your Bigtable schema for even data distribution, batch mutations instead of sending them one at a time, keep the client application in the same geographical region as the Bigtable instance, scale up the Bigtable cluster if it is overloaded, or increase the operation timeout in the client configuration if the workload warrants it.
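One way to keep individual RPCs within the deadline is to send mutations in bounded batches rather than in one large call. Below is a minimal, library-free sketch of the batching step; the batch size of 1000 is illustrative, and in real code each batch would be passed to `table.mutate_rows(batch)`:

```python
from itertools import islice

def chunked(rows, batch_size):
    """Yield successive batches of at most batch_size rows."""
    it = iter(rows)
    while batch := list(islice(it, batch_size)):
        yield batch

# In real code, each batch would be sent with table.mutate_rows(batch),
# keeping each RPC small enough to finish well within the deadline.
batches = list(chunked(range(2500), 1000))  # 3 batches: 1000, 1000, 500
```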
error google.api_core.exceptions.NotFound: 404 Not found: Table <table_id>
cause The specified Bigtable instance, table, or column family ID does not exist, or the client is attempting to access a resource that has been deleted or has not yet been created.
fix
Verify that the Bigtable instance, table, and column family IDs in your code are correct and that the resources exist in your Google Cloud project. Ensure there are no typos and that the client is pointing to the correct project and instance.
error google.api_core.exceptions.AlreadyExists: 6 ALREADY_EXISTS: The entity that a client attempted to create already exists.
cause The code is attempting to create a Bigtable resource (e.g., a table) that already exists in the specified instance.
fix
Add logic to check for the resource's existence (e.g., client.instance(instance_id).table(table_id).exists()) before attempting to create it, or implement exception handling to gracefully manage AlreadyExists errors if idempotent creation is desired.

Warnings
breaking Python versions older than 3.7 are no longer supported by recent `google-cloud-bigtable` releases. Python 3.6 support ended with v2.10.1 (2022-06-03), and Python 2.7/3.5 support ended with v1.7.0 (2021-02-09).
fix Upgrade your Python environment to 3.7 or newer to use current versions of the library.
gotcha The `BigtableDataClientAsync` client (introduced in v2.23.0) is designed for `asyncio` codebases. It is generally not recommended to use the async client within an otherwise synchronous application, as it can negate performance benefits and lead to unexpected behavior.
fix Design your application to be fully asynchronous from the ground up to leverage the `BigtableDataClientAsync` client, or stick to the synchronous `Client` for synchronous codebases.
gotcha Suboptimal performance can occur if your Bigtable client application is not running in the same Google Cloud zone as your Bigtable cluster, or if your Bigtable schema is poorly designed, leading to hot-spotting or inefficient reads/writes.
fix Deploy your client applications in the same zone as your Bigtable instance. Follow Bigtable schema design best practices to distribute reads and writes evenly across your table.
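Hot-spotting typically comes from monotonically increasing row keys (e.g., a plain timestamp prefix), which funnel all writes to one tablet. A common mitigation is field promotion: put a high-cardinality identifier at the front of the key so writes spread across the keyspace. A small sketch; the `device#timestamp` layout and field names are illustrative, not a Bigtable API:

```python
def make_row_key(device_id: str, epoch_seconds: int) -> bytes:
    # Promote the device ID to the front so concurrent writes from many
    # devices land on different parts of the keyspace; zero-pad the
    # timestamp so lexicographic order matches chronological order.
    return f"{device_id}#{epoch_seconds:012d}".encode("utf-8")

keys = [make_row_key(d, 1700000000) for d in ("sensor-a", "sensor-b", "sensor-c")]
```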
gotcha The `table.read_row()` method returns `None` if a row with the specified key does not exist. Cell values are returned as bytes and require explicit decoding (e.g., `.decode("utf-8")`) for string representation.
fix Always check for `None` when reading a row: `if read_row: ...`. Remember to decode byte values to strings or other appropriate types when processing data.
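Both steps (the `None` check and the byte decoding) can be wrapped in one helper. The sketch below uses a tiny stand-in for the row object, mimicking only the `cells` mapping of `PartialRowData`, since a real row requires a live table; `latest_cell_text` itself would work unchanged on the real object:

```python
from types import SimpleNamespace

def latest_cell_text(row, family_id, qualifier):
    """Return the newest cell value as str, or None if the row was not found."""
    if row is None:
        return None
    # cells is keyed by family ID, then by qualifier bytes; newest cell first.
    cells = row.cells[family_id][qualifier.encode("utf-8")]
    return cells[0].value.decode("utf-8")

# Stand-in mimicking PartialRowData.cells: {family: {qualifier_bytes: [Cell, ...]}}
fake_row = SimpleNamespace(
    cells={"cf1": {b"greeting": [SimpleNamespace(value=b"Hello Bigtable!")]}}
)
```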
gotcha For new projects, prefer using the native `google-cloud-bigtable` client directly over wrapper libraries like `google-cloud-happybase` (which mimics the Apache HBase API). The native client provides more direct access to Bigtable features and better performance due to avoiding object conversion overhead.
fix Utilize `from google.cloud import bigtable` and its API directly for optimal performance and access to the latest Bigtable features, unless strict Apache HBase API compatibility is a requirement.
breaking The client library requires valid Google Cloud credentials to authenticate and authorize API requests. If credentials are not explicitly provided, it falls back to Application Default Credentials (ADC), which fail when not configured in the execution environment. Common causes: running locally without `gcloud auth application-default login`, a missing `GOOGLE_APPLICATION_CREDENTIALS` environment variable, or insufficient IAM permissions on the executing service account/VM.
fix Ensure your environment is correctly authenticated. For local development, run `gcloud auth application-default login`. In production, set the `GOOGLE_APPLICATION_CREDENTIALS` environment variable to point at a service account key file, grant the service account appropriate roles, or deploy in an environment with managed identities (e.g., GCE, GKE, Cloud Run, App Engine) that provide service account credentials by default. Refer to the Google Cloud authentication documentation for details on setting up ADC.
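The two common ADC setups, as shell commands (the key-file path is illustrative):

```shell
# Local development: create Application Default Credentials interactively
# (requires the gcloud CLI):
#   gcloud auth application-default login

# Alternatively, point ADC at a service account key file:
export GOOGLE_APPLICATION_CREDENTIALS="$HOME/keys/bigtable-sa.json"
```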
Install compatibility: verified (last tested: 2026-05-12)

python  os / libc      status  wheel  install  import  disk
3.9     alpine (musl)  -       -      2.35s    74.1M
3.9     slim (glibc)   -       -      1.33s    72M
3.10    alpine (musl)  -       -      2.54s    73.9M
3.10    slim (glibc)   -       -      1.15s    72M
3.11    alpine (musl)  -       -      3.23s    79.3M
3.11    slim (glibc)   -       -      1.84s    77M
3.12    alpine (musl)  -       -      3.16s    70.7M
3.12    slim (glibc)   -       -      2.17s    69M
3.13    alpine (musl)  -       -      3.00s    70.2M
3.13    slim (glibc)   -       -      2.25s    68M
Imports
- Client
from google.cloud import bigtable

- BigtableDataClientAsync
from google.cloud.bigtable.data import BigtableDataClientAsync
Quickstart (stale; last tested: 2026-04-23)
import datetime
import os

from google.cloud import bigtable
from google.cloud.bigtable import column_family

# Configuration
PROJECT_ID = os.environ.get("GOOGLE_CLOUD_PROJECT", "your-gcp-project-id")  # e.g., "my-project-123"
INSTANCE_ID = os.environ.get("BIGTABLE_INSTANCE_ID", "your-bigtable-instance-id")  # e.g., "my-instance"
TABLE_ID = os.environ.get("BIGTABLE_TABLE_ID", "your-table-id")  # e.g., "my-table"
COLUMN_FAMILY_ID = "cf1"
COLUMN_QUALIFIER = "greeting"
ROW_KEY = "r1"

# Initialize Bigtable client.
# Ensure the GOOGLE_APPLICATION_CREDENTIALS environment variable is set for local dev,
# or that `gcloud auth application-default login` has been run.
# Use admin=True if you need to create/manage tables/instances.
client = bigtable.Client(project=PROJECT_ID, admin=True)
instance = client.instance(INSTANCE_ID)

# Check if the table exists; create it if not.
table = instance.table(TABLE_ID)
if not table.exists():
    print(f"Creating table {TABLE_ID}...")
    # column_families maps each family ID to a garbage-collection rule.
    table.create(column_families={COLUMN_FAMILY_ID: column_family.MaxVersionsGCRule(1)})
    print(f"Table {TABLE_ID} created.")
else:
    print(f"Table {TABLE_ID} already exists.")

# Write a row
print(f"Writing row '{ROW_KEY}'...")
row_entry = table.direct_row(ROW_KEY.encode("utf-8"))
row_entry.set_cell(
    COLUMN_FAMILY_ID,
    COLUMN_QUALIFIER.encode("utf-8"),
    b"Hello Bigtable!",
    timestamp=datetime.datetime.now(datetime.timezone.utc),
)
row_entry.commit()
print(f"Row '{ROW_KEY}' written.")

# Read a row
print(f"Reading row '{ROW_KEY}'...")
read_row = table.read_row(ROW_KEY.encode("utf-8"))
if read_row:
    # Cells are keyed by family ID, then by qualifier bytes; newest cell first.
    cells = read_row.cells[COLUMN_FAMILY_ID][COLUMN_QUALIFIER.encode("utf-8")]
    latest_value = cells[0].value.decode("utf-8")
    timestamp = cells[0].timestamp
    print(f"Read: {COLUMN_FAMILY_ID}:{COLUMN_QUALIFIER} = '{latest_value}' (at {timestamp})")
else:
    print(f"Row '{ROW_KEY}' not found.")

# Optional: delete the table for cleanup
# print(f"Deleting table {TABLE_ID}...")
# table.delete()
# print(f"Table {TABLE_ID} deleted.")