gcloud-aio-storage

version 9.6.4 · verified Tue May 12 · auth: no · python install: verified · quickstart: stale

gcloud-aio-storage is an asyncio-compatible Python client library for Google Cloud Storage. It provides full CRUD operations for buckets and blobs, including streaming support for large files, parallel upload capabilities, and built-in session management. Designed for high-performance cloud storage operations with modern async/await patterns, it is currently at version 9.6.4 and follows an active release cadence.

pip install gcloud-aio-storage
error ImportError/ModuleNotFoundError on `from gcloud.aio.storage import Storage`
cause The import fails because the module path is mistyped or the library is not installed; the pip package gcloud-aio-storage installs under the `gcloud.aio.storage` namespace.
fix Ensure gcloud-aio-storage is installed via pip install gcloud-aio-storage and use the correct import statement: from gcloud.aio.storage import Storage.
error OverflowError: string longer than 2147483647 bytes
cause This error typically occurs when attempting to upload a very large file (over 2GB) by reading its entire content into a Python string, which can exceed memory limits or Python's string size limitations, especially on 32-bit systems. The `gcloud-aio-storage` library, depending on how it's used, might try to buffer the entire file content.
fix For large files, use streaming uploads or upload in chunks. Instead of reading the entire file into memory as a string, pass a file-like object or an asynchronous generator that yields bytes to the upload function, allowing the library to handle the data in smaller parts.
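
A minimal sketch of the file-object approach, assuming upload() accepts an open binary file handle and a force_resumable_upload flag (both present in recent releases); the bucket, object, and path names are placeholders:

import asyncio

from gcloud.aio.storage import Storage

async def upload_large_file(bucket_name: str, object_name: str, path: str) -> None:
    async with Storage() as storage:
        # Hand the open file handle to upload() instead of f.read(): the
        # library can then stream the data rather than buffering the whole
        # file as one giant in-memory string.
        with open(path, 'rb') as f:
            await storage.upload(
                bucket_name, object_name, f,
                force_resumable_upload=True,  # chunked, resumable transfer
            )

asyncio.run(upload_large_file('your-gcs-bucket-name', 'big.bin', '/tmp/big.bin'))
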
error TimeoutError
cause This error occurs when a network operation, such as uploading or downloading a file, exceeds the default or specified timeout duration. This can be caused by large file sizes, slow network connections, or insufficient timeout settings.
fix Increase the timeout parameter in your gcloud.aio.storage method calls (e.g., upload(..., timeout=300) for 300 seconds) or ensure that your network connection is stable and sufficiently fast for the operation.
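
For example, a sketch that raises the per-call timeout to five minutes (bucket and object names are placeholders):

import asyncio

from gcloud.aio.storage import Storage

async def slow_transfer() -> None:
    async with Storage() as storage:
        # Give each call a 300-second budget instead of the short default.
        await storage.upload('your-gcs-bucket-name', 'big.json', b'{}',
                             timeout=300)
        data = await storage.download('your-gcs-bucket-name', 'big.json',
                                      timeout=300)
        print(len(data))

asyncio.run(slow_transfer())
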
error aiohttp.client_exceptions.ClientResponseError: 404, message='Not Found'
cause This 'Not Found' error usually indicates that the specified Google Cloud Storage bucket or object does not exist, or that there's an issue with the GCS emulator setup (e.g., the emulator is running but the target bucket hasn't been created within it).
fix Verify that the bucket name and object path are correct and exist in your Google Cloud project. If using a GCS emulator, ensure the emulator is running and the bucket you're trying to access or create has been provisioned in the emulator before making requests.
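
For the emulator case, a sketch assuming the emulator listens on a local port (placeholder below) and that your gcloud-aio-storage version honors the STORAGE_EMULATOR_HOST convention, as recent releases do:

import asyncio
import os

# Set before importing the client: some versions read this at import time.
os.environ['STORAGE_EMULATOR_HOST'] = 'http://localhost:4443'

from gcloud.aio.storage import Storage

async def verify_bucket(bucket_name: str) -> None:
    async with Storage() as storage:
        # Listing the bucket first surfaces a missing/unprovisioned bucket
        # as an explicit 404 before any upload or download is attempted.
        bucket = storage.get_bucket(bucket_name)
        print(await bucket.list_blobs())

asyncio.run(verify_bucket('your-gcs-bucket-name'))
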
gotcha When downloading large files and using `response.text()`, `aiohttp` (which `gcloud-aio-storage` uses) may default to `chardet` for character encoding detection, which can be very slow. Explicitly setting the `Content-Type` with a `charset` (e.g., `text/plain; charset=utf-8`) when uploading can significantly improve performance for text-based files.
fix Set `contentType` with `charset=utf-8` in object metadata during upload, e.g., `content_type='application/json; charset=utf-8'`.
breaking Support for Python 3.9 was dropped in `gcloud-aio-auth` 5.4.4, a core dependency of every `gcloud-aio` component. Because the auth package is pulled in across the ecosystem, users on Python 3.9 who upgrade any other `gcloud-aio` component may hit dependency-resolution or import failures.
fix Upgrade your Python environment to 3.10 or newer. `gcloud-aio-storage` itself requires `>=3.10, <4.0`.
gotcha Recent releases include fixes related to `auto_decompress` handling in `download_stream` and `ClientSession` settings. Ensure you are not inadvertently overwriting or misconfiguring `auto_decompress` if you have custom `aiohttp.ClientSession` settings.
fix Upgrade to `gcloud-aio-storage` 9.6.4+ to benefit from bugfixes. Review your `auto_decompress` settings, especially if passing a custom `aiohttp.ClientSession`.
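
If you do pass your own session, a sketch of wiring it through the constructor's session argument; set connection-level options there and leave response decompression to the library:

import asyncio

import aiohttp
from gcloud.aio.storage import Storage

async def main() -> None:
    # Tune pooling on the custom session, but do not override
    # auto_decompress: download/download_stream manage decompression.
    async with aiohttp.ClientSession(
            connector=aiohttp.TCPConnector(limit=50)) as session:
        async with Storage(session=session) as storage:
            data = await storage.download('your-gcs-bucket-name', 'object.txt')
            print(len(data))

asyncio.run(main())
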
gotcha When passing custom metadata during an `upload()` operation, ensure it's nested under a 'metadata' key in the dictionary. Incorrectly structured metadata (e.g., top-level key-value pairs) will not be stored as custom metadata.
fix Structure custom metadata as `metadata={'metadata': {'foo': 'bar'}}`.
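
A sketch of a correctly structured call (bucket, object, and keys are placeholders):

import asyncio

from gcloud.aio.storage import Storage

async def upload_with_metadata() -> None:
    async with Storage() as storage:
        # Custom key-value pairs must sit under the nested 'metadata' key;
        # top-level keys in this dict set fixed object properties instead.
        await storage.upload(
            'your-gcs-bucket-name', 'object.txt', b'payload',
            metadata={'metadata': {'foo': 'bar'}},
        )

asyncio.run(upload_with_metadata())
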
gotcha Users have reported issues uploading files larger than 2GB, particularly when running on 32-bit Python installations. This is a common limitation for 32-bit systems regarding file sizes and memory addressing.
fix Use a 64-bit Python environment. For extremely large files, consider resumable uploads or streaming data in chunks.
breaking The `Storage` class constructor no longer accepts a `project` keyword argument. The library is designed to infer the project ID from the environment (e.g., `GOOGLE_CLOUD_PROJECT` environment variable) or from default credentials configured in the execution environment.
fix Remove the `project` argument from the `Storage` constructor. Ensure the `GOOGLE_CLOUD_PROJECT` environment variable is set, or that the application is running in an environment where default credentials are correctly configured for the desired Google Cloud Project.
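
A sketch of the current construction pattern, with project and credentials taken from the environment:

import asyncio

from gcloud.aio.storage import Storage

async def main() -> None:
    # No `project` argument: the project is inferred from
    # GOOGLE_CLOUD_PROJECT or the configured default credentials.
    async with Storage() as storage:
        print(await storage.download('your-gcs-bucket-name', 'object.txt'))

asyncio.run(main())
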
python  os / libc      status  wheel install  import  disk
3.9     alpine (musl)  wheel   -              0.70s   48.9M
3.9     alpine (musl)  -       -              0.71s   48.9M
3.9     slim (glibc)   wheel   7.5s           0.65s   51M
3.9     slim (glibc)   -       -              0.64s   51M
3.10    alpine (musl)  wheel   -              0.74s   63.3M
3.10    alpine (musl)  -       -              0.75s   62.6M
3.10    slim (glibc)   wheel   7.1s           0.54s   65M
3.10    slim (glibc)   -       -              0.57s   65M
3.11    alpine (musl)  wheel   -              0.92s   72.3M
3.11    alpine (musl)  -       -              1.03s   71.6M
3.11    slim (glibc)   wheel   6.1s           0.79s   75M
3.11    slim (glibc)   -       -              0.77s   74M
3.12    alpine (musl)  wheel   -              1.09s   66.7M
3.12    alpine (musl)  -       -              1.12s   66.1M
3.12    slim (glibc)   wheel   5.2s           1.05s   69M
3.12    slim (glibc)   -       -              1.06s   68M
3.13    alpine (musl)  wheel   -              1.12s   66.1M
3.13    alpine (musl)  -       -              1.13s   65.3M
3.13    slim (glibc)   wheel   5.3s           1.01s   68M
3.13    slim (glibc)   -       -              1.07s   68M

This quickstart demonstrates how to initialize the Storage client, upload a byte string, download it, and then delete the object. Ensure Google Cloud authentication is configured (e.g., via the `GOOGLE_APPLICATION_CREDENTIALS` environment variable); the client infers the project from the environment (e.g., `GOOGLE_CLOUD_PROJECT`) or from default credentials rather than taking it as a constructor argument.

import asyncio

from gcloud.aio.storage import Storage

async def main():
    # GOOGLE_APPLICATION_CREDENTIALS (or other ambient credentials) should
    # be set; the client infers the project from the environment (e.g.,
    # GOOGLE_CLOUD_PROJECT) rather than taking a `project` argument.
    bucket_name = 'your-gcs-bucket-name'
    file_name = 'hello_gcloud_aio.txt'
    content = b'Hello, gcloud-aio-storage!'

    async with Storage() as storage:
        print(f"Uploading '{file_name}' to bucket '{bucket_name}'...")
        # Explicit charset avoids slow encoding detection on text downloads
        # (see the chardet gotcha above).
        await storage.upload(bucket_name, file_name, content,
                             content_type='text/plain; charset=utf-8')
        print(f"'{file_name}' uploaded successfully.")

        print(f"Downloading '{file_name}' from bucket '{bucket_name}'...")
        downloaded_content = await storage.download(bucket_name, file_name)
        print(f"Downloaded content: {downloaded_content.decode()}")

        print(f"Deleting '{file_name}' from bucket '{bucket_name}'...")
        await storage.delete(bucket_name, file_name)
        print(f"'{file_name}' deleted successfully.")

if __name__ == '__main__':
    asyncio.run(main())