{"id":14588,"library":"fs-s3fs","title":"Amazon S3 Filesystem for PyFilesystem2","description":"fs-s3fs is a PyFilesystem2 adapter for Amazon S3, allowing you to interact with S3 buckets using the standard PyFilesystem2 API. It's built on top of boto3 and provides a consistent way to manage files and directories on S3. The current version is 1.1.1, with the last release in 2019, indicating a slow maintenance cadence.","status":"maintenance","version":"1.1.1","language":"en","source_language":"en","source_url":"https://github.com/PyFilesystem/s3fs","tags":["filesystem","s3","aws","pyfilesystem","cloud"],"install":[{"cmd":"pip install fs-s3fs","lang":"bash","label":"Install latest version"}],"dependencies":[{"reason":"Core PyFilesystem2 library, on which fs-s3fs is built.","package":"fs"},{"reason":"AWS SDK for Python, used for interacting with S3.","package":"boto3"}],"imports":[{"symbol":"S3FS","correct":"from fs_s3fs import S3FS"}],"quickstart":{"code":"import os\nfrom fs_s3fs import S3FS\n\n# For local testing, ensure AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are set\n# in your environment variables. You can also pass them directly to the constructor.\n# AWS_DEFAULT_REGION can also be set, or pass region= to the constructor if needed.\n\naccess_key = os.environ.get('AWS_ACCESS_KEY_ID', 'YOUR_ACCESS_KEY')\nsecret_key = os.environ.get('AWS_SECRET_ACCESS_KEY', 'YOUR_SECRET_KEY')\nbucket_name = 'your-unique-s3-bucket-name' # IMPORTANT: Replace with an existing S3 bucket name\n\nif access_key == 'YOUR_ACCESS_KEY' or secret_key == 'YOUR_SECRET_KEY':\n    print(\"WARNING: Please set AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables.\")\n    print(\"         This quickstart will likely fail without valid AWS credentials.\")\n\ntry:\n    # Initialize S3FS. If the credential arguments are None, boto3 falls back to its\n    # standard chain (environment variables, config files, IAM roles).\n    fs_s3 = S3FS(bucket_name=bucket_name,\n                 aws_access_key_id=access_key if access_key != 'YOUR_ACCESS_KEY' else None,\n                 aws_secret_access_key=secret_key if secret_key != 'YOUR_SECRET_KEY' else None)\n\n    print(f\"Successfully connected to S3 bucket: {bucket_name}\")\n\n    # Example 1: List contents of the bucket root. scandir() yields Info objects\n    # with attributes such as .name and .is_dir.\n    print(f\"\\nContents of '{bucket_name}' (root):\")\n    for info in fs_s3.scandir('/'):\n        print(f\"- {info.name} {'(directory)' if info.is_dir else '(file)'}\")\n\n    # Example 2: Create a directory\n    new_dir = 'test_s3fs_dir'\n    fs_s3.makedir(new_dir, recreate=True)\n    print(f\"\\nDirectory '{new_dir}' created.\")\n\n    # Example 3: Create a file inside the new directory\n    file_path = f\"{new_dir}/hello.txt\"\n    with fs_s3.open(file_path, 'w') as f:\n        f.write(\"Hello from fs-s3fs on S3!\")\n    print(f\"File '{file_path}' created with content.\")\n\n    # Example 4: Read the file\n    with fs_s3.open(file_path, 'r') as f:\n        content = f.read()\n    print(f\"Content of '{file_path}': '{content}'\")\n\n    # Example 5: Remove the file (clean up)\n    fs_s3.remove(file_path)\n    print(f\"File '{file_path}' removed.\")\n\n    # Example 6: Remove the (now empty) directory\n    fs_s3.removedir(new_dir)\n    print(f\"Directory '{new_dir}' removed.\")\n\nexcept Exception as e:\n    print(f\"\\nAn error occurred: {e}\")\nfinally:\n    if 'fs_s3' in locals():\n        fs_s3.close() # Important to close the filesystem\n        print(\"\\nS3FS connection closed.\")","lang":"python","description":"This quickstart demonstrates how to initialize an `S3FS` object, connect to an S3 bucket, and create, read, list, and delete files and directories. 
Ensure AWS credentials are set via environment variables or passed directly, and replace `your-unique-s3-bucket-name` with an actual S3 bucket you have access to."},"warnings":[{"fix":"Ensure your project's `fs` and `boto3` versions are compatible with `fs-s3fs==1.1.1` and consider potential API changes in those libraries.","message":"Version 1.1.1 (released 2019) bumped its dependencies (PyFilesystem2 to 2.4, boto3 to 1.9). This may cause compatibility issues if your project pins older versions of these underlying libraries, or if their APIs changed in ways that are not backward compatible with existing code.","severity":"breaking","affected_versions":"1.1.1 and later"},{"fix":"Thoroughly test `fs-s3fs` with your specific versions of `boto3` and `PyFilesystem2`. For new projects, evaluate whether a more actively maintained S3 filesystem backend is available if you require cutting-edge features or strict compatibility with the latest dependency versions.","message":"The library has not been updated since August 2019. While functional, it may not be fully compatible with the latest versions of `boto3` or `PyFilesystem2` released since then, and may lack support for newer S3 features (e.g., newer storage classes or access points).","severity":"gotcha","affected_versions":"All versions since 1.1.1"},{"fix":"Always ensure valid AWS credentials are available to `boto3` through environment variables, IAM roles, or by explicitly passing `aws_access_key_id` and `aws_secret_access_key` (and optionally `region`) to the `S3FS` constructor.","message":"Credentials for S3 are handled by `boto3`. If `aws_access_key_id` and `aws_secret_access_key` are not explicitly passed to the `S3FS` constructor, `boto3` uses its standard credential chain (environment variables like `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY`, IAM roles, AWS config files). 
Incorrect credential setup is a common source of connection failures.","severity":"gotcha","affected_versions":"All versions"},{"fix":"Always provide the `bucket_name` argument when instantiating `S3FS`.","message":"The `bucket_name` parameter is mandatory for the `S3FS` constructor. Omitting it raises a `TypeError`, as the filesystem needs a specific S3 bucket as its root to operate correctly.","severity":"gotcha","affected_versions":"All versions"}],"env_vars":null,"last_verified":"2026-04-15T00:00:00.000Z","next_check":"2026-07-14T00:00:00.000Z","problems":[],"ecosystem":"pypi"}