AWS Logging Handlers
AWS Logging Handlers is a Python library that provides multithreaded logging handlers for streaming log records to Amazon S3 and Kinesis Data Streams. Records are uploaded asynchronously by a pool of worker threads, and S3 logs can be gzip-compressed. The library is actively maintained; version 2.0.5 is the current release at the time of writing.
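The worker-thread pattern the library describes, where `emit()` only enqueues a record and background threads drain the queue, can be sketched with the standard library alone. The class below is an illustrative stand-in, not the library's actual internals:

```python
import logging
import queue
import threading

class AsyncUploadHandler(logging.Handler):
    """Toy illustration of the worker-thread pattern: emit() only
    enqueues the formatted record, and background workers "upload" it.
    A sketch of the idea, not aws_logging_handlers' implementation."""

    def __init__(self, workers=1):
        super().__init__()
        self.queue = queue.Queue()
        self.uploaded = []  # stands in for the S3 bucket / Kinesis stream
        self.workers = [threading.Thread(target=self._drain, daemon=True)
                        for _ in range(workers)]
        for t in self.workers:
            t.start()

    def emit(self, record):
        # The logging call returns immediately; no network I/O here.
        self.queue.put(self.format(record))

    def _drain(self):
        while True:
            msg = self.queue.get()
            if msg is None:  # sentinel tells this worker to stop
                break
            self.uploaded.append(msg)  # real code would PUT to AWS here

    def close(self):
        for _ in self.workers:
            self.queue.put(None)  # one sentinel per worker
        for t in self.workers:
            t.join()
        super().close()

handler = AsyncUploadHandler(workers=2)
logger = logging.getLogger("async_sketch")
logger.setLevel(logging.INFO)
logger.addHandler(handler)
logger.info("hello")
logger.info("world")
logger.removeHandler(handler)
handler.close()  # drains the queue before returning
```

Because the queue is FIFO and `close()` enqueues the stop sentinels last, every buffered record is "uploaded" before the workers exit; this is the same reason a clean shutdown matters for the real handlers.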
Common errors
- `ModuleNotFoundError: No module named 'aws_logging_handlers.s3'`
  Cause: This typically occurs when upgrading from `aws-logging-handlers` 0.x to 2.x. The module layout was refactored, renaming `s3` (lowercase) to `S3` (capitalized).
  Fix: Update your imports: change `from aws_logging_handlers.s3 import S3Handler` to `from aws_logging_handlers.S3 import S3Handler`. Apply the same change for `KinesisHandler` (`kinesis` becomes `Kinesis`).
- `botocore.exceptions.ClientError: An error occurred (ExpiredToken) when calling the PutObject operation: The provided token has expired.`
  Cause: The AWS credentials used by the Boto3 session (environment variables, shared credentials file, or IAM role) are invalid, expired, or lack the permissions needed for the logging operation.
  Fix: Verify your AWS credentials and IAM permissions. Ensure `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, and `AWS_REGION` are set correctly and not expired. If using an IAM role, confirm it allows `s3:PutObject` on the target S3 bucket and `kinesis:PutRecord`/`kinesis:PutRecords` on the target Kinesis stream.
- Logs are not appearing in the configured S3 bucket or Kinesis stream, or are significantly delayed.
  Cause: Common causes include `logging.shutdown()` never being called (leaving buffers un-flushed), too few worker threads for the log volume, network connectivity problems, or an incorrect bucket/stream name or region.
  Fix:
  1. Call `logging.shutdown()` at application exit.
  2. Increase the `workers` parameter on `S3Handler` or `KinesisHandler` for high log volumes.
  3. Check network connectivity to AWS.
  4. Verify that `bucket_name`, `stream_name`, and `session_kwargs={'region_name': ...}` match your AWS setup.
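For the credential errors above, a cheap fail-fast check before wiring up the handlers is to look for the expected environment variables. This helper is an illustrative sketch (the function name is ours, not the library's); note that credentials can also come from `~/.aws/credentials` or an IAM role, so an empty result is a hint, not a guarantee:

```python
import os

def missing_aws_env_vars(env=None):
    """Return the expected AWS credential variables that are unset.

    Hypothetical helper for fail-fast diagnostics. Credentials may also
    come from the shared credentials file or an IAM role, so missing
    environment variables are not necessarily an error.
    """
    env = os.environ if env is None else env
    required = ("AWS_ACCESS_KEY_ID", "AWS_SECRET_ACCESS_KEY", "AWS_REGION")
    return [name for name in required if not env.get(name)]
```

For a stronger check, calling `get_caller_identity()` on a Boto3 STS client fails immediately on invalid or expired credentials, before any log record is dropped.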
Warnings
- breaking Major architectural and directory tree refactoring in version 2.0.1 introduced breaking changes, particularly in import paths and potentially handler instantiation. Code written for 0.x versions will likely require updates.
- gotcha Failure to call `logging.shutdown()` can result in lost log messages, especially in short-lived applications (like AWS Lambda functions or scripts). The library uses worker threads for asynchronous uploads, and these threads need a signal to flush remaining buffers before the application exits.
- gotcha The S3 bucket and Kinesis stream specified in the handler configuration must already exist and have appropriate IAM permissions for the AWS credentials being used. The library does not create these resources.
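The lost-logs gotcha can be seen with the standard library alone: `logging.handlers.MemoryHandler` buffers records much like the asynchronous AWS handlers do, and the buffered record only reaches its destination once `logging.shutdown()` flushes it. The classes here are stand-ins, not the library's own:

```python
import logging
from logging.handlers import MemoryHandler

received = []

class ListHandler(logging.Handler):
    """Collects messages so the flush can be observed without AWS."""
    def emit(self, record):
        received.append(record.getMessage())

# MemoryHandler buffers records (flushing only on capacity or ERROR),
# standing in for the buffered asynchronous AWS handlers.
buffered = MemoryHandler(capacity=100, target=ListHandler())

logger = logging.getLogger("shutdown_demo")
logger.setLevel(logging.INFO)
logger.addHandler(buffered)

logger.info("buffered message")
assert received == []       # still in the buffer; lost if the process exits now

logging.shutdown()          # flushes and closes every registered handler
assert received == ["buffered message"]
```

In short-lived processes, registering the call up front with `atexit.register(logging.shutdown)` ensures the flush happens even on early exits.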
Install
- `pip install aws-logging-handlers`
Imports
- S3Handler (in 2.x the module is capitalized `S3`; the lowercase `s3` path from 0.x no longer exists)
from aws_logging_handlers.S3 import S3Handler
- KinesisHandler (likewise `Kinesis`, not the 0.x lowercase `kinesis`)
from aws_logging_handlers.Kinesis import KinesisHandler
Quickstart
import logging
import os
from aws_logging_handlers.S3 import S3Handler
from aws_logging_handlers.Kinesis import KinesisHandler
# Configuration values (replace with your actual bucket and stream names)
bucket_name = os.environ.get('AWS_S3_LOG_BUCKET', 'your-s3-log-bucket')
kinesis_stream_name = os.environ.get('AWS_KINESIS_LOG_STREAM', 'your-kinesis-log-stream')
aws_region = os.environ.get('AWS_REGION', 'us-east-1')
# Configure S3 handler (logs rotate every 5MB or 120 seconds)
s3_handler = S3Handler(
    log_group='test_log_s3',
    bucket_name=bucket_name,
    workers=3,  # number of upload worker threads
    session_kwargs={'region_name': aws_region}
)
# Configure Kinesis handler
kinesis_handler = KinesisHandler(
    log_group='test_log_kinesis',
    stream_name=kinesis_stream_name,
    workers=1,  # number of upload worker threads
    session_kwargs={'region_name': aws_region}
)
# Set up formatter
formatter = logging.Formatter(
    '[%(asctime)s] %(filename)s:%(lineno)d %(levelname)s - %(message)s'
)
s3_handler.setFormatter(formatter)
kinesis_handler.setFormatter(formatter)
# Get logger and add handlers
logger = logging.getLogger('my_app')
logger.setLevel(logging.INFO)
logger.addHandler(s3_handler)
logger.addHandler(kinesis_handler)
# Log some messages
logger.info("This is an info message to S3 and Kinesis.")
logger.warning("A warning occurred in the application.")
logger.error("An error message that should go to AWS services.")
# Ensure all buffered logs are flushed and workers shut down gracefully
logging.shutdown()
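Because each handler has its own level, the two destinations need not receive the same records; for example, everything can go to S3 while only errors reach the higher-cost Kinesis stream. A runnable sketch with stand-in `StreamHandler`s in place of the AWS handlers from the quickstart:

```python
import io
import logging

# Stand-in handlers so the sketch runs without AWS; in practice these
# would be the s3_handler and kinesis_handler configured above.
s3_buf, kinesis_buf = io.StringIO(), io.StringIO()
s3_like = logging.StreamHandler(s3_buf)
kinesis_like = logging.StreamHandler(kinesis_buf)

s3_like.setLevel(logging.INFO)        # everything to the S3-like sink
kinesis_like.setLevel(logging.ERROR)  # only errors to the Kinesis-like sink

logger = logging.getLogger("routing_demo")
logger.setLevel(logging.INFO)
logger.addHandler(s3_like)
logger.addHandler(kinesis_like)

logger.info("routine event")      # reaches only the S3-like handler
logger.error("something broke")   # reaches both handlers
```

The same `setLevel` calls work on `S3Handler` and `KinesisHandler` instances, since both are standard `logging.Handler` subclasses.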