Filehash

0.2.dev1 · active · verified Tue Apr 14

Python module and command-line tool that wraps `hashlib` and `zlib` to generate checksums/hashes of files and directories. It supports algorithms such as MD5, the SHA family, Adler-32, and CRC32, and handles large files efficiently by processing them in chunks. The `.dev` suffix on the version number (0.2.dev1) indicates the package is still under active development.
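The chunked processing mentioned above can be sketched with `hashlib` alone. This is an illustrative stand-in, not filehash's actual implementation; the 4 KiB chunk size is an assumed default.

```python
import hashlib

def hash_file_chunked(path, algorithm="sha256", chunk_size=4096):
    """Hash a file without loading it into memory all at once."""
    digest = hashlib.new(algorithm)
    with open(path, "rb") as f:
        # Read fixed-size chunks until EOF; iter() with a b"" sentinel
        # stops once read() returns an empty bytes object.
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()
```

Because only one chunk is in memory at a time, peak memory stays constant regardless of file size.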

Warnings

Install

Imports

Quickstart

This quickstart shows how to construct the `FileHash` class and compute the hash of a single file with different algorithms such as SHA-256 and MD5. It creates a temporary file so the example is runnable, and cleans it up afterwards. The `FileHash` constructor also accepts an optional `chunk_size` parameter for tuning performance on very large files.

import os
import tempfile
from filehash import FileHash

# Create a temporary file for demonstration
with tempfile.NamedTemporaryFile(mode='w', delete=False, encoding='utf-8') as temp_file:
    temp_file.write('This is a temporary file for hashing demonstration.')
    temp_file.write('\nAnother line of content.')
    temp_file_path = temp_file.name

try:
    # Initialize FileHash with a desired algorithm (defaults to 'sha256')
    file_hasher = FileHash('sha256')

    # Calculate the hash of the temporary file
    file_hash = file_hasher.hash_file(temp_file_path)
    print(f"SHA256 hash of '{os.path.basename(temp_file_path)}': {file_hash}")

    # You can also specify other algorithms, e.g., MD5
    file_hasher_md5 = FileHash('md5')
    md5_hash = file_hasher_md5.hash_file(temp_file_path)
    print(f"MD5 hash of '{os.path.basename(temp_file_path)}': {md5_hash}")

    # FileHash also supports directory hashing via hash_dir();
    # this quickstart sticks to a single file.

except Exception as e:
    print(f"An error occurred: {e}")
finally:
    # Clean up the temporary file
    if os.path.exists(temp_file_path):
        os.remove(temp_file_path)
