SparkMD5

3.0.2 · active · verified Sun Apr 19

SparkMD5 is a JavaScript library providing a fast, efficient implementation of the MD5 hashing algorithm. Currently at version 3.0.2, it offers both one-shot hashing (the static `.hash()` method) and incremental hashing (instance-based `.append()` and `.end()`), making it well suited to processing large amounts of data, such as files read in chunks. Its core differentiators include optimized performance based on the JKM md5 library, robust UTF-8 string conversion, and fixes for overflow issues that affected earlier versions when hashing very large inputs. It supports ArrayBuffers and integrates with CommonJS and AMD module loaders, working in both browsers and Node.js, though it is particularly aimed at browser usage. The incremental API keeps memory usage low when processing large files, making the library ideal for client-side file uploads and large data processing.

Quickstart

Demonstrates how to incrementally hash a large file in a web browser using ArrayBuffers and the File API, splitting the file into chunks to manage memory efficiently during the hashing process.

// Assumes the SparkMD5 script is already loaded (e.g. via <script src="spark-md5.min.js">).
document.getElementById('file').addEventListener('change', function () {
    var blobSlice = File.prototype.slice || File.prototype.mozSlice || File.prototype.webkitSlice,
        file = this.files[0],
        chunkSize = 2097152,                             // Read in chunks of 2MB
        chunks = Math.ceil(file.size / chunkSize),
        currentChunk = 0,
        spark = new SparkMD5.ArrayBuffer(),
        fileReader = new FileReader();

    fileReader.onload = function (e) {
        console.log('read chunk nr', currentChunk + 1, 'of', chunks);
        spark.append(e.target.result);                   // Append array buffer
        currentChunk++;

        if (currentChunk < chunks) {
            loadNext();
        } else {
            console.log('finished loading');
            console.info('computed hash', spark.end());  // Compute hash
        }
    };

    fileReader.onerror = function () {
        console.warn('oops, something went wrong.');
    };

    function loadNext() {
        var start = currentChunk * chunkSize,
            end = ((start + chunkSize) >= file.size) ? file.size : start + chunkSize;

        fileReader.readAsArrayBuffer(blobSlice.call(file, start, end));
    }

    loadNext();
});
