{"id":12752,"library":"spark-md5","title":"SparkMD5","description":"SparkMD5 is a JavaScript library providing a fast and efficient implementation of the MD5 hashing algorithm. Currently at version 3.0.2, it offers both normal hashing (the static `.hash()` method) and incremental hashing (the instance-based `.append()` and `.end()` methods), making it well-suited for processing large amounts of data, such as files read in chunks. Its core differentiators include optimized performance based on the JKM md5 library, robust UTF-8 string conversion, and fixes for overflow issues that produced incorrect hashes for very large inputs in earlier versions. It supports ArrayBuffers and integrates seamlessly into CommonJS and AMD environments, functioning in both browser and Node.js contexts, though it is primarily aimed at browser usage. The incremental API keeps memory usage low during large file operations, making the library well-suited to client-side file uploads and large-scale data processing.","status":"active","version":"3.0.2","language":"javascript","source_language":"en","source_url":"ssh://git@github.com/satazor/js-spark-md5","tags":["javascript","md5","fast","spark","incremental"],"install":[{"cmd":"npm install spark-md5","lang":"bash","label":"npm"},{"cmd":"yarn add spark-md5","lang":"bash","label":"yarn"},{"cmd":"pnpm add spark-md5","lang":"bash","label":"pnpm"}],"dependencies":[],"imports":[{"note":"In modern ESM projects, SparkMD5 is consumed as a default export; do not use `require()` there. It exposes both static methods (e.g., SparkMD5.hash) and a constructor for incremental hashing (new SparkMD5()). `require()` remains appropriate only in CommonJS environments.","wrong":"const SparkMD5 = require('spark-md5');","symbol":"SparkMD5","correct":"import SparkMD5 from 'spark-md5';"},{"note":"The ArrayBuffer-specific hashing utility is exposed as a property on the main SparkMD5 default export, not as a separate named export. Access it via the imported `SparkMD5` object.","wrong":"import { ArrayBuffer } from 'spark-md5';","symbol":"SparkMD5.ArrayBuffer","correct":"import SparkMD5 from 'spark-md5';\nconst sparkAB = new SparkMD5.ArrayBuffer();"},{"note":"The direct hashing method (`.hash()`) is a static utility function on the SparkMD5 class, not an instance method. For incremental hashing, create an instance with `new SparkMD5()` and use `.append()` and `.end()` instead.","wrong":"const spark = new SparkMD5(); spark.hash('your string');","symbol":"SparkMD5.hash","correct":"import SparkMD5 from 'spark-md5';\nconst hexHash = SparkMD5.hash('your string');"}],"quickstart":{"code":"document.getElementById('file').addEventListener('change', function () {\n    var blobSlice = File.prototype.slice || File.prototype.mozSlice || File.prototype.webkitSlice,\n        file = this.files[0],\n        chunkSize = 2097152,                             // Read in chunks of 2MB\n        chunks = Math.ceil(file.size / chunkSize),\n        currentChunk = 0,\n        spark = new SparkMD5.ArrayBuffer(),\n        fileReader = new FileReader();\n\n    fileReader.onload = function (e) {\n        console.log('read chunk nr', currentChunk + 1, 'of', chunks);\n        spark.append(e.target.result);                   // Append array buffer\n        currentChunk++;\n\n        if (currentChunk < chunks) {\n            loadNext();\n        } else {\n            console.log('finished loading');\n            console.info('computed hash', spark.end());  // Compute hash\n        }\n    };\n\n    fileReader.onerror = function () {\n        console.warn('oops, something went wrong.');\n    };\n\n    function loadNext() {\n        var start = currentChunk * chunkSize,\n            end = ((start + chunkSize) >= file.size) ? file.size : start + chunkSize;\n\n        fileReader.readAsArrayBuffer(blobSlice.call(file, start, end));\n    }\n\n    loadNext();\n});\n","lang":"javascript","description":"Demonstrates how to incrementally hash a large file in a web browser using ArrayBuffers and the File API, splitting the file into chunks so memory usage stays bounded during hashing."},"warnings":[{"fix":"Upgrade to SparkMD5 version 3.x or later to ensure correct computations for large data sets. Always verify hash consistency after major version upgrades, especially when data integrity is critical.","message":"Earlier versions of SparkMD5 (prior to internal fixes) could yield incorrect MD5 hashes for extremely large inputs due to internal overflow issues. If upgrading from a very old version (e.g., pre-2.x), verify hash consistency for large files after the upgrade.","severity":"breaking","affected_versions":"<=2.x"},{"fix":"For general string input, always use `SparkMD5#append(str)`, which correctly handles UTF-8 encoding. Only use `appendBinary(str)` if you are certain the input is a true binary string rather than a standard UTF-8 encoded string.","message":"The `SparkMD5#appendBinary(str)` method accepts 'binary strings', which were historically produced by deprecated browser APIs such as `FileReader.readAsBinaryString()`. Using this method with ordinary JavaScript strings may produce incorrect hashes if they contain non-ASCII characters, because `appendBinary` performs no UTF-8 encoding.","severity":"gotcha","affected_versions":">=1.0"},{"fix":"For Chrome, launch the browser with the `--allow-file-access-from-files` flag. Alternatively, serve your HTML file via a local web server (e.g., `http-server`, `serve`, or a simple Python web server) to avoid these browser-level security restrictions.","message":"When testing file hashing locally over the `file://` protocol, browsers such as Chrome often impose security restrictions that prevent `FileReader` from accessing local files. This can surface as a `DOMException` or `SecurityError` during file operations.","severity":"gotcha","affected_versions":">=1.0"}],"env_vars":null,"last_verified":"2026-04-19T00:00:00.000Z","next_check":"2026-07-18T00:00:00.000Z","problems":[{"fix":"For direct, non-incremental hashing of a single string, call `SparkMD5.hash('your string')`. For incremental hashing, create an instance: `const spark = new SparkMD5(); spark.append('...'); spark.end();`","cause":"This error occurs when `hash()` is called on an *instance* of SparkMD5 (e.g., a variable holding `new SparkMD5()`) rather than as a static method on the class itself; instances expose `append()` and `end()`, not `hash()`.","error":"TypeError: SparkMD5.hash is not a function"},{"fix":"Ensure your import statement matches your module system: `import SparkMD5 from 'spark-md5';` for ES Modules (often with `esModuleInterop: true` in TypeScript) or `const SparkMD5 = require('spark-md5');` for CommonJS. If using TypeScript, check your `tsconfig.json` for the `allowSyntheticDefaultImports` and `esModuleInterop` options.","cause":"This typically indicates a problem with how the `spark-md5` module is imported or required, especially when mixing CommonJS (`require()`) and ES Modules (`import`) or when the bundler is misconfigured.","error":"TypeError: SparkMD5 is not a constructor"},{"fix":"Ensure your target web browser or runtime environment supports the necessary File and Blob APIs. For very old browsers, consider polyfills for these APIs where available, or specify a more modern browser as a minimum requirement for your application.","cause":"This error often arises when `File.prototype.slice` or related Blob/File API methods are invoked in an environment that lacks them, such as very old web browsers or certain non-browser JavaScript runtimes.","error":"Uncaught DOMException: The operation is not supported."}],"ecosystem":"npm"}