SparkMD5
SparkMD5 is a JavaScript library providing a fast and efficient implementation of the MD5 hashing algorithm. Currently at version 3.0.2, it offers both one-shot hashing (the static `SparkMD5.hash()` method) and incremental hashing (instance-based `append()` and `end()`), making it well suited to processing large amounts of data, such as files read in chunks. Its core strengths include performance derived from the JKM md5 implementation, correct UTF-8 string conversion, and fixes for the overflow issues that caused earlier versions to produce incorrect hashes for very large inputs. It supports ArrayBuffers and integrates with CommonJS and AMD environments, working in both the browser and Node.js, though it is most commonly used in the browser. The incremental API keeps memory usage low for large file operations, making the library a good fit for client-side file uploads and other large-data processing.
Common errors
- TypeError: SparkMD5.hash is not a function
  - Cause: `hash()` was called on an *instance* of SparkMD5 (e.g., `new SparkMD5().hash()`) instead of as a static method on the `SparkMD5` object itself.
  - Fix: For direct, non-incremental hashing of a single string, call `SparkMD5.hash('your string')`. For incremental hashing, create an instance: `const spark = new SparkMD5(); spark.append('...'); spark.end();`
- TypeError: SparkMD5 is not a constructor
  - Cause: The `spark-md5` module is being imported or required incorrectly, typically when mixing CommonJS (`require()`) and ES Modules (`import`) or with a misconfigured bundler.
  - Fix: Match the import to your module system: `import SparkMD5 from 'spark-md5';` for ES Modules (often with `esModuleInterop: true` in TypeScript) or `const SparkMD5 = require('spark-md5');` for CommonJS. If using TypeScript, check `tsconfig.json` for the `allowSyntheticDefaultImports` and `esModuleInterop` options.
- Uncaught DOMException: The operation is not supported.
  - Cause: `File.prototype.slice` or a related Blob/File API method was invoked in an environment that lacks support for it, such as a very old web browser or a non-browser JavaScript runtime.
  - Fix: Ensure the target browser or runtime supports the necessary File and Blob APIs. For very old browsers, use polyfills where available, or raise your application's minimum supported browser version.
Warnings
- breaking Earlier versions of SparkMD5 (prior to internal fixes) could yield incorrect MD5 hashes for extremely large data inputs due to internal overflow issues. If upgrading from a very old version (e.g., pre-2.x), verify hash consistency for large files after the upgrade.
- gotcha The `SparkMD5#appendBinary(str)` method is designed to accept 'binary strings' which were historically generated by deprecated browser APIs like `FileReader.readAsBinaryString()`. Using this method with standard JavaScript strings may lead to incorrect hash results if the strings contain non-ASCII characters, as `appendBinary` does not perform UTF-8 encoding.
- gotcha When testing file hashing in a local browser environment using the `file://` protocol, browsers like Chrome often impose security restrictions that prevent `FileReader` from accessing local files. This can result in `DOMException` or `SecurityError` during file operations.
Install
- npm install spark-md5
- yarn add spark-md5
- pnpm add spark-md5
Imports
- SparkMD5
  `const SparkMD5 = require('spark-md5');` (CommonJS) or `import SparkMD5 from 'spark-md5';` (ES Modules)
- SparkMD5.ArrayBuffer
  `import SparkMD5 from 'spark-md5'; const sparkAB = new SparkMD5.ArrayBuffer();` (access `ArrayBuffer` as a property of the default export rather than as a named import, which would shadow the global `ArrayBuffer`)
- SparkMD5.hash
  `import SparkMD5 from 'spark-md5'; const hexHash = SparkMD5.hash('your string');` (`hash` is static; do not call it on an instance)
Quickstart
```javascript
document.getElementById('file').addEventListener('change', function () {
    var blobSlice = File.prototype.slice || File.prototype.mozSlice || File.prototype.webkitSlice,
        file = this.files[0],
        chunkSize = 2097152, // Read in chunks of 2MB
        chunks = Math.ceil(file.size / chunkSize),
        currentChunk = 0,
        spark = new SparkMD5.ArrayBuffer(),
        fileReader = new FileReader();

    fileReader.onload = function (e) {
        console.log('read chunk nr', currentChunk + 1, 'of', chunks);
        spark.append(e.target.result); // Append array buffer
        currentChunk++;

        if (currentChunk < chunks) {
            loadNext();
        } else {
            console.log('finished loading');
            console.info('computed hash', spark.end()); // Compute hash
        }
    };

    fileReader.onerror = function () {
        console.warn('oops, something went wrong.');
    };

    function loadNext() {
        var start = currentChunk * chunkSize,
            end = ((start + chunkSize) >= file.size) ? file.size : start + chunkSize;

        fileReader.readAsArrayBuffer(blobSlice.call(file, start, end));
    }

    loadNext();
});
```