Knox S3 Client
Knox is an Amazon S3 client library for Node.js, last updated in 2015 with version 0.9.3. It provided a familiar, low-level HTTP-client-like API for S3 operations such as `get()`, `put()`, and streaming uploads/downloads. At the time, it offered convenience methods like `putFile()` and `putStream()` for common file operations. Designed for Node.js environments from version 0.8 upwards, it was a common choice for S3 integration before the widespread adoption of the official AWS SDK v2/v3. However, the project is no longer maintained, with its last commit over eight years ago, and it is now considered abandoned. Users are strongly advised to migrate to the official AWS SDK for JavaScript for any new or existing S3 interactions due to security and compatibility concerns with unmaintained software.
Common errors
- TypeError: knox.createClient is not a function
  cause: Attempting to use ES module import syntax (`import { createClient } from 'knox'`) with an older CommonJS-only library.
  fix: Change your import statement to `const knox = require('knox');` in a CommonJS context.

- Error: SignatureDoesNotMatch
  cause: Incorrect AWS credentials (key, secret), or an outdated signature algorithm used by Knox that is no longer accepted by S3 for certain operations or regions. This is common with older S3 clients.
  fix: Double-check your `key` and `secret` values. If they are correct, this indicates an incompatibility with modern S3 authentication, reinforcing the need to migrate to the official AWS SDK.

- Request body too small
  cause: When uploading string data directly with `client.put()`, the `Content-Length` header was incorrectly set to `string.length` instead of `Buffer.byteLength(string)`, leading to a mismatch in the expected byte count.
  fix: Calculate `Content-Length` using `Buffer.byteLength(yourString)` for accurate byte sizing in the HTTP header.

- UnknownEndpoint: Inaccessible host: `bucket.s3.amazonaws.com`. This service may not be available in the `us-west-2` region.
  cause: Misconfigured S3 region or endpoint, or the older Knox client may not correctly resolve specific S3 regional endpoints, especially for newer regions or custom S3-compatible services.
  fix: Ensure the `region` option in `createClient` is correct. If using a custom S3-compatible service, also provide the `endpoint` option. If the issue persists, the client is likely too old for the S3 infrastructure.

- Request entity too large
  cause: Attempting to upload a very large file (>100 MB) without using multipart upload capabilities. Older versions of Knox and the simplified `put` methods do not automatically handle multipart uploads.
  fix: For large files, use `client.putStream()` with a correct `Content-Length`, or an external add-on like `knox-mpu` if you must stay on Knox; preferably, migrate to the AWS SDK, which handles multipart uploads automatically.
Warnings
- breaking The Knox library is abandoned and has not been updated since 2015. It is not compatible with modern AWS SDKs or potentially newer S3 API changes and authentication mechanisms. Using it in new projects or maintaining it in old ones is highly discouraged.
- deprecated The entire Knox library is considered deprecated due to its age and lack of maintenance. There will be no further updates, bug fixes, or security patches. Its functionalities are superseded by the official AWS SDK.
- gotcha When using `client.put()` with string data, `Content-Length` must be set to `Buffer.byteLength(string)` rather than `string.length` to correctly account for multi-byte characters, or S3 may reject the request or truncate the content.
- gotcha After making an HTTP request with Knox (e.g., `client.put()`, `client.get()`), the response stream (`res`) must be consumed or explicitly resumed (e.g., `res.resume()`) to prevent the request from hanging and resources from leaking.
- security Knox is an unmaintained library last updated in 2015. It likely contains unpatched security vulnerabilities in its own codebase or its outdated dependencies. Using it can expose applications to significant security risks, including supply chain attacks or unauthorized access to S3 buckets.
- gotcha By default, Knox sets the `x-amz-acl` header to `private`. If you intend for uploaded objects to be publicly accessible, you must explicitly set `'x-amz-acl': 'public-read'` in the headers for each upload operation.
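The `Buffer.byteLength` and `res.resume()` gotchas above can be combined into one small helper. This is a sketch under assumptions: the `putString` name is hypothetical, and `client` is assumed to be a configured Knox client. With a multi-byte string such as `'héllo'`, `string.length` is 5 but `Buffer.byteLength` is 6; sending 5 would truncate the object.

```javascript
// Hypothetical helper: upload an in-memory string with client.put().
function putString(client, remotePath, body, done) {
  const req = client.put(remotePath, {
    // S3 needs the byte count, not the character count:
    // 'héllo'.length === 5, but Buffer.byteLength('héllo') === 6.
    'Content-Length': Buffer.byteLength(body),
    'Content-Type': 'text/plain; charset=utf-8'
  });
  req.on('response', function (res) {
    res.resume(); // always drain the response to avoid leaking the socket
    done(null, res.statusCode);
  });
  req.on('error', done);
  req.end(body); // nothing is sent until end() is called
}
```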
Install
- npm install knox
- yarn add knox
- pnpm add knox
Imports
- createClient (CommonJS only; Knox does not ship an ES module build, so `import { createClient } from 'knox'` will not work)

  const knox = require('knox');
  const client = knox.createClient({ ... });
Quickstart
const knox = require('knox');
const fs = require('fs');
const path = require('path');
// Set these environment variables before running; the guard below exits if any is missing.
const S3_KEY = process.env.AWS_ACCESS_KEY_ID;
const S3_SECRET = process.env.AWS_SECRET_ACCESS_KEY;
const S3_BUCKET = process.env.AWS_BUCKET_NAME;
if (!S3_KEY || !S3_SECRET || !S3_BUCKET) {
console.error('AWS credentials and bucket name must be set via environment variables.');
process.exit(1);
}
const client = knox.createClient({
key: S3_KEY,
secret: S3_SECRET,
bucket: S3_BUCKET,
region: 'us-east-1' // Specify your S3 region
});
const testFilePath = path.join(__dirname, 'test.txt');
fs.writeFileSync(testFilePath, 'Hello, Knox! This is a test file.');
// Example: Upload a file using putFile
client.putFile(testFilePath, '/uploads/test.txt', { 'x-amz-acl': 'public-read' }, (err, res) => {
if (err) {
console.error('Error uploading file:', err);
return;
}
if (res && res.statusCode === 200) {
console.log('File uploaded successfully to /uploads/test.txt');
// Always either do something with `res` or at least call `res.resume()`.
res.resume();
} else {
console.log(`File upload failed with status code: ${res ? res.statusCode : 'N/A'}`);
if (res) res.resume();
}
// Clean up the local test file
fs.unlinkSync(testFilePath);
});
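Reading the object back uses `client.get()`, which returns a request that must be explicitly ended. A minimal sketch under the same assumptions as the Quickstart (the `getString` helper name is hypothetical):

```javascript
// Hypothetical helper: download an object and collect its body as a string.
function getString(client, remotePath, done) {
  client.get(remotePath).on('response', function (res) {
    if (res.statusCode !== 200) {
      res.resume(); // drain error responses too, or the socket leaks
      return done(new Error('GET failed with status ' + res.statusCode));
    }
    let body = '';
    res.setEncoding('utf8');
    res.on('data', function (chunk) { body += chunk; });
    res.on('end', function () { done(null, body); });
  }).end(); // the request is not sent until end() is called
}
```

Note that for large objects you would pipe `res` to a writable stream instead of concatenating chunks in memory.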