s3db.js: S3 Document Database ORM
s3db.js turns AWS S3 into a cost-effective document database by storing document data in S3 object metadata (up to 2KB per object) behind a serverless, ORM-like interface. Currently at version 21.6.2, the library maintains an active release cadence of frequent patch and minor updates, indicating ongoing development and feature expansion. It differentiates itself with automatic encryption, schema validation, a streaming API for efficient data handling, and an extensive plugin architecture that supports multi-backend operations, including `RedDBClient` and integrations with other AWS services and external databases. It targets serverless applications, cost-conscious projects, and rapid prototyping, aiming to reduce database management overhead. While its core stores data in S3, its `DatabaseManager` can integrate alternative storage backends and services, making it a flexible option for diverse cloud data needs.
Common errors
- TypeError: (0 , import_s3db.DatabaseManager) is not a constructor
  Cause: Using CommonJS `require` syntax for `s3db.js`, which is primarily an ESM module, or incorrect module resolution in build tools.
  Fix: Use `import { DatabaseManager } from 's3db.js';` and ensure your project is configured for ESM. Check `tsconfig.json` for the `moduleResolution` and `module` settings, and `package.json` for `"type": "module"` if running directly with Node.js.
- Error: Missing AWS credentials in config
  Cause: The `S3Client` or `DatabaseManager` was initialized without valid AWS credentials (access key ID, secret access key) or a specified region. This commonly occurs in development environments.
  Fix: Set the `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, and `AWS_REGION` environment variables, or pass them directly to the `S3Client` constructor. Ensure the credentials have the necessary S3 permissions for the specified bucket.
- Error: Document size exceeds S3 metadata limit. Max 2KB allowed.
  Cause: An `insert` or `update` was attempted on a document whose serialized data, when stored in S3 metadata, exceeds the 2KB limit imposed by S3.
  Fix: Refactor your document schema to store large fields in the S3 object body instead of metadata. s3db.js typically offers mechanisms (such as a 'text' field type with deflate compression, or custom behaviors) for larger content; consult the library's documentation for strategies on handling oversized documents.
- Error: 'my-s3db-test-bucket' not found or access denied.
  Cause: The S3 bucket specified in the `DatabaseManager` configuration does not exist, or the provided AWS credentials lack the necessary permissions (e.g., `s3:PutObject`, `s3:GetObject`, `s3:DeleteObject`, `s3:ListBucket`) for the bucket.
  Fix: Verify that `bucketName` in your `DatabaseManager` configuration is correct and that the bucket exists, and ensure your AWS IAM user/role has the appropriate S3 permissions for it.
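A quick pre-check of a document's serialized size can catch the 2KB metadata error before an `insert` is attempted. This is a minimal sketch: the `fitsInS3Metadata` helper and the plain-JSON size estimate are assumptions, since s3db.js may serialize or compress documents differently.

```typescript
// Rough pre-check against the S3 metadata size ceiling.
// NOTE: s3db.js may serialize/compress documents differently; this
// plain-JSON estimate is a conservative approximation, not the library's logic.
const S3_METADATA_LIMIT_BYTES = 2 * 1024; // 2KB user-defined metadata limit

function serializedSizeBytes(doc: unknown): number {
  // Metadata values are effectively UTF-8 strings, so measure UTF-8 bytes.
  return Buffer.byteLength(JSON.stringify(doc), 'utf8');
}

function fitsInS3Metadata(doc: unknown): boolean {
  return serializedSizeBytes(doc) <= S3_METADATA_LIMIT_BYTES;
}

// A small document fits; one carrying a large text blob does not.
const small = { id: 'u1', name: 'Alice' };
const large = { id: 'u2', bio: 'x'.repeat(4096) };
console.log(fitsInS3Metadata(small)); // true
console.log(fitsInS3Metadata(large)); // false
```

Running such a check before `insert`/`update` turns a remote S3 failure into an early local error.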
Warnings
- breaking: s3db.js requires Node.js version 24 or newer. Older Node.js versions will fail at runtime due to reliance on modern JavaScript features.
- gotcha: Core S3 functionality implicitly requires `@aws-sdk/client-s3`. While s3db.js lists many other `@aws-sdk/client-*` packages as peer dependencies for its various plugins, `@aws-sdk/client-s3` is fundamental for interacting with S3 itself and must be installed separately.
- gotcha: s3db.js stores document data directly within S3 object metadata, which imposes a strict 2KB size limit per document. Attempting to store larger documents will lead to data truncation or errors.
- gotcha: The package lists a vast number of peer dependencies for various AWS services and other data stores (e.g., BigQuery, Redis, PostgreSQL). Install only the peer dependencies for the `s3db.js` plugins or features you are actively using, to avoid unnecessary package bloat.
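The Node.js floor noted above can be guarded at startup. This snippet assumes only the standard `process.versions` API; the warning text is illustrative.

```typescript
// Warn early if the runtime is below the Node 24 floor s3db.js expects.
const major = Number(process.versions.node.split('.')[0]);
if (major < 24) {
  console.warn(`s3db.js requires Node >= 24; running ${process.versions.node}`);
}
```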
Install
- npm install s3db.js
- yarn add s3db.js
- pnpm add s3db.js
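Since core S3 usage also needs `@aws-sdk/client-s3` (see the Warnings above), a typical minimal install looks like this; any further packages depend on which plugins you enable:

```shell
# Core package plus the one peer dependency that plain S3 usage requires.
npm install s3db.js @aws-sdk/client-s3

# Add further peer dependencies only when you enable the matching plugin,
# e.g. a Redis-backed plugin would need its client library installed too.
```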
Imports
- DatabaseManager
  import { DatabaseManager } from 's3db.js';
  // CommonJS: const { DatabaseManager } = require('s3db.js'); (see the ESM caveat under Common errors)
- Model
  import { Model } from 's3db.js'; // Model is a named export, not the default
- Field
  import { Field } from 's3db.js';
Quickstart
import { DatabaseManager, Model, Field } from 's3db.js';
import { S3Client } from '@aws-sdk/client-s3';
// Initialize AWS S3 Client with credentials from environment variables
const s3Client = new S3Client({
region: process.env.AWS_REGION ?? 'us-east-1',
credentials: {
accessKeyId: process.env.AWS_ACCESS_KEY_ID ?? '',
secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY ?? '',
},
});
// Configure DatabaseManager for the S3 backend
const dbManager = new DatabaseManager({
backends: {
s3: {
type: 's3',
client: s3Client,
bucketName: process.env.S3_BUCKET_NAME ?? 'my-s3db-test-bucket',
prefix: 'my-app-data/', // Optional: prefix for keys within the bucket
},
},
defaultBackend: 's3',
});
// Define a User Model with a schema
class User extends Model {
static schema = {
id: Field.id(), // Primary key
name: Field.string({ required: true, maxLength: 100 }),
email: Field.string({ required: true, unique: true, validator: (v: string) => v.includes('@') }),
age: Field.number({ min: 18, max: 120, optional: true }),
isActive: Field.boolean({ default: true }),
createdAt: Field.timestamp({ auto: true }),
};
constructor(data?: any, options?: any) {
super(data, { ...options, dbManager, backend: 's3' });
}
}
async function runS3DBExample() {
// Ensure S3 bucket exists and credentials are valid.
await dbManager.connect();
console.log('Connected to S3DB backend.');
try {
// 1. Create a new user
const newUser = new User({
name: 'Alice Wonderland',
email: 'alice@example.com',
age: 30,
});
const createdUser = await newUser.insert();
console.log('Created User:', createdUser.toObject());
// 2. Find a user by ID
const foundUser = await User.findById(createdUser.id); // Static method for finding by ID
if (foundUser) {
console.log('Found User by ID:', foundUser.toObject());
// 3. Update the user
foundUser.age = 31;
foundUser.isActive = false;
const updatedUser = await foundUser.update();
console.log('Updated User:', updatedUser.toObject());
}
// 4. List all users (note: for S3 metadata, complex queries might involve scanning)
const allUsers = await User.findMany({}); // Fetches all documents under the model's prefix
console.log(`Total users found: ${allUsers.length}`);
if (allUsers.length > 0) {
console.log('First user from findMany:', allUsers[0].toObject());
}
// 5. Delete the user
if (foundUser) {
await foundUser.delete();
console.log(`Deleted User with ID: ${foundUser.id}`);
}
} catch (error) {
console.error('Error during S3DB operations:', error);
} finally {
// S3 access is stateless, so an explicit disconnect is usually unnecessary,
// but a manager may expose a cleanup method to release resources.
}
}
runS3DBExample().catch(console.error); // connect() runs outside the try block, so catch rejections here
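For documents over the 2KB metadata ceiling, one common workaround (independent of whatever oversized-document handling s3db.js itself ships) is to split the record: small fields ride in object metadata while large fields go into the object body. The sketch below uses a hypothetical `splitDocument` helper plus the parameter shape the AWS SDK's `PutObjectCommand` expects; the 256-byte per-field threshold is illustrative.

```typescript
// Hypothetical split: fields whose serialized form is small stay in metadata,
// oversized fields move to the object body. Threshold is illustrative.
function splitDocument(doc: Record<string, unknown>, maxFieldBytes = 256) {
  const metadata: Record<string, string> = {};
  const body: Record<string, unknown> = {};
  for (const [key, value] of Object.entries(doc)) {
    const serialized = JSON.stringify(value);
    if (Buffer.byteLength(serialized, 'utf8') <= maxFieldBytes) {
      metadata[key] = serialized;
    } else {
      body[key] = value;
    }
  }
  return { metadata, body };
}

// Build the parameters for an AWS SDK PutObjectCommand; a call site would be:
//   await s3Client.send(new PutObjectCommand(buildPutParams(bucket, key, doc)));
function buildPutParams(bucket: string, key: string, doc: Record<string, unknown>) {
  const { metadata, body } = splitDocument(doc);
  return {
    Bucket: bucket,
    Key: key,
    Metadata: metadata,          // user-defined x-amz-meta-* headers
    Body: JSON.stringify(body),  // oversized fields stored in the object body
    ContentType: 'application/json',
  };
}
```

Reading such a record back means fetching both the object body and its metadata and merging them, which is why small, frequently filtered fields belong in metadata.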