Face API for JavaScript
face-api.js is a JavaScript API for real-time face detection, face recognition, face landmark detection, face expression recognition, and age and gender estimation. It is built on top of the TensorFlow.js core library, so these computer-vision capabilities run directly in web browsers and in Node.js. The latest published version is 0.22.2; releases are infrequent, and updates have historically tracked TensorFlow.js advancements. Its key differentiator is a high-level, easy-to-use API over complex TensorFlow.js operations, which simplifies integrating sophisticated facial analysis into JavaScript applications without deep machine-learning expertise. It also ships pre-trained models for each task, abstracting away model management.
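For face recognition, the library's recognition model maps each face to a 128-value descriptor, and two faces are compared by the Euclidean distance between their descriptors (the library exposes this as `faceapi.euclideanDistance`, and its examples commonly treat a distance below 0.6 as a match). A minimal plain-TypeScript sketch of that comparison rule, independent of the library:

```typescript
// Euclidean distance between two descriptors. face-api.js descriptors are
// Float32Arrays of length 128; plain number arrays behave the same here.
function euclideanDistance(a: number[], b: number[]): number {
  let sum = 0;
  for (let i = 0; i < a.length; i++) {
    const d = a[i] - b[i];
    sum += d * d;
  }
  return Math.sqrt(sum);
}

// Two descriptors are commonly treated as the same person when their
// distance falls below a threshold; 0.6 is the value used in the
// library's examples, but tune it for your own data.
function isSamePerson(a: number[], b: number[], threshold = 0.6): boolean {
  return euclideanDistance(a, b) < threshold;
}
```

In the library itself, `faceapi.FaceMatcher` wraps this logic and picks the best match among a set of labeled descriptors; the sketch above only illustrates the underlying distance rule.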
Common errors

- `Error: No backend registered for 'webgl'`
  - Cause: The TensorFlow.js WebGL backend was not imported or initialized, preventing GPU acceleration in the browser.
  - Fix: Ensure `@tensorflow/tfjs-backend-webgl` is installed and imported in your project (e.g., `import '@tensorflow/tfjs-backend-webgl';`). If running in Node.js, import `@tensorflow/tfjs-node` instead.
- `Failed to load model from '/models/ssd_mobilenetv1_model_weights_manifest.json' - 404 Not Found`
  - Cause: The pre-trained models are not accessible at the specified path on the web server.
  - Fix: Verify that the models directory is placed in your public assets folder and served by your web server. The path passed to the `load()` methods (e.g., `/models`) must match the actual URL where the model files are hosted.
- `TypeError: Cannot read properties of undefined (reading 'detectAllFaces')` or `faceapi is not defined`
  - Cause: The face-api.js library or its `faceapi` namespace was not correctly imported, or is not in scope when its methods are called.
  - Fix: Use `import * as faceapi from 'face-api.js'` and ensure the import executes before any `faceapi` calls. In CommonJS environments, use `const faceapi = require('face-api.js');`.
- `TypeError: video.getContext is not a function`
  - Cause: `getContext` was called on a video element instead of a canvas element.
  - Fix: Target an HTMLCanvasElement for drawing operations (e.g., `canvas.getContext('2d')`), not an HTMLVideoElement.
Warnings

- Breaking: Model loading paths are crucial and frequently cause issues. Models must be hosted on a web server at a path accessible to your application (e.g., `/models`). Direct file system access will fail in browsers.
- Gotcha: Performance varies drastically with the client device's hardware (CPU/GPU) and the chosen TensorFlow.js backend. The 'tiny' models with the WebGL backend are generally recommended for browsers.
- Gotcha: In Node.js, you must explicitly import and register a TensorFlow.js backend (e.g., `require('@tensorflow/tfjs-node')` or `import '@tensorflow/tfjs-node'`) before loading any face-api.js models.
- Deprecated: Older versions of face-api.js and its examples may use CommonJS `require()` syntax. While some versions still support it, modern JavaScript applications and newer releases of the library primarily target ES Modules.
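The backend warnings above boil down to one decision per environment. A trivial sketch of that decision (the package names are the real TensorFlow.js backend packages; the helper function itself is hypothetical, for illustration only):

```typescript
// Which TensorFlow.js backend package to import before loading any
// face-api.js models. Illustrative helper; in practice you simply write
// the matching import statement at the top of your entry file.
function backendPackageFor(env: 'browser' | 'node'): string {
  // WebGL gives GPU acceleration in browsers; tfjs-node binds to the
  // native TensorFlow C library on the server.
  return env === 'browser' ? '@tensorflow/tfjs-backend-webgl' : '@tensorflow/tfjs-node';
}
```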
Install

- `npm install face-api.js`
- `yarn add face-api.js`
- `pnpm add face-api.js`
Imports

- faceapi: `import * as faceapi from 'face-api.js'` (the library has no default export, so use the namespace import)
- nets: `import { nets } from 'face-api.js'`
- draw: `import { draw } from 'face-api.js'`
- createCanvasFromMedia: `import { createCanvasFromMedia } from 'face-api.js'`
Quickstart
import * as faceapi from 'face-api.js';

const video = document.getElementById('video') as HTMLVideoElement;
const canvas = document.getElementById('overlay') as HTMLCanvasElement;

async function initializeFaceApi() {
  // Ensure models are served from a public path (e.g., a /models folder)
  await Promise.all([
    faceapi.nets.tinyFaceDetector.load('/models'),
    faceapi.nets.faceLandmark68Net.load('/models'),
    faceapi.nets.faceRecognitionNet.load('/models'),
    faceapi.nets.faceExpressionNet.load('/models'),
    faceapi.nets.ageGenderNet.load('/models')
  ]);
  console.log('All face-api.js models loaded successfully.');
  startWebcamStream();
}

async function startWebcamStream() {
  try {
    const stream = await navigator.mediaDevices.getUserMedia({ video: true });
    video.srcObject = stream;
    video.onloadedmetadata = () => {
      video.play();
      // Match the overlay to the video size; video.width/height are only set
      // by explicit attributes, so fall back to the intrinsic dimensions
      const displaySize = {
        width: video.width || video.videoWidth,
        height: video.height || video.videoHeight
      };
      faceapi.matchDimensions(canvas, displaySize);
      setInterval(async () => {
        const detections = await faceapi
          .detectAllFaces(video, new faceapi.TinyFaceDetectorOptions())
          .withFaceLandmarks()
          .withFaceExpressions()
          .withAgeAndGender()
          .withFaceDescriptors();
        const resizedDetections = faceapi.resizeResults(detections, displaySize);
        canvas.getContext('2d')?.clearRect(0, 0, canvas.width, canvas.height);
        faceapi.draw.drawDetections(canvas, resizedDetections);
        faceapi.draw.drawFaceLandmarks(canvas, resizedDetections);
        faceapi.draw.drawFaceExpressions(canvas, resizedDetections);
        resizedDetections.forEach(detection => {
          const { age, gender, genderProbability } = detection;
          new faceapi.draw.DrawTextField(
            [
              `${faceapi.utils.round(age, 0)} years`,
              `${gender} (${faceapi.utils.round(genderProbability)})`
            ],
            detection.detection.box.bottomLeft
          ).draw(canvas);
        });
      }, 100); // Run detection every 100ms
    };
  } catch (err) {
    console.error('Error accessing webcam or initializing Face API:', err);
  }
}

document.addEventListener('DOMContentLoaded', initializeFaceApi);
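The `faceapi.resizeResults` call in the quickstart rescales detection coordinates from the processed frame's dimensions to the display dimensions so the overlay lines up with the video. Conceptually that is a per-axis scale; a sketch of the same arithmetic for a single bounding box (plain TypeScript; the `Box` interface and `resizeBox` name here are illustrative, not the library's actual types):

```typescript
// Illustrative box shape; face-api.js has its own Box class with more methods
interface Box { x: number; y: number; width: number; height: number; }

// Rescale a box from the source frame dimensions to the display dimensions,
// which is conceptually what faceapi.resizeResults does for every detection,
// landmark, and box in a result set
function resizeBox(
  box: Box,
  from: { width: number; height: number },
  to: { width: number; height: number }
): Box {
  const sx = to.width / from.width;   // horizontal scale factor
  const sy = to.height / from.height; // vertical scale factor
  return {
    x: box.x * sx,
    y: box.y * sy,
    width: box.width * sx,
    height: box.height * sy
  };
}
```

This is why `matchDimensions` and `resizeResults` must use the same `displaySize`: if the canvas and the scale target disagree, drawn boxes drift away from the faces.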