GPU.js - GPU Accelerated JavaScript

2.16.0 · active · verified Tue Apr 21

GPU.js is a JavaScript library for General-Purpose computing on Graphics Processing Units (GPGPU), enabling high-performance numerical computation in both web browsers and Node.js. It works by automatically transpiling ordinary JavaScript functions into shader code (e.g., GLSL for WebGL) that executes directly on the GPU, exploiting parallel processing for speedups the project benchmarks at roughly 1-15x over CPU-bound equivalents. A robust fallback mechanism ensures that when no GPU is available, computations transparently revert to plain JavaScript execution on the CPU. The current stable version is 2.16.0; recent releases have focused on maintenance, bug fixes (including security and memory-leak issues in earlier 2.x versions), and performance improvements. Its key differentiator is abstracting away shader programming: developers write GPU-accelerated code in familiar JavaScript syntax, using the `this.thread.x/y/z` model to access thread indices inside a kernel.
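The `this.thread` indexing model can be illustrated with a plain-JavaScript sketch of what the CPU fallback effectively does: invoke the kernel body once per output cell, with `this.thread.x`/`this.thread.y` bound to that cell's coordinates. (`runKernelOnCpu` is a hypothetical helper for illustration, not part of the GPU.js API.)

```javascript
// Hypothetical sketch of the CPU-fallback idea: run a kernel body once per
// output cell, binding this.thread.x / this.thread.y to the cell coordinates.
function runKernelOnCpu(kernel, [width, height]) {
  const out = [];
  for (let y = 0; y < height; y++) {
    const row = [];
    for (let x = 0; x < width; x++) {
      // Bind `this` so the kernel body can read this.thread.x / this.thread.y,
      // mirroring how kernels are written for GPU.js.
      row.push(kernel.call({ thread: { x, y } }));
    }
    out.push(row);
  }
  return out;
}

// A toy kernel that returns the sum of its coordinates.
// Note: a regular function (not an arrow) is required so `this` can be bound.
const result = runKernelOnCpu(function () {
  return this.thread.x + this.thread.y;
}, [3, 2]);

console.log(result); // [[0, 1, 2], [1, 2, 3]]
```

The real library goes much further (it transpiles the function source to a shader for the GPU path), but this is the mental model behind the thread-index API.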

Common errors

Warnings

Install
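GPU.js is published on npm; the standard install for Node.js or bundled browser projects is:

```shell
npm install gpu.js
# or, with Yarn
yarn add gpu.js
```

A prebuilt browser bundle is also distributed; check the project's README for the current script path if you are loading it via a plain script tag.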

Imports
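The library exposes a `GPU` constructor, which can be pulled in with either module style (the ESM form is what the quickstart below uses):

```javascript
// ES modules / TypeScript
import { GPU } from 'gpu.js';

// CommonJS
const { GPU } = require('gpu.js');
```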

Quickstart

This quickstart demonstrates how to set up `gpu.js`, define a GPU kernel for 512x512 matrix multiplication, and execute it, showcasing the basic API for offloading computations.

import { GPU } from 'gpu.js';

// Initialize GPU.js
const gpu = new GPU();

// Define two 512x512 matrices of random values for multiplication.
// Each element is generated individually; Array(512).fill(Math.random())
// would evaluate Math.random() once and repeat that single value per row.
const matrixA = Array.from({ length: 512 }, () => Array.from({ length: 512 }, () => Math.random()));
const matrixB = Array.from({ length: 512 }, () => Array.from({ length: 512 }, () => Math.random()));

// Create a GPU accelerated kernel function for matrix multiplication
const multiplyMatrix = gpu.createKernel(function(a: number[][], b: number[][]) {
  let sum = 0;
  for (let i = 0; i < 512; i++) {
    // this.thread.x and this.thread.y represent the current thread's coordinates
    sum += a[this.thread.y][i] * b[i][this.thread.x];
  }
  return sum;
}).setOutput([512, 512]); // Define the output dimensions of the kernel

// Run the kernel with the input matrices
const c = multiplyMatrix(matrixA, matrixB) as number[][];

console.log('Matrix multiplication completed on GPU (or CPU fallback).');
console.log('Resulting matrix dimensions:', c.length, 'x', c[0].length);
// For a real application, you'd inspect 'c' for results.
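To sanity-check the kernel's output, the same multiplication can be computed with a straightforward triple loop on the CPU and compared element-wise within a tolerance (GPU kernels typically compute in 32-bit floats, so exact equality is too strict). This is a verification sketch independent of GPU.js; `multiplyOnCpu` and `matricesClose` are hypothetical helpers, not library functions.

```javascript
// CPU reference: multiply two square matrices with a plain triple loop.
function multiplyOnCpu(a, b) {
  const n = a.length;
  const out = [];
  for (let y = 0; y < n; y++) {
    const row = [];
    for (let x = 0; x < n; x++) {
      let sum = 0;
      for (let i = 0; i < n; i++) {
        sum += a[y][i] * b[i][x];
      }
      row.push(sum);
    }
    out.push(row);
  }
  return out;
}

// Element-wise comparison within a float tolerance.
function matricesClose(a, b, eps = 1e-3) {
  return a.every((row, y) => row.every((v, x) => Math.abs(v - b[y][x]) < eps));
}

// Small worked example:
const a = [[1, 2], [3, 4]];
const b = [[5, 6], [7, 8]];
console.log(multiplyOnCpu(a, b)); // [[19, 22], [43, 50]]
```

In the quickstart above, `matricesClose(c, multiplyOnCpu(matrixA, matrixB))` would confirm the GPU (or fallback) result agrees with the reference.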
