{"id":10569,"library":"bench-node","title":"Node.js Micro-benchmarking Suite","description":"Bench-node is a Node.js module designed for micro-benchmarking JavaScript code blocks, measuring operations per second (ops/sec). Its current stable version is 0.14.0, and it is actively maintained, with minor releases shipping every few weeks to months. A primary differentiator of `bench-node` is its explicit use of V8 deoptimization (via `%NeverOptimizeFunction`) to ensure that benchmarked code is not optimized away by the V8 engine. While this approach yields stable and reproducible results for micro-benchmarks, users should be aware that these \"accurate\" measurements may not reflect \"realistic\" performance in a fully optimized production environment. The library also includes an opt-in Dead Code Elimination (DCE) detection plugin that warns if benchmarked code is being optimized out, although enabling it disables V8 deoptimization. Additionally, `bench-node` offers statistical significance testing (Welch's t-test) to evaluate whether observed performance differences are statistically meaningful, a crucial feature for benchmarking in high-variance environments. It provides a rich set of built-in reporters (text, chart, HTML, JSON, CSV, pretty) for result visualization and ships with TypeScript types for an enhanced developer experience. Other features include support for setup/teardown routines, execution in worker threads for isolation, and both operations- and time-based benchmarking modes.","status":"active","version":"0.14.0","language":"javascript","source_language":"en","source_url":"https://github.com/RafaelGSS/bench-node","tags":["javascript","benchmark","nodejs","typescript"],"install":[{"cmd":"npm install bench-node","lang":"bash","label":"npm"},{"cmd":"yarn add bench-node","lang":"bash","label":"yarn"},{"cmd":"pnpm add bench-node","lang":"bash","label":"pnpm"}],"dependencies":[],"imports":[{"note":"The primary entry point for creating benchmark suites. The `require` syntax is also widely supported for CJS environments, as shown in the package's documentation.","wrong":"const Suite = require('bench-node').Suite;","symbol":"Suite","correct":"import { Suite } from 'bench-node';"},{"note":"One of the built-in reporters. Since v0.13.0, reporters also export `to<Format>` functions for programmatic use instead of direct stdout printing.","wrong":"const textReport = require('bench-node').textReport;","symbol":"textReport","correct":"import { textReport } from 'bench-node';"},{"note":"A reporter that outputs benchmark results as a bar chart. As with `textReport`, a corresponding `toChart` function is also exported since v0.13.0.","wrong":"const chartReport = require('bench-node').chartReport;","symbol":"chartReport","correct":"import { chartReport } from 'bench-node';"}],"quickstart":{"code":"import { Suite, chartReport } from 'bench-node';\n\nconst suite = new Suite({\n  // Optionally configure the reporter\n  reporter: chartReport,\n  reporterOptions: {\n    printHeader: true // Controls whether system info header is printed\n  }\n});\n\nsuite.add('Using Array.push', () => {\n  const arr = [];\n  for (let i = 0; i < 1000; i++) {\n    arr.push(i);\n  }\n  return arr.length; // Ensure result is used to prevent DCE\n});\n\nsuite.add('Using Array literal concatenation', () => {\n  let arr = [];\n  for (let i = 0; i < 1000; i++) {\n    arr = [...arr, i];\n  }\n  return arr.length; // Ensure result is used to prevent DCE\n});\n\nsuite.add('Using direct index assignment', () => {\n  const arr = [];\n  for (let i = 0; i < 1000; i++) {\n    arr[i] = i;\n  }\n  return arr.length; // Ensure result is used to prevent DCE\n});\n\nsuite.run();","lang":"typescript","description":"This example demonstrates how to set up a `Suite` with a custom reporter, add multiple benchmarks, and run them. It compares different array population methods and explicitly uses each result to prevent dead code elimination."},"warnings":[{"fix":"Always interpret microbenchmark results with caution. For real-world performance assessments, consider profiling under production-like conditions or using macro-benchmarks that allow V8 to optimize code naturally.","message":"Bench-node's primary mechanism uses V8 deoptimization to make microbenchmarks more stable and reproducible. However, this means the code under test is explicitly prevented from undergoing V8's real-world optimizations, leading to 'accurate but not realistic' performance numbers.","severity":"gotcha","affected_versions":">=0.1.0"},{"fix":"Ensure that the results of your benchmarked operations are always used, for example, by returning them, asserting against them, or logging them, to prevent V8 from eliminating the code. Refer to the 'Dead Code Elimination Detection' section in the documentation for specific patterns to avoid.","message":"Dead Code Elimination (DCE) can significantly skew benchmark results if the V8 engine optimizes away portions of your benchmarked code because its output or side effects are not utilized. While `bench-node` provides an opt-in DCE detection plugin, enabling it disables the `V8NeverOptimizePlugin`.","severity":"gotcha","affected_versions":">=0.1.0"},{"fix":"Review existing code that directly invokes reporter functions. If you were relying on implicit stdout printing, adapt your code to use the new `to<Format>()` methods or pass the reporter to the `Suite` options directly.","message":"In `v0.13.0`, reporter modules were refactored to separate formatting logic from stdout printing. Direct use of reporters that previously printed to stdout may now require calling explicit `to<Format>()` methods (e.g., `toText()`, `toJson()`) for programmatic consumption.","severity":"breaking","affected_versions":">=0.13.0"},{"fix":"Isolate resource-intensive setup logic outside the timed block by performing it before `suite.add()`, or carefully consider whether manual timing is truly necessary and understand its implications.","message":"When using the `timer` argument for manual timing within a benchmark function, the setup code executed before `timer.start()` can also be deoptimized by V8, potentially impacting the accuracy of the setup phase's performance.","severity":"gotcha","affected_versions":">=0.1.0"},{"fix":"Be aware of the increased execution time when enabling `ttest: true`. Adjust `repeatSuite` manually if fewer repetitions are acceptable for preliminary analysis, or ensure your CI/CD pipeline can accommodate longer runtimes for statistically robust comparisons.","message":"Statistical significance testing (t-test mode) was introduced in v0.14.0. While useful for comparing benchmarks, it automatically sets `repeatSuite=30`, potentially increasing benchmark execution time significantly.","severity":"gotcha","affected_versions":">=0.14.0"}],"env_vars":null,"last_verified":"2026-04-19T00:00:00.000Z","next_check":"2026-07-18T00:00:00.000Z","problems":[{"fix":"Ensure the benchmarked function's result is used or has observable side effects. For example, assign the result to a variable that is then used in a conditional `if (result !== expected) throw new Error('Unexpected');` or return the result from the benchmark function. Enable `detectDeadCodeElimination` in suite options to get these warnings.","cause":"The V8 engine has optimized away or partially eliminated the code being benchmarked because its return value or side effects are not used.","error":"Dead code elimination detected. Result of benchmark might be inaccurate."},{"fix":"If using ES Modules, ensure you're using `import { Suite } from 'bench-node';`. If in a CommonJS environment, use `const { Suite } = require('bench-node');` to correctly destructure the named export. Ensure your build configuration (e.g., TypeScript, Babel) handles module interop correctly.","cause":"This error typically indicates an incorrect import statement, often when attempting to use CommonJS `require()` syntax with a module that is primarily designed for ES Modules (ESM), or when a named export is treated as a default export.","error":"TypeError: (0, _bench_node.Suite) is not a constructor"},{"fix":"Modify your benchmark code to ensure that the operations performed have a visible effect or that their results are consumed. For instance, return the result, store it in an external variable (if not part of the timed operation), or perform a simple check like `if (result === undefined) throw new Error();`. Using the `detectDeadCodeElimination` option can help identify such cases.","cause":"The V8 JIT compiler has likely optimized away the benchmarked code entirely or partially because it detects no observable side effects or the return value isn't used, leading to misleadingly fast results.","error":"Benchmark results show extremely high ops/sec (e.g., millions or billions) for trivial operations, which is unrealistic."}],"ecosystem":"npm"}