{ "type": "module", "source": "doc/api/test.md", "modules": [ { "textRaw": "Test runner", "name": "test_runner", "introduced_in": "v18.0.0", "type": "module", "meta": { "added": [ "v18.0.0", "v16.17.0" ], "changes": [ { "version": "v20.0.0", "pr-url": "https://github.com/nodejs/node/pull/46983", "description": "The test runner is now stable." } ] }, "stability": 2, "stabilityText": "Stable", "desc": "

The node:test module facilitates the creation of JavaScript tests.\nTo access it:

\n
import test from 'node:test';\n
\n
const test = require('node:test');\n
\n

This module is only available under the node: scheme.

\n

Tests created via the test module consist of a single function that is\nprocessed in one of three ways:

\n
    \n
  1. A synchronous function that is considered failing if it throws an exception,\nand is considered passing otherwise.
  2. \n
  3. A function that returns a Promise that is considered failing if the\nPromise rejects, and is considered passing if the Promise fulfills.
  4. \n
  5. A function that receives a callback function. If the callback receives any\ntruthy value as its first argument, the test is considered failing. If a\nfalsy value is passed as the first argument to the callback, the test is\nconsidered passing. If the test function receives a callback function and\nalso returns a Promise, the test will fail.
  6. \n
\n

The following example illustrates how tests are written using the\ntest module.

\n
test('synchronous passing test', (t) => {\n  // This test passes because it does not throw an exception.\n  assert.strictEqual(1, 1);\n});\n\ntest('synchronous failing test', (t) => {\n  // This test fails because it throws an exception.\n  assert.strictEqual(1, 2);\n});\n\ntest('asynchronous passing test', async (t) => {\n  // This test passes because the Promise returned by the async\n  // function is settled and not rejected.\n  assert.strictEqual(1, 1);\n});\n\ntest('asynchronous failing test', async (t) => {\n  // This test fails because the Promise returned by the async\n  // function is rejected.\n  assert.strictEqual(1, 2);\n});\n\ntest('failing test using Promises', (t) => {\n  // Promises can be used directly as well.\n  return new Promise((resolve, reject) => {\n    setImmediate(() => {\n      reject(new Error('this will cause the test to fail'));\n    });\n  });\n});\n\ntest('callback passing test', (t, done) => {\n  // done() is the callback function. When the setImmediate() runs, it invokes\n  // done() with no arguments.\n  setImmediate(done);\n});\n\ntest('callback failing test', (t, done) => {\n  // When the setImmediate() runs, done() is invoked with an Error object and\n  // the test fails.\n  setImmediate(() => {\n    done(new Error('callback failure'));\n  });\n});\n
\n

If any tests fail, the process exit code is set to 1.

", "modules": [ { "textRaw": "Subtests", "name": "subtests", "type": "module", "desc": "

The test context's test() method allows subtests to be created,\nletting you structure your tests hierarchically by\ncreating nested tests within a larger test.\nThis method behaves identically to the top level test() function.\nThe following example demonstrates the creation of a\ntop level test with two subtests.

\n
test('top level test', async (t) => {\n  await t.test('subtest 1', (t) => {\n    assert.strictEqual(1, 1);\n  });\n\n  await t.test('subtest 2', (t) => {\n    assert.strictEqual(2, 2);\n  });\n});\n
\n
\n

Note: beforeEach and afterEach hooks are triggered\nbetween each subtest execution.

\n
\n

In this example, await is used to ensure that both subtests have completed.\nThis is necessary because tests do not wait for their subtests to\ncomplete, unlike tests created within suites.\nAny subtests that are still outstanding when their parent finishes\nare cancelled and treated as failures. Any subtest failures cause the parent\ntest to fail.

", "displayName": "Subtests" }, { "textRaw": "Rerunning failed tests", "name": "rerunning_failed_tests", "type": "module", "desc": "

The test runner supports persisting the state of the run to a file, allowing\nfailed tests to be rerun without re-running the entire test suite.\nUse the --test-rerun-failures command-line option to specify a file path where the\nstate of the run is stored. If the state file does not exist, the test runner\ncreates it.\nThe state file is a JSON file that contains an array of run attempts.\nEach run attempt is an object mapping successful tests to the attempt in which they passed.\nThe key identifying a test in this map is the test file path, plus the line and column where the test is defined.\nIf a test defined at a specific location is run multiple times,\nfor example within a function or a loop,\na counter is appended to the key to disambiguate the test runs.\nNote that changing the order of test execution or the location of a test can lead the test runner\nto consider tests as passed on a previous attempt,\nso --test-rerun-failures should only be used when tests run in a deterministic order.

\n

An example of a state file:

\n
[\n  {\n    \"test.js:10:5\": { \"passed_on_attempt\": 0, \"name\": \"test 1\" }\n  },\n  {\n    \"test.js:10:5\": { \"passed_on_attempt\": 0, \"name\": \"test 1\" },\n    \"test.js:20:5\": { \"passed_on_attempt\": 1, \"name\": \"test 2\" }\n  }\n]\n
\n

In this example, there are two run attempts for two tests defined in test.js.\nThe first test succeeded on the first attempt, and the second test succeeded on the second attempt.

\n

When the --test-rerun-failures option is used, the test runner will only run tests that have not yet passed.

\n
node --test-rerun-failures /path/to/state/file\n
", "displayName": "Rerunning failed tests" }, { "textRaw": "`describe()` and `it()` aliases", "name": "`describe()`_and_`it()`_aliases", "type": "module", "desc": "

Suites and tests can also be written using the describe() and it()\nfunctions. describe() is an alias for suite(), and it() is an\nalias for test().

\n
describe('A thing', () => {\n  it('should work', () => {\n    assert.strictEqual(1, 1);\n  });\n\n  it('should be ok', () => {\n    assert.strictEqual(2, 2);\n  });\n\n  describe('a nested thing', () => {\n    it('should work', () => {\n      assert.strictEqual(3, 3);\n    });\n  });\n});\n
\n

describe() and it() are imported from the node:test module.

\n
import { describe, it } from 'node:test';\n
\n
const { describe, it } = require('node:test');\n
", "displayName": "`describe()` and `it()` aliases" }, { "textRaw": "Skipping tests", "name": "skipping_tests", "type": "module", "desc": "

Individual tests can be skipped by passing the skip option to the test, or by\ncalling the test context's skip() method as shown in the\nfollowing example.

\n
// The skip option is used, but no message is provided.\ntest('skip option', { skip: true }, (t) => {\n  // This code is never executed.\n});\n\n// The skip option is used, and a message is provided.\ntest('skip option with message', { skip: 'this is skipped' }, (t) => {\n  // This code is never executed.\n});\n\ntest('skip() method', (t) => {\n  // Make sure to return here as well if the test contains additional logic.\n  t.skip();\n});\n\ntest('skip() method with message', (t) => {\n  // Make sure to return here as well if the test contains additional logic.\n  t.skip('this is skipped');\n});\n
", "displayName": "Skipping tests" }, { "textRaw": "TODO tests", "name": "todo_tests", "type": "module", "desc": "

Individual tests can be marked as flaky or incomplete by passing the todo\noption to the test, or by calling the test context's todo() method, as shown\nin the following example. These tests represent a pending implementation or bug\nthat needs to be fixed. TODO tests are executed, but are not treated as test\nfailures, and therefore do not affect the process exit code. If a test is marked\nas both TODO and skipped, the TODO option is ignored.

\n
// The todo option is used, but no message is provided.\ntest('todo option', { todo: true }, (t) => {\n  // This code is executed, but not treated as a failure.\n  throw new Error('this does not fail the test');\n});\n\n// The todo option is used, and a message is provided.\ntest('todo option with message', { todo: 'this is a todo test' }, (t) => {\n  // This code is executed.\n});\n\ntest('todo() method', (t) => {\n  t.todo();\n});\n\ntest('todo() method with message', (t) => {\n  t.todo('this is a todo test and is not treated as a failure');\n  throw new Error('this does not fail the test');\n});\n
", "displayName": "TODO tests" }, { "textRaw": "Expecting tests to fail", "name": "expecting_tests_to_fail", "type": "module", "meta": { "added": [ "v25.5.0" ], "changes": [] }, "desc": "

This flips the pass/fail reporting for a specific test or suite: a flagged test\ncase must throw in order to pass, and a flagged test case that does not throw\nfails.

\n

In each of the following, doTheThing() fails to return true, but since the\ntests are flagged expectFailure, they pass.

\n
it.expectFailure('should do the thing', () => {\n  assert.strictEqual(doTheThing(), true);\n});\n\nit('should do the thing', { expectFailure: true }, () => {\n  assert.strictEqual(doTheThing(), true);\n});\n\nit('should do the thing', { expectFailure: 'feature not implemented' }, () => {\n  assert.strictEqual(doTheThing(), true);\n});\n
\n

If the value of expectFailure is a\n<RegExp> |\n<Object> |\n<Function> |\n<Error>,\nthe tests will pass only if they throw a matching value.\nSee assert.throws for how each value type is handled.

\n

Each of the following tests fails despite being flagged expectFailure\nbecause the failure does not match the specific expected failure.

\n
it('fails because regex does not match', {\n  expectFailure: /expected message/,\n}, () => {\n  throw new Error('different message');\n});\n\nit('fails because object matcher does not match', {\n  expectFailure: { code: 'ERR_EXPECTED' },\n}, () => {\n  const err = new Error('boom');\n  err.code = 'ERR_ACTUAL';\n  throw err;\n});\n
\n

To supply both a reason and specific error for expectFailure, use { label, match }.

\n
it('should fail with specific error and reason', {\n  expectFailure: {\n    label: 'reason for failure',\n    match: /error message/,\n  },\n}, () => {\n  assert.strictEqual(doTheThing(), true);\n});\n
\n

skip and todo are mutually exclusive with expectFailure, and skip or todo\nwill \"win\" when both are applied (skip wins against both, and todo wins\nagainst expectFailure).

\n

These tests will be skipped (and not run):

\n
it.expectFailure('should do the thing', { skip: true }, () => {\n  assert.strictEqual(doTheThing(), true);\n});\n\nit.skip('should do the thing', { expectFailure: true }, () => {\n  assert.strictEqual(doTheThing(), true);\n});\n
\n

These tests will be marked \"todo\" (silencing errors):

\n
it.expectFailure('should do the thing', { todo: true }, () => {\n  assert.strictEqual(doTheThing(), true);\n});\n\nit.todo('should do the thing', { expectFailure: true }, () => {\n  assert.strictEqual(doTheThing(), true);\n});\n
", "displayName": "Expecting tests to fail" }, { "textRaw": "`only` tests", "name": "`only`_tests", "type": "module", "desc": "

If Node.js is started with the --test-only command-line option, or test\nisolation is disabled, it is possible to skip all tests except for a selected\nsubset by passing the only option to the tests that should run. When a test\nhas the only option set, all of its subtests are also run.\nIf a suite has the only option set, all tests within the suite are run,\nunless it has descendants with the only option set, in which case only those\ntests are run.

\n

When using subtests within a test()/it(), it is required to mark\nall ancestor tests with the only option to run only a\nselected subset of tests.

\n

The test context's runOnly()\nmethod can be used to implement the same behavior at the subtest level. Tests\nthat are not executed are omitted from the test runner output.

\n
// Assume Node.js is run with the --test-only command-line option.\n// The suite's 'only' option is set, so these tests are run.\ntest('this test is run', { only: true }, async (t) => {\n  // Within this test, all subtests are run by default.\n  await t.test('running subtest');\n\n  // The test context can be updated to run subtests with the 'only' option.\n  t.runOnly(true);\n  await t.test('this subtest is now skipped');\n  await t.test('this subtest is run', { only: true });\n\n  // Switch the context back to execute all tests.\n  t.runOnly(false);\n  await t.test('this subtest is now run');\n\n  // Explicitly do not run these tests.\n  await t.test('skipped subtest 3', { only: false });\n  await t.test('skipped subtest 4', { skip: true });\n});\n\n// The 'only' option is not set, so this test is skipped.\ntest('this test is not run', () => {\n  // This code is not run.\n  throw new Error('fail');\n});\n\ndescribe('a suite', () => {\n  // The 'only' option is set, so this test is run.\n  it('this test is run', { only: true }, () => {\n    // This code is run.\n  });\n\n  it('this test is not run', () => {\n    // This code is not run.\n    throw new Error('fail');\n  });\n});\n\ndescribe.only('a suite', () => {\n  // The 'only' option is set, so this test is run.\n  it('this test is run', () => {\n    // This code is run.\n  });\n\n  it('this test is run', () => {\n    // This code is run.\n  });\n});\n
", "displayName": "`only` tests" }, { "textRaw": "Filtering tests by name", "name": "filtering_tests_by_name", "type": "module", "desc": "

The --test-name-pattern command-line option can be used to only run\ntests whose name matches the provided pattern, and the\n--test-skip-pattern option can be used to skip tests whose name\nmatches the provided pattern. Test name patterns are interpreted as\nJavaScript regular expressions. The --test-name-pattern and\n--test-skip-pattern options can be specified multiple times in order to run\nnested tests. For each test that is executed, any corresponding test hooks,\nsuch as beforeEach(), are also run. Tests that are not executed are omitted\nfrom the test runner output.

\n

Given the following test file, starting Node.js with the\n--test-name-pattern=\"test [1-3]\" option would cause the test runner to execute\ntest 1, test 2, and test 3. If test 1 did not match the test name\npattern, then its subtests would not execute, despite matching the pattern. The\nsame set of tests could also be executed by passing --test-name-pattern\nmultiple times (e.g. --test-name-pattern=\"test 1\",\n--test-name-pattern=\"test 2\", etc.).

\n
test('test 1', async (t) => {\n  await t.test('test 2');\n  await t.test('test 3');\n});\n\ntest('Test 4', async (t) => {\n  await t.test('Test 5');\n  await t.test('test 6');\n});\n
\n

Test name patterns can also be specified using regular expression literals. This\nallows regular expression flags to be used. In the previous example, starting\nNode.js with --test-name-pattern=\"/test [4-5]/i\" (or --test-skip-pattern=\"/test [4-5]/i\")\nwould match Test 4 and Test 5 because the pattern is case-insensitive.

\n

To match a single test with a pattern, you can prefix it with all of its ancestor\ntest names separated by spaces, to ensure it is unique.\nFor example, given the following test file:

\n
describe('test 1', (t) => {\n  it('some test');\n});\n\ndescribe('test 2', (t) => {\n  it('some test');\n});\n
\n

Starting Node.js with --test-name-pattern=\"test 1 some test\" would match\nonly some test in test 1.

\n

Test name patterns do not change the set of files that the test runner executes.

\n

If both --test-name-pattern and --test-skip-pattern are supplied,\ntests must satisfy both requirements in order to be executed.

", "displayName": "Filtering tests by name" }, { "textRaw": "Extraneous asynchronous activity", "name": "extraneous_asynchronous_activity", "type": "module", "desc": "

Once a test function finishes executing, the results are reported as quickly\nas possible while maintaining the order of the tests. However, it is possible\nfor the test function to generate asynchronous activity that outlives the test\nitself. The test runner handles this type of activity, but does not delay the\nreporting of test results in order to accommodate it.

\n

In the following example, a test completes with two setImmediate()\noperations still outstanding. The first setImmediate() attempts to create a\nnew subtest. Because the parent test has already finished and output its\nresults, the new subtest is immediately marked as failed, and reported later\nto the <TestsStream>.

\n

The second setImmediate() creates an uncaughtException event.\nuncaughtException and unhandledRejection events originating from a completed\ntest are marked as failed by the test module and reported as diagnostic\nwarnings at the top level by the <TestsStream>.

\n
test('a test that creates asynchronous activity', (t) => {\n  setImmediate(() => {\n    t.test('subtest that is created too late', (t) => {\n      throw new Error('error1');\n    });\n  });\n\n  setImmediate(() => {\n    throw new Error('error2');\n  });\n\n  // The test finishes after this line.\n});\n
", "displayName": "Extraneous asynchronous activity" }, { "textRaw": "Watch mode", "name": "watch_mode", "type": "module", "meta": { "added": [ "v19.2.0", "v18.13.0" ], "changes": [] }, "stability": 1, "stabilityText": "Experimental", "desc": "

The Node.js test runner supports running in watch mode by passing the --watch flag:

\n
node --test --watch\n
\n

In watch mode, the test runner will watch for changes to test files and\ntheir dependencies. When a change is detected, the test runner will\nrerun the tests affected by the change.\nThe test runner will continue to run until the process is terminated.

", "displayName": "Watch mode" }, { "textRaw": "Global setup and teardown", "name": "global_setup_and_teardown", "type": "module", "meta": { "added": [ "v24.0.0" ], "changes": [] }, "stability": 1, "stabilityText": "Early development", "desc": "

The test runner supports specifying a module that will be evaluated before all tests are executed and\ncan be used to set up global state or fixtures for tests. This is useful for preparing resources or setting up\nshared state that is required by multiple tests.

\n

This module can export any of the following:

\n
    \n
  • A globalSetup function which runs once before all tests start
  • \n
  • A globalTeardown function which runs once after all tests complete
  • \n
\n

The module is specified using the --test-global-setup flag when running tests from the command line.

\n
// setup-module.js\nasync function globalSetup() {\n  // Setup shared resources, state, or environment\n  console.log('Global setup executed');\n  // Run servers, create files, prepare databases, etc.\n}\n\nasync function globalTeardown() {\n  // Clean up resources, state, or environment\n  console.log('Global teardown executed');\n  // Close servers, remove files, disconnect from databases, etc.\n}\n\nmodule.exports = { globalSetup, globalTeardown };\n
\n
// setup-module.mjs\nexport async function globalSetup() {\n  // Setup shared resources, state, or environment\n  console.log('Global setup executed');\n  // Run servers, create files, prepare databases, etc.\n}\n\nexport async function globalTeardown() {\n  // Clean up resources, state, or environment\n  console.log('Global teardown executed');\n  // Close servers, remove files, disconnect from databases, etc.\n}\n
\n

If the global setup function throws an error, no tests will be run and the process will exit with a non-zero exit code.\nThe global teardown function will not be called in this case.

", "displayName": "Global setup and teardown" }, { "textRaw": "Running tests from the command line", "name": "running_tests_from_the_command_line", "type": "module", "desc": "

The Node.js test runner can be invoked from the command line by passing the\n--test flag:

\n
node --test\n
\n

By default, Node.js will run all files matching these patterns:

\n
    \n
  • **/*.test.{cjs,mjs,js}
  • \n
  • **/*-test.{cjs,mjs,js}
  • \n
  • **/*_test.{cjs,mjs,js}
  • \n
  • **/test-*.{cjs,mjs,js}
  • \n
  • **/test.{cjs,mjs,js}
  • \n
  • **/test/**/*.{cjs,mjs,js}
  • \n
\n

Unless --no-strip-types is supplied, the following\nadditional patterns are also matched:

\n
    \n
  • **/*.test.{cts,mts,ts}
  • \n
  • **/*-test.{cts,mts,ts}
  • \n
  • **/*_test.{cts,mts,ts}
  • \n
  • **/test-*.{cts,mts,ts}
  • \n
  • **/test.{cts,mts,ts}
  • \n
  • **/test/**/*.{cts,mts,ts}
  • \n
\n

Alternatively, one or more glob patterns can be provided as the\nfinal argument(s) to the Node.js command, as shown below.\nGlob patterns follow the behavior of glob(7).\nThe glob patterns should be enclosed in double quotes on the command line to\nprevent shell expansion, which can reduce portability across systems.

\n
node --test \"**/*.test.js\" \"**/*.spec.js\"\n
\n

Matching files are executed as test files.\nMore information on the test file execution can be found\nin the test runner execution model section.

", "modules": [ { "textRaw": "Test runner execution model", "name": "test_runner_execution_model", "type": "module", "desc": "

When process-level test isolation is enabled, each matching test file is\nexecuted in a separate child process. The maximum number of child processes\nrunning at any time is controlled by the --test-concurrency flag. If the\nchild process finishes with an exit code of 0, the test is considered passing.\nOtherwise, the test is considered to be a failure. Test files must be executable\nby Node.js, but are not required to use the node:test module internally.

\n

Each test file is executed as if it was a regular script. That is, if the test\nfile itself uses node:test to define tests, all of those tests will be\nexecuted within a single application thread, regardless of the value of the\nconcurrency option of test().

\n

When process-level test isolation is disabled, each matching test file is\nimported into the test runner process. Once all test files have been loaded, the\ntop level tests are executed with a concurrency of one. Because the test files\nare all run within the same context, it is possible for tests to interact with\neach other in ways that are not possible when isolation is enabled. For example,\nif a test relies on global state, it is possible for that state to be modified\nby a test originating from another file.

", "modules": [ { "textRaw": "Child process option inheritance", "name": "child_process_option_inheritance", "type": "module", "desc": "

When running tests in process isolation mode (the default), spawned child processes\ninherit Node.js options from the parent process, including those specified in\nconfiguration files. However, certain flags are filtered out to enable proper\ntest runner functionality:

\n
    \n
  • --test - Prevented to avoid recursive test execution
  • \n
  • --experimental-test-coverage - Managed by the test runner
  • \n
  • --watch - Watch mode is handled at the parent level
  • \n
  • --experimental-default-config-file - Config file loading is handled by the parent
  • \n
  • --test-reporter - Reporting is managed by the parent process
  • \n
  • --test-reporter-destination - Output destinations are controlled by the parent
  • \n
  • --experimental-config-file - Config file paths are managed by the parent
  • \n
\n

All other Node.js options from command line arguments, environment variables,\nand configuration files are inherited by the child processes.

", "displayName": "Child process option inheritance" } ], "displayName": "Test runner execution model" } ], "displayName": "Running tests from the command line" }, { "textRaw": "Collecting code coverage", "name": "collecting_code_coverage", "type": "module", "stability": 1, "stabilityText": "Experimental", "desc": "

When Node.js is started with the --experimental-test-coverage\ncommand-line flag, code coverage is collected and statistics are reported once\nall tests have completed. If the NODE_V8_COVERAGE environment variable is\nused to specify a code coverage directory, the generated V8 coverage files are\nwritten to that directory. Node.js core modules and files within\nnode_modules/ directories are, by default, not included in the coverage report.\nHowever, they can be explicitly included via the --test-coverage-include flag.\nBy default all the matching test files are excluded from the coverage report.\nExclusions can be overridden by using the --test-coverage-exclude flag.\nIf coverage is enabled, the coverage report is sent to any test reporters via\nthe 'test:coverage' event.

\n

Coverage can be disabled on a series of lines using the following\ncomment syntax:

\n
/* node:coverage disable */\nif (anAlwaysFalseCondition) {\n  // Code in this branch will never be executed, but the lines are ignored for\n  // coverage purposes. All lines following the 'disable' comment are ignored\n  // until a corresponding 'enable' comment is encountered.\n  console.log('this is never executed');\n}\n/* node:coverage enable */\n
\n

Coverage can also be disabled for a specified number of lines. After the\nspecified number of lines, coverage will be automatically reenabled. If the\nnumber of lines is not explicitly provided, a single line is ignored.

\n
/* node:coverage ignore next */\nif (anAlwaysFalseCondition) { console.log('this is never executed'); }\n\n/* node:coverage ignore next 3 */\nif (anAlwaysFalseCondition) {\n  console.log('this is never executed');\n}\n
", "modules": [ { "textRaw": "Coverage reporters", "name": "coverage_reporters", "type": "module", "desc": "

The tap and spec reporters will print a summary of the coverage statistics.\nThere is also an lcov reporter that will generate an lcov file, which can be\nused as an in-depth coverage report.

\n
node --test --experimental-test-coverage --test-reporter=lcov --test-reporter-destination=lcov.info\n
\n
    \n
  • No test results are reported by this reporter.
  • \n
  • This reporter should ideally be used alongside another reporter.
  • \n
", "displayName": "Coverage reporters" } ], "displayName": "Collecting code coverage" }, { "textRaw": "Mocking", "name": "mocking", "type": "module", "desc": "

The node:test module supports mocking during testing via a top-level mock\nobject. The following example creates a spy on a function that adds two numbers\ntogether. The spy is then used to assert that the function was called as\nexpected.

\n
import assert from 'node:assert';\nimport { mock, test } from 'node:test';\n\ntest('spies on a function', () => {\n  const sum = mock.fn((a, b) => {\n    return a + b;\n  });\n\n  assert.strictEqual(sum.mock.callCount(), 0);\n  assert.strictEqual(sum(3, 4), 7);\n  assert.strictEqual(sum.mock.callCount(), 1);\n\n  const call = sum.mock.calls[0];\n  assert.deepStrictEqual(call.arguments, [3, 4]);\n  assert.strictEqual(call.result, 7);\n  assert.strictEqual(call.error, undefined);\n\n  // Reset the globally tracked mocks.\n  mock.reset();\n});\n
\n
'use strict';\nconst assert = require('node:assert');\nconst { mock, test } = require('node:test');\n\ntest('spies on a function', () => {\n  const sum = mock.fn((a, b) => {\n    return a + b;\n  });\n\n  assert.strictEqual(sum.mock.callCount(), 0);\n  assert.strictEqual(sum(3, 4), 7);\n  assert.strictEqual(sum.mock.callCount(), 1);\n\n  const call = sum.mock.calls[0];\n  assert.deepStrictEqual(call.arguments, [3, 4]);\n  assert.strictEqual(call.result, 7);\n  assert.strictEqual(call.error, undefined);\n\n  // Reset the globally tracked mocks.\n  mock.reset();\n});\n
\n

The same mocking functionality is also exposed on the TestContext object\nof each test. The following example creates a spy on an object method using the\nAPI exposed on the TestContext. The benefit of mocking via the test context is\nthat the test runner will automatically restore all mocked functionality once\nthe test finishes.

\n
test('spies on an object method', (t) => {\n  const number = {\n    value: 5,\n    add(a) {\n      return this.value + a;\n    },\n  };\n\n  t.mock.method(number, 'add');\n  assert.strictEqual(number.add.mock.callCount(), 0);\n  assert.strictEqual(number.add(3), 8);\n  assert.strictEqual(number.add.mock.callCount(), 1);\n\n  const call = number.add.mock.calls[0];\n\n  assert.deepStrictEqual(call.arguments, [3]);\n  assert.strictEqual(call.result, 8);\n  assert.strictEqual(call.target, undefined);\n  assert.strictEqual(call.this, number);\n});\n
", "modules": [ { "textRaw": "Timers", "name": "timers", "type": "module", "desc": "

Mocking timers is a technique commonly used in software testing to simulate and\ncontrol the behavior of timers, such as setInterval and setTimeout,\nwithout actually waiting for the specified time intervals.

\n

Refer to the MockTimers class for a full list of methods and features.

\n

This allows developers to write more reliable and\npredictable tests for time-dependent functionality.

\n

The example below shows how to mock setTimeout.\nUsing .enable({ apis: ['setTimeout'] })\nmocks the setTimeout functions in the node:timers and\nnode:timers/promises modules,\nas well as in the Node.js global context.

\n

Note: Destructuring functions such as\nimport { setTimeout } from 'node:timers'\nis currently not supported by this API.

\n
import assert from 'node:assert';\nimport { mock, test } from 'node:test';\n\ntest('mocks setTimeout to be executed synchronously without having to actually wait for it', () => {\n  const fn = mock.fn();\n\n  // Optionally choose what to mock\n  mock.timers.enable({ apis: ['setTimeout'] });\n  setTimeout(fn, 9999);\n  assert.strictEqual(fn.mock.callCount(), 0);\n\n  // Advance in time\n  mock.timers.tick(9999);\n  assert.strictEqual(fn.mock.callCount(), 1);\n\n  // Reset the globally tracked mocks.\n  mock.timers.reset();\n\n  // Calling reset() on the mock instance also resets the timers instance.\n  mock.reset();\n});\n
\n
const assert = require('node:assert');\nconst { mock, test } = require('node:test');\n\ntest('mocks setTimeout to be executed synchronously without having to actually wait for it', () => {\n  const fn = mock.fn();\n\n  // Optionally choose what to mock\n  mock.timers.enable({ apis: ['setTimeout'] });\n  setTimeout(fn, 9999);\n  assert.strictEqual(fn.mock.callCount(), 0);\n\n  // Advance in time\n  mock.timers.tick(9999);\n  assert.strictEqual(fn.mock.callCount(), 1);\n\n  // Reset the globally tracked mocks.\n  mock.timers.reset();\n\n  // Calling reset() on the mock instance also resets the timers instance.\n  mock.reset();\n});\n
\n

The same mocking functionality is also exposed in the mock property on the TestContext object\nof each test. The benefit of mocking via the test context is\nthat the test runner will automatically restore all mocked timers\nfunctionality once the test finishes.

\n
import assert from 'node:assert';\nimport { test } from 'node:test';\n\ntest('mocks setTimeout to be executed synchronously without having to actually wait for it', (context) => {\n  const fn = context.mock.fn();\n\n  // Optionally choose what to mock\n  context.mock.timers.enable({ apis: ['setTimeout'] });\n  setTimeout(fn, 9999);\n  assert.strictEqual(fn.mock.callCount(), 0);\n\n  // Advance in time\n  context.mock.timers.tick(9999);\n  assert.strictEqual(fn.mock.callCount(), 1);\n});\n
\n
const assert = require('node:assert');\nconst { test } = require('node:test');\n\ntest('mocks setTimeout to be executed synchronously without having to actually wait for it', (context) => {\n  const fn = context.mock.fn();\n\n  // Optionally choose what to mock\n  context.mock.timers.enable({ apis: ['setTimeout'] });\n  setTimeout(fn, 9999);\n  assert.strictEqual(fn.mock.callCount(), 0);\n\n  // Advance in time\n  context.mock.timers.tick(9999);\n  assert.strictEqual(fn.mock.callCount(), 1);\n});\n
", "displayName": "Timers" }, { "textRaw": "Dates", "name": "dates", "type": "module", "desc": "

The mock timers API also allows the mocking of the Date object. This is a\nuseful feature for testing time-dependent functionality, such as code that\nrelies on Date.now().

\n

The dates implementation is also part of the MockTimers class. Refer to it\nfor a full list of methods and features.

\n

Note: Dates and timers are dependent when mocked together. This means that\nif you have both the Date and setTimeout mocked, advancing the time will\nalso advance the mocked date as they simulate a single internal clock.

\n

The example below shows how to mock the Date object and obtain the current\nDate.now() value.

\n
import assert from 'node:assert';\nimport { test } from 'node:test';\n\ntest('mocks the Date object', (context) => {\n  // Optionally choose what to mock\n  context.mock.timers.enable({ apis: ['Date'] });\n  // If not specified, the initial date will be based on 0 in the UNIX epoch\n  assert.strictEqual(Date.now(), 0);\n\n  // Advance in time will also advance the date\n  context.mock.timers.tick(9999);\n  assert.strictEqual(Date.now(), 9999);\n});\n
\n
const assert = require('node:assert');\nconst { test } = require('node:test');\n\ntest('mocks the Date object', (context) => {\n  // Optionally choose what to mock\n  context.mock.timers.enable({ apis: ['Date'] });\n  // If not specified, the initial date will be based on 0 in the UNIX epoch\n  assert.strictEqual(Date.now(), 0);\n\n  // Advance in time will also advance the date\n  context.mock.timers.tick(9999);\n  assert.strictEqual(Date.now(), 9999);\n});\n
\n

If there is no initial epoch set, the initial date will be based on 0 in the\nUnix epoch. This is January 1st, 1970, 00:00:00 UTC. You can set an initial date\nby passing a now property to the .enable() method. This value will be used\nas the initial date for the mocked Date object. It can be either a positive\ninteger or another Date object.

\n
import assert from 'node:assert';\nimport { test } from 'node:test';\n\ntest('mocks the Date object with initial time', (context) => {\n  // Optionally choose what to mock\n  context.mock.timers.enable({ apis: ['Date'], now: 100 });\n  assert.strictEqual(Date.now(), 100);\n\n  // Advance in time will also advance the date\n  context.mock.timers.tick(200);\n  assert.strictEqual(Date.now(), 300);\n});\n
\n
const assert = require('node:assert');\nconst { test } = require('node:test');\n\ntest('mocks the Date object with initial time', (context) => {\n  // Optionally choose what to mock\n  context.mock.timers.enable({ apis: ['Date'], now: 100 });\n  assert.strictEqual(Date.now(), 100);\n\n  // Advance in time will also advance the date\n  context.mock.timers.tick(200);\n  assert.strictEqual(Date.now(), 300);\n});\n
\n

You can use the .setTime() method to manually move the mocked date to another\ntime. This method only accepts a positive integer.

\n

Note: This method will not execute any mocked timers that are in the past\nfrom the new time.

\n

The example below sets a new time for the mocked date.

\n
import assert from 'node:assert';\nimport { test } from 'node:test';\n\ntest('sets the time of a date object', (context) => {\n  // Optionally choose what to mock\n  context.mock.timers.enable({ apis: ['Date'], now: 100 });\n  assert.strictEqual(Date.now(), 100);\n\n  // Advance in time will also advance the date\n  context.mock.timers.setTime(1000);\n  context.mock.timers.tick(200);\n  assert.strictEqual(Date.now(), 1200);\n});\n
\n
const assert = require('node:assert');\nconst { test } = require('node:test');\n\ntest('sets the time of a date object', (context) => {\n  // Optionally choose what to mock\n  context.mock.timers.enable({ apis: ['Date'], now: 100 });\n  assert.strictEqual(Date.now(), 100);\n\n  // Advance in time will also advance the date\n  context.mock.timers.setTime(1000);\n  context.mock.timers.tick(200);\n  assert.strictEqual(Date.now(), 1200);\n});\n
\n

Timers left in the past by a call to setTime() will not run automatically. To\nexecute them, use the .tick() method to advance the clock from the new time.

\n
import assert from 'node:assert';\nimport { test } from 'node:test';\n\ntest('setTime does not execute timers', (context) => {\n  // Optionally choose what to mock\n  context.mock.timers.enable({ apis: ['setTimeout', 'Date'] });\n  const fn = context.mock.fn();\n  setTimeout(fn, 1000);\n\n  context.mock.timers.setTime(800);\n  // Timer is not executed as the time is not yet reached\n  assert.strictEqual(fn.mock.callCount(), 0);\n  assert.strictEqual(Date.now(), 800);\n\n  context.mock.timers.setTime(1200);\n  // Timer is still not executed\n  assert.strictEqual(fn.mock.callCount(), 0);\n  // Advance in time to execute the timer\n  context.mock.timers.tick(0);\n  assert.strictEqual(fn.mock.callCount(), 1);\n  assert.strictEqual(Date.now(), 1200);\n});\n
\n
const assert = require('node:assert');\nconst { test } = require('node:test');\n\ntest('setTime does not execute timers', (context) => {\n  // Optionally choose what to mock\n  context.mock.timers.enable({ apis: ['setTimeout', 'Date'] });\n  const fn = context.mock.fn();\n  setTimeout(fn, 1000);\n\n  context.mock.timers.setTime(800);\n  // Timer is not executed as the time is not yet reached\n  assert.strictEqual(fn.mock.callCount(), 0);\n  assert.strictEqual(Date.now(), 800);\n\n  context.mock.timers.setTime(1200);\n  // Timer is still not executed\n  assert.strictEqual(fn.mock.callCount(), 0);\n  // Advance in time to execute the timer\n  context.mock.timers.tick(0);\n  assert.strictEqual(fn.mock.callCount(), 1);\n  assert.strictEqual(Date.now(), 1200);\n});\n
\n

Using .runAll() will execute all timers that are currently in the queue. This\nwill also advance the mocked date to the time of the last timer that was\nexecuted, as if that time had passed.

\n
import assert from 'node:assert';\nimport { test } from 'node:test';\n\ntest('runs all pending timers', (context) => {\n  // Optionally choose what to mock\n  context.mock.timers.enable({ apis: ['setTimeout', 'Date'] });\n  const fn = context.mock.fn();\n  setTimeout(fn, 1000);\n  setTimeout(fn, 2000);\n  setTimeout(fn, 3000);\n\n  context.mock.timers.runAll();\n  // All timers are executed as the time is now reached\n  assert.strictEqual(fn.mock.callCount(), 3);\n  assert.strictEqual(Date.now(), 3000);\n});\n
\n
const assert = require('node:assert');\nconst { test } = require('node:test');\n\ntest('runs all pending timers', (context) => {\n  // Optionally choose what to mock\n  context.mock.timers.enable({ apis: ['setTimeout', 'Date'] });\n  const fn = context.mock.fn();\n  setTimeout(fn, 1000);\n  setTimeout(fn, 2000);\n  setTimeout(fn, 3000);\n\n  context.mock.timers.runAll();\n  // All timers are executed as the time is now reached\n  assert.strictEqual(fn.mock.callCount(), 3);\n  assert.strictEqual(Date.now(), 3000);\n});\n
", "displayName": "Dates" } ], "displayName": "Mocking" }, { "textRaw": "Snapshot testing", "name": "snapshot_testing", "type": "module", "meta": { "added": [ "v22.3.0" ], "changes": [ { "version": "v23.4.0", "pr-url": "https://github.com/nodejs/node/pull/55897", "description": "Snapshot testing is no longer experimental." } ] }, "desc": "

Snapshot tests allow arbitrary values to be serialized into string values and\ncompared against a set of known good values. These known good values are called\nsnapshots, and are stored in a snapshot file. Snapshot files are managed by the\ntest runner, but are designed to be human-readable to aid in debugging. Best\npractice is for snapshot files to be checked into source control along with your\ntest files.

\n

Snapshot files are generated by starting Node.js with the\n--test-update-snapshots command-line flag. A separate snapshot file is\ngenerated for each test file. By default, the snapshot file has the same name\nas the test file with a .snapshot file extension. This behavior can be\nconfigured using the snapshot.setResolveSnapshotPath() function. Each\nsnapshot assertion corresponds to an export in the snapshot file.

\n

An example snapshot test is shown below. The first time this test is executed,\nit will fail because the corresponding snapshot file does not exist.

\n
// test.js\nconst { suite, test } = require('node:test');\n\nsuite('suite of snapshot tests', () => {\n  test('snapshot test', (t) => {\n    t.assert.snapshot({ value1: 1, value2: 2 });\n    t.assert.snapshot(5);\n  });\n});\n
\n

Generate the snapshot file by running the test file with\n--test-update-snapshots. The test should pass, and a file named\ntest.js.snapshot is created in the same directory as the test file. The\ncontents of the snapshot file are shown below. Each snapshot is identified by\nthe full name of the test and a counter to differentiate between snapshots in\nthe same test.

\n
exports[`suite of snapshot tests > snapshot test 1`] = `\n{\n  \"value1\": 1,\n  \"value2\": 2\n}\n`;\n\nexports[`suite of snapshot tests > snapshot test 2`] = `\n5\n`;\n
\n

Once the snapshot file is created, run the tests again without the\n--test-update-snapshots flag. The tests should pass now.

", "displayName": "Snapshot testing" }, { "textRaw": "Test reporters", "name": "test_reporters", "type": "module", "meta": { "added": [ "v19.6.0", "v18.15.0" ], "changes": [ { "version": "v23.0.0", "pr-url": "https://github.com/nodejs/node/pull/54548", "description": "The default reporter on non-TTY stdout is changed from `tap` to `spec`, aligning with TTY stdout." }, { "version": [ "v19.9.0", "v18.17.0" ], "pr-url": "https://github.com/nodejs/node/pull/47238", "description": "Reporters are now exposed at `node:test/reporters`." } ] }, "desc": "

The node:test module supports passing the --test-reporter\nflag to have the test runner use a specific reporter.

\n

The following built-in reporters are supported:

\n
    \n
  • \n

    spec\nThe spec reporter outputs the test results in a human-readable format. This\nis the default reporter.

    \n
  • \n
  • \n

    tap\nThe tap reporter outputs the test results in the TAP format.

    \n
  • \n
  • \n

    dot\nThe dot reporter outputs the test results in a compact format,\nwhere each passing test is represented by a .,\nand each failing test is represented by a X.

    \n
  • \n
  • \n

    junit\nThe junit reporter outputs test results in JUnit XML format.

    \n
  • \n
  • \n

    lcov\nThe lcov reporter outputs test coverage when used with the\n--experimental-test-coverage flag.

    \n
  • \n
\n

The exact output of these reporters is subject to change between versions of\nNode.js, and should not be relied on programmatically. If programmatic access\nto the test runner's output is required, use the events emitted by the\n<TestsStream>.

\n

The reporters are available via the node:test/reporters module:

\n
import { tap, spec, dot, junit, lcov } from 'node:test/reporters';\n
\n
const { tap, spec, dot, junit, lcov } = require('node:test/reporters');\n
", "modules": [ { "textRaw": "Custom reporters", "name": "custom_reporters", "type": "module", "desc": "

--test-reporter can be used to specify a path to a custom reporter.\nA custom reporter is a module that exports a value\naccepted by stream.compose.\nReporters should transform events emitted by a <TestsStream>.

\n

Example of a custom reporter using <stream.Transform>:

\n
import { Transform } from 'node:stream';\n\nconst customReporter = new Transform({\n  writableObjectMode: true,\n  transform(event, encoding, callback) {\n    switch (event.type) {\n      case 'test:dequeue':\n        callback(null, `test ${event.data.name} dequeued`);\n        break;\n      case 'test:enqueue':\n        callback(null, `test ${event.data.name} enqueued`);\n        break;\n      case 'test:watch:drained':\n        callback(null, 'test watch queue drained');\n        break;\n      case 'test:watch:restarted':\n        callback(null, 'test watch restarted due to file change');\n        break;\n      case 'test:start':\n        callback(null, `test ${event.data.name} started`);\n        break;\n      case 'test:pass':\n        callback(null, `test ${event.data.name} passed`);\n        break;\n      case 'test:fail':\n        callback(null, `test ${event.data.name} failed`);\n        break;\n      case 'test:plan':\n        callback(null, 'test plan');\n        break;\n      case 'test:diagnostic':\n      case 'test:stderr':\n      case 'test:stdout':\n        callback(null, event.data.message);\n        break;\n      case 'test:coverage': {\n        const { totalLineCount } = event.data.summary.totals;\n        callback(null, `total line count: ${totalLineCount}\\n`);\n        break;\n      }\n    }\n  },\n});\n\nexport default customReporter;\n
\n
const { Transform } = require('node:stream');\n\nconst customReporter = new Transform({\n  writableObjectMode: true,\n  transform(event, encoding, callback) {\n    switch (event.type) {\n      case 'test:dequeue':\n        callback(null, `test ${event.data.name} dequeued`);\n        break;\n      case 'test:enqueue':\n        callback(null, `test ${event.data.name} enqueued`);\n        break;\n      case 'test:watch:drained':\n        callback(null, 'test watch queue drained');\n        break;\n      case 'test:watch:restarted':\n        callback(null, 'test watch restarted due to file change');\n        break;\n      case 'test:start':\n        callback(null, `test ${event.data.name} started`);\n        break;\n      case 'test:pass':\n        callback(null, `test ${event.data.name} passed`);\n        break;\n      case 'test:fail':\n        callback(null, `test ${event.data.name} failed`);\n        break;\n      case 'test:plan':\n        callback(null, 'test plan');\n        break;\n      case 'test:diagnostic':\n      case 'test:stderr':\n      case 'test:stdout':\n        callback(null, event.data.message);\n        break;\n      case 'test:coverage': {\n        const { totalLineCount } = event.data.summary.totals;\n        callback(null, `total line count: ${totalLineCount}\\n`);\n        break;\n      }\n    }\n  },\n});\n\nmodule.exports = customReporter;\n
\n

Example of a custom reporter using a generator function:

\n
export default async function * customReporter(source) {\n  for await (const event of source) {\n    switch (event.type) {\n      case 'test:dequeue':\n        yield `test ${event.data.name} dequeued\\n`;\n        break;\n      case 'test:enqueue':\n        yield `test ${event.data.name} enqueued\\n`;\n        break;\n      case 'test:watch:drained':\n        yield 'test watch queue drained\\n';\n        break;\n      case 'test:watch:restarted':\n        yield 'test watch restarted due to file change\\n';\n        break;\n      case 'test:start':\n        yield `test ${event.data.name} started\\n`;\n        break;\n      case 'test:pass':\n        yield `test ${event.data.name} passed\\n`;\n        break;\n      case 'test:fail':\n        yield `test ${event.data.name} failed\\n`;\n        break;\n      case 'test:plan':\n        yield 'test plan\\n';\n        break;\n      case 'test:diagnostic':\n      case 'test:stderr':\n      case 'test:stdout':\n        yield `${event.data.message}\\n`;\n        break;\n      case 'test:coverage': {\n        const { totalLineCount } = event.data.summary.totals;\n        yield `total line count: ${totalLineCount}\\n`;\n        break;\n      }\n    }\n  }\n}\n
\n
module.exports = async function * customReporter(source) {\n  for await (const event of source) {\n    switch (event.type) {\n      case 'test:dequeue':\n        yield `test ${event.data.name} dequeued\\n`;\n        break;\n      case 'test:enqueue':\n        yield `test ${event.data.name} enqueued\\n`;\n        break;\n      case 'test:watch:drained':\n        yield 'test watch queue drained\\n';\n        break;\n      case 'test:watch:restarted':\n        yield 'test watch restarted due to file change\\n';\n        break;\n      case 'test:start':\n        yield `test ${event.data.name} started\\n`;\n        break;\n      case 'test:pass':\n        yield `test ${event.data.name} passed\\n`;\n        break;\n      case 'test:fail':\n        yield `test ${event.data.name} failed\\n`;\n        break;\n      case 'test:plan':\n        yield 'test plan\\n';\n        break;\n      case 'test:diagnostic':\n      case 'test:stderr':\n      case 'test:stdout':\n        yield `${event.data.message}\\n`;\n        break;\n      case 'test:coverage': {\n        const { totalLineCount } = event.data.summary.totals;\n        yield `total line count: ${totalLineCount}\\n`;\n        break;\n      }\n    }\n  }\n};\n
\n

The value provided to --test-reporter should be a string like one used in an\nimport() in JavaScript code, or a value provided for --import.

", "displayName": "Custom reporters" }, { "textRaw": "Multiple reporters", "name": "multiple_reporters", "type": "module", "desc": "

The --test-reporter flag can be specified multiple times to report test\nresults in several formats. In this situation,\na destination must be specified for each reporter\nusing --test-reporter-destination.\nThe destination can be stdout, stderr, or a file path.\nReporters and destinations are paired according\nto the order in which they were specified.

\n

In the following example, the spec reporter will output to stdout,\nand the dot reporter will output to file.txt:

\n
node --test-reporter=spec --test-reporter=dot --test-reporter-destination=stdout --test-reporter-destination=file.txt\n
\n

When a single reporter is specified, the destination will default to stdout,\nunless a destination is explicitly provided.

", "displayName": "Multiple reporters" } ], "displayName": "Test reporters" }, { "textRaw": "`assert`", "name": "`assert`", "type": "module", "meta": { "added": [ "v23.7.0", "v22.14.0" ], "changes": [] }, "desc": "

An object whose methods are used to configure available assertions on the\nTestContext objects in the current process. The methods from node:assert\nand snapshot testing functions are available by default.

\n

It is possible to apply the same configuration to all files by placing common\nconfiguration code in a module\npreloaded with --require or --import.

", "methods": [ { "textRaw": "`assert.register(name, fn)`", "name": "register", "type": "method", "meta": { "added": [ "v23.7.0", "v22.14.0" ], "changes": [] }, "signatures": [ { "params": [ { "name": "name" }, { "name": "fn" } ] } ], "desc": "

Defines a new assertion function with the provided name and function. If an\nassertion already exists with the same name, it is overwritten.

" } ], "displayName": "`assert`" }, { "textRaw": "`snapshot`", "name": "`snapshot`", "type": "module", "meta": { "added": [ "v22.3.0" ], "changes": [] }, "desc": "

An object whose methods are used to configure default snapshot settings in the\ncurrent process. It is possible to apply the same configuration to all files by\nplacing common configuration code in a module preloaded with --require or\n--import.

", "methods": [ { "textRaw": "`snapshot.setDefaultSnapshotSerializers(serializers)`", "name": "setDefaultSnapshotSerializers", "type": "method", "meta": { "added": [ "v22.3.0" ], "changes": [] }, "signatures": [ { "params": [ { "textRaw": "`serializers` {Array} An array of synchronous functions used as the default serializers for snapshot tests.", "name": "serializers", "type": "Array", "desc": "An array of synchronous functions used as the default serializers for snapshot tests." } ] } ], "desc": "

This function is used to customize the default serialization mechanism used by\nthe test runner. By default, the test runner performs serialization by calling\nJSON.stringify(value, null, 2) on the provided value. JSON.stringify() does\nhave limitations regarding circular structures and supported data types. If a\nmore robust serialization mechanism is required, this function should be used.

" }, { "textRaw": "`snapshot.setResolveSnapshotPath(fn)`", "name": "setResolveSnapshotPath", "type": "method", "meta": { "added": [ "v22.3.0" ], "changes": [] }, "signatures": [ { "params": [ { "textRaw": "`fn` {Function} A function used to compute the location of the snapshot file. The function receives the path of the test file as its only argument. If the test is not associated with a file (for example in the REPL), the input is undefined. `fn()` must return a string specifying the location of the snapshot snapshot file.", "name": "fn", "type": "Function", "desc": "A function used to compute the location of the snapshot file. The function receives the path of the test file as its only argument. If the test is not associated with a file (for example in the REPL), the input is undefined. `fn()` must return a string specifying the location of the snapshot snapshot file." } ] } ], "desc": "

This function is used to customize the location of the snapshot file used for\nsnapshot testing. By default, the snapshot filename is the same as the entry\npoint filename with a .snapshot file extension.

" } ], "displayName": "`snapshot`" } ], "methods": [ { "textRaw": "`run([options])`", "name": "run", "type": "method", "meta": { "added": [ "v18.9.0", "v16.19.0" ], "changes": [ { "version": "v25.6.0", "pr-url": "https://github.com/nodejs/node/pull/61367", "description": "Add the `env` option." }, { "version": "v24.7.0", "pr-url": "https://github.com/nodejs/node/pull/59443", "description": "Added a rerunFailuresFilePath option." }, { "version": "v23.0.0", "pr-url": "https://github.com/nodejs/node/pull/54705", "description": "Added the `cwd` option." }, { "version": [ "v23.0.0", "v22.10.0" ], "pr-url": "https://github.com/nodejs/node/pull/53937", "description": "Added coverage options." }, { "version": "v22.8.0", "pr-url": "https://github.com/nodejs/node/pull/53927", "description": "Added the `isolation` option." }, { "version": "v22.6.0", "pr-url": "https://github.com/nodejs/node/pull/53866", "description": "Added the `globPatterns` option." }, { "version": [ "v22.0.0", "v20.14.0" ], "pr-url": "https://github.com/nodejs/node/pull/52038", "description": "Added the `forceExit` option." }, { "version": [ "v20.1.0", "v18.17.0" ], "pr-url": "https://github.com/nodejs/node/pull/47628", "description": "Add a testNamePatterns option." } ] }, "signatures": [ { "params": [ { "textRaw": "`options` {Object} Configuration options for running tests. The following properties are supported:", "name": "options", "type": "Object", "desc": "Configuration options for running tests. The following properties are supported:", "options": [ { "textRaw": "`concurrency` {number|boolean} If a number is provided, then that many test processes would run in parallel, where each process corresponds to one test file. If `true`, it would run `os.availableParallelism() - 1` test files in parallel. If `false`, it would only run one test file at a time. 
**Default:** `false`.", "name": "concurrency", "type": "number|boolean", "default": "`false`", "desc": "If a number is provided, then that many test processes would run in parallel, where each process corresponds to one test file. If `true`, it would run `os.availableParallelism() - 1` test files in parallel. If `false`, it would only run one test file at a time." }, { "textRaw": "`cwd` {string} Specifies the current working directory to be used by the test runner. Serves as the base path for resolving files as if running tests from the command line from that directory. **Default:** `process.cwd()`.", "name": "cwd", "type": "string", "default": "`process.cwd()`", "desc": "Specifies the current working directory to be used by the test runner. Serves as the base path for resolving files as if running tests from the command line from that directory." }, { "textRaw": "`files` {Array} An array containing the list of files to run. **Default:** Same as running tests from the command line.", "name": "files", "type": "Array", "default": "Same as running tests from the command line", "desc": "An array containing the list of files to run." }, { "textRaw": "`forceExit` {boolean} Configures the test runner to exit the process once all known tests have finished executing even if the event loop would otherwise remain active. **Default:** `false`.", "name": "forceExit", "type": "boolean", "default": "`false`", "desc": "Configures the test runner to exit the process once all known tests have finished executing even if the event loop would otherwise remain active." }, { "textRaw": "`globPatterns` {Array} An array containing the list of glob patterns to match test files. This option cannot be used together with `files`. **Default:** Same as running tests from the command line.", "name": "globPatterns", "type": "Array", "default": "Same as running tests from the command line", "desc": "An array containing the list of glob patterns to match test files. 
This option cannot be used together with `files`." }, { "textRaw": "`inspectPort` {number|Function} Sets inspector port of test child process. This can be a number, or a function that takes no arguments and returns a number. If a nullish value is provided, each process gets its own port, incremented from the primary's `process.debugPort`. This option is ignored if the `isolation` option is set to `'none'` as no child processes are spawned. **Default:** `undefined`.", "name": "inspectPort", "type": "number|Function", "default": "`undefined`", "desc": "Sets inspector port of test child process. This can be a number, or a function that takes no arguments and returns a number. If a nullish value is provided, each process gets its own port, incremented from the primary's `process.debugPort`. This option is ignored if the `isolation` option is set to `'none'` as no child processes are spawned." }, { "textRaw": "`isolation` {string} Configures the type of test isolation. If set to `'process'`, each test file is run in a separate child process. If set to `'none'`, all test files run in the current process. **Default:** `'process'`.", "name": "isolation", "type": "string", "default": "`'process'`", "desc": "Configures the type of test isolation. If set to `'process'`, each test file is run in a separate child process. If set to `'none'`, all test files run in the current process." }, { "textRaw": "`only` {boolean} If truthy, the test context will only run tests that have the `only` option set", "name": "only", "type": "boolean", "desc": "If truthy, the test context will only run tests that have the `only` option set" }, { "textRaw": "`setup` {Function} A function that accepts the `TestsStream` instance and can be used to setup listeners before any tests are run. **Default:** `undefined`.", "name": "setup", "type": "Function", "default": "`undefined`", "desc": "A function that accepts the `TestsStream` instance and can be used to setup listeners before any tests are run." 
}, { "textRaw": "`execArgv` {Array} An array of CLI flags to pass to the `node` executable when spawning the subprocesses. This option has no effect when `isolation` is `'none'`. **Default:** `[]`", "name": "execArgv", "type": "Array", "default": "`[]`", "desc": "An array of CLI flags to pass to the `node` executable when spawning the subprocesses. This option has no effect when `isolation` is `'none'`." }, { "textRaw": "`argv` {Array} An array of CLI flags to pass to each test file when spawning the subprocesses. This option has no effect when `isolation` is `'none'`. **Default:** `[]`.", "name": "argv", "type": "Array", "default": "`[]`", "desc": "An array of CLI flags to pass to each test file when spawning the subprocesses. This option has no effect when `isolation` is `'none'`." }, { "textRaw": "`signal` {AbortSignal} Allows aborting an in-progress test execution.", "name": "signal", "type": "AbortSignal", "desc": "Allows aborting an in-progress test execution." }, { "textRaw": "`testNamePatterns` {string|RegExp|Array} A String, RegExp or a RegExp Array, that can be used to only run tests whose name matches the provided pattern. Test name patterns are interpreted as JavaScript regular expressions. For each test that is executed, any corresponding test hooks, such as `beforeEach()`, are also run. **Default:** `undefined`.", "name": "testNamePatterns", "type": "string|RegExp|Array", "default": "`undefined`", "desc": "A String, RegExp or a RegExp Array, that can be used to only run tests whose name matches the provided pattern. Test name patterns are interpreted as JavaScript regular expressions. For each test that is executed, any corresponding test hooks, such as `beforeEach()`, are also run." }, { "textRaw": "`testSkipPatterns` {string|RegExp|Array} A String, RegExp or a RegExp Array, that can be used to exclude running tests whose name matches the provided pattern. Test name patterns are interpreted as JavaScript regular expressions.
For each test that is executed, any corresponding test hooks, such as `beforeEach()`, are also run. **Default:** `undefined`.", "name": "testSkipPatterns", "type": "string|RegExp|Array", "default": "`undefined`", "desc": "A String, RegExp or a RegExp Array, that can be used to exclude running tests whose name matches the provided pattern. Test name patterns are interpreted as JavaScript regular expressions. For each test that is executed, any corresponding test hooks, such as `beforeEach()`, are also run." }, { "textRaw": "`timeout` {number} A number of milliseconds the test execution will fail after. If unspecified, subtests inherit this value from their parent. **Default:** `Infinity`.", "name": "timeout", "type": "number", "default": "`Infinity`", "desc": "A number of milliseconds the test execution will fail after. If unspecified, subtests inherit this value from their parent." }, { "textRaw": "`watch` {boolean} Whether to run in watch mode or not. **Default:** `false`.", "name": "watch", "type": "boolean", "default": "`false`", "desc": "Whether to run in watch mode or not." }, { "textRaw": "`shard` {Object} Running tests in a specific shard. **Default:** `undefined`.", "name": "shard", "type": "Object", "default": "`undefined`", "desc": "Running tests in a specific shard.", "options": [ { "textRaw": "`index` {number} is a positive integer between 1 and {total} that specifies the index of the shard to run. This option is _required_.", "name": "index", "type": "number", "desc": "is a positive integer between 1 and {total} that specifies the index of the shard to run. This option is _required_." }, { "textRaw": "`total` {number} is a positive integer that specifies the total number of shards to split the test files to. This option is _required_.", "name": "total", "type": "number", "desc": "is a positive integer that specifies the total number of shards to split the test files to. This option is _required_." 
} ] }, { "textRaw": "`rerunFailuresFilePath` {string} A file path where the test runner will store the state of the tests to allow rerunning only the failed tests on a next run. See [Rerunning failed tests][] for more information. **Default:** `undefined`.", "name": "rerunFailuresFilePath", "type": "string", "default": "`undefined`", "desc": "A file path where the test runner will store the state of the tests to allow rerunning only the failed tests on a next run. See [Rerunning failed tests][] for more information." }, { "textRaw": "`coverage` {boolean} enable code coverage collection. **Default:** `false`.", "name": "coverage", "type": "boolean", "default": "`false`", "desc": "enable code coverage collection." }, { "textRaw": "`coverageExcludeGlobs` {string|Array} Excludes specific files from code coverage using a glob pattern, which can match both absolute and relative file paths. This property is only applicable when `coverage` was set to `true`. If both `coverageExcludeGlobs` and `coverageIncludeGlobs` are provided, files must meet **both** criteria to be included in the coverage report. **Default:** `undefined`.", "name": "coverageExcludeGlobs", "type": "string|Array", "default": "`undefined`", "desc": "Excludes specific files from code coverage using a glob pattern, which can match both absolute and relative file paths. This property is only applicable when `coverage` was set to `true`. If both `coverageExcludeGlobs` and `coverageIncludeGlobs` are provided, files must meet **both** criteria to be included in the coverage report." }, { "textRaw": "`coverageIncludeGlobs` {string|Array} Includes specific files in code coverage using a glob pattern, which can match both absolute and relative file paths. This property is only applicable when `coverage` was set to `true`. If both `coverageExcludeGlobs` and `coverageIncludeGlobs` are provided, files must meet **both** criteria to be included in the coverage report.
**Default:** `undefined`.", "name": "coverageIncludeGlobs", "type": "string|Array", "default": "`undefined`", "desc": "Includes specific files in code coverage using a glob pattern, which can match both absolute and relative file paths. This property is only applicable when `coverage` was set to `true`. If both `coverageExcludeGlobs` and `coverageIncludeGlobs` are provided, files must meet **both** criteria to be included in the coverage report." }, { "textRaw": "`lineCoverage` {number} Require a minimum percent of covered lines. If code coverage does not reach the threshold specified, the process will exit with code `1`. **Default:** `0`.", "name": "lineCoverage", "type": "number", "default": "`0`", "desc": "Require a minimum percent of covered lines. If code coverage does not reach the threshold specified, the process will exit with code `1`." }, { "textRaw": "`branchCoverage` {number} Require a minimum percent of covered branches. If code coverage does not reach the threshold specified, the process will exit with code `1`. **Default:** `0`.", "name": "branchCoverage", "type": "number", "default": "`0`", "desc": "Require a minimum percent of covered branches. If code coverage does not reach the threshold specified, the process will exit with code `1`." }, { "textRaw": "`functionCoverage` {number} Require a minimum percent of covered functions. If code coverage does not reach the threshold specified, the process will exit with code `1`. **Default:** `0`.", "name": "functionCoverage", "type": "number", "default": "`0`", "desc": "Require a minimum percent of covered functions. If code coverage does not reach the threshold specified, the process will exit with code `1`." }, { "textRaw": "`env` {Object} Specify environment variables to be passed along to the test process. This option is not compatible with `isolation='none'`. These variables will override those from the main process, and are not merged with `process.env`. 
**Default:** `process.env`.", "name": "env", "type": "Object", "default": "`process.env`", "desc": "Specify environment variables to be passed along to the test process. This option is not compatible with `isolation='none'`. These variables will override those from the main process, and are not merged with `process.env`." } ], "optional": true } ], "return": { "textRaw": "Returns: {TestsStream}", "name": "return", "type": "TestsStream" } } ], "desc": "

Note: shard is used to horizontally parallelize test running across\nmachines or processes, which is ideal for large-scale executions across varied\nenvironments. It is incompatible with watch mode, which is tailored for rapid\ncode iteration by automatically rerunning tests on file changes.

\n
import { tap } from 'node:test/reporters';\nimport { run } from 'node:test';\nimport process from 'node:process';\nimport path from 'node:path';\n\nrun({ files: [path.resolve('./tests/test.js')] })\n .on('test:fail', () => {\n   process.exitCode = 1;\n })\n .compose(tap)\n .pipe(process.stdout);\n
\n
const { tap } = require('node:test/reporters');\nconst { run } = require('node:test');\nconst path = require('node:path');\n\nrun({ files: [path.resolve('./tests/test.js')] })\n .on('test:fail', () => {\n   process.exitCode = 1;\n })\n .compose(tap)\n .pipe(process.stdout);\n
" }, { "textRaw": "`suite([name][, options][, fn])`", "name": "suite", "type": "method", "meta": { "added": [ "v22.0.0", "v20.13.0" ], "changes": [] }, "signatures": [ { "params": [ { "textRaw": "`name` {string} The name of the suite, which is displayed when reporting test results. **Default:** The `name` property of `fn`, or `''` if `fn` does not have a name.", "name": "name", "type": "string", "default": "The `name` property of `fn`, or `''` if `fn` does not have a name", "desc": "The name of the suite, which is displayed when reporting test results.", "optional": true }, { "textRaw": "`options` {Object} Optional configuration options for the suite. This supports the same options as `test([name][, options][, fn])`.", "name": "options", "type": "Object", "desc": "Optional configuration options for the suite. This supports the same options as `test([name][, options][, fn])`.", "optional": true }, { "textRaw": "`fn` {Function|AsyncFunction} The suite function declaring nested tests and suites. The first argument to this function is a `SuiteContext` object. **Default:** A no-op function.", "name": "fn", "type": "Function|AsyncFunction", "default": "A no-op function", "desc": "The suite function declaring nested tests and suites. The first argument to this function is a `SuiteContext` object.", "optional": true } ], "return": { "textRaw": "Returns: {Promise} Immediately fulfilled with `undefined`.", "name": "return", "type": "Promise", "desc": "Immediately fulfilled with `undefined`." } } ], "desc": "

The suite() function is imported from the node:test module.

" }, { "textRaw": "`suite.skip([name][, options][, fn])`", "name": "skip", "type": "method", "meta": { "added": [ "v22.0.0", "v20.13.0" ], "changes": [] }, "signatures": [ { "params": [ { "name": "name", "optional": true }, { "name": "options", "optional": true }, { "name": "fn", "optional": true } ] } ], "desc": "

Shorthand for skipping a suite. This is the same as\nsuite([name], { skip: true }[, fn]).

" }, { "textRaw": "`suite.todo([name][, options][, fn])`", "name": "todo", "type": "method", "meta": { "added": [ "v22.0.0", "v20.13.0" ], "changes": [] }, "signatures": [ { "params": [ { "name": "name", "optional": true }, { "name": "options", "optional": true }, { "name": "fn", "optional": true } ] } ], "desc": "

Shorthand for marking a suite as TODO. This is the same as\nsuite([name], { todo: true }[, fn]).

" }, { "textRaw": "`suite.only([name][, options][, fn])`", "name": "only", "type": "method", "meta": { "added": [ "v22.0.0", "v20.13.0" ], "changes": [] }, "signatures": [ { "params": [ { "name": "name", "optional": true }, { "name": "options", "optional": true }, { "name": "fn", "optional": true } ] } ], "desc": "

Shorthand for marking a suite as only. This is the same as\nsuite([name], { only: true }[, fn]).

" }, { "textRaw": "`test([name][, options][, fn])`", "name": "test", "type": "method", "meta": { "added": [ "v18.0.0", "v16.17.0" ], "changes": [ { "version": [ "v20.2.0", "v18.17.0" ], "pr-url": "https://github.com/nodejs/node/pull/47909", "description": "Added the `skip`, `todo`, and `only` shorthands." }, { "version": [ "v18.8.0", "v16.18.0" ], "pr-url": "https://github.com/nodejs/node/pull/43554", "description": "Add a `signal` option." }, { "version": [ "v18.7.0", "v16.17.0" ], "pr-url": "https://github.com/nodejs/node/pull/43505", "description": "Add a `timeout` option." } ] }, "signatures": [ { "params": [ { "textRaw": "`name` {string} The name of the test, which is displayed when reporting test results. **Default:** The `name` property of `fn`, or `''` if `fn` does not have a name.", "name": "name", "type": "string", "default": "The `name` property of `fn`, or `''` if `fn` does not have a name", "desc": "The name of the test, which is displayed when reporting test results.", "optional": true }, { "textRaw": "`options` {Object} Configuration options for the test. The following properties are supported:", "name": "options", "type": "Object", "desc": "Configuration options for the test. The following properties are supported:", "options": [ { "textRaw": "`concurrency` {number|boolean} If a number is provided, then that many tests would run asynchronously (they are still managed by the single-threaded event loop). If `true`, all scheduled asynchronous tests run concurrently within the thread. If `false`, only one test runs at a time. If unspecified, subtests inherit this value from their parent. **Default:** `false`.", "name": "concurrency", "type": "number|boolean", "default": "`false`", "desc": "If a number is provided, then that many tests would run asynchronously (they are still managed by the single-threaded event loop). If `true`, all scheduled asynchronous tests run concurrently within the thread. If `false`, only one test runs at a time. 
If unspecified, subtests inherit this value from their parent." }, { "textRaw": "`expectFailure` {boolean|string|RegExp|Function|Object|Error} If truthy, the test is expected to fail. If a non-empty string is provided, that string is displayed in the test results as the reason why the test is expected to fail. If a RegExp, Function, Object, or Error is provided directly (without wrapping in `{ match: … }`), the test passes only if the thrown error matches, following the behavior of `assert.throws`. To provide both a reason and validation, pass an object with `label` (string) and `match` (RegExp, Function, Object, or Error). **Default:** `false`.", "name": "expectFailure", "type": "boolean|string|RegExp|Function|Object|Error", "default": "`false`", "desc": "If truthy, the test is expected to fail. If a non-empty string is provided, that string is displayed in the test results as the reason why the test is expected to fail. If a RegExp, Function, Object, or Error is provided directly (without wrapping in `{ match: … }`), the test passes only if the thrown error matches, following the behavior of `assert.throws`. To provide both a reason and validation, pass an object with `label` (string) and `match` (RegExp, Function, Object, or Error)." }, { "textRaw": "`only` {boolean} If truthy, and the test context is configured to run `only` tests, then this test will be run. Otherwise, the test is skipped. **Default:** `false`.", "name": "only", "type": "boolean", "default": "`false`", "desc": "If truthy, and the test context is configured to run `only` tests, then this test will be run. Otherwise, the test is skipped." }, { "textRaw": "`signal` {AbortSignal} Allows aborting an in-progress test.", "name": "signal", "type": "AbortSignal", "desc": "Allows aborting an in-progress test." }, { "textRaw": "`skip` {boolean|string} If truthy, the test is skipped. If a string is provided, that string is displayed in the test results as the reason for skipping the test. 
**Default:** `false`.", "name": "skip", "type": "boolean|string", "default": "`false`", "desc": "If truthy, the test is skipped. If a string is provided, that string is displayed in the test results as the reason for skipping the test." }, { "textRaw": "`todo` {boolean|string} If truthy, the test is marked as `TODO`. If a string is provided, that string is displayed in the test results as the reason why the test is `TODO`. **Default:** `false`.", "name": "todo", "type": "boolean|string", "default": "`false`", "desc": "If truthy, the test is marked as `TODO`. If a string is provided, that string is displayed in the test results as the reason why the test is `TODO`." }, { "textRaw": "`timeout` {number} A number of milliseconds the test will fail after. If unspecified, subtests inherit this value from their parent. **Default:** `Infinity`.", "name": "timeout", "type": "number", "default": "`Infinity`", "desc": "A number of milliseconds the test will fail after. If unspecified, subtests inherit this value from their parent." }, { "textRaw": "`plan` {number} The number of assertions and subtests expected to be run in the test. If the number of assertions run in the test does not match the number specified in the plan, the test will fail. **Default:** `undefined`.", "name": "plan", "type": "number", "default": "`undefined`", "desc": "The number of assertions and subtests expected to be run in the test. If the number of assertions run in the test does not match the number specified in the plan, the test will fail." } ], "optional": true }, { "textRaw": "`fn` {Function|AsyncFunction} The function under test. The first argument to this function is a `TestContext` object. If the test uses callbacks, the callback function is passed as the second argument. **Default:** A no-op function.", "name": "fn", "type": "Function|AsyncFunction", "default": "A no-op function", "desc": "The function under test. The first argument to this function is a `TestContext` object. 
If the test uses callbacks, the callback function is passed as the second argument.", "optional": true } ], "return": { "textRaw": "Returns: {Promise} Fulfilled with `undefined` once the test completes, or immediately if the test runs within a suite.", "name": "return", "type": "Promise", "desc": "Fulfilled with `undefined` once the test completes, or immediately if the test runs within a suite." } } ], "desc": "

The test() function is the value imported from the test module. Each\ninvocation of this function results in reporting the test to the <TestsStream>.

\n

The TestContext object passed to the fn argument can be used to perform\nactions related to the current test. Examples include skipping the test, adding\nadditional diagnostic information, or creating subtests.

\n

test() returns a Promise that fulfills once the test completes.\nIf test() is called within a suite, it fulfills immediately.\nThe return value can usually be discarded for top-level tests.\nHowever, the return value from subtests should be used to prevent the parent\ntest from finishing first and cancelling the subtest,\nas shown in the following example.

\n
test('top level test', async (t) => {\n  // The setTimeout() in the following subtest would cause it to outlive its\n  // parent test if 'await' is removed on the next line. Once the parent test\n  // completes, it will cancel any outstanding subtests.\n  await t.test('longer running subtest', async (t) => {\n    return new Promise((resolve, reject) => {\n      setTimeout(resolve, 1000);\n    });\n  });\n});\n
\n

The timeout option can be used to fail the test if it takes longer than\ntimeout milliseconds to complete. However, it is not a reliable mechanism for\ncanceling tests because a running test might block the application thread and\nthus prevent the scheduled cancellation.

" }, { "textRaw": "`test.skip([name][, options][, fn])`", "name": "skip", "type": "method", "signatures": [ { "params": [ { "name": "name", "optional": true }, { "name": "options", "optional": true }, { "name": "fn", "optional": true } ] } ], "desc": "

Shorthand for skipping a test,\nsame as test([name], { skip: true }[, fn]).

" }, { "textRaw": "`test.todo([name][, options][, fn])`", "name": "todo", "type": "method", "signatures": [ { "params": [ { "name": "name", "optional": true }, { "name": "options", "optional": true }, { "name": "fn", "optional": true } ] } ], "desc": "

Shorthand for marking a test as TODO,\nsame as test([name], { todo: true }[, fn]).

" }, { "textRaw": "`test.only([name][, options][, fn])`", "name": "only", "type": "method", "signatures": [ { "params": [ { "name": "name", "optional": true }, { "name": "options", "optional": true }, { "name": "fn", "optional": true } ] } ], "desc": "

Shorthand for marking a test as only,\nsame as test([name], { only: true }[, fn]).

" }, { "textRaw": "`describe([name][, options][, fn])`", "name": "describe", "type": "method", "signatures": [ { "params": [ { "name": "name", "optional": true }, { "name": "options", "optional": true }, { "name": "fn", "optional": true } ] } ], "desc": "

Alias for suite().

\n

The describe() function is imported from the node:test module.

" }, { "textRaw": "`describe.skip([name][, options][, fn])`", "name": "skip", "type": "method", "signatures": [ { "params": [ { "name": "name", "optional": true }, { "name": "options", "optional": true }, { "name": "fn", "optional": true } ] } ], "desc": "

Shorthand for skipping a suite. This is the same as\ndescribe([name], { skip: true }[, fn]).

" }, { "textRaw": "`describe.todo([name][, options][, fn])`", "name": "todo", "type": "method", "signatures": [ { "params": [ { "name": "name", "optional": true }, { "name": "options", "optional": true }, { "name": "fn", "optional": true } ] } ], "desc": "

Shorthand for marking a suite as TODO. This is the same as\ndescribe([name], { todo: true }[, fn]).

" }, { "textRaw": "`describe.only([name][, options][, fn])`", "name": "only", "type": "method", "meta": { "added": [ "v19.8.0", "v18.15.0" ], "changes": [] }, "signatures": [ { "params": [ { "name": "name", "optional": true }, { "name": "options", "optional": true }, { "name": "fn", "optional": true } ] } ], "desc": "

Shorthand for marking a suite as only. This is the same as\ndescribe([name], { only: true }[, fn]).

" }, { "textRaw": "`it([name][, options][, fn])`", "name": "it", "type": "method", "meta": { "added": [ "v18.6.0", "v16.17.0" ], "changes": [ { "version": [ "v19.8.0", "v18.16.0" ], "pr-url": "https://github.com/nodejs/node/pull/46889", "description": "Calling `it()` is now equivalent to calling `test()`." } ] }, "signatures": [ { "params": [ { "name": "name", "optional": true }, { "name": "options", "optional": true }, { "name": "fn", "optional": true } ] } ], "desc": "

Alias for test().

\n

The it() function is imported from the node:test module.

" }, { "textRaw": "`it.skip([name][, options][, fn])`", "name": "skip", "type": "method", "signatures": [ { "params": [ { "name": "name", "optional": true }, { "name": "options", "optional": true }, { "name": "fn", "optional": true } ] } ], "desc": "

Shorthand for skipping a test,\nsame as it([name], { skip: true }[, fn]).

" }, { "textRaw": "`it.todo([name][, options][, fn])`", "name": "todo", "type": "method", "signatures": [ { "params": [ { "name": "name", "optional": true }, { "name": "options", "optional": true }, { "name": "fn", "optional": true } ] } ], "desc": "

Shorthand for marking a test as TODO,\nsame as it([name], { todo: true }[, fn]).

" }, { "textRaw": "`it.only([name][, options][, fn])`", "name": "only", "type": "method", "meta": { "added": [ "v19.8.0", "v18.15.0" ], "changes": [] }, "signatures": [ { "params": [ { "name": "name", "optional": true }, { "name": "options", "optional": true }, { "name": "fn", "optional": true } ] } ], "desc": "

Shorthand for marking a test as only,\nsame as it([name], { only: true }[, fn]).

" }, { "textRaw": "`before([fn][, options])`", "name": "before", "type": "method", "meta": { "added": [ "v18.8.0", "v16.18.0" ], "changes": [] }, "signatures": [ { "params": [ { "textRaw": "`fn` {Function|AsyncFunction} The hook function. If the hook uses callbacks, the callback function is passed as the second argument. **Default:** A no-op function.", "name": "fn", "type": "Function|AsyncFunction", "default": "A no-op function", "desc": "The hook function. If the hook uses callbacks, the callback function is passed as the second argument.", "optional": true }, { "textRaw": "`options` {Object} Configuration options for the hook. The following properties are supported:", "name": "options", "type": "Object", "desc": "Configuration options for the hook. The following properties are supported:", "options": [ { "textRaw": "`signal` {AbortSignal} Allows aborting an in-progress hook.", "name": "signal", "type": "AbortSignal", "desc": "Allows aborting an in-progress hook." }, { "textRaw": "`timeout` {number} A number of milliseconds the hook will fail after. If unspecified, subtests inherit this value from their parent. **Default:** `Infinity`.", "name": "timeout", "type": "number", "default": "`Infinity`", "desc": "A number of milliseconds the hook will fail after. If unspecified, subtests inherit this value from their parent." } ], "optional": true } ] } ], "desc": "

This function creates a hook that runs before executing a suite.

\n
describe('tests', async () => {\n  before(() => console.log('about to run some test'));\n  it('is a subtest', () => {\n    // Some relevant assertions here\n  });\n});\n
" }, { "textRaw": "`after([fn][, options])`", "name": "after", "type": "method", "meta": { "added": [ "v18.8.0", "v16.18.0" ], "changes": [] }, "signatures": [ { "params": [ { "textRaw": "`fn` {Function|AsyncFunction} The hook function. If the hook uses callbacks, the callback function is passed as the second argument. **Default:** A no-op function.", "name": "fn", "type": "Function|AsyncFunction", "default": "A no-op function", "desc": "The hook function. If the hook uses callbacks, the callback function is passed as the second argument.", "optional": true }, { "textRaw": "`options` {Object} Configuration options for the hook. The following properties are supported:", "name": "options", "type": "Object", "desc": "Configuration options for the hook. The following properties are supported:", "options": [ { "textRaw": "`signal` {AbortSignal} Allows aborting an in-progress hook.", "name": "signal", "type": "AbortSignal", "desc": "Allows aborting an in-progress hook." }, { "textRaw": "`timeout` {number} A number of milliseconds the hook will fail after. If unspecified, subtests inherit this value from their parent. **Default:** `Infinity`.", "name": "timeout", "type": "number", "default": "`Infinity`", "desc": "A number of milliseconds the hook will fail after. If unspecified, subtests inherit this value from their parent." } ], "optional": true } ] } ], "desc": "

This function creates a hook that runs after executing a suite.

\n
describe('tests', async () => {\n  after(() => console.log('finished running tests'));\n  it('is a subtest', () => {\n    // Some relevant assertion here\n  });\n});\n
\n

Note: The after hook is guaranteed to run,\neven if tests within the suite fail.

" }, { "textRaw": "`beforeEach([fn][, options])`", "name": "beforeEach", "type": "method", "meta": { "added": [ "v18.8.0", "v16.18.0" ], "changes": [] }, "signatures": [ { "params": [ { "textRaw": "`fn` {Function|AsyncFunction} The hook function. If the hook uses callbacks, the callback function is passed as the second argument. **Default:** A no-op function.", "name": "fn", "type": "Function|AsyncFunction", "default": "A no-op function", "desc": "The hook function. If the hook uses callbacks, the callback function is passed as the second argument.", "optional": true }, { "textRaw": "`options` {Object} Configuration options for the hook. The following properties are supported:", "name": "options", "type": "Object", "desc": "Configuration options for the hook. The following properties are supported:", "options": [ { "textRaw": "`signal` {AbortSignal} Allows aborting an in-progress hook.", "name": "signal", "type": "AbortSignal", "desc": "Allows aborting an in-progress hook." }, { "textRaw": "`timeout` {number} A number of milliseconds the hook will fail after. If unspecified, subtests inherit this value from their parent. **Default:** `Infinity`.", "name": "timeout", "type": "number", "default": "`Infinity`", "desc": "A number of milliseconds the hook will fail after. If unspecified, subtests inherit this value from their parent." } ], "optional": true } ] } ], "desc": "

This function creates a hook that runs before each test in the current suite.

\n
describe('tests', async () => {\n  beforeEach(() => console.log('about to run a test'));\n  it('is a subtest', () => {\n    // Some relevant assertion here\n  });\n});\n
" }, { "textRaw": "`afterEach([fn][, options])`", "name": "afterEach", "type": "method", "meta": { "added": [ "v18.8.0", "v16.18.0" ], "changes": [] }, "signatures": [ { "params": [ { "textRaw": "`fn` {Function|AsyncFunction} The hook function. If the hook uses callbacks, the callback function is passed as the second argument. **Default:** A no-op function.", "name": "fn", "type": "Function|AsyncFunction", "default": "A no-op function", "desc": "The hook function. If the hook uses callbacks, the callback function is passed as the second argument.", "optional": true }, { "textRaw": "`options` {Object} Configuration options for the hook. The following properties are supported:", "name": "options", "type": "Object", "desc": "Configuration options for the hook. The following properties are supported:", "options": [ { "textRaw": "`signal` {AbortSignal} Allows aborting an in-progress hook.", "name": "signal", "type": "AbortSignal", "desc": "Allows aborting an in-progress hook." }, { "textRaw": "`timeout` {number} A number of milliseconds the hook will fail after. If unspecified, subtests inherit this value from their parent. **Default:** `Infinity`.", "name": "timeout", "type": "number", "default": "`Infinity`", "desc": "A number of milliseconds the hook will fail after. If unspecified, subtests inherit this value from their parent." } ], "optional": true } ] } ], "desc": "

This function creates a hook that runs after each test in the current suite.\nThe afterEach() hook is run even if the test fails.

\n
describe('tests', async () => {\n  afterEach(() => console.log('finished running a test'));\n  it('is a subtest', () => {\n    // Some relevant assertion here\n  });\n});\n
" } ], "classes": [ { "textRaw": "Class: `MockFunctionContext`", "name": "MockFunctionContext", "type": "class", "meta": { "added": [ "v19.1.0", "v18.13.0" ], "changes": [] }, "desc": "

The MockFunctionContext class is used to inspect or manipulate the behavior of\nmocks created via the MockTracker APIs.

", "properties": [ { "textRaw": "Type: {Array}", "name": "calls", "type": "Array", "meta": { "added": [ "v19.1.0", "v18.13.0" ], "changes": [] }, "desc": "

A getter that returns a copy of the internal array used to track calls to the\nmock. Each entry in the array is an object with the following properties.

\n
    \n
  • arguments <Array> An array of the arguments passed to the mock function.
  • \n
  • error <any> If the mocked function threw, then this property contains the\nthrown value. Default: undefined.
  • \n
  • result <any> The value returned by the mocked function.
  • \n
  • stack <Error> An Error object whose stack can be used to determine the\ncallsite of the mocked function invocation.
  • \n
  • target <Function> | <undefined> If the mocked function is a constructor, this\nfield contains the class being constructed. Otherwise this will be undefined.
  • \n
  • this <any> The mocked function's this value.
  • \n
" } ], "methods": [ { "textRaw": "`ctx.callCount()`", "name": "callCount", "type": "method", "meta": { "added": [ "v19.1.0", "v18.13.0" ], "changes": [] }, "signatures": [ { "params": [], "return": { "textRaw": "Returns: {integer} The number of times that this mock has been invoked.", "name": "return", "type": "integer", "desc": "The number of times that this mock has been invoked." } } ], "desc": "

This function returns the number of times that this mock has been invoked. This\nfunction is more efficient than checking ctx.calls.length because ctx.calls\nis a getter that creates a copy of the internal call tracking array.

" }, { "textRaw": "`ctx.mockImplementation(implementation)`", "name": "mockImplementation", "type": "method", "meta": { "added": [ "v19.1.0", "v18.13.0" ], "changes": [] }, "signatures": [ { "params": [ { "textRaw": "`implementation` {Function|AsyncFunction} The function to be used as the mock's new implementation.", "name": "implementation", "type": "Function|AsyncFunction", "desc": "The function to be used as the mock's new implementation." } ] } ], "desc": "

This function is used to change the behavior of an existing mock.

\n

The following example creates a mock function using t.mock.fn(), calls the\nmock function, and then changes the mock implementation to a different function.

\n
test('changes a mock behavior', (t) => {\n  let cnt = 0;\n\n  function addOne() {\n    cnt++;\n    return cnt;\n  }\n\n  function addTwo() {\n    cnt += 2;\n    return cnt;\n  }\n\n  const fn = t.mock.fn(addOne);\n\n  assert.strictEqual(fn(), 1);\n  fn.mock.mockImplementation(addTwo);\n  assert.strictEqual(fn(), 3);\n  assert.strictEqual(fn(), 5);\n});\n
" }, { "textRaw": "`ctx.mockImplementationOnce(implementation[, onCall])`", "name": "mockImplementationOnce", "type": "method", "meta": { "added": [ "v19.1.0", "v18.13.0" ], "changes": [] }, "signatures": [ { "params": [ { "textRaw": "`implementation` {Function|AsyncFunction} The function to be used as the mock's implementation for the invocation number specified by `onCall`.", "name": "implementation", "type": "Function|AsyncFunction", "desc": "The function to be used as the mock's implementation for the invocation number specified by `onCall`." }, { "textRaw": "`onCall` {integer} The invocation number that will use `implementation`. If the specified invocation has already occurred then an exception is thrown. **Default:** The number of the next invocation.", "name": "onCall", "type": "integer", "default": "The number of the next invocation", "desc": "The invocation number that will use `implementation`. If the specified invocation has already occurred then an exception is thrown.", "optional": true } ] } ], "desc": "

This function is used to change the behavior of an existing mock for a single\ninvocation. Once invocation onCall has occurred, the mock will revert to\nwhatever behavior it would have used had mockImplementationOnce() not been\ncalled.

\n

The following example creates a mock function using t.mock.fn(), calls the\nmock function, changes the mock implementation to a different function for the\nnext invocation, and then resumes its previous behavior.

\n
test('changes a mock behavior once', (t) => {\n  let cnt = 0;\n\n  function addOne() {\n    cnt++;\n    return cnt;\n  }\n\n  function addTwo() {\n    cnt += 2;\n    return cnt;\n  }\n\n  const fn = t.mock.fn(addOne);\n\n  assert.strictEqual(fn(), 1);\n  fn.mock.mockImplementationOnce(addTwo);\n  assert.strictEqual(fn(), 3);\n  assert.strictEqual(fn(), 4);\n});\n
" }, { "textRaw": "`ctx.resetCalls()`", "name": "resetCalls", "type": "method", "meta": { "added": [ "v19.3.0", "v18.13.0" ], "changes": [] }, "signatures": [ { "params": [] } ], "desc": "

Resets the call history of the mock function.

" }, { "textRaw": "`ctx.restore()`", "name": "restore", "type": "method", "meta": { "added": [ "v19.1.0", "v18.13.0" ], "changes": [] }, "signatures": [ { "params": [] } ], "desc": "

Resets the implementation of the mock function to its original behavior. The\nmock can still be used after calling this function.

" } ] }, { "textRaw": "Class: `MockModuleContext`", "name": "MockModuleContext", "type": "class", "meta": { "added": [ "v22.3.0", "v20.18.0" ], "changes": [] }, "stability": 1, "stabilityText": "Early development", "desc": "

The MockModuleContext class is used to manipulate the behavior of module mocks\ncreated via the MockTracker APIs.

", "methods": [ { "textRaw": "`ctx.restore()`", "name": "restore", "type": "method", "meta": { "added": [ "v22.3.0", "v20.18.0" ], "changes": [] }, "signatures": [ { "params": [] } ], "desc": "

Resets the implementation of the mock module.

" } ] }, { "textRaw": "Class: `MockPropertyContext`", "name": "MockPropertyContext", "type": "class", "meta": { "added": [ "v24.3.0", "v22.20.0" ], "changes": [] }, "desc": "

The MockPropertyContext class is used to inspect or manipulate the behavior\nof property mocks created via the MockTracker APIs.

", "properties": [ { "textRaw": "Type: {Array}", "name": "accesses", "type": "Array", "desc": "

A getter that returns a copy of the internal array used to track accesses (get/set) to\nthe mocked property. Each entry in the array is an object with the following properties:

\n
    \n
  • type <string> Either 'get' or 'set', indicating the type of access.
  • \n
  • value <any> The value that was read (for 'get') or written (for 'set').
  • \n
  • stack <Error> An Error object whose stack can be used to determine the\ncallsite of the mocked property access.
  • \n
" } ], "methods": [ { "textRaw": "`ctx.accessCount()`", "name": "accessCount", "type": "method", "signatures": [ { "params": [], "return": { "textRaw": "Returns: {integer} The number of times that the property was accessed (read or written).", "name": "return", "type": "integer", "desc": "The number of times that the property was accessed (read or written)." } } ], "desc": "

This function returns the number of times that the property was accessed.\nThis function is more efficient than checking ctx.accesses.length because\nctx.accesses is a getter that creates a copy of the internal access tracking array.

" }, { "textRaw": "`ctx.mockImplementation(value)`", "name": "mockImplementation", "type": "method", "signatures": [ { "params": [ { "textRaw": "`value` {any} The new value to be set as the mocked property value.", "name": "value", "type": "any", "desc": "The new value to be set as the mocked property value." } ] } ], "desc": "

This function is used to change the value returned by the mocked property getter.

" }, { "textRaw": "`ctx.mockImplementationOnce(value[, onAccess])`", "name": "mockImplementationOnce", "type": "method", "signatures": [ { "params": [ { "textRaw": "`value` {any} The value to be used as the mock's implementation for the invocation number specified by `onAccess`.", "name": "value", "type": "any", "desc": "The value to be used as the mock's implementation for the invocation number specified by `onAccess`." }, { "textRaw": "`onAccess` {integer} The invocation number that will use `value`. If the specified invocation has already occurred then an exception is thrown. **Default:** The number of the next invocation.", "name": "onAccess", "type": "integer", "default": "The number of the next invocation", "desc": "The invocation number that will use `value`. If the specified invocation has already occurred then an exception is thrown.", "optional": true } ] } ], "desc": "

This function is used to change the behavior of an existing mock for a single\ninvocation. Once invocation onAccess has occurred, the mock will revert to\nwhatever behavior it would have used had mockImplementationOnce() not been\ncalled.

\n

The following example creates a mocked property using t.mock.property(), reads the\nmocked property, changes the mock implementation to a different value for the\nnext access, and then resumes its previous behavior.

\n
test('changes a mock behavior once', (t) => {\n  const obj = { foo: 1 };\n\n  const prop = t.mock.property(obj, 'foo', 5);\n\n  assert.strictEqual(obj.foo, 5);\n  prop.mock.mockImplementationOnce(25);\n  assert.strictEqual(obj.foo, 25);\n  assert.strictEqual(obj.foo, 5);\n});\n
", "modules": [ { "textRaw": "Caveat", "name": "caveat", "type": "module", "desc": "

For consistency with the rest of the mocking API, this function treats both property gets and sets\nas accesses. If a property set occurs at the same access index, the \"once\" value will be consumed\nby the set operation, and the mocked property value will be changed to the \"once\" value. This may\nlead to unexpected behavior if you intend the \"once\" value to only be used for a get operation.

", "displayName": "Caveat" } ] }, { "textRaw": "`ctx.resetAccesses()`", "name": "resetAccesses", "type": "method", "signatures": [ { "params": [] } ], "desc": "

Resets the access history of the mocked property.

" }, { "textRaw": "`ctx.restore()`", "name": "restore", "type": "method", "signatures": [ { "params": [] } ], "desc": "

Resets the implementation of the mock property to its original behavior. The\nmock can still be used after calling this function.

" } ] }, { "textRaw": "Class: `MockTracker`", "name": "MockTracker", "type": "class", "meta": { "added": [ "v19.1.0", "v18.13.0" ], "changes": [] }, "desc": "

The MockTracker class is used to manage mocking functionality. The test runner\nmodule provides a top level mock export which is a MockTracker instance.\nEach test also provides its own MockTracker instance via the test context's\nmock property.

", "methods": [ { "textRaw": "`mock.fn([original[, implementation]][, options])`", "name": "fn", "type": "method", "meta": { "added": [ "v19.1.0", "v18.13.0" ], "changes": [] }, "signatures": [ { "params": [ { "textRaw": "`original` {Function|AsyncFunction} An optional function to create a mock on. **Default:** A no-op function.", "name": "original", "type": "Function|AsyncFunction", "default": "A no-op function", "desc": "An optional function to create a mock on.", "optional": true }, { "textRaw": "`implementation` {Function|AsyncFunction} An optional function used as the mock implementation for `original`. This is useful for creating mocks that exhibit one behavior for a specified number of calls and then restore the behavior of `original`. **Default:** The function specified by `original`.", "name": "implementation", "type": "Function|AsyncFunction", "default": "The function specified by `original`", "desc": "An optional function used as the mock implementation for `original`. This is useful for creating mocks that exhibit one behavior for a specified number of calls and then restore the behavior of `original`.", "optional": true }, { "textRaw": "`options` {Object} Optional configuration options for the mock function. The following properties are supported:", "name": "options", "type": "Object", "desc": "Optional configuration options for the mock function. The following properties are supported:", "options": [ { "textRaw": "`times` {integer} The number of times that the mock will use the behavior of `implementation`. Once the mock function has been called `times` times, it will automatically restore the behavior of `original`. This value must be an integer greater than zero. **Default:** `Infinity`.", "name": "times", "type": "integer", "default": "`Infinity`", "desc": "The number of times that the mock will use the behavior of `implementation`. Once the mock function has been called `times` times, it will automatically restore the behavior of `original`. 
This value must be an integer greater than zero." } ], "optional": true } ], "return": { "textRaw": "Returns: {Proxy} The mocked function. The mocked function contains a special `mock` property, which is an instance of `MockFunctionContext`, and can be used for inspecting and changing the behavior of the mocked function.", "name": "return", "type": "Proxy", "desc": "The mocked function. The mocked function contains a special `mock` property, which is an instance of `MockFunctionContext`, and can be used for inspecting and changing the behavior of the mocked function." } } ], "desc": "

This function is used to create a mock function.

\n

The following example creates a mock function that increments a counter by one\non each invocation. The times option is used to modify the mock behavior such\nthat the first two invocations add two to the counter instead of one.

\n
test('mocks a counting function', (t) => {\n  let cnt = 0;\n\n  function addOne() {\n    cnt++;\n    return cnt;\n  }\n\n  function addTwo() {\n    cnt += 2;\n    return cnt;\n  }\n\n  const fn = t.mock.fn(addOne, addTwo, { times: 2 });\n\n  assert.strictEqual(fn(), 2);\n  assert.strictEqual(fn(), 4);\n  assert.strictEqual(fn(), 5);\n  assert.strictEqual(fn(), 6);\n});\n
" }, { "textRaw": "`mock.getter(object, methodName[, implementation][, options])`", "name": "getter", "type": "method", "meta": { "added": [ "v19.3.0", "v18.13.0" ], "changes": [] }, "signatures": [ { "params": [ { "name": "object" }, { "name": "methodName" }, { "name": "implementation", "optional": true }, { "name": "options", "optional": true } ] } ], "desc": "

This function is syntax sugar for MockTracker.method with options.getter\nset to true.

" }, { "textRaw": "`mock.method(object, methodName[, implementation][, options])`", "name": "method", "type": "method", "meta": { "added": [ "v19.1.0", "v18.13.0" ], "changes": [] }, "signatures": [ { "params": [ { "textRaw": "`object` {Object} The object whose method is being mocked.", "name": "object", "type": "Object", "desc": "The object whose method is being mocked." }, { "textRaw": "`methodName` {string|symbol} The identifier of the method on `object` to mock. If `object[methodName]` is not a function, an error is thrown.", "name": "methodName", "type": "string|symbol", "desc": "The identifier of the method on `object` to mock. If `object[methodName]` is not a function, an error is thrown." }, { "textRaw": "`implementation` {Function|AsyncFunction} An optional function used as the mock implementation for `object[methodName]`. **Default:** The original method specified by `object[methodName]`.", "name": "implementation", "type": "Function|AsyncFunction", "default": "The original method specified by `object[methodName]`", "desc": "An optional function used as the mock implementation for `object[methodName]`.", "optional": true }, { "textRaw": "`options` {Object} Optional configuration options for the mock method. The following properties are supported:", "name": "options", "type": "Object", "desc": "Optional configuration options for the mock method. The following properties are supported:", "options": [ { "textRaw": "`getter` {boolean} If `true`, `object[methodName]` is treated as a getter. This option cannot be used with the `setter` option. **Default:** false.", "name": "getter", "type": "boolean", "default": "false", "desc": "If `true`, `object[methodName]` is treated as a getter. This option cannot be used with the `setter` option." }, { "textRaw": "`setter` {boolean} If `true`, `object[methodName]` is treated as a setter. This option cannot be used with the `getter` option. 
**Default:** false.", "name": "setter", "type": "boolean", "default": "false", "desc": "If `true`, `object[methodName]` is treated as a setter. This option cannot be used with the `getter` option." }, { "textRaw": "`times` {integer} The number of times that the mock will use the behavior of `implementation`. Once the mocked method has been called `times` times, it will automatically restore the original behavior. This value must be an integer greater than zero. **Default:** `Infinity`.", "name": "times", "type": "integer", "default": "`Infinity`", "desc": "The number of times that the mock will use the behavior of `implementation`. Once the mocked method has been called `times` times, it will automatically restore the original behavior. This value must be an integer greater than zero." } ], "optional": true } ], "return": { "textRaw": "Returns: {Proxy} The mocked method. The mocked method contains a special `mock` property, which is an instance of `MockFunctionContext`, and can be used for inspecting and changing the behavior of the mocked method.", "name": "return", "type": "Proxy", "desc": "The mocked method. The mocked method contains a special `mock` property, which is an instance of `MockFunctionContext`, and can be used for inspecting and changing the behavior of the mocked method." } } ], "desc": "

This function is used to create a mock on an existing object method, as the\nfollowing example demonstrates.

\n
test('spies on an object method', (t) => {\n  const number = {\n    value: 5,\n    subtract(a) {\n      return this.value - a;\n    },\n  };\n\n  t.mock.method(number, 'subtract');\n  assert.strictEqual(number.subtract.mock.callCount(), 0);\n  assert.strictEqual(number.subtract(3), 2);\n  assert.strictEqual(number.subtract.mock.callCount(), 1);\n\n  const call = number.subtract.mock.calls[0];\n\n  assert.deepStrictEqual(call.arguments, [3]);\n  assert.strictEqual(call.result, 2);\n  assert.strictEqual(call.error, undefined);\n  assert.strictEqual(call.target, undefined);\n  assert.strictEqual(call.this, number);\n});\n
" }, { "textRaw": "`mock.module(specifier[, options])`", "name": "module", "type": "method", "meta": { "added": [ "v22.3.0", "v20.18.0" ], "changes": [ { "version": [ "v24.0.0", "v22.17.0" ], "pr-url": "https://github.com/nodejs/node/pull/58007", "description": "Support JSON modules." } ] }, "stability": 1, "stabilityText": "Early development", "signatures": [ { "params": [ { "textRaw": "`specifier` {string|URL} A string identifying the module to mock.", "name": "specifier", "type": "string|URL", "desc": "A string identifying the module to mock." }, { "textRaw": "`options` {Object} Optional configuration options for the mock module. The following properties are supported:", "name": "options", "type": "Object", "desc": "Optional configuration options for the mock module. The following properties are supported:", "options": [ { "textRaw": "`cache` {boolean} If `false`, each call to `require()` or `import()` generates a new mock module. If `true`, subsequent calls will return the same module mock, and the mock module is inserted into the CommonJS cache. **Default:** false.", "name": "cache", "type": "boolean", "default": "false", "desc": "If `false`, each call to `require()` or `import()` generates a new mock module. If `true`, subsequent calls will return the same module mock, and the mock module is inserted into the CommonJS cache." }, { "textRaw": "`defaultExport` {any} An optional value used as the mocked module's default export. If this value is not provided, ESM mocks do not include a default export. If the mock is a CommonJS or builtin module, this setting is used as the value of `module.exports`. If this value is not provided, CJS and builtin mocks use an empty object as the value of `module.exports`.", "name": "defaultExport", "type": "any", "desc": "An optional value used as the mocked module's default export. If this value is not provided, ESM mocks do not include a default export. 
If the mock is a CommonJS or builtin module, this setting is used as the value of `module.exports`. If this value is not provided, CJS and builtin mocks use an empty object as the value of `module.exports`." }, { "textRaw": "`namedExports` {Object} An optional object whose keys and values are used to create the named exports of the mock module. If the mock is a CommonJS or builtin module, these values are copied onto `module.exports`. Therefore, if a mock is created with both named exports and a non-object default export, the mock will throw an exception when used as a CJS or builtin module.", "name": "namedExports", "type": "Object", "desc": "An optional object whose keys and values are used to create the named exports of the mock module. If the mock is a CommonJS or builtin module, these values are copied onto `module.exports`. Therefore, if a mock is created with both named exports and a non-object default export, the mock will throw an exception when used as a CJS or builtin module." } ], "optional": true } ], "return": { "textRaw": "Returns: {MockModuleContext} An object that can be used to manipulate the mock.", "name": "return", "type": "MockModuleContext", "desc": "An object that can be used to manipulate the mock." } } ], "desc": "

This function is used to mock the exports of ECMAScript modules, CommonJS modules, JSON modules, and\nNode.js builtin modules. Any references to the original module prior to mocking are not impacted. In\norder to enable module mocking, Node.js must be started with the\n--experimental-test-module-mocks command-line flag.

\n

The following example demonstrates how a mock is created for a module.

\n
test('mocks a builtin module in both module systems', async (t) => {\n  // Create a mock of 'node:readline' with a named export named 'fn', which\n  // does not exist in the original 'node:readline' module.\n  const mock = t.mock.module('node:readline', {\n    namedExports: { fn() { return 42; } },\n  });\n\n  let esmImpl = await import('node:readline');\n  let cjsImpl = require('node:readline');\n\n  // cursorTo() is an export of the original 'node:readline' module.\n  assert.strictEqual(esmImpl.cursorTo, undefined);\n  assert.strictEqual(cjsImpl.cursorTo, undefined);\n  assert.strictEqual(esmImpl.fn(), 42);\n  assert.strictEqual(cjsImpl.fn(), 42);\n\n  mock.restore();\n\n  // The mock is restored, so the original builtin module is returned.\n  esmImpl = await import('node:readline');\n  cjsImpl = require('node:readline');\n\n  assert.strictEqual(typeof esmImpl.cursorTo, 'function');\n  assert.strictEqual(typeof cjsImpl.cursorTo, 'function');\n  assert.strictEqual(esmImpl.fn, undefined);\n  assert.strictEqual(cjsImpl.fn, undefined);\n});\n
" }, { "textRaw": "`mock.property(object, propertyName[, value])`", "name": "property", "type": "method", "meta": { "added": [ "v24.3.0", "v22.20.0" ], "changes": [] }, "signatures": [ { "params": [ { "textRaw": "`object` {Object} The object whose value is being mocked.", "name": "object", "type": "Object", "desc": "The object whose value is being mocked." }, { "textRaw": "`propertyName` {string|symbol} The identifier of the property on `object` to mock.", "name": "propertyName", "type": "string|symbol", "desc": "The identifier of the property on `object` to mock." }, { "textRaw": "`value` {any} An optional value used as the mock value for `object[propertyName]`. **Default:** The original property value.", "name": "value", "type": "any", "default": "The original property value", "desc": "An optional value used as the mock value for `object[propertyName]`.", "optional": true } ], "return": { "textRaw": "Returns: {Proxy} A proxy to the mocked object. The mocked object contains a special `mock` property, which is an instance of `MockPropertyContext`, and can be used for inspecting and changing the behavior of the mocked property.", "name": "return", "type": "Proxy", "desc": "A proxy to the mocked object. The mocked object contains a special `mock` property, which is an instance of `MockPropertyContext`, and can be used for inspecting and changing the behavior of the mocked property." } } ], "desc": "

Creates a mock for a property value on an object. This allows you to track and control access to a specific property,\nincluding how many times it is read (getter) or written (setter), and to restore the original value after mocking.

\n
test('mocks a property value', (t) => {\n  const obj = { foo: 42 };\n  const prop = t.mock.property(obj, 'foo', 100);\n\n  assert.strictEqual(obj.foo, 100);\n  assert.strictEqual(prop.mock.accessCount(), 1);\n  assert.strictEqual(prop.mock.accesses[0].type, 'get');\n  assert.strictEqual(prop.mock.accesses[0].value, 100);\n\n  obj.foo = 200;\n  assert.strictEqual(prop.mock.accessCount(), 2);\n  assert.strictEqual(prop.mock.accesses[1].type, 'set');\n  assert.strictEqual(prop.mock.accesses[1].value, 200);\n\n  prop.mock.restore();\n  assert.strictEqual(obj.foo, 42);\n});\n
" }, { "textRaw": "`mock.reset()`", "name": "reset", "type": "method", "meta": { "added": [ "v19.1.0", "v18.13.0" ], "changes": [] }, "signatures": [ { "params": [] } ], "desc": "

This function restores the default behavior of all mocks that were previously\ncreated by this MockTracker and disassociates the mocks from the\nMockTracker instance. Once disassociated, the mocks can still be used, but the\nMockTracker instance can no longer be used to reset their behavior or\notherwise interact with them.

\n

After each test completes, this function is called on the test context's\nMockTracker. If the global MockTracker is used extensively, calling this\nfunction manually is recommended.

" }, { "textRaw": "`mock.restoreAll()`", "name": "restoreAll", "type": "method", "meta": { "added": [ "v19.1.0", "v18.13.0" ], "changes": [] }, "signatures": [ { "params": [] } ], "desc": "

This function restores the default behavior of all mocks that were previously\ncreated by this MockTracker. Unlike mock.reset(), mock.restoreAll() does\nnot disassociate the mocks from the MockTracker instance.

" }, { "textRaw": "`mock.setter(object, methodName[, implementation][, options])`", "name": "setter", "type": "method", "meta": { "added": [ "v19.3.0", "v18.13.0" ], "changes": [] }, "signatures": [ { "params": [ { "name": "object" }, { "name": "methodName" }, { "name": "implementation", "optional": true }, { "name": "options", "optional": true } ] } ], "desc": "

This function is syntax sugar for MockTracker.method with options.setter\nset to true.

" } ] }, { "textRaw": "Class: `MockTimers`", "name": "MockTimers", "type": "class", "meta": { "added": [ "v20.4.0", "v18.19.0" ], "changes": [ { "version": "v23.1.0", "pr-url": "https://github.com/nodejs/node/pull/55398", "description": "The Mock Timers is now stable." } ] }, "desc": "

Mocking timers is a technique commonly used in software testing to simulate and\ncontrol the behavior of timers, such as setInterval and setTimeout,\nwithout actually waiting for the specified time intervals.

\n

MockTimers is also able to mock the Date object.

\n

The MockTracker provides a top-level timers export\nwhich is a MockTimers instance.

", "methods": [ { "textRaw": "`timers.enable([enableOptions])`", "name": "enable", "type": "method", "meta": { "added": [ "v20.4.0", "v18.19.0" ], "changes": [ { "version": [ "v21.2.0", "v20.11.0" ], "pr-url": "https://github.com/nodejs/node/pull/48638", "description": "Updated parameters to be an option object with available APIs and the default initial epoch." } ] }, "signatures": [ { "params": [ { "textRaw": "`enableOptions` {Object} Optional configuration options for enabling timer mocking. The following properties are supported:", "name": "enableOptions", "type": "Object", "desc": "Optional configuration options for enabling timer mocking. The following properties are supported:", "options": [ { "textRaw": "`apis` {Array} An optional array containing the timers to mock. The currently supported timer values are `'setInterval'`, `'setTimeout'`, `'setImmediate'`, and `'Date'`. **Default:** `['setInterval', 'setTimeout', 'setImmediate', 'Date']`. If no array is provided, all time related APIs (`'setInterval'`, `'clearInterval'`, `'setTimeout'`, `'clearTimeout'`, `'setImmediate'`, `'clearImmediate'`, and `'Date'`) will be mocked by default.", "name": "apis", "type": "Array", "default": "`['setInterval', 'setTimeout', 'setImmediate', 'Date']`. If no array is provided, all time related APIs (`'setInterval'`, `'clearInterval'`, `'setTimeout'`, `'clearTimeout'`, `'setImmediate'`, `'clearImmediate'`, and `'Date'`) will be mocked by default", "desc": "An optional array containing the timers to mock. The currently supported timer values are `'setInterval'`, `'setTimeout'`, `'setImmediate'`, and `'Date'`." }, { "textRaw": "`now` {number|Date} An optional number or Date object representing the initial time (in milliseconds) to use as the value for `Date.now()`. **Default:** `0`.", "name": "now", "type": "number|Date", "default": "`0`", "desc": "An optional number or Date object representing the initial time (in milliseconds) to use as the value for `Date.now()`." 
} ], "optional": true } ] } ], "desc": "

Enables timer mocking for the specified timers.

\n

Note: When you enable mocking for a specific timer, its associated\nclear function will also be implicitly mocked.

\n

Note: Mocking Date will affect the behavior of the mocked timers\nas they use the same internal clock.

\n

Example usage without setting initial time:

\n
import { mock } from 'node:test';\nmock.timers.enable({ apis: ['setInterval'] });\n
\n
const { mock } = require('node:test');\nmock.timers.enable({ apis: ['setInterval'] });\n
\n

The above example enables mocking for the setInterval timer and\nimplicitly mocks the clearInterval function. Only the setInterval\nand clearInterval functions from node:timers,\nnode:timers/promises, and\nglobalThis will be mocked.

\n

Example usage with initial time set:

\n
import { mock } from 'node:test';\nmock.timers.enable({ apis: ['Date'], now: 1000 });\n
\n
const { mock } = require('node:test');\nmock.timers.enable({ apis: ['Date'], now: 1000 });\n
\n

Example usage with an initial Date object as the time:

\n
import { mock } from 'node:test';\nmock.timers.enable({ apis: ['Date'], now: new Date() });\n
\n
const { mock } = require('node:test');\nmock.timers.enable({ apis: ['Date'], now: new Date() });\n
\n

Alternatively, if you call mock.timers.enable() without any parameters:

\n

All timers ('setInterval', 'clearInterval', 'setTimeout', 'clearTimeout',\n'setImmediate', and 'clearImmediate') will be mocked. The setInterval,\nclearInterval, setTimeout, clearTimeout, setImmediate, and\nclearImmediate functions from node:timers, node:timers/promises, and\nglobalThis will be mocked, as well as the global Date object.

" }, { "textRaw": "`timers.reset()`", "name": "reset", "type": "method", "meta": { "added": [ "v20.4.0", "v18.19.0" ], "changes": [] }, "signatures": [ { "params": [] } ], "desc": "

This function restores the default behavior of all mocks that were previously\ncreated by this MockTimers instance and disassociates the mocks\nfrom the MockTracker instance.

\n

Note: After each test completes, this function is called on\nthe test context's MockTracker.

\n
import { mock } from 'node:test';\nmock.timers.reset();\n
\n
const { mock } = require('node:test');\nmock.timers.reset();\n
" }, { "textRaw": "`timers[Symbol.dispose]()`", "name": "[Symbol.dispose]", "type": "method", "signatures": [ { "params": [] } ], "desc": "

Calls timers.reset().

" }, { "textRaw": "`timers.tick([milliseconds])`", "name": "tick", "type": "method", "meta": { "added": [ "v20.4.0", "v18.19.0" ], "changes": [] }, "signatures": [ { "params": [ { "textRaw": "`milliseconds` {number} The amount of time, in milliseconds, to advance the timers. **Default:** `1`.", "name": "milliseconds", "type": "number", "default": "`1`", "desc": "The amount of time, in milliseconds, to advance the timers.", "optional": true } ] } ], "desc": "

Advances time for all mocked timers.

\n

Note: This diverges from how setTimeout behaves in Node.js: timers.tick()\naccepts only positive numbers. In Node.js, setTimeout with negative numbers is\nonly supported for web compatibility reasons.

\n

The following example mocks a setTimeout function and, by using .tick,\nadvances time, triggering all pending timers.

\n
import assert from 'node:assert';\nimport { test } from 'node:test';\n\ntest('mocks setTimeout to be executed synchronously without having to actually wait for it', (context) => {\n  const fn = context.mock.fn();\n\n  context.mock.timers.enable({ apis: ['setTimeout'] });\n\n  setTimeout(fn, 9999);\n\n  assert.strictEqual(fn.mock.callCount(), 0);\n\n  // Advance in time\n  context.mock.timers.tick(9999);\n\n  assert.strictEqual(fn.mock.callCount(), 1);\n});\n
\n
const assert = require('node:assert');\nconst { test } = require('node:test');\n\ntest('mocks setTimeout to be executed synchronously without having to actually wait for it', (context) => {\n  const fn = context.mock.fn();\n  context.mock.timers.enable({ apis: ['setTimeout'] });\n\n  setTimeout(fn, 9999);\n  assert.strictEqual(fn.mock.callCount(), 0);\n\n  // Advance in time\n  context.mock.timers.tick(9999);\n\n  assert.strictEqual(fn.mock.callCount(), 1);\n});\n
\n

Alternatively, the .tick function can be called multiple times:

\n
import assert from 'node:assert';\nimport { test } from 'node:test';\n\ntest('mocks setTimeout to be executed synchronously without having to actually wait for it', (context) => {\n  const fn = context.mock.fn();\n  context.mock.timers.enable({ apis: ['setTimeout'] });\n  const nineSecs = 9000;\n  setTimeout(fn, nineSecs);\n\n  const threeSeconds = 3000;\n  context.mock.timers.tick(threeSeconds);\n  context.mock.timers.tick(threeSeconds);\n  context.mock.timers.tick(threeSeconds);\n\n  assert.strictEqual(fn.mock.callCount(), 1);\n});\n
\n
const assert = require('node:assert');\nconst { test } = require('node:test');\n\ntest('mocks setTimeout to be executed synchronously without having to actually wait for it', (context) => {\n  const fn = context.mock.fn();\n  context.mock.timers.enable({ apis: ['setTimeout'] });\n  const nineSecs = 9000;\n  setTimeout(fn, nineSecs);\n\n  const threeSeconds = 3000;\n  context.mock.timers.tick(threeSeconds);\n  context.mock.timers.tick(threeSeconds);\n  context.mock.timers.tick(threeSeconds);\n\n  assert.strictEqual(fn.mock.callCount(), 1);\n});\n
\n

Advancing time using .tick will also advance the time for any Date object\ncreated after the mock was enabled (if Date was also set to be mocked).

\n
import assert from 'node:assert';\nimport { test } from 'node:test';\n\ntest('mocks setTimeout to be executed synchronously without having to actually wait for it', (context) => {\n  const fn = context.mock.fn();\n\n  context.mock.timers.enable({ apis: ['setTimeout', 'Date'] });\n  setTimeout(fn, 9999);\n\n  assert.strictEqual(fn.mock.callCount(), 0);\n  assert.strictEqual(Date.now(), 0);\n\n  // Advance in time\n  context.mock.timers.tick(9999);\n  assert.strictEqual(fn.mock.callCount(), 1);\n  assert.strictEqual(Date.now(), 9999);\n});\n
\n
const assert = require('node:assert');\nconst { test } = require('node:test');\n\ntest('mocks setTimeout to be executed synchronously without having to actually wait for it', (context) => {\n  const fn = context.mock.fn();\n  context.mock.timers.enable({ apis: ['setTimeout', 'Date'] });\n\n  setTimeout(fn, 9999);\n  assert.strictEqual(fn.mock.callCount(), 0);\n  assert.strictEqual(Date.now(), 0);\n\n  // Advance in time\n  context.mock.timers.tick(9999);\n  assert.strictEqual(fn.mock.callCount(), 1);\n  assert.strictEqual(Date.now(), 9999);\n});\n
", "modules": [ { "textRaw": "Using clear functions", "name": "using_clear_functions", "type": "module", "desc": "

As mentioned, all clear functions from timers (clearTimeout, clearInterval, and\nclearImmediate) are implicitly mocked. Take a look at this example using setTimeout:

\n
import assert from 'node:assert';\nimport { test } from 'node:test';\n\ntest('mocks setTimeout to be executed synchronously without having to actually wait for it', (context) => {\n  const fn = context.mock.fn();\n\n  // Optionally choose what to mock\n  context.mock.timers.enable({ apis: ['setTimeout'] });\n  const id = setTimeout(fn, 9999);\n\n  // Implicitly mocked as well\n  clearTimeout(id);\n  context.mock.timers.tick(9999);\n\n  // As that setTimeout was cleared the mock function will never be called\n  assert.strictEqual(fn.mock.callCount(), 0);\n});\n
\n
const assert = require('node:assert');\nconst { test } = require('node:test');\n\ntest('mocks setTimeout to be executed synchronously without having to actually wait for it', (context) => {\n  const fn = context.mock.fn();\n\n  // Optionally choose what to mock\n  context.mock.timers.enable({ apis: ['setTimeout'] });\n  const id = setTimeout(fn, 9999);\n\n  // Implicitly mocked as well\n  clearTimeout(id);\n  context.mock.timers.tick(9999);\n\n  // As that setTimeout was cleared the mock function will never be called\n  assert.strictEqual(fn.mock.callCount(), 0);\n});\n
", "displayName": "Using clear functions" }, { "textRaw": "Working with Node.js timers modules", "name": "working_with_node.js_timers_modules", "type": "module", "desc": "

Once timer mocking is enabled, the timers from the node:timers module, the\nnode:timers/promises module, and the Node.js global context are mocked:

\n

Note: Destructuring functions such as\nimport { setTimeout } from 'node:timers' is currently\nnot supported by this API.

\n
import assert from 'node:assert';\nimport { test } from 'node:test';\nimport nodeTimers from 'node:timers';\nimport nodeTimersPromises from 'node:timers/promises';\n\ntest('mocks setTimeout to be executed synchronously without having to actually wait for it', async (context) => {\n  const globalTimeoutObjectSpy = context.mock.fn();\n  const nodeTimerSpy = context.mock.fn();\n  const nodeTimerPromiseSpy = context.mock.fn();\n\n  // Optionally choose what to mock\n  context.mock.timers.enable({ apis: ['setTimeout'] });\n  setTimeout(globalTimeoutObjectSpy, 9999);\n  nodeTimers.setTimeout(nodeTimerSpy, 9999);\n\n  const promise = nodeTimersPromises.setTimeout(9999).then(nodeTimerPromiseSpy);\n\n  // Advance in time\n  context.mock.timers.tick(9999);\n  assert.strictEqual(globalTimeoutObjectSpy.mock.callCount(), 1);\n  assert.strictEqual(nodeTimerSpy.mock.callCount(), 1);\n  await promise;\n  assert.strictEqual(nodeTimerPromiseSpy.mock.callCount(), 1);\n});\n
\n
const assert = require('node:assert');\nconst { test } = require('node:test');\nconst nodeTimers = require('node:timers');\nconst nodeTimersPromises = require('node:timers/promises');\n\ntest('mocks setTimeout to be executed synchronously without having to actually wait for it', async (context) => {\n  const globalTimeoutObjectSpy = context.mock.fn();\n  const nodeTimerSpy = context.mock.fn();\n  const nodeTimerPromiseSpy = context.mock.fn();\n\n  // Optionally choose what to mock\n  context.mock.timers.enable({ apis: ['setTimeout'] });\n  setTimeout(globalTimeoutObjectSpy, 9999);\n  nodeTimers.setTimeout(nodeTimerSpy, 9999);\n\n  const promise = nodeTimersPromises.setTimeout(9999).then(nodeTimerPromiseSpy);\n\n  // Advance in time\n  context.mock.timers.tick(9999);\n  assert.strictEqual(globalTimeoutObjectSpy.mock.callCount(), 1);\n  assert.strictEqual(nodeTimerSpy.mock.callCount(), 1);\n  await promise;\n  assert.strictEqual(nodeTimerPromiseSpy.mock.callCount(), 1);\n});\n
\n

In Node.js, setInterval from node:timers/promises\nreturns an async iterator and is also supported by this API:

\n
import assert from 'node:assert';\nimport { test } from 'node:test';\nimport nodeTimersPromises from 'node:timers/promises';\ntest('should tick three times testing a real use case', async (context) => {\n  context.mock.timers.enable({ apis: ['setInterval'] });\n\n  const expectedIterations = 3;\n  const interval = 1000;\n  const startedAt = Date.now();\n  async function run() {\n    const times = [];\n    for await (const time of nodeTimersPromises.setInterval(interval, startedAt)) {\n      times.push(time);\n      if (times.length === expectedIterations) break;\n    }\n    return times;\n  }\n\n  const r = run();\n  context.mock.timers.tick(interval);\n  context.mock.timers.tick(interval);\n  context.mock.timers.tick(interval);\n\n  const timeResults = await r;\n  assert.strictEqual(timeResults.length, expectedIterations);\n  for (let it = 1; it <= expectedIterations; it++) {\n    assert.strictEqual(timeResults[it - 1], startedAt + (interval * it));\n  }\n});\n
\n
const assert = require('node:assert');\nconst { test } = require('node:test');\nconst nodeTimersPromises = require('node:timers/promises');\ntest('should tick three times testing a real use case', async (context) => {\n  context.mock.timers.enable({ apis: ['setInterval'] });\n\n  const expectedIterations = 3;\n  const interval = 1000;\n  const startedAt = Date.now();\n  async function run() {\n    const times = [];\n    for await (const time of nodeTimersPromises.setInterval(interval, startedAt)) {\n      times.push(time);\n      if (times.length === expectedIterations) break;\n    }\n    return times;\n  }\n\n  const r = run();\n  context.mock.timers.tick(interval);\n  context.mock.timers.tick(interval);\n  context.mock.timers.tick(interval);\n\n  const timeResults = await r;\n  assert.strictEqual(timeResults.length, expectedIterations);\n  for (let it = 1; it <= expectedIterations; it++) {\n    assert.strictEqual(timeResults[it - 1], startedAt + (interval * it));\n  }\n});\n
", "displayName": "Working with Node.js timers modules" } ] }, { "textRaw": "`timers.runAll()`", "name": "runAll", "type": "method", "meta": { "added": [ "v20.4.0", "v18.19.0" ], "changes": [] }, "signatures": [ { "params": [] } ], "desc": "

Triggers all pending mocked timers immediately. If the Date object is also\nmocked, it is advanced to the furthest timer's time as well.

\n

The example below triggers all pending timers immediately,\ncausing them to execute without any delay.

\n
import assert from 'node:assert';\nimport { test } from 'node:test';\n\ntest('runAll functions following the given order', (context) => {\n  context.mock.timers.enable({ apis: ['setTimeout', 'Date'] });\n  const results = [];\n  setTimeout(() => results.push(1), 9999);\n\n  // Notice that if both timers have the same timeout,\n  // they are guaranteed to execute in the order they were set\n  setTimeout(() => results.push(3), 8888);\n  setTimeout(() => results.push(2), 8888);\n\n  assert.deepStrictEqual(results, []);\n\n  context.mock.timers.runAll();\n  assert.deepStrictEqual(results, [3, 2, 1]);\n  // The Date object is also advanced to the furthest timer's time\n  assert.strictEqual(Date.now(), 9999);\n});\n
\n
const assert = require('node:assert');\nconst { test } = require('node:test');\n\ntest('runAll functions following the given order', (context) => {\n  context.mock.timers.enable({ apis: ['setTimeout', 'Date'] });\n  const results = [];\n  setTimeout(() => results.push(1), 9999);\n\n  // Notice that if both timers have the same timeout,\n  // they are guaranteed to execute in the order they were set\n  setTimeout(() => results.push(3), 8888);\n  setTimeout(() => results.push(2), 8888);\n\n  assert.deepStrictEqual(results, []);\n\n  context.mock.timers.runAll();\n  assert.deepStrictEqual(results, [3, 2, 1]);\n  // The Date object is also advanced to the furthest timer's time\n  assert.strictEqual(Date.now(), 9999);\n});\n
\n

Note: The runAll() function is specifically designed for\ntriggering timers in the context of timer mocking.\nIt has no effect on real-time system\nclocks or actual timers outside of the mocking environment.

" }, { "textRaw": "`timers.setTime(milliseconds)`", "name": "setTime", "type": "method", "meta": { "added": [ "v21.2.0", "v20.11.0" ], "changes": [] }, "signatures": [ { "params": [ { "name": "milliseconds" } ] } ], "desc": "

Sets the current Unix timestamp that will be used as a reference for any mocked\nDate objects.

\n
import assert from 'node:assert';\nimport { test } from 'node:test';\n\ntest('setTime replaces current time', (context) => {\n  const now = Date.now();\n  const setTime = 1000;\n  // Date.now is not mocked\n  assert.deepStrictEqual(Date.now(), now);\n\n  context.mock.timers.enable({ apis: ['Date'] });\n  context.mock.timers.setTime(setTime);\n  // Date.now is now 1000\n  assert.strictEqual(Date.now(), setTime);\n});\n
\n
const assert = require('node:assert');\nconst { test } = require('node:test');\n\ntest('setTime replaces current time', (context) => {\n  const now = Date.now();\n  const setTime = 1000;\n  // Date.now is not mocked\n  assert.deepStrictEqual(Date.now(), now);\n\n  context.mock.timers.enable({ apis: ['Date'] });\n  context.mock.timers.setTime(setTime);\n  // Date.now is now 1000\n  assert.strictEqual(Date.now(), setTime);\n});\n
", "modules": [ { "textRaw": "Dates and Timers working together", "name": "dates_and_timers_working_together", "type": "module", "desc": "

Dates and timer objects are dependent on each other. If you use setTime() to\npass the current time to the mocked Date object, timers set with\nsetTimeout and setInterval will not be affected.

\n

However, the tick method will advance the mocked Date object.

\n
import assert from 'node:assert';\nimport { test } from 'node:test';\n\ntest('setTime advances the date but does not tick timers', (context) => {\n  context.mock.timers.enable({ apis: ['setTimeout', 'Date'] });\n  const results = [];\n  setTimeout(() => results.push(1), 9999);\n\n  assert.deepStrictEqual(results, []);\n  context.mock.timers.setTime(12000);\n  assert.deepStrictEqual(results, []);\n  // The date is advanced but the timers don't tick\n  assert.strictEqual(Date.now(), 12000);\n});\n
\n
const assert = require('node:assert');\nconst { test } = require('node:test');\n\ntest('setTime advances the date but does not tick timers', (context) => {\n  context.mock.timers.enable({ apis: ['setTimeout', 'Date'] });\n  const results = [];\n  setTimeout(() => results.push(1), 9999);\n\n  assert.deepStrictEqual(results, []);\n  context.mock.timers.setTime(12000);\n  assert.deepStrictEqual(results, []);\n  // The date is advanced but the timers don't tick\n  assert.strictEqual(Date.now(), 12000);\n});\n
", "displayName": "Dates and Timers working together" } ] } ] }, { "textRaw": "Class: `TestsStream`", "name": "TestsStream", "type": "class", "meta": { "added": [ "v18.9.0", "v16.19.0" ], "changes": [ { "version": [ "v20.0.0", "v19.9.0", "v18.17.0" ], "pr-url": "https://github.com/nodejs/node/pull/47094", "description": "added type to test:pass and test:fail events for when the test is a suite." } ] }, "desc": "\n

A successful call to the run() method will return a new <TestsStream>\nobject, streaming a series of events representing the execution of the tests.

\n

Some of the events are guaranteed to be emitted in the same order as the tests\nare defined, while others are emitted in the order that the tests execute.

", "events": [ { "textRaw": "Event: `'test:coverage'`", "name": "test:coverage", "type": "event", "params": [ { "textRaw": "`data` {Object}", "name": "data", "type": "Object", "options": [ { "textRaw": "`summary` {Object} An object containing the coverage report.", "name": "summary", "type": "Object", "desc": "An object containing the coverage report.", "options": [ { "textRaw": "`files` {Array} An array of coverage reports for individual files. Each report is an object with the following schema:", "name": "files", "type": "Array", "desc": "An array of coverage reports for individual files. Each report is an object with the following schema:", "options": [ { "textRaw": "`path` {string} The absolute path of the file.", "name": "path", "type": "string", "desc": "The absolute path of the file." }, { "textRaw": "`totalLineCount` {number} The total number of lines.", "name": "totalLineCount", "type": "number", "desc": "The total number of lines." }, { "textRaw": "`totalBranchCount` {number} The total number of branches.", "name": "totalBranchCount", "type": "number", "desc": "The total number of branches." }, { "textRaw": "`totalFunctionCount` {number} The total number of functions.", "name": "totalFunctionCount", "type": "number", "desc": "The total number of functions." }, { "textRaw": "`coveredLineCount` {number} The number of covered lines.", "name": "coveredLineCount", "type": "number", "desc": "The number of covered lines." }, { "textRaw": "`coveredBranchCount` {number} The number of covered branches.", "name": "coveredBranchCount", "type": "number", "desc": "The number of covered branches." }, { "textRaw": "`coveredFunctionCount` {number} The number of covered functions.", "name": "coveredFunctionCount", "type": "number", "desc": "The number of covered functions." }, { "textRaw": "`coveredLinePercent` {number} The percentage of lines covered.", "name": "coveredLinePercent", "type": "number", "desc": "The percentage of lines covered." 
}, { "textRaw": "`coveredBranchPercent` {number} The percentage of branches covered.", "name": "coveredBranchPercent", "type": "number", "desc": "The percentage of branches covered." }, { "textRaw": "`coveredFunctionPercent` {number} The percentage of functions covered.", "name": "coveredFunctionPercent", "type": "number", "desc": "The percentage of functions covered." }, { "textRaw": "`functions` {Array} An array of functions representing function coverage.", "name": "functions", "type": "Array", "desc": "An array of functions representing function coverage.", "options": [ { "textRaw": "`name` {string} The name of the function.", "name": "name", "type": "string", "desc": "The name of the function." }, { "textRaw": "`line` {number} The line number where the function is defined.", "name": "line", "type": "number", "desc": "The line number where the function is defined." }, { "textRaw": "`count` {number} The number of times the function was called.", "name": "count", "type": "number", "desc": "The number of times the function was called." } ] }, { "textRaw": "`branches` {Array} An array of branches representing branch coverage.", "name": "branches", "type": "Array", "desc": "An array of branches representing branch coverage.", "options": [ { "textRaw": "`line` {number} The line number where the branch is defined.", "name": "line", "type": "number", "desc": "The line number where the branch is defined." }, { "textRaw": "`count` {number} The number of times the branch was taken.", "name": "count", "type": "number", "desc": "The number of times the branch was taken." } ] }, { "textRaw": "`lines` {Array} An array of lines representing line numbers and the number of times they were covered.", "name": "lines", "type": "Array", "desc": "An array of lines representing line numbers and the number of times they were covered.", "options": [ { "textRaw": "`line` {number} The line number.", "name": "line", "type": "number", "desc": "The line number." 
}, { "textRaw": "`count` {number} The number of times the line was covered.", "name": "count", "type": "number", "desc": "The number of times the line was covered." } ] } ] }, { "textRaw": "`thresholds` {Object} An object containing the configured coverage thresholds for each coverage type.", "name": "thresholds", "type": "Object", "desc": "An object containing the configured coverage thresholds for each coverage type.", "options": [ { "textRaw": "`function` {number} The function coverage threshold.", "name": "function", "type": "number", "desc": "The function coverage threshold." }, { "textRaw": "`branch` {number} The branch coverage threshold.", "name": "branch", "type": "number", "desc": "The branch coverage threshold." }, { "textRaw": "`line` {number} The line coverage threshold.", "name": "line", "type": "number", "desc": "The line coverage threshold." } ] }, { "textRaw": "`totals` {Object} An object containing a summary of coverage for all files.", "name": "totals", "type": "Object", "desc": "An object containing a summary of coverage for all files.", "options": [ { "textRaw": "`totalLineCount` {number} The total number of lines.", "name": "totalLineCount", "type": "number", "desc": "The total number of lines." }, { "textRaw": "`totalBranchCount` {number} The total number of branches.", "name": "totalBranchCount", "type": "number", "desc": "The total number of branches." }, { "textRaw": "`totalFunctionCount` {number} The total number of functions.", "name": "totalFunctionCount", "type": "number", "desc": "The total number of functions." }, { "textRaw": "`coveredLineCount` {number} The number of covered lines.", "name": "coveredLineCount", "type": "number", "desc": "The number of covered lines." }, { "textRaw": "`coveredBranchCount` {number} The number of covered branches.", "name": "coveredBranchCount", "type": "number", "desc": "The number of covered branches."
}, { "textRaw": "`coveredFunctionCount` {number} The number of covered functions.", "name": "coveredFunctionCount", "type": "number", "desc": "The number of covered functions." }, { "textRaw": "`coveredLinePercent` {number} The percentage of lines covered.", "name": "coveredLinePercent", "type": "number", "desc": "The percentage of lines covered." }, { "textRaw": "`coveredBranchPercent` {number} The percentage of branches covered.", "name": "coveredBranchPercent", "type": "number", "desc": "The percentage of branches covered." }, { "textRaw": "`coveredFunctionPercent` {number} The percentage of functions covered.", "name": "coveredFunctionPercent", "type": "number", "desc": "The percentage of functions covered." } ] }, { "textRaw": "`workingDirectory` {string} The working directory when code coverage began. This is useful for displaying relative path names in case the tests changed the working directory of the Node.js process.", "name": "workingDirectory", "type": "string", "desc": "The working directory when code coverage began. This is useful for displaying relative path names in case the tests changed the working directory of the Node.js process." } ] }, { "textRaw": "`nesting` {number} The nesting level of the test.", "name": "nesting", "type": "number", "desc": "The nesting level of the test." } ] } ], "desc": "

Emitted when code coverage is enabled and all tests have completed.

" }, { "textRaw": "Event: `'test:complete'`", "name": "test:complete", "type": "event", "params": [ { "textRaw": "`data` {Object}", "name": "data", "type": "Object", "options": [ { "textRaw": "`column` {number|undefined} The column number where the test is defined, or `undefined` if the test was run through the REPL.", "name": "column", "type": "number|undefined", "desc": "The column number where the test is defined, or `undefined` if the test was run through the REPL." }, { "textRaw": "`details` {Object} Additional execution metadata.", "name": "details", "type": "Object", "desc": "Additional execution metadata.", "options": [ { "textRaw": "`passed` {boolean} Whether the test passed or not.", "name": "passed", "type": "boolean", "desc": "Whether the test passed or not." }, { "textRaw": "`duration_ms` {number} The duration of the test in milliseconds.", "name": "duration_ms", "type": "number", "desc": "The duration of the test in milliseconds." }, { "textRaw": "`error` {Error|undefined} An error wrapping the error thrown by the test if it did not pass.", "name": "error", "type": "Error|undefined", "desc": "An error wrapping the error thrown by the test if it did not pass.", "options": [ { "textRaw": "`cause` {Error} The actual error thrown by the test.", "name": "cause", "type": "Error", "desc": "The actual error thrown by the test." } ] }, { "textRaw": "`type` {string|undefined} The type of the test, used to denote whether this is a suite.", "name": "type", "type": "string|undefined", "desc": "The type of the test, used to denote whether this is a suite." } ] }, { "textRaw": "`file` {string|undefined} The path of the test file, `undefined` if test was run through the REPL.", "name": "file", "type": "string|undefined", "desc": "The path of the test file, `undefined` if test was run through the REPL." 
}, { "textRaw": "`line` {number|undefined} The line number where the test is defined, or `undefined` if the test was run through the REPL.", "name": "line", "type": "number|undefined", "desc": "The line number where the test is defined, or `undefined` if the test was run through the REPL." }, { "textRaw": "`name` {string} The test name.", "name": "name", "type": "string", "desc": "The test name." }, { "textRaw": "`nesting` {number} The nesting level of the test.", "name": "nesting", "type": "number", "desc": "The nesting level of the test." }, { "textRaw": "`testNumber` {number} The ordinal number of the test.", "name": "testNumber", "type": "number", "desc": "The ordinal number of the test." }, { "textRaw": "`todo` {string|boolean|undefined} Present if `context.todo` is called", "name": "todo", "type": "string|boolean|undefined", "desc": "Present if `context.todo` is called" }, { "textRaw": "`skip` {string|boolean|undefined} Present if `context.skip` is called", "name": "skip", "type": "string|boolean|undefined", "desc": "Present if `context.skip` is called" } ] } ], "desc": "

Emitted when a test completes its execution.\nThis event is not guaranteed to be emitted in the same order as the tests are\ndefined.\nThe corresponding declaration-ordered events are 'test:pass' and 'test:fail'.

" }, { "textRaw": "Event: `'test:dequeue'`", "name": "test:dequeue", "type": "event", "params": [ { "textRaw": "`data` {Object}", "name": "data", "type": "Object", "options": [ { "textRaw": "`column` {number|undefined} The column number where the test is defined, or `undefined` if the test was run through the REPL.", "name": "column", "type": "number|undefined", "desc": "The column number where the test is defined, or `undefined` if the test was run through the REPL." }, { "textRaw": "`file` {string|undefined} The path of the test file, `undefined` if test was run through the REPL.", "name": "file", "type": "string|undefined", "desc": "The path of the test file, `undefined` if test was run through the REPL." }, { "textRaw": "`line` {number|undefined} The line number where the test is defined, or `undefined` if the test was run through the REPL.", "name": "line", "type": "number|undefined", "desc": "The line number where the test is defined, or `undefined` if the test was run through the REPL." }, { "textRaw": "`name` {string} The test name.", "name": "name", "type": "string", "desc": "The test name." }, { "textRaw": "`nesting` {number} The nesting level of the test.", "name": "nesting", "type": "number", "desc": "The nesting level of the test." }, { "textRaw": "`type` {string} The test type. Either `'suite'` or `'test'`.", "name": "type", "type": "string", "desc": "The test type. Either `'suite'` or `'test'`." } ] } ], "desc": "

Emitted when a test is dequeued, right before it is executed.\nThis event is not guaranteed to be emitted in the same order as the tests are\ndefined. The corresponding declaration-ordered event is 'test:start'.

" }, { "textRaw": "Event: `'test:diagnostic'`", "name": "test:diagnostic", "type": "event", "params": [ { "textRaw": "`data` {Object}", "name": "data", "type": "Object", "options": [ { "textRaw": "`column` {number|undefined} The column number where the test is defined, or `undefined` if the test was run through the REPL.", "name": "column", "type": "number|undefined", "desc": "The column number where the test is defined, or `undefined` if the test was run through the REPL." }, { "textRaw": "`file` {string|undefined} The path of the test file, `undefined` if test was run through the REPL.", "name": "file", "type": "string|undefined", "desc": "The path of the test file, `undefined` if test was run through the REPL." }, { "textRaw": "`line` {number|undefined} The line number where the test is defined, or `undefined` if the test was run through the REPL.", "name": "line", "type": "number|undefined", "desc": "The line number where the test is defined, or `undefined` if the test was run through the REPL." }, { "textRaw": "`message` {string} The diagnostic message.", "name": "message", "type": "string", "desc": "The diagnostic message." }, { "textRaw": "`nesting` {number} The nesting level of the test.", "name": "nesting", "type": "number", "desc": "The nesting level of the test." }, { "textRaw": "`level` {string} The severity level of the diagnostic message. Possible values are:", "name": "level", "type": "string", "desc": "The severity level of the diagnostic message. Possible values are:", "options": [ { "textRaw": "`'info'`: Informational messages.", "desc": "`'info'`: Informational messages." }, { "textRaw": "`'warn'`: Warnings.", "desc": "`'warn'`: Warnings." }, { "textRaw": "`'error'`: Errors.", "desc": "`'error'`: Errors." } ] } ] } ], "desc": "

Emitted when context.diagnostic is called.\nThis event is guaranteed to be emitted in the same order as the tests are\ndefined.

" }, { "textRaw": "Event: `'test:enqueue'`", "name": "test:enqueue", "type": "event", "params": [ { "textRaw": "`data` {Object}", "name": "data", "type": "Object", "options": [ { "textRaw": "`column` {number|undefined} The column number where the test is defined, or `undefined` if the test was run through the REPL.", "name": "column", "type": "number|undefined", "desc": "The column number where the test is defined, or `undefined` if the test was run through the REPL." }, { "textRaw": "`file` {string|undefined} The path of the test file, `undefined` if test was run through the REPL.", "name": "file", "type": "string|undefined", "desc": "The path of the test file, `undefined` if test was run through the REPL." }, { "textRaw": "`line` {number|undefined} The line number where the test is defined, or `undefined` if the test was run through the REPL.", "name": "line", "type": "number|undefined", "desc": "The line number where the test is defined, or `undefined` if the test was run through the REPL." }, { "textRaw": "`name` {string} The test name.", "name": "name", "type": "string", "desc": "The test name." }, { "textRaw": "`nesting` {number} The nesting level of the test.", "name": "nesting", "type": "number", "desc": "The nesting level of the test." }, { "textRaw": "`type` {string} The test type. Either `'suite'` or `'test'`.", "name": "type", "type": "string", "desc": "The test type. Either `'suite'` or `'test'`." } ] } ], "desc": "

Emitted when a test is enqueued for execution.

" }, { "textRaw": "Event: `'test:fail'`", "name": "test:fail", "type": "event", "params": [ { "textRaw": "`data` {Object}", "name": "data", "type": "Object", "options": [ { "textRaw": "`column` {number|undefined} The column number where the test is defined, or `undefined` if the test was run through the REPL.", "name": "column", "type": "number|undefined", "desc": "The column number where the test is defined, or `undefined` if the test was run through the REPL." }, { "textRaw": "`details` {Object} Additional execution metadata.", "name": "details", "type": "Object", "desc": "Additional execution metadata.", "options": [ { "textRaw": "`duration_ms` {number} The duration of the test in milliseconds.", "name": "duration_ms", "type": "number", "desc": "The duration of the test in milliseconds." }, { "textRaw": "`error` {Error} An error wrapping the error thrown by the test.", "name": "error", "type": "Error", "desc": "An error wrapping the error thrown by the test.", "options": [ { "textRaw": "`cause` {Error} The actual error thrown by the test.", "name": "cause", "type": "Error", "desc": "The actual error thrown by the test." } ] }, { "textRaw": "`type` {string|undefined} The type of the test, used to denote whether this is a suite.", "name": "type", "type": "string|undefined", "desc": "The type of the test, used to denote whether this is a suite." }, { "textRaw": "`attempt` {number|undefined} The attempt number of the test run, present only when using the `--test-rerun-failures` flag.", "name": "attempt", "type": "number|undefined", "desc": "The attempt number of the test run, present only when using the `--test-rerun-failures` flag." } ] }, { "textRaw": "`file` {string|undefined} The path of the test file, `undefined` if test was run through the REPL.", "name": "file", "type": "string|undefined", "desc": "The path of the test file, `undefined` if test was run through the REPL." 
}, { "textRaw": "`line` {number|undefined} The line number where the test is defined, or `undefined` if the test was run through the REPL.", "name": "line", "type": "number|undefined", "desc": "The line number where the test is defined, or `undefined` if the test was run through the REPL." }, { "textRaw": "`name` {string} The test name.", "name": "name", "type": "string", "desc": "The test name." }, { "textRaw": "`nesting` {number} The nesting level of the test.", "name": "nesting", "type": "number", "desc": "The nesting level of the test." }, { "textRaw": "`testNumber` {number} The ordinal number of the test.", "name": "testNumber", "type": "number", "desc": "The ordinal number of the test." }, { "textRaw": "`todo` {string|boolean|undefined} Present if `context.todo` is called", "name": "todo", "type": "string|boolean|undefined", "desc": "Present if `context.todo` is called" }, { "textRaw": "`skip` {string|boolean|undefined} Present if `context.skip` is called", "name": "skip", "type": "string|boolean|undefined", "desc": "Present if `context.skip` is called" } ] } ], "desc": "

Emitted when a test fails.\nThis event is guaranteed to be emitted in the same order as the tests are\ndefined.\nThe corresponding execution-ordered event is 'test:complete'.

" }, { "textRaw": "Event: `'test:interrupted'`", "name": "test:interrupted", "type": "event", "meta": { "added": [ "v25.7.0" ], "changes": [] }, "params": [ { "textRaw": "`data` {Object}", "name": "data", "type": "Object", "options": [ { "textRaw": "`tests` {Array} An array of objects containing information about the interrupted tests.", "name": "tests", "type": "Array", "desc": "An array of objects containing information about the interrupted tests.", "options": [ { "textRaw": "`column` {number|undefined} The column number where the test is defined, or `undefined` if the test was run through the REPL.", "name": "column", "type": "number|undefined", "desc": "The column number where the test is defined, or `undefined` if the test was run through the REPL." }, { "textRaw": "`file` {string|undefined} The path of the test file, `undefined` if test was run through the REPL.", "name": "file", "type": "string|undefined", "desc": "The path of the test file, `undefined` if test was run through the REPL." }, { "textRaw": "`line` {number|undefined} The line number where the test is defined, or `undefined` if the test was run through the REPL.", "name": "line", "type": "number|undefined", "desc": "The line number where the test is defined, or `undefined` if the test was run through the REPL." }, { "textRaw": "`name` {string} The test name.", "name": "name", "type": "string", "desc": "The test name." }, { "textRaw": "`nesting` {number} The nesting level of the test.", "name": "nesting", "type": "number", "desc": "The nesting level of the test." } ] } ] } ], "desc": "

Emitted when the test runner is interrupted by a SIGINT signal (e.g., when\npressing Ctrl+C). The event contains information about\nthe tests that were running at the time of interruption.

\n

When using process isolation (the default), the test name will be the file path\nsince the parent runner only knows about file-level tests. When using\n--test-isolation=none, the actual test name is shown.

" }, { "textRaw": "Event: `'test:pass'`", "name": "test:pass", "type": "event", "params": [ { "textRaw": "`data` {Object}", "name": "data", "type": "Object", "options": [ { "textRaw": "`column` {number|undefined} The column number where the test is defined, or `undefined` if the test was run through the REPL.", "name": "column", "type": "number|undefined", "desc": "The column number where the test is defined, or `undefined` if the test was run through the REPL." }, { "textRaw": "`details` {Object} Additional execution metadata.", "name": "details", "type": "Object", "desc": "Additional execution metadata.", "options": [ { "textRaw": "`duration_ms` {number} The duration of the test in milliseconds.", "name": "duration_ms", "type": "number", "desc": "The duration of the test in milliseconds." }, { "textRaw": "`type` {string|undefined} The type of the test, used to denote whether this is a suite.", "name": "type", "type": "string|undefined", "desc": "The type of the test, used to denote whether this is a suite." }, { "textRaw": "`attempt` {number|undefined} The attempt number of the test run, present only when using the `--test-rerun-failures` flag.", "name": "attempt", "type": "number|undefined", "desc": "The attempt number of the test run, present only when using the `--test-rerun-failures` flag." }, { "textRaw": "`passed_on_attempt` {number|undefined} The attempt number the test passed on, present only when using the `--test-rerun-failures` flag.", "name": "passed_on_attempt", "type": "number|undefined", "desc": "The attempt number the test passed on, present only when using the `--test-rerun-failures` flag." } ] }, { "textRaw": "`file` {string|undefined} The path of the test file, `undefined` if test was run through the REPL.", "name": "file", "type": "string|undefined", "desc": "The path of the test file, `undefined` if test was run through the REPL." 
}, { "textRaw": "`line` {number|undefined} The line number where the test is defined, or `undefined` if the test was run through the REPL.", "name": "line", "type": "number|undefined", "desc": "The line number where the test is defined, or `undefined` if the test was run through the REPL." }, { "textRaw": "`name` {string} The test name.", "name": "name", "type": "string", "desc": "The test name." }, { "textRaw": "`nesting` {number} The nesting level of the test.", "name": "nesting", "type": "number", "desc": "The nesting level of the test." }, { "textRaw": "`testNumber` {number} The ordinal number of the test.", "name": "testNumber", "type": "number", "desc": "The ordinal number of the test." }, { "textRaw": "`todo` {string|boolean|undefined} Present if `context.todo` is called", "name": "todo", "type": "string|boolean|undefined", "desc": "Present if `context.todo` is called" }, { "textRaw": "`skip` {string|boolean|undefined} Present if `context.skip` is called", "name": "skip", "type": "string|boolean|undefined", "desc": "Present if `context.skip` is called" } ] } ], "desc": "

Emitted when a test passes.\nThis event is guaranteed to be emitted in the same order as the tests are\ndefined.\nThe corresponding execution-ordered event is 'test:complete'.

" }, { "textRaw": "Event: `'test:plan'`", "name": "test:plan", "type": "event", "params": [ { "textRaw": "`data` {Object}", "name": "data", "type": "Object", "options": [ { "textRaw": "`column` {number|undefined} The column number where the test is defined, or `undefined` if the test was run through the REPL.", "name": "column", "type": "number|undefined", "desc": "The column number where the test is defined, or `undefined` if the test was run through the REPL." }, { "textRaw": "`file` {string|undefined} The path of the test file, `undefined` if test was run through the REPL.", "name": "file", "type": "string|undefined", "desc": "The path of the test file, `undefined` if test was run through the REPL." }, { "textRaw": "`line` {number|undefined} The line number where the test is defined, or `undefined` if the test was run through the REPL.", "name": "line", "type": "number|undefined", "desc": "The line number where the test is defined, or `undefined` if the test was run through the REPL." }, { "textRaw": "`nesting` {number} The nesting level of the test.", "name": "nesting", "type": "number", "desc": "The nesting level of the test." }, { "textRaw": "`count` {number} The number of subtests that have run.", "name": "count", "type": "number", "desc": "The number of subtests that have run." } ] } ], "desc": "

Emitted when all subtests have completed for a given test.\nThis event is guaranteed to be emitted in the same order as the tests are\ndefined.

" }, { "textRaw": "Event: `'test:start'`", "name": "test:start", "type": "event", "params": [ { "textRaw": "`data` {Object}", "name": "data", "type": "Object", "options": [ { "textRaw": "`column` {number|undefined} The column number where the test is defined, or `undefined` if the test was run through the REPL.", "name": "column", "type": "number|undefined", "desc": "The column number where the test is defined, or `undefined` if the test was run through the REPL." }, { "textRaw": "`file` {string|undefined} The path of the test file, `undefined` if test was run through the REPL.", "name": "file", "type": "string|undefined", "desc": "The path of the test file, `undefined` if test was run through the REPL." }, { "textRaw": "`line` {number|undefined} The line number where the test is defined, or `undefined` if the test was run through the REPL.", "name": "line", "type": "number|undefined", "desc": "The line number where the test is defined, or `undefined` if the test was run through the REPL." }, { "textRaw": "`name` {string} The test name.", "name": "name", "type": "string", "desc": "The test name." }, { "textRaw": "`nesting` {number} The nesting level of the test.", "name": "nesting", "type": "number", "desc": "The nesting level of the test." } ] } ], "desc": "

Emitted when a test starts reporting its own and its subtests' status.\nThis event is guaranteed to be emitted in the same order as the tests are\ndefined.\nThe corresponding execution ordered event is 'test:dequeue'.

" }, { "textRaw": "Event: `'test:stderr'`", "name": "test:stderr", "type": "event", "params": [ { "textRaw": "`data` {Object}", "name": "data", "type": "Object", "options": [ { "textRaw": "`file` {string} The path of the test file.", "name": "file", "type": "string", "desc": "The path of the test file." }, { "textRaw": "`message` {string} The message written to `stderr`.", "name": "message", "type": "string", "desc": "The message written to `stderr`." } ] } ], "desc": "

Emitted when a running test writes to stderr.\nThis event is only emitted if the --test flag is passed.\nThis event is not guaranteed to be emitted in the same order as the tests are\ndefined.

" }, { "textRaw": "Event: `'test:stdout'`", "name": "test:stdout", "type": "event", "params": [ { "textRaw": "`data` {Object}", "name": "data", "type": "Object", "options": [ { "textRaw": "`file` {string} The path of the test file.", "name": "file", "type": "string", "desc": "The path of the test file." }, { "textRaw": "`message` {string} The message written to `stdout`.", "name": "message", "type": "string", "desc": "The message written to `stdout`." } ] } ], "desc": "

Emitted when a running test writes to stdout.\nThis event is only emitted if the --test flag is passed.\nThis event is not guaranteed to be emitted in the same order as the tests are\ndefined.

" }, { "textRaw": "Event: `'test:summary'`", "name": "test:summary", "type": "event", "params": [ { "textRaw": "`data` {Object}", "name": "data", "type": "Object", "options": [ { "textRaw": "`counts` {Object} An object containing the counts of various test results.", "name": "counts", "type": "Object", "desc": "An object containing the counts of various test results.", "options": [ { "textRaw": "`cancelled` {number} The total number of cancelled tests.", "name": "cancelled", "type": "number", "desc": "The total number of cancelled tests." }, { "textRaw": "`failed` {number} The total number of failed tests.", "name": "failed", "type": "number", "desc": "The total number of failed tests." }, { "textRaw": "`passed` {number} The total number of passed tests.", "name": "passed", "type": "number", "desc": "The total number of passed tests." }, { "textRaw": "`skipped` {number} The total number of skipped tests.", "name": "skipped", "type": "number", "desc": "The total number of skipped tests." }, { "textRaw": "`suites` {number} The total number of suites run.", "name": "suites", "type": "number", "desc": "The total number of suites run." }, { "textRaw": "`tests` {number} The total number of tests run, excluding suites.", "name": "tests", "type": "number", "desc": "The total number of tests run, excluding suites." }, { "textRaw": "`todo` {number} The total number of TODO tests.", "name": "todo", "type": "number", "desc": "The total number of TODO tests." }, { "textRaw": "`topLevel` {number} The total number of top level tests and suites.", "name": "topLevel", "type": "number", "desc": "The total number of top level tests and suites." } ] }, { "textRaw": "`duration_ms` {number} The duration of the test run in milliseconds.", "name": "duration_ms", "type": "number", "desc": "The duration of the test run in milliseconds." }, { "textRaw": "`file` {string|undefined} The path of the test file that generated the summary. 
If the summary corresponds to multiple files, this value is `undefined`.", "name": "file", "type": "string|undefined", "desc": "The path of the test file that generated the summary. If the summary corresponds to multiple files, this value is `undefined`." }, { "textRaw": "`success` {boolean} Indicates whether the test run is considered successful. If any error condition occurs, such as a failing test or unmet coverage threshold, this value will be set to `false`.", "name": "success", "type": "boolean", "desc": "Indicates whether the test run is considered successful. If any error condition occurs, such as a failing test or unmet coverage threshold, this value will be set to `false`." } ] } ], "desc": "

Emitted when a test run completes. This event contains metrics pertaining to\nthe completed test run, and is useful for determining if a test run passed or\nfailed. If process-level test isolation is used, a 'test:summary' event is\ngenerated for each test file in addition to a final cumulative summary.

" }, { "textRaw": "Event: `'test:watch:drained'`", "name": "test:watch:drained", "type": "event", "params": [], "desc": "

Emitted when no more tests are queued for execution in watch mode.

" }, { "textRaw": "Event: `'test:watch:restarted'`", "name": "test:watch:restarted", "type": "event", "params": [], "desc": "

Emitted when one or more tests are restarted due to a file change in watch mode.

" } ] }, { "textRaw": "Class: `TestContext`", "name": "TestContext", "type": "class", "meta": { "added": [ "v18.0.0", "v16.17.0" ], "changes": [ { "version": [ "v20.1.0", "v18.17.0" ], "pr-url": "https://github.com/nodejs/node/pull/47586", "description": "The `before` function was added to TestContext." } ] }, "desc": "

An instance of TestContext is passed to each test function in order to\ninteract with the test runner. However, the TestContext constructor is not\nexposed as part of the API.

", "methods": [ { "textRaw": "`context.before([fn][, options])`", "name": "before", "type": "method", "meta": { "added": [ "v20.1.0", "v18.17.0" ], "changes": [] }, "signatures": [ { "params": [ { "textRaw": "`fn` {Function|AsyncFunction} The hook function. The first argument to this function is a `TestContext` object. If the hook uses callbacks, the callback function is passed as the second argument. **Default:** A no-op function.", "name": "fn", "type": "Function|AsyncFunction", "default": "A no-op function", "desc": "The hook function. The first argument to this function is a `TestContext` object. If the hook uses callbacks, the callback function is passed as the second argument.", "optional": true }, { "textRaw": "`options` {Object} Configuration options for the hook. The following properties are supported:", "name": "options", "type": "Object", "desc": "Configuration options for the hook. The following properties are supported:", "options": [ { "textRaw": "`signal` {AbortSignal} Allows aborting an in-progress hook.", "name": "signal", "type": "AbortSignal", "desc": "Allows aborting an in-progress hook." }, { "textRaw": "`timeout` {number} A number of milliseconds the hook will fail after. If unspecified, subtests inherit this value from their parent. **Default:** `Infinity`.", "name": "timeout", "type": "number", "default": "`Infinity`", "desc": "A number of milliseconds the hook will fail after. If unspecified, subtests inherit this value from their parent." } ], "optional": true } ] } ], "desc": "

This function is used to create a hook that runs before\nthe subtests of the current test.

" }, { "textRaw": "`context.beforeEach([fn][, options])`", "name": "beforeEach", "type": "method", "meta": { "added": [ "v18.8.0", "v16.18.0" ], "changes": [] }, "signatures": [ { "params": [ { "textRaw": "`fn` {Function|AsyncFunction} The hook function. The first argument to this function is a `TestContext` object. If the hook uses callbacks, the callback function is passed as the second argument. **Default:** A no-op function.", "name": "fn", "type": "Function|AsyncFunction", "default": "A no-op function", "desc": "The hook function. The first argument to this function is a `TestContext` object. If the hook uses callbacks, the callback function is passed as the second argument.", "optional": true }, { "textRaw": "`options` {Object} Configuration options for the hook. The following properties are supported:", "name": "options", "type": "Object", "desc": "Configuration options for the hook. The following properties are supported:", "options": [ { "textRaw": "`signal` {AbortSignal} Allows aborting an in-progress hook.", "name": "signal", "type": "AbortSignal", "desc": "Allows aborting an in-progress hook." }, { "textRaw": "`timeout` {number} A number of milliseconds the hook will fail after. If unspecified, subtests inherit this value from their parent. **Default:** `Infinity`.", "name": "timeout", "type": "number", "default": "`Infinity`", "desc": "A number of milliseconds the hook will fail after. If unspecified, subtests inherit this value from their parent." } ], "optional": true } ] } ], "desc": "

This function is used to create a hook that runs\nbefore each subtest of the current test.

\n
test('top level test', async (t) => {\n  t.beforeEach((t) => t.diagnostic(`about to run ${t.name}`));\n  await t.test(\n    'This is a subtest',\n    (t) => {\n      // Some relevant assertion here\n    },\n  );\n});\n
" }, { "textRaw": "`context.after([fn][, options])`", "name": "after", "type": "method", "meta": { "added": [ "v19.3.0", "v18.13.0" ], "changes": [] }, "signatures": [ { "params": [ { "textRaw": "`fn` {Function|AsyncFunction} The hook function. The first argument to this function is a `TestContext` object. If the hook uses callbacks, the callback function is passed as the second argument. **Default:** A no-op function.", "name": "fn", "type": "Function|AsyncFunction", "default": "A no-op function", "desc": "The hook function. The first argument to this function is a `TestContext` object. If the hook uses callbacks, the callback function is passed as the second argument.", "optional": true }, { "textRaw": "`options` {Object} Configuration options for the hook. The following properties are supported:", "name": "options", "type": "Object", "desc": "Configuration options for the hook. The following properties are supported:", "options": [ { "textRaw": "`signal` {AbortSignal} Allows aborting an in-progress hook.", "name": "signal", "type": "AbortSignal", "desc": "Allows aborting an in-progress hook." }, { "textRaw": "`timeout` {number} A number of milliseconds the hook will fail after. If unspecified, subtests inherit this value from their parent. **Default:** `Infinity`.", "name": "timeout", "type": "number", "default": "`Infinity`", "desc": "A number of milliseconds the hook will fail after. If unspecified, subtests inherit this value from their parent." } ], "optional": true } ] } ], "desc": "

This function is used to create a hook that runs after the current test\nfinishes.

\n
test('top level test', async (t) => {\n  t.after((t) => t.diagnostic(`finished running ${t.name}`));\n  // Some relevant assertion here\n});\n
" }, { "textRaw": "`context.afterEach([fn][, options])`", "name": "afterEach", "type": "method", "meta": { "added": [ "v18.8.0", "v16.18.0" ], "changes": [] }, "signatures": [ { "params": [ { "textRaw": "`fn` {Function|AsyncFunction} The hook function. The first argument to this function is a `TestContext` object. If the hook uses callbacks, the callback function is passed as the second argument. **Default:** A no-op function.", "name": "fn", "type": "Function|AsyncFunction", "default": "A no-op function", "desc": "The hook function. The first argument to this function is a `TestContext` object. If the hook uses callbacks, the callback function is passed as the second argument.", "optional": true }, { "textRaw": "`options` {Object} Configuration options for the hook. The following properties are supported:", "name": "options", "type": "Object", "desc": "Configuration options for the hook. The following properties are supported:", "options": [ { "textRaw": "`signal` {AbortSignal} Allows aborting an in-progress hook.", "name": "signal", "type": "AbortSignal", "desc": "Allows aborting an in-progress hook." }, { "textRaw": "`timeout` {number} A number of milliseconds the hook will fail after. If unspecified, subtests inherit this value from their parent. **Default:** `Infinity`.", "name": "timeout", "type": "number", "default": "`Infinity`", "desc": "A number of milliseconds the hook will fail after. If unspecified, subtests inherit this value from their parent." } ], "optional": true } ] } ], "desc": "

This function is used to create a hook that runs\nafter each subtest of the current test.

\n
test('top level test', async (t) => {\n  t.afterEach((t) => t.diagnostic(`finished running ${t.name}`));\n  await t.test(\n    'This is a subtest',\n    (t) => {\n      // Some relevant assertion here\n    },\n  );\n});\n
" }, { "textRaw": "`context.diagnostic(message)`", "name": "diagnostic", "type": "method", "meta": { "added": [ "v18.0.0", "v16.17.0" ], "changes": [] }, "signatures": [ { "params": [ { "textRaw": "`message` {string} Message to be reported.", "name": "message", "type": "string", "desc": "Message to be reported." } ] } ], "desc": "

This function is used to write diagnostics to the output. Any diagnostic\ninformation is included at the end of the test's results. This function does\nnot return a value.

\n
test('top level test', (t) => {\n  t.diagnostic('A diagnostic message');\n});\n
" }, { "textRaw": "`context.plan(count[,options])`", "name": "plan", "type": "method", "meta": { "added": [ "v22.2.0", "v20.15.0" ], "changes": [ { "version": [ "v23.9.0", "v22.15.0" ], "pr-url": "https://github.com/nodejs/node/pull/56765", "description": "Add the `options` parameter." }, { "version": [ "v23.4.0", "v22.13.0" ], "pr-url": "https://github.com/nodejs/node/pull/55895", "description": "This function is no longer experimental." } ] }, "signatures": [ { "params": [ { "textRaw": "`count` {number} The number of assertions and subtests that are expected to run.", "name": "count", "type": "number", "desc": "The number of assertions and subtests that are expected to run." }, { "textRaw": "`options` {Object} Additional options for the plan.", "name": "options", "type": "Object", "desc": "Additional options for the plan.", "options": [ { "textRaw": "`wait` {boolean|number} The wait time for the plan:If `true`, the plan waits indefinitely for all assertions and subtests to run.If `false`, the plan performs an immediate check after the test function completes, without waiting for any pending assertions or subtests. Any assertions or subtests that complete after this check will not be counted towards the plan.If a number, it specifies the maximum wait time in milliseconds before timing out while waiting for expected assertions and subtests to be matched. If the timeout is reached, the test will fail. **Default:** `false`.", "name": "wait", "type": "boolean|number", "default": "`false`", "desc": "The wait time for the plan:If `true`, the plan waits indefinitely for all assertions and subtests to run.If `false`, the plan performs an immediate check after the test function completes, without waiting for any pending assertions or subtests. 
Any assertions or subtests that complete after this check will not be counted towards the plan.If a number, it specifies the maximum wait time in milliseconds before timing out while waiting for expected assertions and subtests to be matched. If the timeout is reached, the test will fail." } ], "optional": true } ] } ], "desc": "

This function is used to set the number of assertions and subtests that are expected to run\nwithin the test. If the number of assertions and subtests that run does not match the\nexpected count, the test will fail.

\n
\n

Note: To make sure assertions are tracked, t.assert must be used instead of assert directly.

\n
\n
test('top level test', (t) => {\n  t.plan(2);\n  t.assert.ok('some relevant assertion here');\n  t.test('subtest', () => {});\n});\n
\n

When working with asynchronous code, the plan function can be used to ensure that the\ncorrect number of assertions are run:

\n
test('planning with streams', (t, done) => {\n  function* generate() {\n    yield 'a';\n    yield 'b';\n    yield 'c';\n  }\n  const expected = ['a', 'b', 'c'];\n  t.plan(expected.length);\n  const stream = Readable.from(generate());\n  stream.on('data', (chunk) => {\n    t.assert.strictEqual(chunk, expected.shift());\n  });\n\n  stream.on('end', () => {\n    done();\n  });\n});\n
\n

When using the wait option, you can control how long the test will wait for the expected assertions.\nFor example, setting a maximum wait time ensures that the test will wait for asynchronous assertions\nto complete within the specified timeframe:

\n
test('plan with wait: 2000 waits for async assertions', (t) => {\n  t.plan(1, { wait: 2000 }); // Waits for up to 2 seconds for the assertion to complete.\n\n  const asyncActivity = () => {\n    setTimeout(() => {\n      t.assert.ok(true, 'Async assertion completed within the wait time');\n    }, 1000); // Completes after 1 second, within the 2-second wait time.\n  };\n\n  asyncActivity(); // The test will pass because the assertion is completed in time.\n});\n
\n

Note: If a wait timeout is specified, it begins counting down only after the test function finishes executing.

" }, { "textRaw": "`context.runOnly(shouldRunOnlyTests)`", "name": "runOnly", "type": "method", "meta": { "added": [ "v18.0.0", "v16.17.0" ], "changes": [] }, "signatures": [ { "params": [ { "textRaw": "`shouldRunOnlyTests` {boolean} Whether or not to run `only` tests.", "name": "shouldRunOnlyTests", "type": "boolean", "desc": "Whether or not to run `only` tests." } ] } ], "desc": "

If shouldRunOnlyTests is truthy, the test context will only run tests that\nhave the only option set. Otherwise, all tests are run. If Node.js was not\nstarted with the --test-only command-line option, this function is a\nno-op.

\n
test('top level test', (t) => {\n  // The test context can be set to run subtests with the 'only' option.\n  t.runOnly(true);\n  return Promise.all([\n    t.test('this subtest is now skipped'),\n    t.test('this subtest is run', { only: true }),\n  ]);\n});\n
" }, { "textRaw": "`context.skip([message])`", "name": "skip", "type": "method", "meta": { "added": [ "v18.0.0", "v16.17.0" ], "changes": [] }, "signatures": [ { "params": [ { "textRaw": "`message` {string} Optional skip message.", "name": "message", "type": "string", "desc": "Optional skip message.", "optional": true } ] } ], "desc": "

This function causes the test's output to indicate the test as skipped. If\nmessage is provided, it is included in the output. Calling skip() does\nnot terminate execution of the test function. This function does not return a\nvalue.

\n
test('top level test', (t) => {\n  // Make sure to return here as well if the test contains additional logic.\n  t.skip('this is skipped');\n});\n
" }, { "textRaw": "`context.todo([message])`", "name": "todo", "type": "method", "meta": { "added": [ "v18.0.0", "v16.17.0" ], "changes": [] }, "signatures": [ { "params": [ { "textRaw": "`message` {string} Optional `TODO` message.", "name": "message", "type": "string", "desc": "Optional `TODO` message.", "optional": true } ] } ], "desc": "

This function adds a TODO directive to the test's output. If message is\nprovided, it is included in the output. Calling todo() does not terminate\nexecution of the test function. This function does not return a value.

\n
test('top level test', (t) => {\n  // This test is marked as `TODO`\n  t.todo('this is a todo');\n});\n
" }, { "textRaw": "`context.test([name][, options][, fn])`", "name": "test", "type": "method", "meta": { "added": [ "v18.0.0", "v16.17.0" ], "changes": [ { "version": [ "v18.8.0", "v16.18.0" ], "pr-url": "https://github.com/nodejs/node/pull/43554", "description": "Add a `signal` option." }, { "version": [ "v18.7.0", "v16.17.0" ], "pr-url": "https://github.com/nodejs/node/pull/43505", "description": "Add a `timeout` option." } ] }, "signatures": [ { "params": [ { "textRaw": "`name` {string} The name of the subtest, which is displayed when reporting test results. **Default:** The `name` property of `fn`, or `''` if `fn` does not have a name.", "name": "name", "type": "string", "default": "The `name` property of `fn`, or `''` if `fn` does not have a name", "desc": "The name of the subtest, which is displayed when reporting test results.", "optional": true }, { "textRaw": "`options` {Object} Configuration options for the subtest. The following properties are supported:", "name": "options", "type": "Object", "desc": "Configuration options for the subtest. The following properties are supported:", "options": [ { "textRaw": "`concurrency` {number|boolean|null} If a number is provided, then that many tests would run asynchronously (they are still managed by the single-threaded event loop). If `true`, it would run all subtests in parallel. If `false`, it would only run one test at a time. If unspecified, subtests inherit this value from their parent. **Default:** `null`.", "name": "concurrency", "type": "number|boolean|null", "default": "`null`", "desc": "If a number is provided, then that many tests would run asynchronously (they are still managed by the single-threaded event loop). If `true`, it would run all subtests in parallel. If `false`, it would only run one test at a time. If unspecified, subtests inherit this value from their parent." }, { "textRaw": "`only` {boolean} If truthy, and the test context is configured to run `only` tests, then this test will be run. 
Otherwise, the test is skipped. **Default:** `false`.", "name": "only", "type": "boolean", "default": "`false`", "desc": "If truthy, and the test context is configured to run `only` tests, then this test will be run. Otherwise, the test is skipped." }, { "textRaw": "`signal` {AbortSignal} Allows aborting an in-progress test.", "name": "signal", "type": "AbortSignal", "desc": "Allows aborting an in-progress test." }, { "textRaw": "`skip` {boolean|string} If truthy, the test is skipped. If a string is provided, that string is displayed in the test results as the reason for skipping the test. **Default:** `false`.", "name": "skip", "type": "boolean|string", "default": "`false`", "desc": "If truthy, the test is skipped. If a string is provided, that string is displayed in the test results as the reason for skipping the test." }, { "textRaw": "`todo` {boolean|string} If truthy, the test is marked as `TODO`. If a string is provided, that string is displayed in the test results as the reason why the test is `TODO`. **Default:** `false`.", "name": "todo", "type": "boolean|string", "default": "`false`", "desc": "If truthy, the test is marked as `TODO`. If a string is provided, that string is displayed in the test results as the reason why the test is `TODO`." }, { "textRaw": "`timeout` {number} A number of milliseconds the test will fail after. If unspecified, subtests inherit this value from their parent. **Default:** `Infinity`.", "name": "timeout", "type": "number", "default": "`Infinity`", "desc": "A number of milliseconds the test will fail after. If unspecified, subtests inherit this value from their parent." }, { "textRaw": "`plan` {number} The number of assertions and subtests expected to be run in the test. If the number of assertions run in the test does not match the number specified in the plan, the test will fail. 
**Default:** `undefined`.", "name": "plan", "type": "number", "default": "`undefined`", "desc": "The number of assertions and subtests expected to be run in the test. If the number of assertions run in the test does not match the number specified in the plan, the test will fail." } ], "optional": true }, { "textRaw": "`fn` {Function|AsyncFunction} The function under test. The first argument to this function is a `TestContext` object. If the test uses callbacks, the callback function is passed as the second argument. **Default:** A no-op function.", "name": "fn", "type": "Function|AsyncFunction", "default": "A no-op function", "desc": "The function under test. The first argument to this function is a `TestContext` object. If the test uses callbacks, the callback function is passed as the second argument.", "optional": true } ], "return": { "textRaw": "Returns: {Promise} Fulfilled with `undefined` once the test completes.", "name": "return", "type": "Promise", "desc": "Fulfilled with `undefined` once the test completes." } } ], "desc": "

This function is used to create subtests under the current test. This function\nbehaves in the same fashion as the top level test() function.

\n
test('top level test', async (t) => {\n  await t.test(\n    'This is a subtest',\n    { only: false, skip: false, concurrency: 1, todo: false, plan: 1 },\n    (t) => {\n      t.assert.ok('some relevant assertion here');\n    },\n  );\n});\n
" }, { "textRaw": "`context.waitFor(condition[, options])`", "name": "waitFor", "type": "method", "meta": { "added": [ "v23.7.0", "v22.14.0" ], "changes": [] }, "signatures": [ { "params": [ { "textRaw": "`condition` {Function|AsyncFunction} An assertion function that is invoked periodically until it completes successfully or the defined polling timeout elapses. Successful completion is defined as not throwing or rejecting. This function does not accept any arguments, and is allowed to return any value.", "name": "condition", "type": "Function|AsyncFunction", "desc": "An assertion function that is invoked periodically until it completes successfully or the defined polling timeout elapses. Successful completion is defined as not throwing or rejecting. This function does not accept any arguments, and is allowed to return any value." }, { "textRaw": "`options` {Object} An optional configuration object for the polling operation. The following properties are supported:", "name": "options", "type": "Object", "desc": "An optional configuration object for the polling operation. The following properties are supported:", "options": [ { "textRaw": "`interval` {number} The number of milliseconds to wait after an unsuccessful invocation of `condition` before trying again. **Default:** `50`.", "name": "interval", "type": "number", "default": "`50`", "desc": "The number of milliseconds to wait after an unsuccessful invocation of `condition` before trying again." }, { "textRaw": "`timeout` {number} The poll timeout in milliseconds. If `condition` has not succeeded by the time this elapses, an error occurs. **Default:** `1000`.", "name": "timeout", "type": "number", "default": "`1000`", "desc": "The poll timeout in milliseconds. If `condition` has not succeeded by the time this elapses, an error occurs." 
} ], "optional": true } ], "return": { "textRaw": "Returns: {Promise} Fulfilled with the value returned by `condition`.", "name": "return", "type": "Promise", "desc": "Fulfilled with the value returned by `condition`." } } ], "desc": "

This method polls a condition function until that function either returns\nsuccessfully or the operation times out.

" } ], "properties": [ { "textRaw": "`context.assert`", "name": "assert", "type": "property", "meta": { "added": [ "v22.2.0", "v20.15.0" ], "changes": [] }, "desc": "

An object containing assertion methods bound to context. The top-level\nfunctions from the node:assert module are exposed here for the purpose of\ncreating test plans.

\n
test('test', (t) => {\n  t.plan(1);\n  t.assert.strictEqual(true, true);\n});\n
", "methods": [ { "textRaw": "`context.assert.fileSnapshot(value, path[, options])`", "name": "fileSnapshot", "type": "method", "meta": { "added": [ "v23.7.0", "v22.14.0" ], "changes": [] }, "signatures": [ { "params": [ { "textRaw": "`value` {any} A value to serialize to a string. If Node.js was started with the `--test-update-snapshots` flag, the serialized value is written to `path`. Otherwise, the serialized value is compared to the contents of the existing snapshot file.", "name": "value", "type": "any", "desc": "A value to serialize to a string. If Node.js was started with the `--test-update-snapshots` flag, the serialized value is written to `path`. Otherwise, the serialized value is compared to the contents of the existing snapshot file." }, { "textRaw": "`path` {string} The file where the serialized `value` is written.", "name": "path", "type": "string", "desc": "The file where the serialized `value` is written." }, { "textRaw": "`options` {Object} Optional configuration options. The following properties are supported:", "name": "options", "type": "Object", "desc": "Optional configuration options. The following properties are supported:", "options": [ { "textRaw": "`serializers` {Array} An array of synchronous functions used to serialize `value` into a string. `value` is passed as the only argument to the first serializer function. The return value of each serializer is passed as input to the next serializer. Once all serializers have run, the resulting value is coerced to a string. **Default:** If no serializers are provided, the test runner's default serializers are used.", "name": "serializers", "type": "Array", "default": "If no serializers are provided, the test runner's default serializers are used", "desc": "An array of synchronous functions used to serialize `value` into a string. `value` is passed as the only argument to the first serializer function. The return value of each serializer is passed as input to the next serializer. 
Once all serializers have run, the resulting value is coerced to a string." } ], "optional": true } ] } ], "desc": "

This function serializes value and writes it to the file specified by path.

\n
test('snapshot test with default serialization', (t) => {\n  t.assert.fileSnapshot({ value1: 1, value2: 2 }, './snapshots/snapshot.json');\n});\n
\n

This function differs from context.assert.snapshot() in the following ways:

\n
    \n
  • The snapshot file path is explicitly provided by the user.
  • Each snapshot file is limited to a single snapshot value.
  • No additional escaping is performed by the test runner.
\n

These differences allow snapshot files to better support features such as syntax\nhighlighting.

" }, { "textRaw": "`context.assert.snapshot(value[, options])`", "name": "snapshot", "type": "method", "meta": { "added": [ "v22.3.0" ], "changes": [] }, "signatures": [ { "params": [ { "textRaw": "`value` {any} A value to serialize to a string. If Node.js was started with the `--test-update-snapshots` flag, the serialized value is written to the snapshot file. Otherwise, the serialized value is compared to the corresponding value in the existing snapshot file.", "name": "value", "type": "any", "desc": "A value to serialize to a string. If Node.js was started with the `--test-update-snapshots` flag, the serialized value is written to the snapshot file. Otherwise, the serialized value is compared to the corresponding value in the existing snapshot file." }, { "textRaw": "`options` {Object} Optional configuration options. The following properties are supported:", "name": "options", "type": "Object", "desc": "Optional configuration options. The following properties are supported:", "options": [ { "textRaw": "`serializers` {Array} An array of synchronous functions used to serialize `value` into a string. `value` is passed as the only argument to the first serializer function. The return value of each serializer is passed as input to the next serializer. Once all serializers have run, the resulting value is coerced to a string. **Default:** If no serializers are provided, the test runner's default serializers are used.", "name": "serializers", "type": "Array", "default": "If no serializers are provided, the test runner's default serializers are used", "desc": "An array of synchronous functions used to serialize `value` into a string. `value` is passed as the only argument to the first serializer function. The return value of each serializer is passed as input to the next serializer. Once all serializers have run, the resulting value is coerced to a string." } ], "optional": true } ] } ], "desc": "

This function implements assertions for snapshot testing.

\n
test('snapshot test with default serialization', (t) => {\n  t.assert.snapshot({ value1: 1, value2: 2 });\n});\n\ntest('snapshot test with custom serialization', (t) => {\n  t.assert.snapshot({ value3: 3, value4: 4 }, {\n    serializers: [(value) => JSON.stringify(value)],\n  });\n});\n
" } ] }, { "textRaw": "`context.filePath`", "name": "filePath", "type": "property", "meta": { "added": [ "v22.6.0", "v20.16.0" ], "changes": [] }, "desc": "

The absolute path of the test file that created the current test. If a test file\nimports additional modules that generate tests, the imported tests will return\nthe path of the root test file.

" }, { "textRaw": "`context.fullName`", "name": "fullName", "type": "property", "meta": { "added": [ "v22.3.0", "v20.16.0" ], "changes": [] }, "desc": "

The name of the test and each of its ancestors, separated by >.

" }, { "textRaw": "`context.name`", "name": "name", "type": "property", "meta": { "added": [ "v18.8.0", "v16.18.0" ], "changes": [] }, "desc": "

The name of the test.

" }, { "textRaw": "Type: {boolean} `false` before the test is executed, e.g. in a `beforeEach` hook.", "name": "passed", "type": "boolean", "meta": { "added": [ "v21.7.0", "v20.12.0" ], "changes": [] }, "desc": "

Indicates whether the test passed.

", "shortDesc": "`false` before the test is executed, e.g. in a `beforeEach` hook." }, { "textRaw": "Type: {Error|null}", "name": "error", "type": "Error|null", "meta": { "added": [ "v21.7.0", "v20.12.0" ], "changes": [] }, "desc": "

The error representing the test's failure, if it did not pass, and null otherwise. Because the test runner wraps all thrown errors, the actual error thrown by the test is available via context.error.cause.

" }, { "textRaw": "Type: {number}", "name": "attempt", "type": "number", "meta": { "added": [ "v25.0.0" ], "changes": [] }, "desc": "

Number of times the test has been attempted.

" }, { "textRaw": "Type: {number|undefined}", "name": "workerId", "type": "number|undefined", "meta": { "added": [ "v25.8.0" ], "changes": [] }, "desc": "

The unique identifier of the worker running the current test file. This value is\nderived from the NODE_TEST_WORKER_ID environment variable. When running tests\nwith --test-isolation=process (the default), each test file runs in a separate\nchild process and is assigned a worker ID from 1 to N, where N is the number of\nconcurrent workers. When running with --test-isolation=none, all tests run in\nthe same process and the worker ID is always 1. This value is undefined when\nnot running in a test context.

\n

This property is useful for splitting resources (like database connections or\nserver ports) across concurrent test files:

\n
import { test } from 'node:test';\nimport process from 'node:process';\n\ntest('database operations', async (t) => {\n  // Worker ID is available via context\n  console.log(`Running in worker ${t.workerId}`);\n\n  // Or via environment variable (available at import time)\n  const workerId = process.env.NODE_TEST_WORKER_ID;\n  // Use workerId to allocate separate resources per worker\n});\n
" }, { "textRaw": "Type: {AbortSignal}", "name": "signal", "type": "AbortSignal", "meta": { "added": [ "v18.7.0", "v16.17.0" ], "changes": [] }, "desc": "

Can be used to abort test subtasks when the test has been aborted.

\n
test('top level test', async (t) => {\n  await fetch('some/uri', { signal: t.signal });\n});\n
" } ] }, { "textRaw": "Class: `SuiteContext`", "name": "SuiteContext", "type": "class", "meta": { "added": [ "v18.7.0", "v16.17.0" ], "changes": [] }, "desc": "

An instance of SuiteContext is passed to each suite function in order to\ninteract with the test runner. However, the SuiteContext constructor is not\nexposed as part of the API.

", "properties": [ { "textRaw": "`context.filePath`", "name": "filePath", "type": "property", "meta": { "added": [ "v22.6.0" ], "changes": [] }, "desc": "

The absolute path of the test file that created the current suite. If a test\nfile imports additional modules that generate suites, the imported suites will\nreturn the path of the root test file.

" }, { "textRaw": "`context.fullName`", "name": "fullName", "type": "property", "meta": { "added": [ "v22.3.0", "v20.16.0" ], "changes": [] }, "desc": "

The name of the suite and each of its ancestors, separated by >.

" }, { "textRaw": "`context.name`", "name": "name", "type": "property", "meta": { "added": [ "v18.8.0", "v16.18.0" ], "changes": [] }, "desc": "

The name of the suite.

" }, { "textRaw": "Type: {AbortSignal}", "name": "signal", "type": "AbortSignal", "meta": { "added": [ "v18.7.0", "v16.17.0" ], "changes": [] }, "desc": "

Can be used to abort test subtasks when the test has been aborted.

" } ] } ], "displayName": "Test runner" } ] }
