Tags: triggerdotdev/trigger.dev
chore: release v4.4.3 (#3182)

## Summary

2 new features, 2 improvements.

## Improvements

- Add syncSupabaseEnvVars to pull database connection strings and save them as trigger.dev environment variables ([#3152](#3152))
- Auto-cancel in-flight dev runs when the CLI exits, using a detached watchdog process that survives pnpm SIGKILL ([#3191](#3191))

## Server changes

These changes affect the self-hosted Docker image and Trigger.dev Cloud:

- A new Errors page for viewing and tracking errors that cause runs to fail:
  - Errors are grouped using error fingerprinting
  - View top errors for a time period, filter by task, or search the text
  - View occurrences over time
  - View all the runs for an error and bulk replay them ([#3172](#3172))
- Add sidebar tabs (Options, AI, Schema) to the Test page for schemaTask payload generation and schema viewing. ([#3188](#3188))

<details>
<summary>Raw changeset output</summary>

# Releases

## @trigger.dev/build@4.4.3

### Patch Changes

- Add syncSupabaseEnvVars to pull database connection strings and save them as trigger.dev environment variables ([#3152](#3152))
- Updated dependencies:
  - `@trigger.dev/core@4.4.3`

## trigger.dev@4.4.3

### Patch Changes

- Auto-cancel in-flight dev runs when the CLI exits, using a detached watchdog process that survives pnpm SIGKILL ([#3191](#3191))
- Updated dependencies:
  - `@trigger.dev/core@4.4.3`
  - `@trigger.dev/build@4.4.3`
  - `@trigger.dev/schema-to-json@4.4.3`

## @trigger.dev/core@4.4.3

### Patch Changes

- Auto-cancel in-flight dev runs when the CLI exits, using a detached watchdog process that survives pnpm SIGKILL ([#3191](#3191))

## @trigger.dev/python@4.4.3

### Patch Changes

- Updated dependencies:
  - `@trigger.dev/core@4.4.3`
  - `@trigger.dev/build@4.4.3`
  - `@trigger.dev/sdk@4.4.3`

## @trigger.dev/react-hooks@4.4.3

### Patch Changes

- Updated dependencies:
  - `@trigger.dev/core@4.4.3`

## @trigger.dev/redis-worker@4.4.3

### Patch Changes

- Updated dependencies:
  - `@trigger.dev/core@4.4.3`

## @trigger.dev/rsc@4.4.3

### Patch Changes

- Updated dependencies:
  - `@trigger.dev/core@4.4.3`

## @trigger.dev/schema-to-json@4.4.3

### Patch Changes

- Updated dependencies:
  - `@trigger.dev/core@4.4.3`

## @trigger.dev/sdk@4.4.3

### Patch Changes

- Updated dependencies:
  - `@trigger.dev/core@4.4.3`

</details>

---------

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
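The new `syncSupabaseEnvVars` build extension above is registered in `trigger.config.ts` like other sync extensions. A minimal sketch, modeled on how extensions such as `syncVercelEnvVars` are wired up; the import path and options here are assumptions, not confirmed by the release notes, so check the extension's docs for the exact signature:

```typescript
import { defineConfig } from "@trigger.dev/sdk";
// Assumed import path, by analogy with other sync extensions;
// the real entry point may differ.
import { syncSupabaseEnvVars } from "@trigger.dev/build/extensions/supabase";

export default defineConfig({
  project: "<project ref>",
  build: {
    extensions: [
      // At deploy time, pulls Supabase database connection strings and
      // saves them as trigger.dev environment variables.
      syncSupabaseEnvVars(),
    ],
  },
});
```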
chore: release v4.4.2 (#3127)

# trigger.dev v4.4.2

## Summary

2 new features, 2 improvements, 8 bug fixes.

## Improvements

- Add input streams for bidirectional communication with running tasks. Define typed input streams with `streams.input<T>({ id })`, then consume inside tasks via `.wait()` (suspends the process), `.once()` (waits for the next message), or `.on()` (subscribes to a continuous stream). Send data from backends with `.send(runId, data)` or from frontends with the new `useInputStreamSend` React hook. ([#3146](#3146))
- Add a PAYLOAD_TOO_LARGE error for graceful recovery when batch trigger items exceed the maximum payload size ([#3137](#3137))

## Bug fixes

- Fix slow batch queue processing by removing spurious cooloff on concurrency blocks and fixing a race condition where retry attempt counts were not atomically updated during message re-queue. ([#3079](#3079))
- fix(sdk): batch triggerAndWait variants now return the correct run.taskIdentifier instead of unknown ([#3080](#3080))

## Server changes

These changes affect the self-hosted Docker image and Trigger.dev Cloud:

- Two-level tenant dispatch architecture for batch queue processing. Replaces the single master queue with a two-level index: a dispatch index (tenant → shard) and per-tenant queue indexes (tenant → queues). This enables O(1) tenant selection and fair scheduling across tenants regardless of queue count, improving batch queue processing performance. ([#3133](#3133))
- Add input streams with API routes for sending data to running tasks, SSE reading, and waitpoint creation. Includes a Redis cache for fast `.send()` to `.wait()` bridging, dashboard span support for input stream operations, and s2-lite support with a configurable S2 endpoint, access token skipping, and S2-Basin headers for self-hosted deployments. Adds s2-lite to Docker Compose for local development. ([#3146](#3146))
- Speed up batch queue processing by disabling cooloff and increasing the batch queue processing concurrency limits on the cloud:
  - Pro plan: increased to 50 from 10.
  - Hobby plan: increased to 10 from 5.
  - Free plan: increased to 5 from 1. ([#3079](#3079))
- Move the batch queue global rate limiter from the FairQueue claim phase to the BatchQueue worker queue consumer for accurate per-item rate limiting. Add a worker queue depth cap to prevent unbounded growth that could cause visibility timeouts. ([#3166](#3166))
- Fix a race condition in the waitpoint system where a run could be blocked by a completed waitpoint but never resumed because of a PostgreSQL MVCC issue. This was most likely to occur when creating a waitpoint via `wait.forToken()` at the same moment as completing the token with `wait.completeToken()`. Other types of waitpoints (timed, child runs) were not affected. ([#3075](#3075))
- Fix metrics dashboard chart series colors going out of sync and widgets not reloading stale data when scrolled back into view ([#3126](#3126))
- Gracefully handle oversized batch items instead of aborting the stream. When an NDJSON batch item exceeds the maximum size, the parser now emits an error marker instead of throwing, allowing the batch to seal normally. The oversized item becomes a pre-failed run with the `PAYLOAD_TOO_LARGE` error code, while the other items in the batch process successfully. This prevents `batchTriggerAndWait` from seeing connection errors and retrying with exponential backoff. Also fixes the NDJSON parser not consuming the remainder of an oversized line split across multiple chunks, which caused "Invalid JSON" errors on subsequent lines. ([#3137](#3137))
- Require that the user is an admin during an impersonation session. Previously only the impersonation cookie was checked; now the real user's admin flag is verified on every request. If admin has been revoked, the session falls back to the real user's ID. ([#3078](#3078))

<details>
<summary>Raw changeset output</summary>

# Releases

## @trigger.dev/build@4.4.2

### Patch Changes

- Updated dependencies:
  - `@trigger.dev/core@4.4.2`

## trigger.dev@4.4.2

### Patch Changes

- Updated dependencies:
  - `@trigger.dev/build@4.4.2`
  - `@trigger.dev/core@4.4.2`
  - `@trigger.dev/schema-to-json@4.4.2`

## @trigger.dev/python@4.4.2

### Patch Changes

- Updated dependencies:
  - `@trigger.dev/sdk@4.4.2`
  - `@trigger.dev/build@4.4.2`
  - `@trigger.dev/core@4.4.2`

## @trigger.dev/react-hooks@4.4.2

### Patch Changes

- Add input streams for bidirectional communication with running tasks. Define typed input streams with `streams.input<T>({ id })`, then consume inside tasks via `.wait()` (suspends the process), `.once()` (waits for the next message), or `.on()` (subscribes to a continuous stream). Send data from backends with `.send(runId, data)` or from frontends with the new `useInputStreamSend` React hook. ([#3146](#3146))

  Upgrade S2 SDK from 0.17 to 0.22 with support for custom endpoints (s2-lite) via the new `endpoints` configuration, `AppendRecord.string()` API, and `maxInflightBytes` session option.

- Updated dependencies:
  - `@trigger.dev/core@4.4.2`

## @trigger.dev/redis-worker@4.4.2

### Patch Changes

- Fix slow batch queue processing by removing spurious cooloff on concurrency blocks and fixing a race condition where retry attempt counts were not atomically updated during message re-queue. ([#3079](#3079))
- Updated dependencies:
  - `@trigger.dev/core@4.4.2`

## @trigger.dev/rsc@4.4.2

### Patch Changes

- Updated dependencies:
  - `@trigger.dev/core@4.4.2`

## @trigger.dev/schema-to-json@4.4.2

### Patch Changes

- Updated dependencies:
  - `@trigger.dev/core@4.4.2`

## @trigger.dev/sdk@4.4.2

### Patch Changes

- Add input streams for bidirectional communication with running tasks. Define typed input streams with `streams.input<T>({ id })`, then consume inside tasks via `.wait()` (suspends the process), `.once()` (waits for the next message), or `.on()` (subscribes to a continuous stream). Send data from backends with `.send(runId, data)` or from frontends with the new `useInputStreamSend` React hook. ([#3146](#3146))

  Upgrade S2 SDK from 0.17 to 0.22 with support for custom endpoints (s2-lite) via the new `endpoints` configuration, `AppendRecord.string()` API, and `maxInflightBytes` session option.

- fix(sdk): batch triggerAndWait variants now return the correct run.taskIdentifier instead of unknown ([#3080](#3080))
- Add a PAYLOAD_TOO_LARGE error for graceful recovery when batch trigger items exceed the maximum payload size ([#3137](#3137))
- Updated dependencies:
  - `@trigger.dev/core@4.4.2`

## @trigger.dev/core@4.4.2

</details>

---------

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
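The input-stream flow described in the notes above can be sketched end to end. Only the `streams.input<T>({ id })`, `.wait()`, `.once()`, `.on()`, and `.send(runId, data)` calls are taken from the release notes; the import path and the surrounding task wiring are assumptions, not a confirmed implementation:

```typescript
// Assumed import: the release notes do not state where `streams` is exported.
import { task, streams } from "@trigger.dev/sdk";

// Define a typed input stream by id.
const approvals = streams.input<{ approved: boolean }>({ id: "approvals" });

export const reviewTask = task({
  id: "review-task",
  run: async (payload: { documentId: string }) => {
    // .wait() suspends the process until a message arrives on the stream.
    // Alternatives per the notes: .once() waits for the next message,
    // .on() subscribes to a continuous stream.
    const decision = await approvals.wait();
    return { documentId: payload.documentId, approved: decision.approved };
  },
});

// From a backend, push data into a specific run:
//   await approvals.send(runId, { approved: true });
// From a frontend, use the new useInputStreamSend React hook instead.
```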
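The oversized-item handling described in the notes above (emit an error marker instead of throwing, so the batch seals normally) can be sketched as a minimal NDJSON parser. The function name, item shape, and size cap are illustrative; this is not the server's actual implementation:

```typescript
// Illustrative cap; the real maximum payload size is much larger.
const MAX_ITEM_BYTES = 64;

type BatchItem =
  | { ok: true; value: unknown }
  | { ok: false; error: "PAYLOAD_TOO_LARGE" };

function parseNdjson(stream: string): BatchItem[] {
  const items: BatchItem[] = [];
  for (const line of stream.split("\n")) {
    if (line.trim() === "") continue;
    if (new TextEncoder().encode(line).length > MAX_ITEM_BYTES) {
      // Pre-fail the item rather than throwing: the batch can still
      // seal normally and the other items process successfully.
      items.push({ ok: false, error: "PAYLOAD_TOO_LARGE" });
      continue;
    }
    items.push({ ok: true, value: JSON.parse(line) });
  }
  return items;
}
```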
Merge remote-tracking branch 'origin/main' into feat/compute-workload-manager
feat(supervisor): add flag to enable compute snapshots

Gates snapshot/restore behaviour independently of compute mode. When disabled, VMs won't receive the metadata URL and suspend/restore are no-ops. Defaults to off so compute mode can be used without snapshots.
chore: release v4.4.1 (#3100)

This PR was opened by the [Changesets release](https://github.com/changesets/action) GitHub action. When you're ready to do a release, you can merge this and publish to npm yourself, or [set up this action to publish automatically](https://github.com/changesets/action#with-publishing). If you're not ready to do a release yet, that's fine; whenever you add more changesets to main, this PR will be updated.

# Releases

## @trigger.dev/build@4.4.1

### Patch Changes

- Updated dependencies:
  - `@trigger.dev/core@4.4.1`

## trigger.dev@4.4.1

### Patch Changes

- Add OTEL metrics pipeline for task workers. Workers collect process CPU/memory, Node.js runtime metrics (event loop utilization, event loop delay, heap usage), and user-defined custom metrics via `otel.metrics.getMeter()`. Metrics are exported to ClickHouse with 10-second aggregation buckets and 1m/5m rollups, and are queryable through the dashboard query engine with typed attribute columns, `prettyFormat()` for human-readable values, and AI query support. ([#3061](#3061))
- Updated dependencies:
  - `@trigger.dev/build@4.4.1`
  - `@trigger.dev/core@4.4.1`
  - `@trigger.dev/schema-to-json@4.4.1`

## @trigger.dev/python@4.4.1

### Patch Changes

- Updated dependencies:
  - `@trigger.dev/sdk@4.4.1`
  - `@trigger.dev/build@4.4.1`
  - `@trigger.dev/core@4.4.1`

## @trigger.dev/react-hooks@4.4.1

### Patch Changes

- Updated dependencies:
  - `@trigger.dev/core@4.4.1`

## @trigger.dev/redis-worker@4.4.1

### Patch Changes

- Updated dependencies:
  - `@trigger.dev/core@4.4.1`

## @trigger.dev/rsc@4.4.1

### Patch Changes

- Updated dependencies:
  - `@trigger.dev/core@4.4.1`

## @trigger.dev/schema-to-json@4.4.1

### Patch Changes

- Updated dependencies:
  - `@trigger.dev/core@4.4.1`

## @trigger.dev/sdk@4.4.1

### Patch Changes

- Add OTEL metrics pipeline for task workers. Workers collect process CPU/memory, Node.js runtime metrics (event loop utilization, event loop delay, heap usage), and user-defined custom metrics via `otel.metrics.getMeter()`. Metrics are exported to ClickHouse with 10-second aggregation buckets and 1m/5m rollups, and are queryable through the dashboard query engine with typed attribute columns, `prettyFormat()` for human-readable values, and AI query support. ([#3061](#3061))
- Updated dependencies:
  - `@trigger.dev/core@4.4.1`

## @trigger.dev/core@4.4.1

---------

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
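The custom-metrics entry point above, `otel.metrics.getMeter()`, follows the standard OpenTelemetry metrics API. A sketch of recording a counter from inside a task, assuming `otel` is exported from `@trigger.dev/sdk` (the import path and task wiring are assumptions; the meter and counter calls are standard OTel):

```typescript
// Assumed import: the release notes only name the otel.metrics.getMeter() call.
import { task, otel } from "@trigger.dev/sdk";

// Standard OTel metrics API: obtain a meter, then create instruments from it.
const meter = otel.metrics.getMeter("import-worker");
const processedItems = meter.createCounter("processed_items", {
  description: "Number of items processed per run",
});

export const importTask = task({
  id: "import-task",
  run: async (payload: { items: string[] }) => {
    for (const item of payload.items) {
      // ... process item ...
      // Each increment is exported to ClickHouse in 10-second buckets
      // and queryable from the dashboard, per the notes above.
      processedItems.add(1, { source: "import" });
    }
    return { count: payload.items.length };
  },
});
```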