Triggo Documentation
Workflow Builder

System nodes

Reference for every built-in system node — flow control, data shaping, and utilities — including inputs, outputs, and a runnable example for each.

System nodes are the built-in building blocks Triggo provides alongside connector actions — the pieces that shape data, route control flow, call LLMs, or pause execution without talking to a third-party API. Reach for a system node when what you need isn't "call service X", but "decide which branch to take", "do this once per order", "collapse a list into a total", or "pause until tomorrow morning". Connector actions handle the verbs of specific integrations; system nodes handle the grammar that connects them.

This page documents every system node currently exposed by the node picker. Three types from earlier drafts of the product — a generic HTTP system node, a Transform node, and a Delay node — are not system nodes in Triggo: HTTP lives in the dedicated http-request connector, Delay is covered by the Wait node below, and the legacy transform type is now rejected at save time (use Code or field-mapping templates instead).

The Name field

Every node's inspector has a Name field that overrides the default header label on the canvas. The field is available on all system nodes as well as actions, triggers, Code, and LLM nodes, and it's purely cosmetic — it changes what the canvas header displays, not how the node is referenced in field mappings (those still use the node's id). Use it to turn Condition into Paid only? or Set into Build Sheets row when a canvas gets busy.

Where to find them — the node picker

In the node picker, system nodes live under Logic, split into three subcategories that mirror the sections on this page:

  • Flow — Condition, Switch, Loop, Merge, Wait, Stop & Error. These change which nodes run, in what order, or whether the run pauses.
  • Data — Set, Filter, Aggregate, Split Out. These transform the payload without calling an external service.
  • Utilities — Code, LLM, RAG Retrieve, Noop, Respond to Webhook. A grab-bag of general-purpose tools that don't fit Flow or Data.

All 15 types are declared in LOGIC_TYPES and grouped for display by LOGIC_CATEGORIES.

Flow

Flow nodes decide which nodes run and when. They branch the DAG, loop over items, join parallel branches back together, pause the run, or stop it with a user-defined error.

Condition

A two-way branch: evaluates a group of conditions against upstream data and routes execution down the true branch or the false branch.

Inputs (config):

  • combinator — "AND" | "OR", required. How to combine conditions.
  • conditions[] — required, at least one. Each entry is { id, field, operator, value? }. field is a template path into upstream data (for example trigger.payment.status); operator is one of 23 supported operators covering text, number, boolean, existence, list, and date comparisons.
  • caseSensitive — optional boolean for text operators.
  • label — optional display label.

Branching model: the Condition node has two source handles, true and false. Outgoing edges set sourceHandle to one of those values; the engine marks every edge on the non-taken branch as skipped, and every downstream node on that branch is written to the journal as step_skipped.

Output: { branch: "true" | "false", matched: <per-condition detail> }.
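
The evaluation model can be sketched as follows. This is a simplified illustration, not the engine's implementation: evaluateOperator covers only two of the 23 operators, and the path resolution shown is a minimal dotted-path walk.

```typescript
type Cond = { id: string; field: string; operator: string; value?: unknown };

// Walk a dotted template path like "trigger.payment.status" through upstream data.
function resolvePath(data: Record<string, unknown>, path: string): unknown {
  let current: unknown = data;
  for (const key of path.split(".")) {
    if (current === null || typeof current !== "object") return undefined;
    current = (current as Record<string, unknown>)[key];
  }
  return current;
}

// Illustrative operator table -- the real engine supports 23 operators.
function evaluateOperator(op: string, left: unknown, right: unknown): boolean {
  switch (op) {
    case "TEXT_EXACTLY_MATCHES": return String(left) === String(right);
    case "EXISTS": return left !== undefined && left !== null;
    default: throw new Error(`unknown operator: ${op}`);
  }
}

// AND takes the true branch only if every condition matches; OR if at least one does.
function evaluateGroup(
  combinator: "AND" | "OR",
  conditions: Cond[],
  data: Record<string, unknown>,
): boolean {
  const results = conditions.map(c =>
    evaluateOperator(c.operator, resolvePath(data, c.field), c.value),
  );
  return combinator === "AND" ? results.every(Boolean) : results.some(Boolean);
}
```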

Example: route paid orders to the fulfillment branch and everything else to a "needs review" branch.

Trigger: new order
  → Condition (field: "trigger.status", operator: TEXT_EXACTLY_MATCHES, value: "paid")
      ├── true  → Google Sheets · Append row (fulfillment log)
      └── false → Slack · Post message (needs-review channel)

The full operator catalogue is 8 text, 6 number, 2 boolean, 2 existence, 4 list, and 1 date operator.

Switch

An N-way router: evaluates branches in order and takes the first that matches. Use this when Condition's true/false isn't enough — for example, routing by country, plan tier, or event name.

Inputs (config):

  • mode — "value" | "expression", required.
  • "value" mode: resolves matchField against upstream data and compares it to each branch's value with string equality.
  • "expression" mode: resolves each branch's expression template and takes the first truthy result.
  • matchField — optional; required in value mode.
  • branches[] — required, at least one. Each is { id, name, value?, expression? }.
  • fallback — required boolean. If true, an unmatched run takes the fallback handle; if false, nothing is taken and the output marks "no_match".
  • label — optional.

Branching model: one source handle per branch (branch_<id>), plus an optional fallback handle. Non-taken branches are pruned the same way as Condition — their edges are marked skipped and their downstream nodes are journaled as step_skipped.

Output: { taken: "branch_<id>" | "fallback" | "no_match" }.
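
Value-mode routing reduces to a first-match scan, sketched below. This is a simplified model; the `branch_<id>` handle naming follows the convention described above.

```typescript
type Branch = { id: string; name: string; value?: string; expression?: string };

// Value mode: compare the resolved matchField against each branch's value with
// string equality, in declared order; the first match wins. If nothing matches,
// take the fallback handle when enabled, otherwise report "no_match".
function routeByValue(matchValue: unknown, branches: Branch[], fallback: boolean): string {
  const hit = branches.find(b => b.value !== undefined && String(matchValue) === b.value);
  if (hit) return `branch_${hit.id}`;
  return fallback ? "fallback" : "no_match";
}
```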

Example: route by subscription tier.

Trigger: new signup
  → Switch (mode: value, matchField: "trigger.plan")
      ├── branch "pro"      → Email · Send onboarding template A
      ├── branch "business" → Email · Send onboarding template B
      └── fallback          → Email · Send onboarding template Free

Loop

Iterates over an upstream array, running the loop body (via the internal bodyExecutor) as a subgraph once per item. Iterations are sequential by default.

Inputs (config):

  • items — required template string resolving to an array, e.g. {{trigger.orders}}.
  • itemVariable — optional, defaults to "item"; the identifier by which body nodes reference the current iteration's value.
  • batchSize — optional integer ≥ 1. Items are processed in chunks of this size; items within a chunk are dispatched together, but the executor treats iterations as sequential — do not rely on parallel execution. Defaults to 1 (fully sequential).
  • label — optional.

Body: the node has an attached body subgraph drawn on the canvas. The engine runs that subgraph once per item; body output is collected into iterations. Nested loops are capped by MAX_LOOP_NESTING_DEPTH. On the first iteration error the loop stops (fail-fast) and the node fails.

Output: { iterations: [{ index, item, output }], totalItems: number }.
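
The batchSize chunking described above can be sketched as:

```typescript
// Chunk the resolved items array into batches of batchSize. Within the executor
// iterations remain sequential -- batching only groups dispatch, it does not
// imply parallel execution.
function toBatches<T>(items: T[], batchSize: number): T[][] {
  if (!Number.isInteger(batchSize) || batchSize < 1) {
    throw new Error("batchSize must be an integer >= 1");
  }
  const batches: T[][] = [];
  for (let i = 0; i < items.length; i += batchSize) {
    batches.push(items.slice(i, i + batchSize));
  }
  return batches;
}
```

With the default batchSize of 1, every item lands in its own batch, which is the fully sequential behavior.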

Example: append one row to Google Sheets per order in a webhook batch.

Trigger: Inbound webhook (orders.batch)
  → Loop (items: "{{trigger.orders}}", batchSize: 5)
      body: Google Sheets · Append row ({{item.id}}, {{item.total}})
  → Slack · Post "{{loop.totalItems}} orders logged"

Merge

A wait-for-all join: holds until every incoming edge is either completed, failed, or skipped, then emits a single object keyed by each edge's targetHandle. Use this to re-converge parallel branches after a Switch or parallel fan-out.

Inputs (config):

  • mode — fixed to "wait_for_all".
  • inputs — required integer between 2 and 8. Declared fan-in degree (the canvas uses this to render the correct number of target handles).
  • label — optional.

Output: { [targetHandle]: <upstream output or null> }. An edge whose branch was pruned (skipped) contributes null for its handle rather than blocking the merge.
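
A rough model of the merge output (the handle names used in the test, input_1 and input_2, are hypothetical):

```typescript
type EdgeResult = {
  targetHandle: string;
  status: "completed" | "failed" | "skipped";
  output?: unknown;
};

// Once every incoming edge has settled, emit one object keyed by targetHandle.
// A pruned (skipped) edge or a missing output contributes null rather than
// blocking the merge.
function mergeOutputs(edges: EdgeResult[]): Record<string, unknown> {
  const merged: Record<string, unknown> = {};
  for (const edge of edges) {
    merged[edge.targetHandle] = edge.status === "completed" ? edge.output ?? null : null;
  }
  return merged;
}
```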

Example: after a Switch with pro and business branches that both enrich data differently, merge the results before a shared "send welcome email" step.

Switch
  ├── branch "pro"      → HubSpot · Enrich contact ─┐
  └── branch "business" → HubSpot · Enrich account ─┤→ Merge (inputs: 2) → Gmail · Send welcome

Wait

Pauses the run for a duration or until a specific timestamp. Internally, the executor enqueues a delayed resume job via WaitService, writes a step_waiting journal event, flips the run status to "waiting", and exits the executor loop. A dedicated worker picks up the job when it's due and resumes the DAG from the Wait node's successors.

Inputs (config):

  • mode — "duration" | "until_time", required.
  • "duration" mode: amount (positive number) and unit ("seconds" | "minutes" | "hours" | "days"). Capped at 30 days.
  • "until_time" mode: until, an ISO-8601 datetime string.
  • label — optional.

Output: { resumeAt: <ISO-8601> } (written to the journal; downstream nodes don't typically read Wait's output).
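
Duration-mode arithmetic, including the 30-day cap, works out to roughly the following (a sketch of the semantics, not WaitService itself):

```typescript
const UNIT_MS = { seconds: 1_000, minutes: 60_000, hours: 3_600_000, days: 86_400_000 } as const;
const MAX_WAIT_MS = 30 * UNIT_MS.days; // duration mode is capped at 30 days

// resumeAt = now + amount * unit, clamped to the 30-day cap.
function resumeAt(now: Date, amount: number, unit: keyof typeof UNIT_MS): string {
  if (amount <= 0) throw new Error("amount must be a positive number");
  const delayMs = Math.min(amount * UNIT_MS[unit], MAX_WAIT_MS);
  return new Date(now.getTime() + delayMs).toISOString();
}
```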

Example: send a follow-up 24 hours after signup.

Trigger: new signup
  → Wait (duration: 24 hours)
  → Gmail · Send follow-up email

Stop and Error

Halts the current branch with a user-defined error. Use this to fail the run explicitly when upstream data indicates an invalid or unsupported case (for example, a webhook payload in an unexpected shape).

Inputs (config):

  • errorMessage — required template string. Resolved against upstream outputs before being thrown.
  • errorCode — optional, defaults to "WORKFLOW_STOPPED". Recorded on the journal's failure event for easy filtering.
  • label — optional.

Output: none — the node throws a WorkflowStoppedError, the engine records a step_failed event, and the run fails the same way an action failure would (including auto-pause accounting). Pair with a Condition upstream when you only want to stop in specific cases.

Example:

Condition (operator: DOES_NOT_EXIST, field: "trigger.customer.email")
  └── true → Stop and Error (errorMessage: "Missing customer email on {{trigger.id}}", errorCode: "MISSING_EMAIL")

Data

Data nodes reshape the payload flowing through the workflow without calling an external service. They don't branch and they don't pause — they just read an upstream value and emit a new one.

Set

Assigns one or more keyed values onto the payload. Think of it as a mini-mapper: useful for naming intermediate values, coercing types, or adding computed fields before handing data to a connector action.

Inputs (config):

  • assignments[] — required, at least one. Each is { id, key, value, type? }.
  • key must match ^[a-zA-Z_][a-zA-Z0-9_]*$.
  • value is a template string resolved against all upstream node outputs.
  • type is one of "string" | "number" | "boolean" | "json"; the resolved value is coerced accordingly ("json" parses a string value as JSON, or passes non-strings through).
  • Boolean coercion is strict: only the literal boolean true or the string "true" yields true. Every other value — including 1, "1", "yes", and "True" — becomes false.
  • includeInputFields — optional boolean, default false. When true, the output starts as a shallow clone of the first upstream node's output, then overlays the assignments.
  • label — optional.

Output: a plain object { [key]: <coerced value>,... }, optionally pre-populated from upstream.
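
The per-assignment coercion rules, including the strict boolean rule, can be sketched as:

```typescript
type AssignmentType = "string" | "number" | "boolean" | "json";

// Coerce a resolved template value to the declared type. Boolean coercion is
// strict: only true or "true" yields true -- 1, "1", "yes", and "True" do not.
function coerce(value: unknown, type?: AssignmentType): unknown {
  switch (type) {
    case "string": return String(value);
    case "number": return Number(value);
    case "boolean": return value === true || value === "true";
    case "json": return typeof value === "string" ? JSON.parse(value) : value;
    default: return value; // no type declared: pass the resolved value through
  }
}
```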

Example: compute a display name and a total in minor units before appending to Sheets.

HTTP enrich customer
  → Set
      assignments:
        - key: "display_name", value: "{{http.firstName}} {{http.lastName}}", type: string
        - key: "total_cents",  value: "{{trigger.total}}", type: number
  → Google Sheets · Append row (display_name, total_cents)

Filter

Filters an upstream array by evaluating a condition set against each item.

Inputs (config):

  • items — required template resolving to an array, e.g. {{trigger.orders}}.
  • combinator — "AND" | "OR", required.
  • conditions[] — required, at least one. Same { id, field, operator, value? } shape as the Condition node; evaluated against each array item as its own context.
  • label — optional.

Output: { items: <kept>[], kept: number, dropped: number }. Throws if items doesn't resolve to an array.
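
The output shape can be modeled as follows; matches() is a hypothetical stand-in for the per-item condition-set evaluation:

```typescript
// Each array item becomes its own evaluation context. The output reports the
// kept items plus kept/dropped counts; a non-array input throws, as documented.
function filterItems<T>(items: unknown, matches: (item: T) => boolean) {
  if (!Array.isArray(items)) throw new Error("items did not resolve to an array");
  const kept = (items as T[]).filter(matches);
  return { items: kept, kept: kept.length, dropped: items.length - kept.length };
}
```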

Example: keep only paid orders for downstream processing.

Trigger: orders batch
  → Filter (items: "{{trigger.orders}}", combinator: AND,
            conditions: [{ field: "status", operator: TEXT_EXACTLY_MATCHES, value: "paid" }])
  → Loop (items: "{{filter.items}}")
      body: Google Sheets · Append row

Aggregate

Runs one or more aggregate operations over an upstream array. Optionally buckets by a group-by key first.

Inputs (config):

  • items — required template resolving to an array.
  • operations[] — required, at least one. Each operation is { id, key, op, field?, separator? }:
  • op — count | sum | avg | min | max | concat | collect | first | last.
  • field is required except for count | first | last | collect (where it's optional).
  • separator applies to concat (default "").
  • key is the output property name.
  • groupBy — optional. When set, output becomes { groups: { [bucketKey]: { [opKey]: value } } } keyed by String(item[groupBy] ?? "_null").
  • label — optional.

Output: flat { [opKey]: value, ... } when groupBy is unset, or { groups: { ... } } when it is. Empty-input edge cases: min/max → null, avg → 0, first/last → null.
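
A simplified model of the per-operation semantics, including the empty-input edge cases (this sketch skips groupBy and numeric-validity checks the engine may perform):

```typescript
type AggOp = { id: string; key: string; op: string; field?: string; separator?: string };

// Run one aggregate operation over an items array. Empty-input edge cases
// follow the documented behavior: min/max -> null, avg -> 0, first/last -> null.
function runOp(items: Record<string, unknown>[], op: AggOp): unknown {
  const f = op.field;
  const values: unknown[] = f ? items.map(i => i[f]) : items;
  const nums = values.map(v => Number(v));
  switch (op.op) {
    case "count": return items.length;
    case "sum": return nums.reduce((a, b) => a + b, 0);
    case "avg": return items.length === 0 ? 0 : nums.reduce((a, b) => a + b, 0) / items.length;
    case "min": return items.length === 0 ? null : Math.min(...nums);
    case "max": return items.length === 0 ? null : Math.max(...nums);
    case "concat": return values.join(op.separator ?? "");
    case "collect": return values;
    case "first": return items.length === 0 ? null : values[0];
    case "last": return items.length === 0 ? null : values[values.length - 1];
    default: throw new Error(`unknown op: ${op.op}`);
  }
}
```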

Example: total revenue and order count per country from a Filter output.

Filter (paid orders)
  → Aggregate
      items: "{{filter.items}}"
      groupBy: "country"
      operations:
        - key: "count",    op: count
        - key: "revenue",  op: sum, field: "total"
  → Slack · Post JSON-formatted summary

Split Out

Unwraps a nested array so downstream nodes can treat it as the primary payload, while optionally preserving the sibling fields from the source object.

Inputs (config):

  • items — required template resolving to an array, e.g. {{fetch.response.orders}}.
  • includeParent — optional boolean, default false. When true, spreads the source node's output into the result (with the source array field omitted so the array isn't present under two keys).
  • itemKey — optional, default "item". Reserved for a future per-item multi-emit mode; currently Split Out emits a single combined output.
  • label — optional.

Output: { items, count } (when includeParent: false), or {...parentFields, items, count } when includeParent: true.
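
The includeParent behavior can be sketched as below. In the engine the array field is read from the items template; here it is passed explicitly for illustration:

```typescript
// With includeParent, spread the source output but omit the array's own field
// so the list isn't present under two keys.
function splitOut(
  source: Record<string, unknown>,
  arrayField: string,
  includeParent: boolean,
): Record<string, unknown> {
  const items = source[arrayField];
  if (!Array.isArray(items)) throw new Error(`${arrayField} did not resolve to an array`);
  if (!includeParent) return { items, count: items.length };
  const { [arrayField]: _omitted, ...parentFields } = source;
  return { ...parentFields, items, count: items.length };
}
```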

Example: a paginated HTTP response returns { page, pageSize, orders: [...] }. Split Out lets the Loop iterate orders while keeping page available downstream.

HTTP · GET /orders?page=1
  → Split Out (items: "{{http.body.orders}}", includeParent: true)
  → Loop (items: "{{split.items}}")
      body: Google Sheets · Append row ({{item.id}}, page={{split.page}})

Utilities

The rest. General-purpose nodes for embedded code, LLM calls, retrieval-augmented generation, pass-through routing, and synchronous webhook responses.

Code

Runs user-supplied JavaScript or TypeScript in an isolated-vm sandbox. Types are stripped via Node's native stripTypeScriptTypes before evaluation. Use Code when the shape change is too complex for Set, or when you need to call standard library utilities (JSON, Date, array methods) directly.

Inputs (config):

  • code — required string. Must export a named entry function (function run(inputs, utils) {... } by convention; also supports export default function name(...)).
  • fieldMappings — optional object. Each key becomes a property on the inputs argument; each value is a template expression resolved against all upstream node outputs.

Output: whatever the entry function returns. Uncaught errors are journaled with code CODE_EXECUTION_FAILED. console.log output is captured and logged server-side (not included in the node output).

Example: build a composite idempotency key from three trigger fields.

// code
function run(inputs) {
  const raw = `${inputs.orderId}:${inputs.customerId}:${inputs.total}`;
  return { idempotencyKey: raw, length: raw.length };
}
// fieldMappings:
//   orderId:    "{{trigger.id}}"
//   customerId: "{{trigger.customer.id}}"
//   total:      "{{trigger.total}}"

See the dedicated Code Node Reference for sandbox semantics and limits.

LLM

Generates text with an LLM provider. Interpolates an upstream-derived prompt template and calls Anthropic or OpenAI via the Vercel AI SDK.

Inputs (config): validated by LlmNodeConfigSchema.

  • provider — "anthropic" | "openai", required.
  • model — model ID string, required.
  • userPromptTemplate — required. {{path.to.field}} placeholders resolve against the resolved step input.
  • systemPrompt — optional, or use preset to select a bundled system prompt from LLM_PRESETS.
  • temperature, maxTokens — optional.

Each call is wrapped in a 60-second timeout and combined with the pipeline-level abort signal. Failures are journaled with code LLM_TIMEOUT or LLM_FAILED.

Output: { text, usage: { inputTokens, outputTokens }, model, provider, durationMs }.

Example: summarize a support ticket in one sentence.

Zendesk trigger
  → LLM (provider: anthropic, model: claude-sonnet,
         userPromptTemplate: "Summarize in one sentence: {{trigger.description}}")
  → Slack · Post "Ticket {{trigger.id}}: {{llm.text}}"

RAG Retrieve

Retrieval-augmented generation: takes a query from upstream data, generates an embedding, queries a configured vector store over HTTP, and synthesizes an answer with an LLM using the retrieved documents as context.

Inputs (config): validated by ragRetrieveConfigSchema — the single source of truth consumed by both the executor and the canvas inspector.

  • query — required templated string, e.g. "{{trigger.question}}".
  • vectorStoreUrl — required http(s) URL.
  • bodyTemplate — required request body template sent to the vector store, supports {{embedding}} and {{topK}}.
  • responsePath — optional, default "result". Where to find the list of documents in the response.
  • documentContentField — optional, default "payload.text". Field on each retrieved document that holds its text.
  • headers — optional Record<string, string>.
  • topK — optional integer 1–100, default 5.
  • embeddingConnectionId, embeddingModel — embedding provider; default model "text-embedding-3-small".
  • llmSystemPrompt, llmModel, maxTokens — synthesis LLM config; maxTokens defaults to 1024.
  • continueOnFailure — optional boolean, default false. When true, errors produce a step_failed_continued event with a partial output instead of halting.

Output: { query, retrievedDocuments, documentCount, llmResponse, model, embeddingModel, durationMs }.
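
How {{embedding}} and {{topK}} might be substituted into bodyTemplate can be sketched as follows. The template used in the usage check is a hypothetical Qdrant-style search body, not a shape the node requires:

```typescript
// Substitute the two supported placeholders into the vector-store request body.
// Illustrative sketch only -- the engine's actual substitution may differ.
function renderBodyTemplate(template: string, embedding: number[], topK: number): string {
  return template
    .replace("{{embedding}}", JSON.stringify(embedding))
    .replace("{{topK}}", String(topK));
}
```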

Example: answer an inbound support question from a product-docs vector store.

Inbound webhook (question)
  → RAG Retrieve
      query: "{{trigger.question}}"
      vectorStoreUrl: https://vector.example/search
      topK: 5
  → Respond to Webhook (body: "{{rag.llmResponse}}")

Noop

A pass-through node. Copies its first upstream output into its own output and marks itself completed.

Inputs (config):

  • label — optional.

Output: the first upstream node's output, or null if there is no upstream edge or that upstream output is missing.

Use it for layout anchors on the canvas, placeholder nodes during authoring, or as a join point when you want a named step in the journal without any side effect.

Respond to Webhook

Writes the HTTP response for a pipeline run that was triggered by a webhook in respond-node mode. The run is created by the inbound HTTP request; this node publishes the actual response back to the waiting API process (via Redis pub/sub) so the client receives a real synchronous response instead of a generic 202 Accepted.

Inputs (config):

  • mode — "json" | "text" | "no_body", required.
  • statusCode — integer 100–599, default 200.
  • headers — optional Record<string, string>.
  • body — optional template string. In "json" mode it must resolve to valid JSON; in "text" mode it's sent verbatim; in "no_body" mode it's ignored.
  • label — optional.

Preconditions: the triggering webhook must be configured in respond-node mode (both httpRequestId and synchronousHandlerId are present on the run). If they aren't, the handler throws respond_to_webhook: no pending HTTP response.

Output: { responded: true, statusCode }.

Example: synchronous validation endpoint.

Inbound webhook (respond-node mode)
  → Condition (field: "trigger.body.email", operator: EXISTS)
      ├── true  → Respond to Webhook (mode: json, statusCode: 200,
      │             body: "{\"ok\":true,\"id\":\"{{trigger.body.email}}\"}")
      └── false → Respond to Webhook (mode: json, statusCode: 400,
                    body: "{\"ok\":false,\"error\":\"email required\"}")
Related pages

  • Field Mapping — how {{node.field}} templates are resolved and how undefined propagates when upstream data is missing.
  • Error Handling — retries, continueOnFailure, circuit breaker, auto-pause, and how a Stop-and-Error node interacts with them.
  • Passing Data — how node outputs become inputs for downstream nodes.
  • Code Node Reference — deeper reference for the Code sandbox.
