
Rate Limits

Per-tier request limits, response headers, and 429 handling for the runtime API.


Every request to the runtime REST API (/api/v1/runtime/*) and the MCP endpoint (POST /mcp) passes through a per-API-key sliding-window rate limiter. This page documents the limits, the headers you get back, and how to back off correctly.

Per-tier runtime rate limits

Requests are counted per API key over a 60-second sliding window (Redis ZSET). The per-tier ceiling is:

| Tier | Requests / min | Active API keys | Notes |
| --- | --- | --- | --- |
| free | 60 | 1 | Default fallback; also applies if the tier cannot be resolved. |
| starter | 300 | 3 | |
| pro | 1000 | 10 | |
| business | 1000 | unlimited | Same req/min ceiling as pro; contact sales for custom limits. |

Limits are enforced per key, not per workspace — two keys on the same workspace each get the full tier budget. The API-key quota is separate and independent from the request budget: a free workspace can only have one active key at a time, but that key still gets 60 req/min.
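To make the window semantics concrete, here is an illustrative in-memory sliding-window limiter. The server side uses a Redis ZSET keyed by API key; this local sketch only mirrors the same admit/reject logic and is not the server implementation.

```typescript
// Illustrative sliding-window limiter. A request is admitted only if
// fewer than `limit` requests were admitted in the trailing window.
class SlidingWindowLimiter {
  private timestamps: number[] = [];

  constructor(
    private limit: number,           // e.g. 60 for the free tier
    private windowMs: number = 60_000, // 60-second window
  ) {}

  // Returns true if the request is admitted, false if it would be a 429.
  tryAcquire(now: number = Date.now()): boolean {
    // Drop entries that have fallen out of the trailing window.
    this.timestamps = this.timestamps.filter((t) => now - t < this.windowMs);
    if (this.timestamps.length >= this.limit) return false;
    this.timestamps.push(now);
    return true;
  }
}
```

Because the window slides, retrying immediately after a rejection gains nothing: the oldest timestamp has to age out before a slot frees up.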

Rate limit headers

Every runtime response — 2xx, 4xx, and 5xx alike once auth succeeds — carries three lowercase headers. 429 responses add a fourth.

| Header | Type | Meaning |
| --- | --- | --- |
| x-ratelimit-limit | integer | Tier ceiling (requests per 60s window). |
| x-ratelimit-remaining | integer | Requests you have left in the current window. |
| x-ratelimit-reset | integer, epoch seconds (UTC) | Wall-clock time at which the window rolls. Not a relative delta. |
| retry-after | integer, seconds | Only set on 429 responses. Number of seconds to wait before retrying. |

The server emits these header names in lowercase. HTTP header names are case-insensitive, so any compliant client will match them either way, but some logging pipelines preserve case as sent — expect x-ratelimit-*, not X-RateLimit-*.
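A small helper can pull the trio off a fetch() Response. The helper name and return shape below are illustrative, not part of the API; note that Headers.get() is case-insensitive, so it works regardless of how an intermediary re-cases the names.

```typescript
// Hypothetical helper: read the rate-limit trio from a fetch() Response.
interface RateLimitInfo {
  limit: number;     // x-ratelimit-limit
  remaining: number; // x-ratelimit-remaining
  resetAt: Date;     // x-ratelimit-reset (epoch seconds, UTC)
}

function readRateLimit(headers: Headers): RateLimitInfo | null {
  const limit = headers.get("x-ratelimit-limit");
  const remaining = headers.get("x-ratelimit-remaining");
  const reset = headers.get("x-ratelimit-reset");
  if (limit === null || remaining === null || reset === null) return null;
  return {
    limit: Number(limit),
    remaining: Number(remaining),
    resetAt: new Date(Number(reset) * 1000), // epoch seconds → Date
  };
}
```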

What happens at the limit

When the sliding window is full, the server returns HTTP 429 with this body:

{
  "error": "Rate limit exceeded",
  "code": "RATE_LIMIT_EXCEEDED"
}

…plus retry-after in seconds and the usual x-ratelimit-* trio (remaining will be 0). Authentication is still required to get a 429 — a request with a bad key gets 401 before the limiter runs.
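If you inspect response bodies rather than just status codes, a narrow type guard for the body shown above keeps the check explicit. This guard is a sketch written against that body shape, not a published SDK type.

```typescript
// Hypothetical type guard for the 429 body documented above.
interface RateLimitError {
  error: string;
  code: "RATE_LIMIT_EXCEEDED";
}

function isRateLimitError(body: unknown): body is RateLimitError {
  return (
    typeof body === "object" &&
    body !== null &&
    (body as Record<string, unknown>).code === "RATE_LIMIT_EXCEEDED"
  );
}
```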

Client strategy. Respect retry-after exactly. Do not retry sooner — the window is sliding, so hammering it resets nothing and just wastes quota on the next slot. If you're orchestrating many agents, serialize or shard them so peak concurrency stays below your tier ceiling.
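One way to keep peak concurrency below the ceiling is a client-side semaphore that all agents share. This is a sketch; maxConcurrent is a number you pick for your own fleet, not a server parameter.

```typescript
// Sketch: cap in-flight requests across a fleet of agents.
class Semaphore {
  private queue: Array<() => void> = [];
  private permits: number;

  constructor(maxConcurrent: number) {
    this.permits = maxConcurrent;
  }

  private async acquire(): Promise<void> {
    if (this.permits > 0) {
      this.permits--;
      return;
    }
    await new Promise<void>((resolve) => this.queue.push(resolve));
  }

  private release(): void {
    const next = this.queue.shift();
    if (next) next(); // hand the permit directly to a waiter
    else this.permits++;
  }

  async run<T>(task: () => Promise<T>): Promise<T> {
    await this.acquire();
    try {
      return await task();
    } finally {
      this.release();
    }
  }
}
```

Handing the freed permit directly to the next waiter (rather than incrementing the counter and letting waiters race) keeps the cap strict even when many tasks finish at once.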

User-run rate limit (separate layer)

A separate per-user limit caps pipeline executions at 100 runs per rolling hour, regardless of how those runs were triggered (runtime API, MCP, web UI, scheduler, webhook). This layers on top of the per-tier request limit — you can be under your HTTP budget and still be blocked here.

Runs rejected at this layer are not HTTP 429s on the runtime surface; they fail inside the executor with a user-facing message ("Превышен лимит запусков (100 в час)", i.e. "Run limit exceeded (100 per hour)") and stop the run before any step executes. If you are orchestrating a large backfill, stage it to stay under 100 runs/hour per user, or request a tier upgrade.
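Staging a backfill can be as simple as spacing runs evenly: 3600 s / 100 runs = one run every 36 s keeps any rolling hour at or under the cap. The sketch below assumes a triggerRun callback standing in for your actual call to the runtime API.

```typescript
// Sketch: pace a backfill so one user never exceeds 100 runs per
// rolling hour. Even spacing of 3600s / 100 = 36s is sufficient.
const RUNS_PER_HOUR = 100;
const SPACING_MS = (60 * 60 * 1000) / RUNS_PER_HOUR; // 36_000 ms

// triggerRun is a placeholder for your real runtime-API call.
async function pacedBackfill(
  items: string[],
  triggerRun: (item: string) => Promise<void>,
): Promise<void> {
  for (const item of items) {
    const started = Date.now();
    await triggerRun(item);
    // Sleep off whatever the run itself didn't consume of the 36s slot.
    const elapsed = Date.now() - started;
    if (elapsed < SPACING_MS) {
      await new Promise((r) => setTimeout(r, SPACING_MS - elapsed));
    }
  }
}
```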

Applies to both REST and MCP

Yes. MCP tool calls flow through the same Bearer-auth + sliding-window check as REST calls — run_action via MCP consumes one request against the 60s window, just like POST /actions/:slug/run via REST. Mixing transports on the same key shares one budget.

Client pattern

A minimal back-off loop that respects retry-after:

async function callRuntime(url: string, init: RequestInit): Promise<Response> {
  for (let attempt = 0; attempt < 5; attempt++) {
    const res = await fetch(url, init);
    if (res.status !== 429) return res;

    const retryAfter = Number(res.headers.get("retry-after") ?? "1");
    await new Promise((r) => setTimeout(r, retryAfter * 1000));
  }
  throw new Error("Rate-limit retries exhausted");
}

Two rules: do not retry faster than retry-after, and cap the number of attempts so a persistent 429 fails loudly instead of pinning a worker forever.

Upgrading

If you are consistently hitting the ceiling, upgrade the workspace plan from Billing in the Triggo web app. Higher tiers raise the req/min ceiling and the active-key quota. For sustained throughput beyond business, contact support.
