Agent Integration Overview
Expose Triggo workflows to AI agents via MCP or REST.
Triggo workflows don't have to be triggered from the canvas. Once you publish a workflow, it becomes a callable action that external AI agents — Claude Desktop, Cursor, Windsurf, Claude Code, or your own Node/Python/Go code — can discover and execute. This page is a map. It explains the two surfaces Triggo exposes, when to pick which, and the end-to-end shape of integrating an agent.
What agent integration means
When you build a workflow in the canvas and publish it, Triggo exposes that workflow as a remotely callable action with a typed input schema, an executable endpoint, and run-status tracking. An AI agent — whether a hosted assistant or a script you wrote yourself — can list available actions, inspect their input schemas, invoke them with arguments, and poll for results. Credentials, rate limiting, approvals, and scope enforcement are handled by Triggo; the agent only needs an API key.
Two surfaces
Triggo offers two ways for agents to reach your workflows. They share authentication and workspace state but differ in transport and intended caller.
MCP (Model Context Protocol)
A streamable HTTP endpoint at `POST /mcp`. The server identifies itself as `triggo-runtime` version 1.0.0. It exposes 19 tools across four categories:

- Actions (6) — `list_actions`, `get_action`, `run_action`, `get_run_status`, `list_runs`, `approve_run`
- Workflows (7) — `list_workflows`, `get_workflow`, `create_workflow`, `update_workflow`, `delete_workflow`, `deploy_workflow`, `open_workflow`
- Connectors (3) — `list_connectors`, `get_connector_operations`, `get_operation_schema`
- Builds (3) — `build_connector`, `get_build_status`, `validate_connector`
MCP is designed for native LLM tool discovery — the client auto-advertises tools to the model without you wiring HTTP calls by hand. If your agent host already speaks MCP, this is the shortest path.
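As a rough sketch, pointing an MCP-aware client at the endpoint usually comes down to one config entry. The field names below follow the common `mcpServers` convention; the URL and key are placeholders, and the exact file location and schema vary by client, so consult your client's MCP documentation:

```json
{
  "mcpServers": {
    "triggo": {
      "type": "http",
      "url": "https://app.triggo.example/mcp",
      "headers": {
        "Authorization": "Bearer trg_your-key-here"
      }
    }
  }
}
```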
Runtime REST
Plain HTTP endpoints under /api/v1/runtime/. These exist for callers that don't (or shouldn't) take an MCP SDK dependency: your own scripts, webhook-triggered bots, cron jobs, or one workflow calling another via the HTTP node.
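The REST path needs nothing beyond the standard library. A minimal sketch, using the run endpoint named later on this page; the base URL, the action slug, and the `{"input": ...}` body shape are placeholders, so check the runtime REST reference for the exact request contract:

```python
import json
import urllib.request

# Placeholder base URL; substitute your Triggo deployment.
BASE_URL = "https://app.triggo.example/api/v1/runtime"

def build_run_request(api_key: str, slug: str, arguments: dict) -> urllib.request.Request:
    """Build the POST that starts a run of a published action.

    The {"input": ...} body shape is an assumption for illustration;
    verify it against the runtime REST reference.
    """
    return urllib.request.Request(
        url=f"{BASE_URL}/actions/{slug}/run",
        data=json.dumps({"input": arguments}).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Sending it and reading the runId is two more lines:
#   with urllib.request.urlopen(build_run_request(key, "my-action", {...})) as resp:
#       run_id = json.load(resp)["runId"]
```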
Both surfaces share the same underlying services — actions resolved from the same catalog, runs written to the same journal, rate limits counted against the same Redis key.
When to use which
| Caller | Surface |
|---|---|
| Claude Desktop, Cursor, Windsurf, any MCP-aware client | MCP |
| Claude Code (.claude/settings.json) | MCP |
| Your Node / Python / Go script | REST (simpler; no MCP SDK) |
| One Triggo workflow calling another | REST (via HTTP node) |
| Webhook-triggered backend code | REST |
Rule of thumb: if the caller is an LLM that needs auto-discovered tools, use MCP. Otherwise use REST.
Authentication model
Both surfaces authenticate the same way — a Bearer API key issued from the Triggo dashboard:
```
Authorization: Bearer trg_<your-key>
```

Keys are scoped. Each MCP tool and each REST endpoint declares the scopes it requires, and the request is rejected at the edge if the scopes don't match. The canonical scopes are:
- `actions:read` — list and inspect published actions
- `actions:run` — execute actions
- `runs:read` — read run status and history
- `approvals:decide` — approve or reject runs waiting on a human gate
- `connectors:read` — inspect the connector catalog
- `connectors:write` — build / validate connectors
API keys are created in the dashboard under Agent Setup. The plaintext key is displayed exactly once at creation; at rest Triggo stores a salted SHA-256 hash plus the 8-character prefix for UI display. Lose the key and you issue a new one — there is no recovery.
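The at-rest scheme described above (salted SHA-256 hash plus an 8-character prefix for display) can be sketched in a few lines. This is an illustration of the described scheme, not Triggo's actual code:

```python
import hashlib
import secrets

def store_api_key(plaintext: str) -> dict:
    """What survives at rest: a salted SHA-256 hash plus the 8-char prefix."""
    salt = secrets.token_hex(16)
    digest = hashlib.sha256((salt + plaintext).encode()).hexdigest()
    return {"prefix": plaintext[:8], "salt": salt, "hash": digest}

def verify_api_key(plaintext: str, record: dict) -> bool:
    """Recompute the salted hash and compare in constant time."""
    candidate = hashlib.sha256((record["salt"] + plaintext).encode()).hexdigest()
    return secrets.compare_digest(candidate, record["hash"])
```

Because only the hash is stored, a lost plaintext key really is unrecoverable; the prefix exists purely so the dashboard can show which key is which.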
See API keys for the full create/rotate/revoke flow.
End-to-end shape (6 steps)
```
┌───────────────┐   ┌──────────┐   ┌────────────┐   ┌───────────────┐
│ 1. Create key │ → │ 2. Build │ → │ 3. Publish │ → │ 4. Discover   │
│    (scopes)   │   │   flow   │   │  (action)  │   │  list_actions │
└───────────────┘   └──────────┘   └────────────┘   └───────┬───────┘
                                                            ↓
                                  ┌─────────────┐   ┌─────────────┐
                                  │ 6. Poll     │ ← │ 5. Invoke   │
                                  │   get_run   │   │  run_action │
                                  └─────────────┘   └─────────────┘
```

1. Create an API key in Triggo with the scopes your agent needs (usually `actions:read` + `actions:run` + `runs:read`).
2. Build a workflow in the canvas — trigger, actions, field mappings.
3. Publish the workflow. Publishing turns it into a callable action with a stable slug and a typed input schema.
4. Discover the action from the agent side — `list_actions` over MCP or `GET /api/v1/runtime/actions` over REST.
5. Invoke the action — `run_action` (MCP) or `POST /api/v1/runtime/actions/:slug/run` (REST). The response includes a `runId`.
6. Read the result — `get_run_status` (MCP) or `GET /api/v1/runtime/runs/:runId` (REST). If the workflow has an approval gate, the run stays in `pending_approval` until someone resolves it.
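Steps 4 through 6 reduce to an invoke-then-poll loop. A minimal sketch: the transport is abstracted behind a `call` function (real HTTP in production, a stub in tests), and the terminal status names other than `pending_approval` are assumptions, so verify them against the run-status reference:

```python
import time

def run_and_wait(call, slug, arguments, poll_interval=1.0, timeout=60.0):
    """Invoke a published action, then poll until the run settles.

    `call(method, path, body)` is any transport returning a parsed JSON
    dict; "succeeded" / "failed" are assumed terminal status names.
    """
    run = call("POST", f"/actions/{slug}/run", {"input": arguments})
    run_id = run["runId"]
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = call("GET", f"/runs/{run_id}", None)
        if status["status"] in ("succeeded", "failed", "pending_approval"):
            return status  # pending_approval waits on a human, so surface it
        time.sleep(poll_interval)
    raise TimeoutError(f"run {run_id} did not settle within {timeout}s")
```

Returning on `pending_approval` rather than waiting through it is deliberate: the gate is resolved by a person, so the agent should report the state instead of spinning on it.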
Known limitations
We want you to integrate with open eyes, so here's the honest state of things today.
- Workflow and connector-build MCP tools need scopes that aren't yet exposed on API keys. The MCP workflow tools require `workflows:read` / `workflows:write`, and the connector build tools require `connectors:write`. Those scope strings aren't all in the canonical API-key scope list yet, which means Bearer-key access to those specific tools is effectively gated right now. Action and run tools work fine over MCP with a Bearer key. OAuth-session access (i.e., using Triggo from a logged-in browser context) is unaffected. We're closing this gap — keep an eye on rate limits and the release notes for updates.
- Approval flows via API key are still being hardened. Expect the `approve_run` / `POST /runs/:runId/approve` surface to change, and check the release notes before designing a production approval loop on top of a Bearer key. A dedicated page on approvals will document the stable contract once it ships.
- Rate limits are per-key, not per-user. A single agent using one key can exhaust the window for everything sharing that key. Issue one key per agent or per integration to keep failure modes isolated.
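Because limits are counted per key, a polite caller should also back off when it does hit the window. A minimal exponential-backoff sketch, assuming your HTTP layer raises some exception on HTTP 429 (`RateLimitError` here is hypothetical, not a Triggo type):

```python
import time

class RateLimitError(Exception):
    """Stand-in for whatever your HTTP layer raises on HTTP 429."""

def with_retry(call, max_attempts=4, base_delay=0.5, sleep=time.sleep):
    """Run `call`, retrying on RateLimitError with exponential backoff.

    `sleep` is injectable so tests don't actually wait.
    """
    for attempt in range(max_attempts):
        try:
            return call()
        except RateLimitError:
            if attempt == max_attempts - 1:
                raise  # out of attempts; let the caller decide
            sleep(base_delay * (2 ** attempt))
```

Pairing one key with one wrapper like this per integration keeps a noisy agent from silently starving every other caller that shares its window.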
Related
- API keys — create, rotate, and revoke
- MCP quickstart — Claude Desktop / Cursor / Claude Code configuration
- Publishing workflows as actions — how a canvas workflow becomes a callable action
- Rate limits — per-tier limits, headers, retry behavior