Building Workflows in Calabi Automate
Calabi Automate provides a visual, node-based workflow builder that lets your team automate data operations — from sending pipeline failure alerts to orchestrating multi-step data deliveries — without writing backend code. This page covers how to build, test, and manage production-grade workflows.
Canvas Overview
When you open a workflow in Calabi Automate, you are presented with the canvas — an infinite, pannable workspace where you drag, connect, and configure nodes.
| UI Element | Description |
|---|---|
| Canvas | The main workspace. Pan by holding Space and dragging, or use middle-mouse scroll. Zoom with Ctrl+Scroll. |
| Node Panel | Left sidebar. Search and drag nodes onto the canvas from here. |
| Execution Log | Bottom panel. Shows real-time logs of the last test or production run. |
| Workflow Toolbar | Top bar. Contains Save, Test Workflow, Active toggle, and Settings. |
| Parameters Panel | Right sidebar. Appears when a node is selected, showing all configurable fields. |
| Mini-Map | Bottom-right corner. Provides a bird's-eye overview of large workflows. |
Keyboard Shortcuts:
| Action | Shortcut |
|---|---|
| Save workflow | Ctrl+S / Cmd+S |
| Duplicate selected node | Ctrl+D / Cmd+D |
| Delete selected node | Delete / Backspace |
| Undo | Ctrl+Z / Cmd+Z |
| Select all nodes | Ctrl+A / Cmd+A |
| Zoom to fit | Ctrl+Shift+H |
| Open node panel | Tab |
Node Types
Calabi Automate workflows are built from four fundamental node categories.
Trigger Nodes
Trigger nodes start a workflow. Every workflow must have exactly one trigger.
| Node | Description |
|---|---|
| Schedule Trigger | Fires on a cron schedule (e.g., daily at 08:00 UTC). |
| Webhook | Listens for incoming HTTP POST/GET requests. Used to receive events from external systems. |
| Calabi Pipelines Trigger | Fires when a Calabi Pipelines DAG succeeds, fails, or is retried. |
| Calabi Catalogue Trigger | Fires on Calabi Catalogue events: asset created, quality test failed, glossary term pending. |
| Calabi Connect Trigger | Fires when a Calabi Connect sync job completes or fails. |
| Calabi ML Trigger | Fires when a Calabi ML experiment run finishes. |
| Email Trigger (IMAP) | Monitors an inbox and triggers when a new email matching filter criteria arrives. |
| Manual Trigger | For testing — starts the workflow immediately when you click "Test Workflow". |
Action Nodes
Action nodes perform operations: sending messages, writing data, calling APIs.
| Node | Description |
|---|---|
| Slack | Send messages, post to channels, send DMs, upload files. |
| Email (SMTP) | Send plain-text or HTML emails with optional attachments. |
| HTTP Request | Call any REST API. Supports GET, POST, PUT, PATCH, DELETE with authentication. |
| AWS S3 | Upload files to, download files from, or list objects in an S3 bucket. |
| AWS SNS | Publish messages to SNS topics for fan-out notifications. |
| PagerDuty | Create, acknowledge, or resolve PagerDuty incidents programmatically. |
| Microsoft Teams | Post messages to Teams channels or send direct messages. |
| Google Sheets | Read from or write to Google Sheets. Useful for lightweight reporting. |
| Postgres / MySQL / Redshift | Execute SQL queries against relational databases. |
Logic Nodes
Logic nodes control the flow of execution through your workflow.
| Node | Description |
|---|---|
| IF | Conditional branching. Routes data to different paths based on evaluated conditions. |
| Switch | Multi-branch routing based on a value. Equivalent to a switch statement. |
| Merge | Combines data from multiple branches into a single stream. |
| Wait | Pauses execution for a specified duration or until a webhook is called. |
| Stop and Error | Halts execution and marks the run as failed with a custom error message. |
| No Operation (NoOp) | A passthrough node useful as a placeholder or to label a branch end. |
Transform Nodes
Transform nodes manipulate the data moving through the workflow.
| Node | Description |
|---|---|
| Set | Creates or modifies fields on the data item. Supports expressions and JavaScript. |
| Function | Executes arbitrary JavaScript to transform data. Full access to the item's JSON. |
| Code | Run Python or JavaScript in a sandboxed environment for complex transformations. |
| Edit Fields | Rename, remove, or reorder fields visually without writing expressions. |
| Aggregate | Aggregate values across multiple items (sum, average, min, max, count). |
| Sort | Sort items by one or more fields, ascending or descending. |
| Filter | Remove items from the stream that do not match specified conditions. |
| Limit | Cap the number of items passed to the next node. |
| Split In Batches | Divide a large array of items into smaller batches for loop processing. |
| HTML Extract | Extract data from HTML content using CSS selectors. |
| XML | Parse XML responses into JSON or convert JSON to XML. |
| Markdown | Convert Markdown text to HTML. |
| Date & Time | Parse, format, and perform arithmetic on dates and timestamps. |
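To give a feel for the Function node, here is a minimal sketch of the kind of JavaScript it might run, assuming an n8n-style `items` array of `{ json: {...} }` objects — the item structure and field names (`duration_s`, `duration_min`) are illustrative, not Calabi Automate's actual runtime API:

```javascript
// Hypothetical input items, shaped as a Function node might receive them.
const items = [
  { json: { dag_id: "daily_sales", duration_s: 912 } },
  { json: { dag_id: "hourly_sync", duration_s: 44 } },
];

// Add a derived field to every item passing through the node,
// leaving the original fields untouched.
const transformed = items.map((item) => ({
  json: {
    ...item.json,
    duration_min: Math.round(item.json.duration_s / 60),
  },
}));
// transformed[0].json.duration_min === 15
```

Because the node returns a new array rather than mutating `items` in place, downstream nodes see a clean, predictable shape.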
Connecting Nodes
- Hover over a node to reveal its output port (a small circle on the right edge).
- Click and drag from the output port to the input port of the next node.
- Release to create the connection. A line appears representing the data flow.
- To delete a connection, click on the line and press Delete.
- Multiple outputs: Logic nodes like IF have two output ports (True / False). Drag from each independently.
- Multiple inputs: The Merge node accepts multiple input connections from different branches.
Using Expressions
Field values in node parameters can reference data from upstream nodes using expressions enclosed in {{ }}:
- `{{ $json.dag_id }}` — a field from the current item
- `{{ $node["Slack"].json.ts }}` — a field from a specific upstream node
- `{{ $now.toISO() }}` — the current timestamp
- `{{ $env.SLACK_CHANNEL }}` — an environment variable
The expression editor provides autocomplete for item fields, node outputs, and built-in functions.
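Conceptually, expression resolution is template substitution against the current item's JSON. The following self-contained sketch imitates how a `{{ $json.field }}` placeholder resolves — it is an illustration of the idea, not Calabi Automate's actual expression engine:

```javascript
// Toy resolver: replaces each {{ $json.path }} placeholder with the
// matching (possibly nested) value from the item's JSON.
function resolveExpressions(template, json) {
  return template.replace(/\{\{\s*\$json\.([\w.]+)\s*\}\}/g, (_, path) =>
    path.split(".").reduce((obj, key) => (obj == null ? "" : obj[key]), json)
  );
}

// Example item as a trigger might emit it (field names are illustrative).
const item = { dag_id: "daily_sales", state: "failed" };
const message = resolveExpressions(
  "DAG {{ $json.dag_id }} is {{ $json.state }}",
  item
);
// message === "DAG daily_sales is failed"
```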
Sample Workflow: Pipeline Failure → Slack Alert
The following example walks through a complete workflow that listens for pipeline failure events and posts a formatted Slack alert.
Step-by-step:
- Calabi Pipelines Trigger — configured to fire on any DAG state change.
- IF node — checks `{{ $json.state }} == "failed"`. Routes to the True branch only.
- Set node — formats the alert message using expressions: DAG name, task ID, run URL.
- HTTP Request node — calls the Calabi Pipelines log API to fetch the last 20 lines of the error log.
- Slack node — posts a rich Block Kit message to `#data-alerts` with run details and the log excerpt.
- PagerDuty node — creates an incident only when `{{ $json.severity }} == "critical"`, keeping low-severity alerts Slack-only.
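The routing logic of the IF, Set, and PagerDuty steps above can be sketched as plain JavaScript. The field names (`state`, `dag_id`, `run_url`, `severity`) mirror the example, and the alert payload shape is an assumption for illustration:

```javascript
// Sketch of the branch-and-format logic: drop successful runs,
// build an alert object, and flag critical events for paging.
function routeAndFormat(event) {
  if (event.state !== "failed") return null; // IF node: False branch ends the run

  return {
    text: `:rotating_light: DAG ${event.dag_id} failed`,
    run_url: event.run_url,
    page: event.severity === "critical", // PagerDuty only for critical events
  };
}

const alert = routeAndFormat({
  state: "failed",
  dag_id: "daily_sales",
  run_url: "https://pipelines.example/runs/123",
  severity: "critical",
});
// alert.page === true
```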
Testing Workflows
Testing validates your workflow logic against real or sample data before you activate it in production.
Test with Sample Data
- Click Test Workflow in the toolbar.
- The Manual Trigger fires immediately (or Calabi sends a sample event payload for event-based triggers).
- Each node executes and highlights green (success) or red (error).
- Click any node to inspect its input and output data in the right panel.
- The Execution Log at the bottom shows timing, item counts, and any errors.
Pinning Data
You can pin the output of a node to freeze it during testing:
- Run the workflow once to capture real output.
- Click a node, go to the Output tab in the right panel.
- Click Pin Data — subsequent test runs use this pinned data for that node, even if the upstream trigger does not fire.
This is essential for testing Calabi Pipelines Trigger workflows without waiting for an actual pipeline failure.
Partial Execution
Right-click any node and choose Execute Node to run just that node and everything upstream, stopping there. Useful for debugging mid-workflow.
Error Handling
Node-Level Error Handling
Every node has an Error Output port (shown in red) that activates if the node fails. Connect it to an error-handling branch:
- Hover over a node → click the red error circle on its bottom.
- Drag to a Slack or Email node that sends an alert.
- Optionally add a Stop and Error node at the end to mark the run as failed.
Workflow-Level Error Trigger
For global error handling across all workflows:
- Create a dedicated Error Handler workflow with the Error Trigger node as its trigger.
- In any workflow's Settings → Error Workflow, select your Error Handler.
- When any node in the main workflow fails unexpectedly, the Error Handler fires automatically, receiving the workflow name, node name, and error message.
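Inside the Error Handler workflow, a Set or Function node typically turns that payload into an alert message. The sketch below assumes a payload with `workflow`, `node`, and `message` fields, matching the description above — the exact field names Calabi Automate delivers may differ:

```javascript
// Hypothetical formatter for the Error Trigger payload; field names
// (workflow, node, message) are assumptions based on the docs text.
function formatErrorAlert(event) {
  return `Workflow "${event.workflow}" failed at node "${event.node}": ${event.message}`;
}

const text = formatErrorAlert({
  workflow: "Daily Report",
  node: "HTTP Request",
  message: "timeout after 30s",
});
// text === 'Workflow "Daily Report" failed at node "HTTP Request": timeout after 30s'
```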
Retry Configuration
Per-node retry settings (available in node settings):
| Setting | Default | Description |
|---|---|---|
| Retry On Fail | Off | Automatically retry the node if it returns an error |
| Max Tries | 3 | Maximum number of execution attempts, including the first |
| Wait Between Tries | 1000 ms | Delay between retry attempts (exponential backoff available) |
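The retry semantics in the table can be sketched as follows — up to Max Tries attempts, waiting Wait Between Tries between them, doubling the delay when exponential backoff is enabled. This mirrors the settings above and is not Calabi Automate's internal implementation:

```javascript
// Retry wrapper matching the table's semantics: maxTries total attempts,
// waitMs between attempts, optionally doubling the wait each time.
async function runWithRetry(fn, { maxTries = 3, waitMs = 1000, exponential = false } = {}) {
  for (let attempt = 1; attempt <= maxTries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt === maxTries) throw err; // out of tries: surface the error
      const delay = exponential ? waitMs * 2 ** (attempt - 1) : waitMs;
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}
```

With exponential backoff on and the defaults above, the waits would be 1000 ms, then 2000 ms, before the third and final attempt.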
Continue On Fail
Enable Continue On Fail in a node's settings to let the workflow proceed even if that node errors. The error is captured in $json.error and can be inspected by downstream nodes.
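A downstream node can then branch on that captured error. The sketch below assumes the error lands on the item as an object with a `message` field — the exact error shape is an assumption for illustration:

```javascript
// Hypothetical triage step after a node with Continue On Fail enabled:
// items that errored carry an error field instead of stopping the run.
function triageItem(item) {
  if (item.json.error) {
    return { route: "error-branch", reason: item.json.error.message };
  }
  return { route: "happy-path" };
}

const failed = { json: { error: { message: "HTTP 500 from S3" } } };
// triageItem(failed).route === "error-branch"
```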
Workflow Versioning
Calabi Automate automatically creates a version snapshot every time you save a workflow.
Viewing Version History
- Open the workflow.
- Click ••• → History.
- A timeline shows all saved versions with timestamps and the user who saved each.
- Click any version to preview its canvas state without activating it.
Restoring a Version
- In the History panel, click the version you want to restore.
- Click Restore This Version.
- The workflow canvas reloads with the historical configuration. It is not yet saved — review and click Save to make it active.
Restoring a version does not automatically deactivate and reactivate the workflow. If the workflow is active, changes take effect on the next trigger execution after you save.
Workflow Settings
Access via ••• → Settings:
| Setting | Description |
|---|---|
| Timezone | Overrides the default UTC timezone for Schedule Trigger nodes. |
| Save Execution Data | Choose to save all runs, only errors, or none (for high-volume workflows). |
| Execution Order | v0 (legacy) or v1 (default). Keep v1 for all new workflows. |
| Error Workflow | Link to your global Error Handler workflow. |
| Caller Policy | Controls which workflows can call this one as a sub-workflow. |
Organizing Workflows
As your library grows, use these features to keep workflows manageable:
- Tags: Add free-form tags (e.g., `alerting`, `reporting`, `governance`) and filter by tag in the workflow list.
- Folders: Organize workflows into folders per team or domain.
- Naming Convention: Adopt a consistent naming pattern, e.g. `[Category] — [Action] — [Target]` → `Alerting — Pipeline Failure — Slack`.
- Descriptions: Add a plain-text description in Settings → Description to document the workflow's purpose for future maintainers.
Related Pages
- Templates — Start from a pre-built workflow
- Managing Credentials — Configure Slack, email, and API credentials
- Calabi Pipelines — Learn about the pipelines that trigger alerts