Metrics
Metrics in CalabiIQ are reusable, named aggregation expressions defined on a dataset. Instead of rewriting ROUND(100.0 * SUM(CASE WHEN passed THEN 1 ELSE 0 END) / COUNT(*), 2) in every chart, you define it once as pass_rate and reference it by name across all charts built on that dataset.
Why Use Metrics
| Without saved metrics | With saved metrics |
|---|---|
| Paste the same SQL expression into each chart | Define the expression once on the dataset |
| Risk of inconsistency across charts | All charts share the canonical definition |
| Hard to update — change in one chart only | Update the metric once, all charts reflect it |
| No governance — anyone can redefine differently | Certify metrics to signal authoritative definitions |
Types of Metrics
| Type | Definition | Example |
|---|---|---|
| Simple | Aggregate function applied to one column | COUNT(*), SUM(revenue), AVG(score) |
| Calculated | Custom SQL expression | SUM(revenue) / NULLIF(SUM(cost), 0) |
| Certified | Any metric marked as the official definition | pass_rate — reviewed and approved |
Creating a Metric
Metrics are created in the Dataset editor.
- Navigate to Data → Datasets.
- Find the dataset and click the Edit (pencil) icon.
- Click the Metrics tab.
- Click + Add metric.
- Fill in the metric fields:
| Field | Description | Example |
|---|---|---|
| Metric name | Internal identifier (no spaces) | pass_rate |
| Verbose name | Human-friendly label in chart dropdowns | Pass Rate (%) |
| Metric type | COUNT, SUM, AVG, MIN, MAX, or CUSTOM | CUSTOM |
| Expression | SQL expression for CUSTOM type | See below |
| Description | Markdown-enabled explanation | Percentage of attempts resulting in pass |
| D3 format | Number format applied in charts | .1% (percentage), ,.0f (integer with comma), .2f (2 decimal places) |
| Warning text | Yellow caution shown in chart editor | Excludes attempts from beta testers |
- Click Save on the metric row, then Save on the dataset.
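Putting the fields together, a complete pass_rate definition might look like the following sketch. The dictionary key names here are illustrative and chosen to match the field labels above; CalabiIQ's internal field names may differ.

```python
# Illustrative metric definition assembled from the fields in the table above.
# Key names are assumptions for readability, not CalabiIQ's exact schema.
pass_rate = {
    "metric_name": "pass_rate",           # internal identifier, no spaces
    "verbose_name": "Pass Rate (%)",      # label shown in chart dropdowns
    "metric_type": "CUSTOM",
    "expression": (
        "ROUND(100.0 * SUM(CASE WHEN passed THEN 1 ELSE 0 END) "
        "/ NULLIF(COUNT(*), 0), 2)"
    ),
    "description": "Percentage of attempts resulting in pass",
    "d3format": ".2f",                    # expression already returns 0-100
    "warning_text": "Excludes attempts from beta testers",
}
```

Note the D3 format choice: because the expression multiplies by 100, .2f is used rather than .1% (see the D3 Number Formats section).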
Metric Expressions — SQL Reference
Simple aggregations

```sql
-- Total exam attempts
COUNT(*)

-- Distinct candidates
COUNT(DISTINCT candidate_id)

-- Total revenue
SUM(amount_usd)

-- Average score (excludes nulls automatically)
AVG(score)

-- Highest score recorded
MAX(score)

-- Earliest attempt date
MIN(exam_date)
```
Calculated metrics

```sql
-- Pass rate as a percentage
ROUND(
  100.0 * SUM(CASE WHEN passed = TRUE THEN 1 ELSE 0 END) / NULLIF(COUNT(*), 0),
  2
)

-- Revenue per attempt
SUM(amount_usd) / NULLIF(COUNT(*), 0)

-- 30-day rolling retention rate (use with an appropriate date filter)
COUNT(DISTINCT CASE WHEN days_since_first_exam <= 30 THEN candidate_id END)
  / NULLIF(COUNT(DISTINCT candidate_id), 0)

-- Net Promoter Score proxy
AVG(CASE
  WHEN nps_score >= 9 THEN 100
  WHEN nps_score <= 6 THEN -100
  ELSE 0
END)
```
Conditional / filtered aggregations

```sql
-- Revenue from paid plans only
SUM(CASE WHEN plan_type != 'free' THEN amount_usd ELSE 0 END)

-- Attempts from the EMEA region
COUNT(CASE WHEN region = 'EMEA' THEN 1 END)
```
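To sanity-check a calculated metric before saving it, you can mirror the arithmetic on a small sample of rows. This sketch reproduces the pass-rate expression in plain Python; the `attempts` data is hypothetical and stands in for the dataset's `passed` column.

```python
# Hypothetical sample of the boolean 'passed' column
attempts = [True, True, False, True, False]

def pass_rate(passed_flags):
    """Mirrors ROUND(100.0 * SUM(CASE WHEN passed THEN 1 ELSE 0 END)
    / NULLIF(COUNT(*), 0), 2)."""
    if not passed_flags:
        # NULLIF(COUNT(*), 0) makes the SQL result NULL on empty input
        return None
    passed = sum(1 for p in passed_flags if p)
    return round(100.0 * passed / len(passed_flags), 2)

print(pass_rate(attempts))  # -> 60.0 (3 of 5 attempts passed)
```

The `NULLIF` guard in the SQL corresponds to the empty-input check here: both avoid dividing by zero when a filter excludes every row.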
D3 Number Formats
Apply the right format so metrics display correctly in charts:
| Format string | Example output | Use for |
|---|---|---|
| ,.0f | 1,234,567 | Integer counts |
| ,.2f | 1,234,567.89 | Currency or precise decimals |
| $,.2f | $1,234,567.89 | Dollar amounts |
| .1% | 87.3% | Percentages (value must be 0–1) |
| .2f | 87.35 | Percentages (value already 0–100) |
| .3s | 1.23M | Large numbers with SI suffix |
| d | 42 | Plain integers |
If your expression already multiplies by 100 (e.g., pass rate = 87.35), use .2f not .1%. The .1% format expects values like 0.873 and multiplies by 100 internally.
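Python's format mini-language uses the same conventions as D3 for these numeric specifiers (the SI suffix .3s and the $ currency prefix are D3-only), so you can preview how a format string will render a value:

```python
# Previewing D3-style format strings with Python's format mini-language.
# These specifiers behave the same way in both; .3s and $,.2f do not
# exist in Python and are omitted here.
print(format(1234567, ",.0f"))      # 1,234,567
print(format(1234567.891, ",.2f"))  # 1,234,567.89
print(format(0.873, ".1%"))         # 87.3%  <- .1% multiplies by 100 itself
print(format(87.35, ".2f"))         # 87.35  <- value already on a 0-100 scale
```

The last two lines show the pitfall described above: feeding an already-multiplied value like 87.35 into .1% would render as 8735.0%.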
Using Metrics in Charts
Saved metrics appear in the Chart Editor's Metrics selector with a bookmark icon to distinguish them from ad-hoc aggregations.
- Open or create a chart in the Chart Editor.
- In the Query section, click the Metrics field.
- Type the metric's verbose name to search (e.g., Pass Rate).
- Select it from the dropdown.
- The metric expression is used in the generated SQL automatically.
You can mix saved metrics with ad-hoc aggregations in the same chart.
Certifying a Metric
Certification indicates that a metric has been reviewed by a data steward and is the official, agreed-upon definition.
- Open the dataset editor → Metrics tab.
- Find the metric and click Edit.
- Check Certified metric.
- Enter a Certifier name and optional Certification details (e.g., Approved by Analytics Lead — 2026-01-10).
- Save.
Certified metrics display a gold certification badge in the Chart Editor. Teams should use certified metrics for all executive and regulatory reporting.
Warning Text on Metrics
Use warning text to communicate known caveats directly to chart builders:
- Edit the metric and fill in the Warning text field.
- Save.
In the Chart Editor, a yellow warning triangle appears next to the metric name. Hovering reveals the warning message.
Example warning texts:
- "Includes voided attempts — filter by status = 'completed' for clean reporting"
- "Uses exchange rate snapshot from 2026-01-01 — not suitable for real-time revenue analysis"
- "Beta metric — definition under review by Finance"
Editing and Updating Metrics
When you update a metric expression on a dataset:
- All charts that use the metric will automatically use the new expression on their next refresh.
- No chart-by-chart updates are needed.
- Metric changes are versioned — the previous expression is preserved in the dataset's change history.
If you change a metric's internal name (not its verbose name), any charts that reference the old name will break. Update those charts before renaming, or change the verbose name instead.
Metrics via the API
For programmatic dataset management, metrics can be read and written via the CalabiIQ REST API:
```bash
# List all metrics on dataset ID 42
curl -X GET \
  "https://calabi.yourdomain.com/bianalyst/api/v1/dataset/42" \
  -H "Authorization: Bearer <token>"
```
The response includes a metrics array with all metric definitions. To update metrics, use a PUT to the same endpoint with the modified metrics array.
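The read-modify-write flow can be sketched in Python. The `upsert_metric` helper and the metric field names below are illustrative, not part of the CalabiIQ API; only the endpoint and the GET-then-PUT pattern come from the documentation above.

```python
import json

# Endpoint from the docs; dataset ID 42 is the running example
API_URL = "https://calabi.yourdomain.com/bianalyst/api/v1/dataset/42"

def upsert_metric(metrics, new_metric):
    """Replace the metric with the same metric_name, or append it.

    `metrics` is the array returned in the dataset GET response;
    the modified array is PUT back to the same endpoint.
    """
    kept = [m for m in metrics if m["metric_name"] != new_metric["metric_name"]]
    return kept + [new_metric]

# Hypothetical metrics array as returned by the GET response
metrics = [{"metric_name": "pass_rate", "expression": "AVG(passed)"}]

updated = upsert_metric(metrics, {
    "metric_name": "pass_rate",
    "expression": (
        "ROUND(100.0 * SUM(CASE WHEN passed THEN 1 ELSE 0 END) "
        "/ NULLIF(COUNT(*), 0), 2)"
    ),
})

# Body for the PUT request back to API_URL
payload = json.dumps({"metrics": updated})
```

Replacing by metric_name rather than appending keeps the array free of duplicate definitions, which is what the dataset editor enforces in the UI.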
Metrics Best Practices
| Practice | Why |
|---|---|
| Name metrics with snake_case and no spaces | Consistent querying and API compatibility |
| Add a Description to every metric | Other analysts know what they're using |
| Use NULLIF(..., 0) in division expressions | Prevents division-by-zero errors |
| Certify all metrics used in executive dashboards | Prevents divergent definitions across teams |
| Add Warning text for metrics with known caveats | Proactive communication to chart builders |
| Review and update metrics when the source schema changes | Keeps definitions accurate over time |