
Metrics

Starter+

Metrics in CalabiIQ are reusable, named aggregation expressions defined on a dataset. Instead of rewriting ROUND(100.0 * SUM(CASE WHEN passed THEN 1 ELSE 0 END) / COUNT(*), 2) in every chart, you define it once as pass_rate and reference it by name across all charts built on that dataset.
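As a sanity check, an expression like this can be run directly against a small sample of data before you save it as a metric. The sketch below uses SQLite (via Python) as a stand-in for the dataset's database; the `attempts` table and `passed` column are illustrative names, not CalabiIQ fixtures.

```python
import sqlite3

# Evaluate the pass_rate expression against a tiny in-memory sample.
# Table and column names here are made up for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE attempts (passed INTEGER)")
conn.executemany("INSERT INTO attempts VALUES (?)", [(1,), (1,), (1,), (0,)])

PASS_RATE = "ROUND(100.0 * SUM(CASE WHEN passed THEN 1 ELSE 0 END) / COUNT(*), 2)"
(rate,) = conn.execute(f"SELECT {PASS_RATE} FROM attempts").fetchone()
print(rate)  # 3 of 4 attempts passed -> 75.0
```

Once the expression returns the numbers you expect, it becomes the single definition every chart references.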


Why Use Metrics

| Without saved metrics | With saved metrics |
| --- | --- |
| Paste the same SQL expression into each chart | Define the expression once on the dataset |
| Risk of inconsistency across charts | All charts share the canonical definition |
| Hard to update: each chart must be edited separately | Update the metric once and all charts reflect it |
| No governance: anyone can redefine the metric differently | Certify metrics to signal authoritative definitions |

Types of Metrics

| Type | Definition | Example |
| --- | --- | --- |
| Simple | Aggregate function applied to one column | `COUNT(*)`, `SUM(revenue)`, `AVG(score)` |
| Calculated | Custom SQL expression | `SUM(revenue) / NULLIF(SUM(cost), 0)` |
| Certified | Any metric marked as the official definition | `pass_rate`, reviewed and approved |

Creating a Metric

Metrics are created in the Dataset editor.

  1. Navigate to Data → Datasets.
  2. Find the dataset and click the Edit (pencil) icon.
  3. Click the Metrics tab.
  4. Click + Add metric.
  5. Fill in the metric fields:
| Field | Description | Example |
| --- | --- | --- |
| Metric name | Internal identifier (no spaces) | `pass_rate` |
| Verbose name | Human-friendly label shown in chart dropdowns | Pass Rate (%) |
| Metric type | COUNT, SUM, AVG, MIN, MAX, or CUSTOM | CUSTOM |
| Expression | SQL expression (for the CUSTOM type) | See below |
| Description | Markdown-enabled explanation | Percentage of attempts resulting in a pass |
| D3 format | Number format applied in charts | `.1%` (percentage), `,.0f` (integer with commas), `.2f` (two decimal places) |
| Warning text | Yellow caution shown in the chart editor | Excludes attempts from beta testers |
  6. Click Save on the metric row, then Save on the dataset.

Metric Expressions — SQL Reference

Simple aggregations

```sql
-- Total exam attempts
COUNT(*)

-- Distinct candidates
COUNT(DISTINCT candidate_id)

-- Total revenue
SUM(amount_usd)

-- Average score (NULLs excluded automatically)
AVG(score)

-- Highest score recorded
MAX(score)

-- Earliest attempt date
MIN(exam_date)
```
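One behavior worth remembering: every aggregate except `COUNT(*)` skips NULLs, which changes what `AVG` divides by. A quick illustration, again using SQLite as a stand-in with a made-up table:

```python
import sqlite3

# COUNT(*) counts all rows; COUNT(score) and AVG(score) ignore NULLs.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE scores (score REAL)")
conn.executemany("INSERT INTO scores VALUES (?)", [(80.0,), (90.0,), (None,)])

row = conn.execute(
    "SELECT COUNT(*), COUNT(score), AVG(score), MAX(score) FROM scores"
).fetchone()
print(row)  # (3, 2, 85.0, 90.0) -- AVG divides by the 2 non-NULL rows, not 3
```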

Calculated metrics

```sql
-- Pass rate as a percentage
ROUND(
  100.0 * SUM(CASE WHEN passed = TRUE THEN 1 ELSE 0 END) / NULLIF(COUNT(*), 0),
  2
)

-- Revenue per attempt
SUM(amount_usd) / NULLIF(COUNT(*), 0)

-- 30-day rolling retention rate (use with an appropriate date filter)
COUNT(DISTINCT CASE WHEN days_since_first_exam <= 30 THEN candidate_id END)
  / NULLIF(COUNT(DISTINCT candidate_id), 0)

-- Net Promoter Score proxy
AVG(CASE
  WHEN nps_score >= 9 THEN 100
  WHEN nps_score <= 6 THEN -100
  ELSE 0
END)
```
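The `NULLIF(COUNT(*), 0)` guard matters when a chart's filters produce an empty result set: it converts a zero denominator into NULL, so the division yields NULL instead of a division-by-zero error. A sketch with SQLite (illustrative table name):

```python
import sqlite3

# With no rows, COUNT(*) is 0; NULLIF turns it into NULL, so the whole
# expression evaluates to NULL (None in Python) rather than failing.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE attempts (amount_usd REAL)")

(rpa,) = conn.execute(
    "SELECT SUM(amount_usd) / NULLIF(COUNT(*), 0) FROM attempts"
).fetchone()
print(rpa)  # None

conn.executemany("INSERT INTO attempts VALUES (?)", [(10.0,), (30.0,)])
(rpa2,) = conn.execute(
    "SELECT SUM(amount_usd) / NULLIF(COUNT(*), 0) FROM attempts"
).fetchone()
print(rpa2)  # 20.0
```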

Conditional / filtered aggregations

```sql
-- Revenue from paid plans only
SUM(CASE WHEN plan_type != 'free' THEN amount_usd ELSE 0 END)

-- Attempts from the EMEA region
COUNT(CASE WHEN region = 'EMEA' THEN 1 END)
```
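Both patterns can be checked on a toy table. `COUNT` over a `CASE` with no `ELSE` counts only matching rows, because non-matching rows evaluate to NULL and NULLs are skipped. Sketch using SQLite with illustrative column names:

```python
import sqlite3

# Filtered COUNT and filtered SUM over a three-row sample.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE attempts (region TEXT, plan_type TEXT, amount_usd REAL)")
conn.executemany(
    "INSERT INTO attempts VALUES (?, ?, ?)",
    [("EMEA", "pro", 50.0), ("EMEA", "free", 0.0), ("APAC", "pro", 40.0)],
)

emea, paid = conn.execute(
    "SELECT COUNT(CASE WHEN region = 'EMEA' THEN 1 END), "
    "SUM(CASE WHEN plan_type != 'free' THEN amount_usd ELSE 0 END) "
    "FROM attempts"
).fetchone()
print(emea, paid)  # 2 EMEA attempts; 90.0 in paid-plan revenue
```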

D3 Number Formats

Apply the right format so metrics display correctly in charts:

| Format string | Example output | Use for |
| --- | --- | --- |
| `,.0f` | 1,234,567 | Integer counts |
| `,.2f` | 1,234,567.89 | Currency or precise decimals |
| `$,.2f` | $1,234,567.89 | Dollar amounts |
| `.1%` | 87.3% | Percentages (value must be 0–1) |
| `.2f` | 87.35 | Percentages (value already 0–100) |
| `.3s` | 1.23M | Large numbers with SI suffix |
| `d` | 42 | Plain integers |
> **Tip:** If your expression already multiplies by 100 (e.g., a pass rate of 87.35), use `.2f`, not `.1%`. The `.1%` format expects values like 0.873 and multiplies by 100 internally.
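Python's format mini-language happens to share the `.1%`, `.2f`, and `,.0f` specifiers with D3 (though not D3-specific ones like `.3s`), so you can preview how a format string will treat your values:

```python
# .1% multiplies by 100 and appends "%"; .2f formats the value as-is.
print(format(0.873, ".1%"))     # 87.3%
print(format(87.35, ".2f"))     # 87.35
print(format(1234567, ",.0f"))  # 1,234,567
```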


Using Metrics in Charts

Saved metrics appear in the Chart Editor's Metrics selector with a bookmark icon to distinguish them from ad-hoc aggregations.

  1. Open or create a chart in the Chart Editor.
  2. In the Query section, click the Metrics field.
  3. Type the metric's verbose name to search (e.g., Pass Rate).
  4. Select it from the dropdown.
  5. The metric expression is used in the generated SQL automatically.

You can mix saved metrics with ad-hoc aggregations in the same chart.


Certifying a Metric

Certification indicates that a metric has been reviewed by a data steward and is the official, agreed-upon definition.

  1. Open the dataset editor → Metrics tab.
  2. Find the metric and click Edit.
  3. Check Certified metric.
  4. Enter a Certifier name and optional Certification details (e.g., Approved by Analytics Lead — 2026-01-10).
  5. Save.

Certified metrics display a gold certification badge in the Chart Editor. Teams should use certified metrics for all executive and regulatory reporting.


Warning Text on Metrics

Use warning text to communicate known caveats directly to chart builders:

  1. Edit the metric and fill in the Warning text field.
  2. Save.

In the Chart Editor, a yellow warning triangle appears next to the metric name. Hovering reveals the warning message.

Example warning texts:

  • "Includes voided attempts — filter by status = 'completed' for clean reporting"
  • "Uses exchange rate snapshot from 2026-01-01 — not suitable for real-time revenue analysis"
  • "Beta metric — definition under review by Finance"

Editing and Updating Metrics

When you update a metric expression on a dataset:

  • All charts that use the metric automatically pick up the new expression on their next refresh.
  • No chart-by-chart updates are needed.
  • The change is versioned: the previous expression is preserved in the dataset's change history.
> **Breaking changes:** If you change a metric's name (not its verbose name), any charts that reference the old name will break. Update those charts before renaming the metric, or change only the verbose name instead.


Metrics via the API

For programmatic dataset management, metrics can be read and written via the CalabiIQ REST API:

```bash
# List all metrics on dataset ID 42
curl -X GET \
  "https://calabi.yourdomain.com/bianalyst/api/v1/dataset/42" \
  -H "Authorization: Bearer <token>"
```

The response includes a `metrics` array with all metric definitions. To update metrics, send a PUT request to the same endpoint with the modified `metrics` array.
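A hedged sketch of that read-modify-write flow follows. The field names (`metric_name`, `expression`) and payload shape are assumptions about the response schema, not confirmed CalabiIQ API details; inspect a real GET response before writing anything back.

```python
import json

# Hypothetical fragment of a GET /api/v1/dataset/42 response.
dataset = {"metrics": [
    {"metric_name": "revenue_per_attempt",
     "expression": "SUM(amount_usd) / NULLIF(COUNT(*), 0)"},
]}

# Append a new metric locally, then PUT the full metrics array back
# to the same endpoint (e.g., curl -X PUT -d @payload.json).
dataset["metrics"].append({"metric_name": "total_attempts",
                           "expression": "COUNT(*)"})
payload = json.dumps({"metrics": dataset["metrics"]})
print(payload)
```

Sending the complete `metrics` array (not just the changed entry) avoids accidentally dropping metrics the update did not touch.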


Metrics Best Practices

| Practice | Why |
| --- | --- |
| Name metrics with snake_case and no spaces | Consistent querying and API compatibility |
| Add a Description to every metric | Other analysts know what they are using |
| Use `NULLIF(..., 0)` in division expressions | Prevents division-by-zero errors |
| Certify all metrics used in executive dashboards | Prevents divergent definitions across teams |
| Add Warning text for metrics with known caveats | Proactive communication to chart builders |
| Review and update metrics when the source schema changes | Keeps definitions accurate over time |