
Data Science & AI

Calabi Data Science & AI brings data science, AI, and intelligent automation directly into your data platform — no stitching together external tools or managing separate infrastructure. From tracking ML experiments and registering production models to building custom AI agents and running private LLMs entirely within your AWS account, every capability operates on the same unified data layer your team already trusts. The result is a coherent, governable AI stack where your data, models, and automation pipelines share lineage, access controls, and observability.


Platform Architecture

Your Data Layer (foundation for all AI capabilities)
  • Calabi Catalogue: asset metadata · lineage · glossary · classification
  • CalabiIQ: curated datasets · semantic layer · metrics
  • Data Warehouse: gold-layer tables · marts · aggregates
      ↓ Context · Datasets · Schema

Data Science & AI (five integrated capabilities)
  • Calabi ML: experiment tracking · model registry · lifecycle
  • Calabi AI Agent: natural language queries grounded in your catalogue
  • Calabi AI Builder: visual pipeline builder · RAG · custom chatbots
  • Local Models: private LLM inference · no data leaves your account
  • Calabi Automate: workflow automation · trigger ML runs · DAGs
      ↓ Models · Inferences · Workflows

Outputs (production-ready artifacts)
  • Model Registry: versioned models · staging → production promotion
  • Custom AI Applications: deployed chatbots · search · recommendation engines
  • Calabi Pipelines: automated ML retraining · data refresh DAGs
  • Reports & Alerts: scheduled insights · anomaly notifications

Every AI component runs on the same unified data layer — shared lineage, access controls, and observability.

Components at a Glance

Component        | What it does                                                      | Tier
Calabi ML        | Track experiments, register models, compare runs, store artifacts | Starter+
Calabi AI Agent  | Ask questions in natural language, get SQL and charts instantly   | Professional+
Calabi AI Builder| Build RAG pipelines, chatflows, and custom AI agents visually     | Professional+
Local Models     | Run Llama 3, Mistral, Gemma privately in your cluster             | Enterprise
Calabi Automate  | Automate data workflows, API triggers, multi-system integration   | Enterprise

Calabi ML

Experiment Tracking

Calabi ML provides a centralized experiment tracking system for your data science and ML engineering teams. Every training run — regardless of the framework, language, or compute environment — is captured as a structured experiment with full parameter and metric history. Teams can log hyperparameters, evaluation metrics, dataset versions, and arbitrary artifacts (model weights, plots, confusion matrices, feature importance files) in a consistent format, making it easy to compare runs across dates, team members, or model architectures.

The experiment UI gives you a filterable, sortable run table with parallel coordinate plots for hyperparameter sweeps, metric time-series charts, and side-by-side diff views between any two runs. You can group runs into named experiments, add tags for search, and annotate individual runs with free-text notes. All run data is stored in your own object storage bucket — nothing leaves your AWS account.

Model Registry and Lifecycle Management

When a run produces a model worth promoting, Calabi ML's integrated model registry provides a structured lifecycle from experiment to production. Models move through clearly defined stages — Staging, Production, and Archived — with each transition captured as a version event with an author, timestamp, and optional justification note. This gives compliance and ML governance teams a full audit trail for every model version in use.

The registry supports multi-framework models natively. Whether your model is a scikit-learn pipeline, a PyTorch checkpoint, a Hugging Face transformer, or an XGBoost booster, it is stored in a standard format with associated metadata including the training code version, input schema, and evaluation metrics. Downstream systems — including Calabi AI Builder — can reference registered model URIs directly.
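Downstream code references a registered model through an MLflow-style `models:/` URI, which resolves to the current version for a given stage regardless of the training framework. A minimal sketch (the model name is illustrative, and the mlflow import is deferred so the URI helper works without a live tracking server):

```python
def registry_uri(name: str, stage: str = "Production") -> str:
    """Build a registry URI of the form models:/<name>/<stage>."""
    return f"models:/{name}/{stage}"

def load_registered_model(name: str, stage: str = "Production"):
    """Load the current model for a stage, whatever its training framework."""
    import mlflow.pyfunc  # deferred: requires mlflow and a configured tracking URI
    return mlflow.pyfunc.load_model(registry_uri(name, stage))

print(registry_uri("churn-predictor"))  # models:/churn-predictor/Production
```

Because `pyfunc` is a framework-neutral wrapper, the caller gets a uniform `predict()` interface whether the underlying artifact is scikit-learn, PyTorch, or XGBoost.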

Logging Experiments: Python SDK

Connect your training code to Calabi ML using the standard MLflow Python SDK. Point the tracking URI at your Calabi deployment and all subsequent logging calls are routed to your instance automatically. The example below assumes X_train, X_test, y_train, and y_test have already been prepared.

import mlflow
import mlflow.sklearn
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score, f1_score

# Point to your Calabi ML instance
mlflow.set_tracking_uri("https://calabi.bifrost.examroom.ai/mlflow")
mlflow.set_experiment("churn-prediction-v2")

with mlflow.start_run(run_name="gbm-lr0.01-depth6"):
    # Log hyperparameters
    mlflow.log_param("learning_rate", 0.01)
    mlflow.log_param("max_depth", 6)
    mlflow.log_param("n_estimators", 300)

    # Train your model
    model = GradientBoostingClassifier(
        learning_rate=0.01,
        max_depth=6,
        n_estimators=300,
    )
    model.fit(X_train, y_train)

    # Log evaluation metrics
    preds = model.predict(X_test)
    mlflow.log_metric("accuracy", accuracy_score(y_test, preds))
    mlflow.log_metric("f1_score", f1_score(y_test, preds, average="weighted"))

    # Log the model artifact and register it in one step
    mlflow.sklearn.log_model(
        sk_model=model,
        artifact_path="model",
        registered_model_name="churn-predictor",
    )

After the run completes, navigate to Calabi ML → Experiments in the Calabi platform to inspect the logged parameters and metrics, compare the run against previous runs, and promote the registered model to Staging or Production directly from the UI.
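Promotion can also be scripted against the same MLflow-compatible API, which is useful for CI gates that promote a model only after evaluation checks pass. A sketch (assumes the mlflow package is installed; the import is deferred so the function definition parses without a live tracking server, and the tracking URI matches the example above):

```python
def promote_model(name: str, version: str, stage: str = "Production",
                  tracking_uri: str = "https://calabi.bifrost.examroom.ai/mlflow") -> None:
    """Promote a registered model version to a new lifecycle stage (sketch).

    Each transition is recorded as a version event with author and timestamp,
    as described in the registry section above.
    """
    from mlflow.tracking import MlflowClient  # deferred: needs a reachable server

    client = MlflowClient(tracking_uri=tracking_uri)
    client.transition_model_version_stage(name=name, version=version, stage=stage)

# Usage: promote_model("churn-predictor", version="3", stage="Production")
```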

Supported Frameworks

Calabi ML supports auto-logging for most common frameworks with a single mlflow.autolog() call — no manual metric logging required. Supported integrations include scikit-learn, XGBoost, LightGBM, PyTorch, TensorFlow/Keras, Hugging Face Transformers, Spark MLlib, and Statsmodels. Custom logging via the fluent API works with any Python-based training code.
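For the supported frameworks, the per-metric logging calls in the earlier example collapse to a single setup step. A hedged sketch, wrapped in a helper so it can be reused across training scripts (assumes the mlflow package is installed locally; the experiment name is illustrative):

```python
def enable_autologging(tracking_uri: str, experiment: str) -> None:
    """Turn on framework auto-logging against a Calabi ML instance (sketch)."""
    import mlflow  # deferred: requires the mlflow package

    mlflow.set_tracking_uri(tracking_uri)
    mlflow.set_experiment(experiment)
    # Patches fit()/train() in supported frameworks so params, metrics,
    # and model artifacts are captured without explicit log_* calls.
    mlflow.autolog()

# Usage:
# enable_autologging("https://calabi.bifrost.examroom.ai/mlflow", "churn-prediction-v2")
# model.fit(X_train, y_train)  # logged automatically from here on
```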


Calabi AI Agent

Natural Language Data Exploration

Calabi AI Agent transforms how your analysts and business users interact with data. Instead of writing SQL or waiting for a data engineer to build a report, users type a plain English question — "What were the top 10 products by revenue last quarter broken down by region?" — and the agent generates the query, executes it against your warehouse, and renders the result as a formatted table or chart in seconds. The agent is context-aware, understanding your schema, your Calabi Catalogue metadata, and the semantic meaning attached to your glossary terms.

Powered by Your Data Context

The agent's intelligence is grounded in your Calabi Catalogue. When a user asks about "active customers" or "recognized revenue," the agent resolves those terms against your governed glossary definitions, not a generic interpretation. It understands table relationships, column-level descriptions, and data domain ownership — which means it asks clarifying questions when a query is ambiguous, and cites the specific tables and columns used to generate each answer. Every query the agent runs is auditable in the same lineage graph as your pipelines.

Chart Generation and Follow-Up Queries

Calabi AI Agent supports multi-turn conversations. After an initial result, users can refine — "Now filter to just EMEA," "Show me this as a bar chart," "Which of those customers also purchased product X?" — and the agent maintains context across the conversation thread. Generated SQL is always visible and editable, so analysts can inspect, copy, or extend the query before saving it as a named query or publishing it to a CalabiIQ dashboard.


Calabi AI Builder

Visual AI Pipeline Construction

Calabi AI Builder provides a drag-and-drop canvas for building production-grade AI applications without writing infrastructure code. Data scientists and AI engineers can compose retrieval-augmented generation (RAG) pipelines, multi-step chatflows, and custom agent workflows by connecting nodes — LLM calls, vector store retrievals, document loaders, conditional branches, tool invocations — into a coherent graph. Each pipeline is versioned, shareable, and deployable as a REST API endpoint from within the platform.

Connecting to Your Model and Data Layer

Calabi AI Builder integrates natively with both Calabi ML and Local Models. You can reference a model from the Calabi ML registry as a node in your pipeline, using it for classification, scoring, or feature extraction steps. For generative AI tasks, you can route inference to Local Models for fully private execution or to a configured external LLM provider for tasks where data residency requirements permit. Document loaders support your catalogued data sources — S3, databases, data products — so RAG pipelines are grounded in assets that carry Calabi lineage and access control.

Deploying Custom AI Applications

Once a pipeline is built and tested on the canvas, it is published as an API endpoint with automatic input/output schema generation and built-in token usage tracking. Teams embed these endpoints into internal tools, customer-facing applications, or Calabi Automate workflows. The builder's visual interface makes it straightforward for AI engineers to iterate on prompt templates, swap underlying models, adjust retrieval chunk sizes, and re-deploy — without rewriting application code. All changes are tracked as pipeline versions with rollback support.
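Embedding a published pipeline typically amounts to POSTing JSON to its endpoint. The URL shape and field names below are illustrative assumptions, not a documented Calabi API; this stdlib-only sketch just assembles the request so any HTTP client can send it:

```python
import json

def build_pipeline_request(base_url: str, pipeline_id: str, question: str) -> tuple[str, bytes]:
    """Assemble a request to a deployed AI Builder pipeline endpoint.

    The /api/v1/pipelines/<id>/predict path and the "question" field are
    hypothetical; check your deployment's generated schema for the real shape.
    """
    url = f"{base_url}/api/v1/pipelines/{pipeline_id}/predict"
    body = json.dumps({"question": question}).encode("utf-8")
    return url, body

url, body = build_pipeline_request(
    "https://calabi.example.com", "support-rag", "How do I reset my password?"
)
# POST `body` to `url` with any HTTP client; responses would carry the pipeline
# output plus the token usage tracked per the description above.
```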


Local Models

Private LLM Inference in Your AWS Account

Local Models gives your organization access to leading open-weight large language models running entirely within your AWS infrastructure. No prompts, no context, and no outputs are transmitted to external APIs — inference happens on GPU-backed nodes inside your VPC. This capability is designed for organizations with strict data residency requirements, regulated workloads, or sensitive intellectual property that cannot leave the cloud account boundary.

Supported Models

The following models are available out of the box and can be activated through the Calabi platform settings:

Model         | Parameters | Best For
Llama 3.3     | 70B        | General reasoning, long-context tasks
Mistral       | 7B         | Fast inference, instruction following
Gemma 2       | 9B / 27B   | Balanced quality and efficiency
Phi-4         | 14B        | Complex reasoning at smaller scale
CodeLlama     | 13B / 34B  | Code generation and explanation
Qwen2.5-Coder | 7B / 32B   | Multilingual code and SQL generation

Additional models can be pulled from the model registry on demand. Model weights are cached in your account's object storage after the first pull and served from there on subsequent requests — no repeated downloads.

Accessing Local Models

Local Models are accessible through two interfaces:

  • Calabi AI Chat at /openwebui — a full-featured chat interface for interactive use, supporting multi-turn conversations, file uploads, and model switching
  • API endpoint at /api/ollama — an OpenAI-compatible REST API that Calabi AI Builder and any OpenAI SDK-compatible application can target directly
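Because the endpoint is OpenAI-compatible, any OpenAI SDK can target it by overriding its base URL. The stdlib-only sketch below assembles a chat completion request by hand; the host is hypothetical, and the /v1/chat/completions suffix is an assumption based on the OpenAI API shape rather than a documented Calabi path:

```python
import json

def build_chat_request(base_url: str, model: str, prompt: str) -> tuple[str, bytes]:
    """Assemble an OpenAI-style chat completion request for Local Models."""
    url = f"{base_url}/v1/chat/completions"  # assumed OpenAI-compatible path
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return url, body

url, body = build_chat_request(
    "https://calabi.example.com/api/ollama",  # hypothetical host; path from this page
    "mistral:7b",
    "Summarise last quarter's churn drivers.",
)
# POST `body` to `url`; inference stays on GPU nodes inside your VPC.
```

With the official OpenAI Python SDK, the equivalent is to pass the same base URL when constructing the client and call its chat completions method; no prompt data leaves your account either way.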

Access to specific models is governed by Calabi's role-based access control, so you can restrict high-compute models (e.g., Llama 3.3 70B) to authorized teams while making lighter models broadly available.


Calabi Automate

Connecting Your Data Workflows to the World

Calabi Automate is the integration and automation layer of the Calabi platform. It lets your team connect data events, pipeline completions, quality alerts, and schedule triggers to any external system — without writing custom integration scripts. The workflow canvas supports hundreds of built-in connectors including Slack, Microsoft Teams, email, JIRA, Salesforce, HubSpot, Snowflake, REST APIs, and databases. Workflows are composed visually as trigger-action chains, with support for conditional logic, looping, data transformation, and error handling.

Triggering Calabi Pipelines and ML Runs

Calabi Automate integrates directly with Calabi Pipelines and Calabi ML. You can trigger a DAG run from a webhook (for example, when a file lands in an S3 bucket or a CRM record is updated), chain a data quality check after a pipeline completes, or automatically kick off a model retraining run when upstream data freshness metrics cross a threshold. This closes the loop between the external systems that generate your data and the internal pipelines that process it — eliminating manual handoffs and reducing time-to-insight for operational data products.
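As a concrete sketch of the webhook-to-DAG pattern, here is the kind of payload an external system might POST to an Automate trigger when a file lands in S3. The field names and target are illustrative assumptions, not a documented contract:

```python
import json

def build_trigger_payload(event: str, object_key: str, dag: str) -> bytes:
    """Build a JSON payload for a hypothetical Automate webhook trigger."""
    return json.dumps({
        "event": event,            # e.g. "s3:ObjectCreated"
        "object_key": object_key,  # the file that landed
        "target_dag": dag,         # the Calabi Pipeline to run
    }).encode("utf-8")

payload = build_trigger_payload(
    "s3:ObjectCreated", "landing/orders/2024-06-01.parquet", "orders_refresh"
)
# POST this to the workflow's webhook URL; the workflow can then chain a data
# quality check or a Calabi ML retraining run, as described above.
```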

Intelligent Alerting and Reporting Automation

Beyond pipeline triggers, Calabi Automate handles the last-mile delivery of data insights. Configure workflows to send formatted Slack messages with CalabiIQ chart snapshots when a KPI breaches a threshold, generate and email PDF reports on a schedule, sync aggregated metrics to your CRM or finance system, or create JIRA tickets automatically when a data quality test fails. Workflows run on a managed scheduler inside your cluster — no external cron infrastructure required. All workflow executions are logged with input/output payloads, making it straightforward to audit and debug any automation.


Availability by Tier

Feature                              | Starter  | Professional | Enterprise
Calabi ML — experiment tracking      | Included | Included     | Included
Calabi ML — model registry           | Included | Included     | Included
Calabi AI Agent                      | -        | Included     | Included
Calabi AI Builder                    | -        | Included     | Included
Local Models (all supported models)  | -        | -            | Included
Calabi Automate                      | -        | -            | Included
Local Models — custom model import   | -        | -            | Add-on

Next Steps