Multi-Tenancy
Calabi's multi-tenancy model provides complete isolation between tenants at every layer of the stack: Kubernetes, database, object storage, identity, and network. This page explains the isolation architecture, how new tenants are provisioned, the policies that govern cross-tenant interaction (there are none — isolation is absolute), and the white-labeling options available.
Tenant Isolation Architecture
Calabi implements hard multi-tenancy: each tenant is a fully independent deployment with no shared mutable state. The following table summarizes how isolation is implemented at each layer.
| Layer | Isolation Mechanism | Details |
|---|---|---|
| Kubernetes | Separate namespaces | Each tenant gets its own namespace: calabi-tenant-<id>. Network policies prevent cross-namespace traffic. |
| Database | Separate RDS instances | Each tenant has its own RDS instance. There is no shared database. |
| Object Storage | Separate S3 buckets | Each tenant has dedicated S3 buckets for DAGs, ML artifacts, backups, and Calabi Connect staging. |
| Search | Separate search engine domains | Each tenant's Calabi Catalogue uses a dedicated search engine domain. |
| Cache | Separate ElastiCache clusters | Each tenant has its own Redis ElastiCache cluster. |
| Identity | Separate Cognito User Pools | Each tenant's users authenticate against a dedicated Cognito pool. SSO federation is configured per pool. |
| Encryption | Separate KMS keys | Each tenant has its own AWS KMS Customer Managed Key for encrypting databases, S3 objects, and Kubernetes secrets. |
| Network | Kubernetes NetworkPolicy + Security Groups | Cross-tenant traffic is blocked at both the Kubernetes network policy layer and the AWS security group layer. |
| IAM | Separate IAM roles | Each tenant's pods use dedicated IAM roles (via IRSA), scoped to only their own resources. |
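The NetworkPolicy layer in the table can be illustrated with a default-deny-style policy. This is a sketch, not Calabi's actual manifest; the policy name is an assumption, but the semantics (a `podSelector`-only `from` clause admits same-namespace pods and nothing else) are standard Kubernetes behavior.

```yaml
# Illustrative policy — not Calabi's actual manifest.
# Permits ingress only from pods in the same namespace, so
# cross-tenant (cross-namespace) pod-to-pod traffic is dropped.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-cross-namespace
  namespace: calabi-tenant-acme-corp
spec:
  podSelector: {}          # applies to every pod in the namespace
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector: {}  # same-namespace pods only; no namespaceSelector
```

Because AWS security groups enforce the same boundary one layer down, a misconfigured NetworkPolicy alone would not open a cross-tenant path.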
Infrastructure Diagram
All connections cross namespace boundaries only through the Calabi control plane API — there is no direct database or storage access between tenants.
Tenant Provisioning Workflow
A new Calabi tenant is provisioned in approximately 15–20 minutes via an automated pipeline.
Provisioning Steps
1. Tenant request — The Calabi platform team (or an automated billing integration) submits a tenant provisioning request via the Calabi Control Plane API:

       calabi-admin tenant create \
         --tenant-id "acme-corp" \
         --tier "professional" \
         --domain "calabi.acme.com" \
         --aws-region "us-east-1" \
         --admin-email "admin@acme.com"

2. Infrastructure provisioning — The control plane runs a Terraform module that provisions:
   - Kubernetes namespace with resource quotas and network policies
   - RDS database instance
   - ElastiCache Redis cluster
   - S3 buckets (with versioning, encryption, and replication configured)
   - Search engine domain
   - Cognito User Pool
   - KMS keys
   - IAM roles (IRSA)
   - Route 53 DNS records

3. Helm deployment — The control plane generates a client/values.yaml for the new tenant and runs:

       helm install calabi ./calabi-chart \
         -f base/values.yaml \
         -f tier/professional.yaml \
         -f client/acme-corp/values.yaml \
         --namespace calabi-tenant-acme-corp \
         --create-namespace \
         --atomic

4. Database initialization — Runs database migrations for all enabled modules.

5. Admin account creation — Creates the first admin user account using the --admin-email parameter. A welcome email with a temporary password is sent.

6. Health verification — The control plane polls the health endpoint until all services report healthy (up to 10 minutes).

7. Provisioning complete — The platform team and tenant admin receive a notification with the platform URL.
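The health-verification step is essentially a bounded polling loop. The sketch below is illustrative, not the control plane's actual code; the `check` callable stands in for an HTTP GET against the tenant's health endpoint (whose path is not documented here), and the 600-second default mirrors the "up to 10 minutes" window.

```python
import time

def wait_until_healthy(check, timeout_s=600, interval_s=15, sleep=time.sleep):
    """Poll check() until it returns True or timeout_s elapses.

    check is a zero-argument callable that would wrap an HTTP GET
    against the tenant's health endpoint (hypothetical; the real
    endpoint path is not specified in this page).
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if check():
            return True   # all services report healthy
        sleep(interval_s)
    return False          # timed out; provisioning is flagged for review
```

Injecting `sleep` keeps the loop testable without real waits.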
Tenant Provisioning Time Estimates
| Phase | Estimated Duration |
|---|---|
| Terraform infrastructure provisioning | 8–12 minutes |
| Helm deployment | 3–5 minutes |
| Database initialization | 1–2 minutes |
| Health verification | 1–3 minutes |
| Total | ~15–20 minutes |
Tenant Configuration Organization
Each tenant's Helm values are stored in version control at:
calabi-tenants/
├── acme-corp/
│ ├── values.yaml ← Tenant-specific overrides
│ ├── secrets.yaml.enc ← SOPS-encrypted secrets
│ └── modules.yaml ← Enabled modules for this tenant
├── globex/
│ ├── values.yaml
│ ├── secrets.yaml.enc
│ └── modules.yaml
└── ...
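A tenant's modules.yaml might look like the following. The module keys are hypothetical (the actual identifiers are not documented on this page), but the pattern of one boolean per module matches the per-tenant enablement described above.

```yaml
# Hypothetical modules.yaml — keys are illustrative, not the
# documented module identifiers.
modules:
  calabiIQ: true      # dashboards
  catalogue: true     # data catalogue (uses the tenant's search engine domain)
  connect: true       # Calabi Connect staging buckets
  mlArtifacts: false  # ML artifact storage disabled for this tenant
```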
Updating a tenant's configuration:
# Decrypt secrets (using SOPS + AWS KMS)
sops -d calabi-tenants/acme-corp/secrets.yaml.enc > /tmp/secrets.yaml
# Apply the update
helm upgrade calabi ./calabi-chart \
-f base/values.yaml \
-f tier/professional.yaml \
-f calabi-tenants/acme-corp/values.yaml \
-f /tmp/secrets.yaml \
--namespace calabi-tenant-acme-corp \
--atomic
# Clean up decrypted secrets
rm /tmp/secrets.yaml
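To avoid writing plaintext secrets to disk at all, SOPS can edit the encrypted file in place, and its exec-file subcommand can hand a transiently decrypted copy to a single command. These are standard SOPS invocations, shown as an assumed alternative to the decrypt-to-/tmp flow above (verify availability in your SOPS version).

```shell
# Edit the encrypted file in place (opens $EDITOR, re-encrypts on save)
sops calabi-tenants/acme-corp/secrets.yaml.enc

# Decrypt only for the lifetime of the helm command; {} is replaced
# with the path of a temporary decrypted file that SOPS cleans up
sops exec-file calabi-tenants/acme-corp/secrets.yaml.enc \
  'helm upgrade calabi ./calabi-chart -f {} --namespace calabi-tenant-acme-corp'
```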
Cross-Tenant Policies
Calabi enforces zero cross-tenant access. There are no exceptions and no configuration options to enable cross-tenant data sharing.
| Policy | Enforcement |
|---|---|
| No cross-tenant database access | Separate RDS instances; no cross-database queries possible |
| No cross-tenant API access | JWT tokens are scoped to a single tenant's Cognito User Pool; a token from Tenant A is invalid on Tenant B's API |
| No cross-tenant network traffic | Kubernetes NetworkPolicy blocks all cross-namespace pod-to-pod communication |
| No cross-tenant S3 access | S3 bucket policies restrict access to the tenant's own IAM role only |
| No shared admin accounts | Each tenant has its own Admin users; there is no platform-wide super-admin account accessible to tenants |
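The API-access policy in the table works because every JWT carries an `iss` claim naming the Cognito User Pool that issued it, and each tenant's API trusts only its own pool. The sketch below shows the issuer comparison only; it is not Calabi's code, the pool URLs are hypothetical, and a real deployment must also verify the token signature against the pool's JWKS.

```python
import base64
import json

def jwt_issuer(token: str) -> str:
    """Extract the (unverified) iss claim from a JWT payload."""
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore base64 padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))["iss"]

def accepts(token: str, expected_issuer: str) -> bool:
    """A Tenant B API rejects Tenant A's token: the issuers differ.

    Signature verification against the pool's JWKS is omitted here
    but mandatory in practice.
    """
    return jwt_issuer(token) == expected_issuer
```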
Calabi's own platform engineering team uses a separate administrative interface to manage tenant provisioning, billing, and upgrades. This interface is not accessible to any tenant admin.
Tenant Deprovisioning
When a tenant subscription ends, deprovisioning follows this sequence:
1. Notification — The tenant admin receives a 30-day advance notice.
2. Data export window — For 30 days, the tenant can export all their data (CalabiIQ dashboards, Catalogue metadata, ML artifacts) via the Calabi export tools.
3. Account suspension — On the deprovisioning date, all user logins are disabled. Data remains intact.
4. Data deletion — After a configurable grace period (default: 7 days), all tenant resources are permanently deleted:
   - Kubernetes namespace (all pods and PVCs)
   - RDS instance (with a final snapshot retained for 90 days)
   - S3 buckets (objects deleted; buckets removed)
   - Cognito User Pool
   - KMS keys (scheduled for deletion per AWS policy)
   - DNS records
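"Scheduled for deletion per AWS policy" refers to KMS's mandatory pending window: customer managed keys cannot be deleted immediately, only scheduled 7 to 30 days out, during which the deletion can still be cancelled. For illustration (the key ID placeholder is not a real value):

```shell
# Schedule the tenant's CMK for deletion; AWS enforces a 7–30 day
# pending window during which aws kms cancel-key-deletion can revert it.
aws kms schedule-key-deletion \
  --key-id "<tenant-kms-key-id>" \
  --pending-window-in-days 30
```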
White-Labeling Options
Enterprise tenants can white-label the Calabi platform with their own branding. All white-label settings are configured via the Admin UI at Admin → Branding.
| Setting | Options |
|---|---|
| Platform name | Replace "Calabi" with your organization's product name in all UI labels |
| Logo | Upload a custom logo (PNG/SVG, displayed in the navigation header) |
| Favicon | Upload a custom favicon |
| Primary color | Hex color used for buttons, links, and active states |
| Login page background | Custom image or gradient for the login page |
| Welcome message | Custom text displayed on the login page |
| Support email | Replaces Calabi support links with your internal helpdesk |
| Documentation URL | Replace links to Calabi docs with your internal documentation portal |
| Email sender name | System-generated emails (welcome, password reset) use your platform name |
White-label settings are applied per-tenant and do not affect other tenants.
Resource Quotas per Tenant
Each Kubernetes namespace has default resource quotas to prevent a single tenant from consuming cluster-wide capacity:
# Default quotas — adjustable per tenant via Helm values
apiVersion: v1
kind: ResourceQuota
metadata:
name: calabi-tenant-quota
spec:
hard:
requests.cpu: "32"
requests.memory: "64Gi"
limits.cpu: "64"
limits.memory: "128Gi"
persistentvolumeclaims: "20"
services: "30"
pods: "100"
Adjust quotas for large tenants:
# client/values.yaml
resourceQuota:
cpu:
requests: "64"
limits: "128"
memory:
requests: "128Gi"
limits: "256Gi"
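After adjusting the overrides and upgrading the release, the applied quota and current consumption can be checked with kubectl; the namespace name follows the calabi-tenant-&lt;id&gt; convention described above.

```shell
# Shows hard limits alongside current usage for the tenant's namespace
kubectl describe resourcequota calabi-tenant-quota \
  --namespace calabi-tenant-acme-corp
```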
Related Pages
- Helm Configuration Reference — Per-tenant Helm values
- Roles & Permissions — RBAC within a single tenant
- Single Sign-On — Per-tenant SSO and identity provider configuration
- Backup & Recovery — Per-tenant backup isolation