MSSQL

Supported sync modes

Sync mode                             Supported?
Full Refresh - Overwrite              Yes
Full Refresh - Append                 Yes
Full Refresh - Overwrite + Deduped    Yes
Incremental Sync - Append             Yes
Incremental Sync - Append + Deduped   Yes

Output Schema

Each stream will be output into its own table in SQL Server. Each table will contain the following metadata columns:

  • _airbyte_raw_id: A random UUID assigned to each incoming record. The column type in SQL Server is VARCHAR(MAX).
  • _airbyte_extracted_at: A timestamp for when the event was pulled from the data source. The column type in SQL Server is BIGINT.
  • _airbyte_meta: Additional information about the record. The column type in SQL Server is TEXT.
  • _airbyte_generation_id: Incremented each time a refresh is executed. The column type in SQL Server is TEXT.

See here for more information about these fields.
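As an illustration, a raw table carrying these metadata columns might look like the sketch below. The connector creates and manages these tables itself; the stream name and data columns here are hypothetical.

```sql
-- Illustrative only: the connector manages table creation.
-- "example_stream" and its data columns are hypothetical.
CREATE TABLE dbo.example_stream (
    _airbyte_raw_id        VARCHAR(MAX),  -- random UUID per record
    _airbyte_extracted_at  BIGINT,        -- extraction timestamp
    _airbyte_meta          TEXT,          -- record-level metadata
    _airbyte_generation_id TEXT,          -- incremented per refresh
    id                     BIGINT,        -- plus one column per stream field
    name                   VARCHAR(MAX)
);
```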

Getting Started

Setup guide

  • MS SQL Server: Azure SQL Database, SQL Server 2016 or greater

Network Access

Make sure your SQL Server database can be accessed by Calabi Connect. If your database is inside a VPC, you may need to allow access from the IP address that Calabi Connect connects from.

Permissions

You need a user configured in SQL Server that can create tables and write rows. We highly recommend creating a Calabi Connect-specific user for this purpose. To allow for normalization, grant ALTER permissions to the configured user.
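For example, a dedicated user could be created along these lines. The login name, password placeholder, and target schema are hypothetical; adjust them to your environment.

```sql
-- Hypothetical names; substitute your own login, password, and schema.
CREATE LOGIN calabi_connect WITH PASSWORD = '<strong_password>';
CREATE USER calabi_connect FOR LOGIN calabi_connect;
GRANT CREATE TABLE TO calabi_connect;
GRANT INSERT, SELECT, UPDATE, DELETE ON SCHEMA::dbo TO calabi_connect;
GRANT ALTER ON SCHEMA::dbo TO calabi_connect;  -- needed for normalization
```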

Target Database

You will need to choose an existing database or create a new database that will be used to store synced data from Calabi Connect.
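If you are creating a new database for this purpose, a minimal example follows; the database name is hypothetical.

```sql
-- Hypothetical database name.
CREATE DATABASE calabi_sync;
```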

Configuration

You'll need the following information to configure the MSSQL destination:

  • Host
    • The host name of the MSSQL database.
  • Port
    • The port of the MSSQL database.
  • Database Name
    • The name of the MSSQL database.
  • Default Schema
    • The default schema that tables are written to if the source does not specify a namespace. The usual value for this field is "dbo".
  • Username
    • The username which is used to access the database.
  • Password
    • The password associated with this username.
  • JDBC URL Parameters
    • Additional properties to pass to the JDBC URL string when connecting to the database, formatted as 'key=value' pairs separated by '&' (example: key1=value1&key2=value2&key3=value3).
  • SSL Method
    • The SSL configuration supports three modes: Unencrypted, Encrypted (trust server certificate), and Encrypted (verify certificate).
      • Unencrypted: Do not use SSL encryption on the database connection
      • Encrypted (trust server certificate): Use SSL encryption without verifying the server's certificate. This is useful for self-signed certificates in testing scenarios, but should not be used in production.
      • Encrypted (verify certificate): Use SSL encryption and verify the server's certificate.
        • Host Name In Certificate (optional): When using certificate verification, this property can be set to specify an expected name for added security. If this value is present, and the server's certificate's host name does not match it, certificate verification will fail.
  • Load Type
    • The data load type supports two modes: Insert or Bulk
      • Insert: Utilizes SQL INSERT statements to load data to the destination table.
      • Bulk: Utilizes Azure Blob Storage and the BULK INSERT command to load data to the destination table. If selected, additional configuration is required:
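As an illustration of the JDBC URL Parameters field above, the value is a set of '&'-separated pairs. The property names shown (loginTimeout, socketTimeout) are standard Microsoft JDBC driver connection properties; the values are placeholders.

```
loginTimeout=30&socketTimeout=60000
```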

MSSQL with Azure Blob Storage (Bulk Upload) Setup Guide

This section describes how to set up and use the Microsoft SQL Server (MSSQL) connector with the Azure Blob Storage Bulk Upload feature. By staging data in an Azure Blob Storage container and using BULK INSERT, you can significantly improve ingestion speed and reduce network overhead for large or frequent data loads.

Why Use Azure Blob Storage Bulk Upload?

When handling high data volumes or frequent syncs, row-by-row inserts into MSSQL can become slow and resource-intensive. By staging files in Azure Blob Storage first, you can:

  1. Aggregate Data into Bulk Files: Data is written to Blob Storage in batches, reducing overhead.
  2. Perform Bulk Ingestion: MSSQL uses BULK INSERT to directly load these files, typically resulting in faster performance compared to conventional row-by-row inserts.

Prerequisites

  1. A Microsoft SQL Server Instance
    • Compatible with on-premises SQL Server or Azure SQL Database.
  2. Azure Blob Storage Account
    • A storage account and container (e.g., bulk-staging) where data files will be placed.
  3. Permissions
    • Blob Storage: ability to create, read, and delete objects within the designated container.
    • MSSQL: ability to create or modify tables, and permission to execute BULK INSERT.

Setup Guide

Follow these steps to configure MSSQL with Azure Blob Storage for bulk uploads.

1. Set Up Azure Blob Storage
  1. Create a Storage Account & Container
    • In the Azure Portal, create (or reuse) a Storage Account.
    • Within that account, create a container (e.g., bulk-staging) for staging your data files.
  2. Establish Access Credentials
    • Use a Shared Access Signature (SAS) scoped to your container.
    • Ensure the SAS token or role assignments include permissions such as Read, Write, Delete, and List.
2. Configure MSSQL

See the official Microsoft documentation for more details. Below is a simplified overview:

  1. (Optional) Create a Master Encryption Key
     If your environment requires a master key to store credentials securely, create one:

    CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<your_password>';
  2. Create a Database Scoped Credential
     Configure a credential that grants MSSQL access to your Blob Storage using the SAS token:

    CREATE DATABASE SCOPED CREDENTIAL <credential_name>
    WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
    SECRET = '<your_sas_token>';
  3. Create an External Data Source
     Point MSSQL to your Blob container using the credential:

    CREATE EXTERNAL DATA SOURCE <data_source_name>
    WITH (
    TYPE = BLOB_STORAGE,
    LOCATION = 'https://<storage_account>.blob.core.windows.net/<container_name>',
    CREDENTIAL = <credential_name>
    );

    You’ll reference <data_source_name> when configuring the connector.
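Once the external data source exists, the Bulk load type loads staged files with BULK INSERT. Conceptually, a load resembles the statement below; the table name and staged-file path are hypothetical, and the connector issues the actual statements itself.

```sql
-- Hypothetical table and staged-file names.
BULK INSERT dbo.example_stream
FROM 'staging/batch_0001.csv'
WITH (
    DATA_SOURCE = '<data_source_name>',  -- the external data source created above
    FORMAT = 'CSV'
);
```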

3. Connector Configuration

You’ll need to supply:

  1. MSSQL Connection Details
    • The server hostname/IP, port, database name, and authentication (username/password).
  2. Bulk Load Data Source
    • The name of the external data source you created (e.g., <data_source_name>).
  3. Azure Storage Account & Container
    • The name of the storage account and container.
  4. SAS Token
    • The token that grants Blob Storage access.

See the Getting Started: Configuration section of this guide for more details on BULK INSERT connector configuration.

Namespace support

This destination supports namespaces. The namespace maps to a SQL Server schema.
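For example, records from a stream whose namespace is "sales" are written to tables under a schema of that name. The connector creates the schema as needed, much as you would manually with the following (the schema name is hypothetical):

```sql
-- The connector creates schemas as needed; shown here for illustration.
CREATE SCHEMA sales;
```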