Debezium Connector for SQL Server
Debezium’s SQL Server Connector can monitor and record the row-level changes in the schemas of a SQL Server database.
The first time it connects to a SQL Server database/cluster, it reads a consistent snapshot of all of the schemas. When that snapshot is complete, the connector continuously streams the changes that were committed to SQL Server and generates corresponding insert, update and delete events. All of the events for each table are recorded in a separate Kafka topic, where they can be easily consumed by applications and services.
Overview
The functionality of the connector is based upon the change data capture feature provided by SQL Server Standard (since SQL Server 2016 SP1) or Enterprise edition. Using this mechanism, a SQL Server capture process monitors all databases and tables the user is interested in and stores the changes into specifically created CDC tables that have a stored procedure facade.
The database operator must enable CDC for the table(s) that should be captured by the connector. The connector then produces a change event for every row-level insert, update, and delete operation that was published via the CDC API, recording all the change events for each table in a separate Kafka topic. The client applications read the Kafka topics that correspond to the database tables they’re interested in following, and react to every row-level event they see in those topics.
The database operator normally enables CDC in the mid-life of a database and/or table. This means that the connector does not have the complete history of all changes that have been made to the database. Therefore, when the SQL Server connector first connects to a particular SQL Server database, it starts by performing a consistent snapshot of each of the database schemas. After the connector completes the snapshot, it continues streaming changes from the exact point at which the snapshot was made. This way, we start with a consistent view of all of the data, yet continue reading without having lost any of the changes made while the snapshot was taking place.
The connector is also tolerant of failures. As the connector reads changes and produces events, it records with each event the position in the database log (LSN / Log Sequence Number) that is associated with the CDC record. If the connector stops for any reason (including communication failures, network problems, or crashes), upon restart it simply continues reading the CDC tables where it last left off. This includes snapshots: if the snapshot was not completed when the connector was stopped, upon restart it begins a new snapshot.
Setting up SQL Server
Before using the SQL Server connector to monitor the changes committed on SQL Server, first enable CDC on a monitored database.
Please bear in mind that CDC cannot be enabled for the master database.
-- ====
-- Enable Database for CDC template
-- ====
USE MyDB
GO
EXEC sys.sp_cdc_enable_db
GO
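If you want to confirm that CDC is now enabled at the database level, one way to check is to query sys.databases (an optional sanity check; is_cdc_enabled is a standard catalog column):
-- ====
-- Optional: check that the database is enabled for CDC
-- ====
SELECT name, is_cdc_enabled
FROM sys.databases
WHERE name = 'MyDB'
GO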
Then enable CDC for each table that you plan to monitor.
-- ====
-- Enable a Table Specifying Filegroup Option Template
-- ====
USE MyDB
GO
EXEC sys.sp_cdc_enable_table
@source_schema = N'dbo',
@source_name = N'MyTable',
@role_name = N'MyRole',
@filegroup_name = N'MyDB_CT',
@supports_net_changes = 0
GO
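Likewise, you can confirm that the table is now tracked by CDC by checking its is_tracked_by_cdc flag (an optional sanity check, run in the context of MyDB):
-- ====
-- Optional: check that the table is tracked by CDC
-- ====
USE MyDB
GO
SELECT name, is_tracked_by_cdc
FROM sys.tables
WHERE name = 'MyTable'
GO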
Verify that the user has access to the CDC table.
-- ====
-- Verify that the connector user has access; this query should not return an empty result
-- ====
EXEC sys.sp_cdc_help_change_data_capture
GO
If the result is empty then please make sure that the user has privileges to access both the capture instance and CDC tables.
SQL Server on Azure
The SQL Server plug-in has not been tested with SQL Server on Azure. We welcome feedback from any user who tries the plug-in with a database in a managed environment.
SQL Server Always On
The SQL Server plug-in can capture changes from an Always On read-only replica. A few prerequisites must be fulfilled:
-
Change data capture is configured and enabled on the master node. SQL Server does not support CDC directly on replicas.
-
The configuration option database.applicationIntent must be set to ReadOnly. This is required by SQL Server. When Debezium detects this configuration option it will (see the configuration sketch after this list):
-
set snapshot.isolation.mode to snapshot, as this is the only transaction isolation mode supported by read-only replicas
-
commit the (read-only) transaction in every execution of the streaming query loop, as this is necessary to get the latest view on CDC data
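For illustration only, a connector configuration fragment for streaming from a read-only replica could look like the following (host name and credentials are placeholders; setting snapshot.isolation.mode explicitly is optional because Debezium switches to it automatically when it detects the ReadOnly intent):
{
    "connector.class": "io.debezium.connector.sqlserver.SqlServerConnector",
    "database.hostname": "replica.example.com",
    "database.port": "1433",
    "database.user": "debezium",
    "database.password": "********",
    "database.dbname": "testDB",
    "database.server.name": "server1",
    "database.applicationIntent": "ReadOnly",
    "snapshot.isolation.mode": "snapshot",
    "database.history.kafka.bootstrap.servers": "kafka:9092",
    "database.history.kafka.topic": "dbhistory.server1"
}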
How the SQL Server connector works
Snapshots
SQL Server CDC is not designed to store the complete history of database changes. It is thus necessary that Debezium establishes a baseline of the current database content and streams it to Kafka. This is achieved via a process called snapshotting.
By default (snapshot mode initial), upon first startup the connector performs an initial consistent snapshot of the database (meaning the structure and data within any tables to be captured, as per the connector’s filter configuration).
Each snapshot consists of the following steps:
-
Determine the tables to be captured
-
Obtain a lock on each of the monitored tables to ensure that no structural changes can occur to any of the tables. The level of the lock is determined by the snapshot.isolation.mode configuration option (see the configuration sketch after this list). -
Read the maximum LSN ("log sequence number") position in the server’s transaction log.
-
Capture the structure of all relevant tables.
-
Optionally release the locks obtained in step 2, i.e. the locks are usually held only for a short period of time.
-
Scan all of the relevant database tables and schemas as valid at the LSN position read in step 3, and generate a
READ
event for each row and write that event to the appropriate table-specific Kafka topic. -
Record the successful completion of the snapshot in the connector offsets.
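For illustration, the two options governing this behaviour can be set explicitly in the connector configuration; the following fragment simply spells out the documented defaults (it is not a complete configuration):
{
    "snapshot.mode": "initial",
    "snapshot.isolation.mode": "repeatable_read"
}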
Reading the change data tables
Upon first start-up, the connector takes a structural snapshot of the captured tables and persists this information in its internal database history topic. Then the connector identifies a change table for each of the source tables and executes the main loop:
-
For each change table read all changes that were created between the last stored maximum LSN and the current maximum LSN
-
Order the read changes incrementally according to commit LSN and change LSN. This ensures that the changes are replayed by Debezium in the same order as they were made to the database.
-
Pass commit and change LSNs as offsets to Kafka Connect.
-
Store the maximum LSN and repeat the loop.
After a restart, the connector will resume from the offset (commit and change LSNs) where it left off before.
The connector is able to detect whether CDC is enabled or disabled for whitelisted source tables and adjust its behavior.
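Conceptually, each iteration of this loop corresponds to calls to SQL Server’s CDC table-valued functions. The following SQL is only an illustrative sketch of that idea, not the exact statements issued by the connector; the capture instance name dbo_MyTable is an example:
-- ====
-- Illustration: reading all changes recorded between two LSNs for one capture instance
-- ====
DECLARE @from_lsn BINARY(10), @to_lsn BINARY(10);
-- lower bound: in the connector this is the last stored maximum LSN
SET @from_lsn = sys.fn_cdc_get_min_lsn('dbo_MyTable');
-- upper bound: the current maximum LSN in the transaction log
SET @to_lsn = sys.fn_cdc_get_max_lsn();
-- read every change in the interval, including the "before" image of updates
SELECT * FROM cdc.fn_cdc_get_all_changes_dbo_MyTable(@from_lsn, @to_lsn, N'all update old');
GO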
Topic names
The SQL Server connector writes events for all insert, update, and delete operations on a single table to a single Kafka topic. The names of the Kafka topics always take the form serverName.schemaName.tableName, where serverName is the logical name of the connector as specified with the database.server.name
configuration property, schemaName is the name of the schema where the operation occurred, and tableName is the name of the database table on which the operation occurred.
For example, consider a SQL Server installation with an inventory
database that contains four tables: products
, products_on_hand
, customers
, and orders
in schema dbo
. If the connector monitoring this database were given a logical server name of fulfillment
, then the connector would produce events on these four Kafka topics:
-
fulfillment.dbo.products
-
fulfillment.dbo.products_on_hand
-
fulfillment.dbo.customers
-
fulfillment.dbo.orders
Schema change topic
The user-facing schema change topic is not implemented yet (see DBZ-1904).
Events
All data change events produced by the SQL Server connector have a key and a value, although the structure of the key and value depend on the table from which the change events originated (see Topic names).
The SQL Server connector ensures that all Kafka Connect schema names are valid Avro schema names. This means that the logical server name must start with Latin letters or an underscore (e.g., [a-z,A-Z,_]), and the remaining characters in the logical server name and all characters in the schema and table names must be Latin letters, digits, or an underscore (e.g., [a-z,A-Z,0-9,\_]). If not, then all invalid characters will automatically be replaced with an underscore character. This can lead to unexpected conflicts when the logical server name, schema names, and table names contain other characters, and the only distinguishing characters between table full names are invalid and thus replaced with underscores.
Debezium and Kafka Connect are designed around continuous streams of event messages, and the structure of these events may change over time. This could be difficult for consumers to deal with, so to make it easy Kafka Connect makes each event self-contained. Every message key and value has two parts: a schema and payload. The schema describes the structure of the payload, while the payload contains the actual data.
Change Event Keys
For a given table, the change event’s key will have a structure that contains a field for each column in the primary key (or unique key constraint) of the table at the time the event was created.
Consider a customers
table defined in the inventory
database’s schema dbo
:
CREATE TABLE customers (
id INTEGER IDENTITY(1001,1) NOT NULL PRIMARY KEY,
first_name VARCHAR(255) NOT NULL,
last_name VARCHAR(255) NOT NULL,
email VARCHAR(255) NOT NULL UNIQUE
);
If the database.server.name
configuration property has the value server1
,
every change event for the customers
table while it has this definition will feature the same key structure, which in JSON looks like this:
{
"schema": {
"type": "struct",
"fields": [
{
"type": "int32",
"optional": false,
"field": "id"
}
],
"optional": false,
"name": "server1.dbo.customers.Key"
},
"payload": {
"id": 1004
}
}
The schema
portion of the key contains a Kafka Connect schema describing what is in the key portion. In this case, it means that the payload
value is not optional, is a structure defined by a schema named server1.dbo.customers.Key
, and has one required field named id
of type int32
.
If you look at the value of the key’s payload
field, you can see that it is indeed a structure (which in JSON is just an object) with a single id
field, whose value is 1004
.
Therefore, you can interpret this key as describing the row in the dbo.customers
table (output from the connector named server1
) whose id
primary key column had a value of 1004
.
If the table does not have a primary or unique key, then the change event’s key will be null. This makes sense since the rows in a table without a primary or unique key constraint cannot be uniquely identified.
Change Event Values
Like the message key, the value of a change event message has a schema section and payload section. The payload section of every change event value produced by the SQL Server connector has an envelope structure with the following fields:
-
op is a mandatory field that contains a string value describing the type of operation. Values for the SQL Server connector are c for create (or insert), u for update, d for delete, and r for read (in the case of a snapshot).
-
before is an optional field that, if present, contains the state of the row before the event occurred. The structure is described by the server1.dbo.customers.Value Kafka Connect schema, which the server1 connector uses for all rows in the dbo.customers table.
-
after is an optional field that, if present, contains the state of the row after the event occurred. The structure is described by the same server1.dbo.customers.Value Kafka Connect schema used in before.
-
source is a mandatory field that contains a structure describing the source metadata for the event, which in the case of SQL Server contains these fields: the Debezium version, the connector name, whether the event is part of an ongoing snapshot or not, the commit LSN (not while snapshotting), the LSN of the change, the database, schema, and table where the change happened, and a timestamp representing the point in time when the record was changed in the source database (during snapshotting, this is the point in time of snapshotting).
A field event_serial_no is also present during streaming. It is used to differentiate among events that have the same commit and change LSN. There are mostly two situations when you will see it with a value different from 1:
-
update events will have the value set to 2, because an update generates two events in the CDC change table of SQL Server (source documentation). The first one contains the old values and the second one contains the new values; the first one is dropped and its values are used together with the second one to create the Debezium change event.
-
when a primary key is updated, SQL Server emits two records - a delete to remove the record with the old primary key value and an insert to create the record with the new primary key. Both operations share the same commit and change LSN and their event numbers are 1 and 2.
-
ts_ms is optional and, if present, contains the time (using the system clock in the JVM running the Kafka Connect task) at which the connector processed the event.
And of course, the schema portion of the event message’s value contains a schema that describes this envelope structure and the nested fields within it.
Create events
Let’s look at what a create event value might look like for our customers
table:
{
"schema": {
"type": "struct",
"fields": [
{
"type": "struct",
"fields": [
{
"type": "int32",
"optional": false,
"field": "id"
},
{
"type": "string",
"optional": false,
"field": "first_name"
},
{
"type": "string",
"optional": false,
"field": "last_name"
},
{
"type": "string",
"optional": false,
"field": "email"
}
],
"optional": true,
"name": "server1.dbo.customers.Value",
"field": "before"
},
{
"type": "struct",
"fields": [
{
"type": "int32",
"optional": false,
"field": "id"
},
{
"type": "string",
"optional": false,
"field": "first_name"
},
{
"type": "string",
"optional": false,
"field": "last_name"
},
{
"type": "string",
"optional": false,
"field": "email"
}
],
"optional": true,
"name": "server1.dbo.customers.Value",
"field": "after"
},
{
"type": "struct",
"fields": [
{
"type": "string",
"optional": false,
"field": "version"
},
{
"type": "string",
"optional": false,
"field": "connector"
},
{
"type": "string",
"optional": false,
"field": "name"
},
{
"type": "int64",
"optional": false,
"field": "ts_ms"
},
{
"type": "boolean",
"optional": true,
"default": false,
"field": "snapshot"
},
{
"type": "string",
"optional": false,
"field": "db"
},
{
"type": "string",
"optional": false,
"field": "schema"
},
{
"type": "string",
"optional": false,
"field": "table"
},
{
"type": "string",
"optional": true,
"field": "change_lsn"
},
{
"type": "string",
"optional": true,
"field": "commit_lsn"
},
{
"type": "int64",
"optional": true,
"field": "event_serial_no"
}
],
"optional": false,
"name": "io.debezium.connector.sqlserver.Source",
"field": "source"
},
{
"type": "string",
"optional": false,
"field": "op"
},
{
"type": "int64",
"optional": true,
"field": "ts_ms"
}
],
"optional": false,
"name": "server1.dbo.customers.Envelope"
},
"payload": {
"before": null,
"after": {
"id": 1005,
"first_name": "john",
"last_name": "doe",
"email": "john.doe@example.org"
},
"source": {
"version": "1.1.2.Final",
"connector": "sqlserver",
"name": "server1",
"ts_ms": 1559729468470,
"snapshot": false,
"db": "testDB",
"schema": "dbo",
"table": "customers",
"change_lsn": "00000027:00000758:0003",
"commit_lsn": "00000027:00000758:0005",
"event_serial_no": "1"
},
"op": "c",
"ts_ms": 1559729471739
}
}
If we look at the schema
portion of this event’s value, we can see the schema for the envelope, the schema for the source
structure (which is specific to the SQL Server connector and reused across all events), and the table-specific schemas for the before
and after
fields.
If we look at the payload
portion of this event’s value, we can see the information in the event, namely that it is describing that the row was created (since op=c
), and that the after
field value contains the values of the newly inserted row’s id
, first_name
, last_name
, and email
columns.
It may appear that the JSON representations of the events are much larger than the rows they describe. This is true, because the JSON representation must include the schema and the payload portions of the message. It is possible and even recommended to use the Avro converter to dramatically decrease the size of the actual messages written to the Kafka topics.
Update events
The value of an update change event on this table will actually have the exact same schema, and its payload is structured the same but will hold different values. Here’s an example:
{
"schema": { ... },
"payload": {
"before": {
"id": 1005,
"first_name": "john",
"last_name": "doe",
"email": "john.doe@example.org"
},
"after": {
"id": 1005,
"first_name": "john",
"last_name": "doe",
"email": "noreply@example.org"
},
"source": {
"version": "1.1.2.Final",
"connector": "sqlserver",
"name": "server1",
"ts_ms": 1559729995937,
"snapshot": false,
"db": "testDB",
"schema": "dbo",
"table": "customers",
"change_lsn": "00000027:00000ac0:0002",
"commit_lsn": "00000027:00000ac0:0007",
"event_serial_no": "2"
},
"op": "u",
"ts_ms": 1559729998706
}
}
When we compare this to the value in the insert event, we see a couple of differences in the payload
section:
-
The op field value is now u, signifying that this row changed because of an update.
-
The before field now has the state of the row with the values before the database commit.
-
The after field now has the updated state of the row, and here we can see that the email value is now noreply@example.org.
-
The source field structure has the same fields as before, but the values are different since this event is from a different position in the transaction log.
-
The event_serial_no field has the value 2. That is because the update is composed of two events behind the scenes and only the second one is exposed. If you are interested in the details, please check the source documentation and refer to the field $operation.
-
The ts_ms field shows the timestamp at which Debezium processed this event.
There are several things we can learn by just looking at this payload
section. We can compare the before
and after
structures to determine what actually changed in this row because of the commit.
The source
structure tells us information about SQL Server’s record of this change (providing traceability), but more importantly this has information we can compare to other events in this and other topics to know whether this event occurred before, after, or as part of the same SQL Server commit as other events.
When the columns for a row’s primary/unique key are updated, the value of the row’s key has changed, so Debezium will output three events: a delete event and a tombstone event with the old key for the row, followed by a create event with the new key for the row.
Delete events
So far, you have seen samples of create and update events.
The following sample shows the value of a delete event for the same table. Once again, the schema
portion of the value is exactly the same as with the create and update events:
{
"schema": { ... },
"payload": {
"before": {
"id": 1005,
"first_name": "john",
"last_name": "doe",
"email": "noreply@example.org"
},
"after": null,
"source": {
"version": "1.1.2.Final",
"connector": "sqlserver",
"name": "server1",
"ts_ms": 1559730445243,
"snapshot": false,
"db": "testDB",
"schema": "dbo",
"table": "customers",
"change_lsn": "00000027:00000db0:0005",
"commit_lsn": "00000027:00000db0:0007",
"event_serial_no": "1"
},
"op": "d",
"ts_ms": 1559730450205
}
}
If we look at the payload
portion, we see a number of differences compared with the create or update event payloads:
-
The op field value is now d, signifying that this row was deleted.
-
The before field now has the state of the row that was deleted with the database commit.
-
The after field is null, signifying that the row no longer exists.
-
The source field structure has many of the same values as before, except that the ts_ms, commit_lsn and change_lsn fields have changed.
-
The ts_ms field shows the timestamp at which Debezium processed this event.
This event gives a consumer all kinds of information that it can use to process the removal of this row.
The SQL Server connector’s events are designed to work with Kafka log compaction, which allows for the removal of some older messages as long as at least the most recent message for every key is kept. This allows Kafka to reclaim storage space while ensuring the topic contains a complete dataset and can be used for reloading key-based state.
When a row is deleted, the delete event value listed above still works with log compaction, since Kafka can still remove all earlier messages with that same key.
But only if the message value is null
will Kafka know that it can remove all messages with that same key.
To make this possible, the SQL Server connector always follows the delete event with a special tombstone event that has the same key but null
value.
Transaction Metadata
Debezium can generate events that represent transaction metadata boundaries and enrich data messages.
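Transaction metadata events are not produced by default; the feature has to be switched on in the connector configuration. A minimal fragment, assuming the provide.transaction.metadata option referenced in the connector properties below:
{
    "provide.transaction.metadata": "true"
}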
Transaction boundaries
Debezium generates events for every transaction BEGIN
and END
.
Every event contains
-
status - BEGIN or END
-
id - string representation of the unique transaction identifier
-
event_count (for END events) - total number of events emitted by the transaction
-
data_collections (for END events) - an array of pairs of data_collection and event_count that provides the number of events emitted by changes originating from the given data collection
Following is an example of what a message looks like:
{
"status": "BEGIN",
"id": "00000025:00000d08:0025",
"event_count": null,
"data_collections": null
}
{
"status": "END",
"id": "00000025:00000d08:0025",
"event_count": 2,
"data_collections": [
{
"data_collection": "testDB.dbo.tablea",
"event_count": 1
},
{
"data_collection": "testDB.dbo.tableb",
"event_count": 1
}
]
}
The transaction events are written to the topic named <database.server.name>.transaction
.
Data events enrichment
When transaction metadata is enabled the data message Envelope
is enriched with a new transaction
field.
This field provides information about every event in the form of a composite of fields:
-
id - string representation of the unique transaction identifier
-
total_order - the absolute position of the event among all events generated by the transaction
-
data_collection_order - the per-data collection position of the event among all events that were emitted by the transaction
Following is an example of what a message looks like:
{
"before": null,
"after": {
"pk": "2",
"aa": "1"
},
"source": {
...
},
"op": "c",
"ts_ms": "1580390884335",
"transaction": {
"id": "00000025:00000d08:0025",
"total_order": "1",
"data_collection_order": "1"
}
}
Database schema evolution
Debezium is able to capture schema changes over time. Due to the way CDC is implemented in SQL Server, it is necessary to work in co-operation with a database operator in order to ensure the connector continues to produce data change events when the schema is updated.
As was already mentioned before, Debezium uses SQL Server’s change data capture functionality. This means that SQL Server creates a capture table that contains all changes executed on the source table. Unfortunately, the capture table is static and needs to be updated when the source table structure changes. This update is not done by the connector itself but must be executed by an operator with elevated privileges.
There are generally two procedures for executing the schema change:
-
cold - this is executed when Debezium is stopped
-
hot - executed while Debezium is running
Both approaches have their own advantages and disadvantages.
In both cases, it is critically important to execute the procedure completely before a new schema update on the same source table is made. It is thus recommended to execute all DDLs in a single batch so the procedure is done only once.
Not all schema changes are supported when CDC is enabled for a source table. One such exception is renaming a column or changing its type; SQL Server will not allow the operation to be executed.
Although not required by SQL Server’s CDC mechanism itself, a new capture instance must be created when altering a column from |
Cold schema update
This is the safest procedure but might not be feasible for applications with high-availability requirements. The operator should follow this sequence of steps:
-
Suspend the application that generates the database records
-
Wait for Debezium to stream all unstreamed changes
-
Stop the connector
-
Apply all changes to the source table schema
-
Create a new capture table for the updated source table using
sys.sp_cdc_enable_table
procedure with a unique value for parameter@capture_instance
-
Resume the application
-
Start the connector
-
When Debezium starts streaming from the new capture table it is possible to drop the old one using
sys.sp_cdc_disable_table
stored procedure with parameter@capture_instance
set to the old capture instance name
Hot schema update
The hot schema update does not require any downtime in application and data processing. The procedure itself is also much simpler than in the case of a cold schema update:
-
Apply all changes to the source table schema
-
Create a new capture table for the updated source table using
sys.sp_cdc_enable_table
procedure with a unique value for parameter@capture_instance
-
When Debezium starts streaming from the new capture table it is possible to drop the old one using
sys.sp_cdc_disable_table
stored procedure with parameter@capture_instance
set to the old capture instance name
The hot schema update has one drawback. There is a period of time between the database schema update and the creation of the new capture instance. All changes that arrive during this period are captured by the old instance with the old structure. For instance, this means that in the case of a newly added column, any change event produced during this time will not yet contain a field for that new column. If your application does not tolerate such a transition period, we recommend following the cold schema update.
Example
Let’s deploy the SQL Server based Debezium tutorial to demonstrate the hot schema update.
In this example, a column phone_number
is added to the customers
table.
# Start the database shell
docker-compose -f docker-compose-sqlserver.yaml exec sqlserver bash -c '/opt/mssql-tools/bin/sqlcmd -U sa -P $SA_PASSWORD -d testDB'
-- Modify the source table schema
ALTER TABLE customers ADD phone_number VARCHAR(32);
-- Create the new capture instance
EXEC sys.sp_cdc_enable_table @source_schema = 'dbo', @source_name = 'customers', @role_name = NULL, @supports_net_changes = 0, @capture_instance = 'dbo_customers_v2';
GO
-- Insert new data
INSERT INTO customers(first_name,last_name,email,phone_number) VALUES ('John','Doe','john.doe@example.com', '+1-555-123456');
GO
The Kafka Connect log will contain messages like these:
connect_1 | 2019-01-17 10:11:14,924 INFO || Multiple capture instances present for the same table: Capture instance "dbo_customers" [sourceTableId=testDB.dbo.customers, changeTableId=testDB.cdc.dbo_customers_CT, startLsn=00000024:00000d98:0036, changeTableObjectId=1525580473, stopLsn=00000025:00000ef8:0048] and Capture instance "dbo_customers_v2" [sourceTableId=testDB.dbo.customers, changeTableId=testDB.cdc.dbo_customers_v2_CT, startLsn=00000025:00000ef8:0048, changeTableObjectId=1749581271, stopLsn=NULL] [io.debezium.connector.sqlserver.SqlServerStreamingChangeEventSource]
connect_1 | 2019-01-17 10:11:14,924 INFO || Schema will be changed for ChangeTable [captureInstance=dbo_customers_v2, sourceTableId=testDB.dbo.customers, changeTableId=testDB.cdc.dbo_customers_v2_CT, startLsn=00000025:00000ef8:0048, changeTableObjectId=1749581271, stopLsn=NULL] [io.debezium.connector.sqlserver.SqlServerStreamingChangeEventSource]
...
connect_1 | 2019-01-17 10:11:33,719 INFO || Migrating schema to ChangeTable [captureInstance=dbo_customers_v2, sourceTableId=testDB.dbo.customers, changeTableId=testDB.cdc.dbo_customers_v2_CT, startLsn=00000025:00000ef8:0048, changeTableObjectId=1749581271, stopLsn=NULL] [io.debezium.connector.sqlserver.SqlServerStreamingChangeEventSource]
Eventually, there is a new field in the schema and value of the messages written to the Kafka topic.
...
{
"type": "string",
"optional": true,
"field": "phone_number"
}
...
"after": {
"id": 1005,
"first_name": "John",
"last_name": "Doe",
"email": "john.doe@example.com",
"phone_number": "+1-555-123456"
},
-- Drop the old capture instance
EXEC sys.sp_cdc_disable_table @source_schema = 'dbo', @source_name = 'customers', @capture_instance = 'dbo_customers';
GO
Data types
As described above, the SQL Server connector represents the changes to rows with events that are structured like the table in which the row exists. The event contains a field for each column value, and how that value is represented in the event depends on the SQL data type of the column. This section describes this mapping.
The following table describes how the connector maps each of the SQL Server data types to a literal type and semantic type within the events' fields.
Here, the literal type describes how the value is literally represented using Kafka Connect schema types, namely INT8
, INT16
, INT32
, INT64
, FLOAT32
, FLOAT64
, BOOLEAN
, STRING
, BYTES
, ARRAY
, MAP
, and STRUCT
.
The semantic type describes how the Kafka Connect schema captures the meaning of the field using the name of the Kafka Connect schema for the field.
SQL Server Data Type |
Literal type (schema type) |
Semantic type (schema name) |
Notes |
|
|
n/a |
|
|
|
n/a |
|
|
|
n/a |
|
|
|
n/a |
|
|
|
n/a |
|
|
|
n/a |
|
|
|
n/a |
|
|
|
n/a |
|
|
|
n/a |
|
|
|
n/a |
|
|
|
n/a |
|
|
|
n/a |
|
|
|
n/a |
|
|
|
|
Contains the string representation of a XML document |
|
|
|
A string representation of a timestamp with timezone information, where the timezone is GMT |
Other data type mappings are described in the following sections.
If present, a column’s default value is propagated to the corresponding field’s Kafka Connect schema. Change messages will contain the field’s default value (unless an explicit column value had been given), so there should rarely be the need to obtain the default value from the schema. Passing the default value helps though with satisfying the compatibility rules when using Avro as serialization format together with the Confluent schema registry.
Temporal values
Other than SQL Server’s DATETIMEOFFSET
data type (which contains time zone information), the other temporal types depend on the value of the time.precision.mode
configuration property. When the time.precision.mode
configuration property is set to adaptive
(the default), then the connector will determine the literal type and semantic type for the temporal types based on the column’s data type definition so that events exactly represent the values in the database:
SQL Server Data Type |
Literal type (schema type) |
Semantic type (schema name) |
Notes |
|
|
|
Represents the number of days since epoch. |
|
|
|
Represents the number of milliseconds past midnight, and does not include timezone information. |
|
|
|
Represents the number of microseconds past midnight, and does not include timezone information. |
|
|
|
Represents the number of nanoseconds past midnight, and does not include timezone information. |
|
|
|
Represents the number of milliseconds past epoch, and does not include timezone information. |
|
|
|
Represents the number of milliseconds past epoch, and does not include timezone information. |
|
|
|
Represents the number of milliseconds past epoch, and does not include timezone information. |
|
|
|
Represents the number of microseconds past epoch, and does not include timezone information. |
|
|
|
Represents the number of nanoseconds past epoch, and does not include timezone information. |
When the time.precision.mode
configuration property is set to connect
, then the connector will use the predefined Kafka Connect logical types. This may be useful when consumers only know about the built-in Kafka Connect logical types and are unable to handle variable-precision time values. On the other hand, since SQL Server supports tenth of microsecond precision, the events generated by a connector with the connect
time precision mode will result in a loss of precision when the database column has a fractional second precision value greater than 3:
SQL Server Data Type |
Literal type (schema type) |
Semantic type (schema name) |
Notes |
|
|
|
Represents the number of days since epoch. |
|
|
|
Represents the number of milliseconds since midnight, and does not include timezone information. SQL Server allows |
|
|
|
Represents the number of milliseconds since epoch, and does not include timezone information. |
|
|
|
Represents the number of milliseconds past epoch, and does not include timezone information. |
|
|
|
Represents the number of milliseconds since epoch, and does not include timezone information. SQL Server allows |
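For illustration, opting into the predefined Kafka Connect logical types is just a matter of setting the property in the connector configuration (a fragment, not a complete configuration):
{
    "time.precision.mode": "connect"
}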
Timestamp values
The DATETIME
, SMALLDATETIME
and DATETIME2
types represent a timestamp without time zone information.
Such columns are converted into an equivalent Kafka Connect value based on UTC.
So for instance the DATETIME2
value "2018-06-20 15:13:16.945104" is represented by an io.debezium.time.MicroTimestamp
with the value "1529507596945104".
Note that the timezone of the JVM running Kafka Connect and Debezium does not affect this conversion.
Decimal values
SQL Server Data Type |
Literal type (schema type) |
Semantic type (schema name) |
Notes |
|
|
|
The |
|
|
|
The |
|
|
|
The |
|
|
|
The |
Deploying the SQL Server connector
If you have already installed Zookeeper, Kafka, and Kafka Connect, then using Debezium’s SQL Server connector is easy.
Simply download the connector’s plug-in archive, extract the JARs into your Kafka Connect environment, and add the directory with the JARs to Kafka Connect’s plugin.path
.
Restart your Kafka Connect process to pick up the new JARs.
If immutable containers are your thing, then check out Debezium’s Docker images for Zookeeper, Kafka and Kafka Connect with the SQL Server connector already pre-installed and ready to go. You can even run Debezium on OpenShift.
Example configuration
To use the connector to produce change events for a particular SQL Server database or cluster:
-
Enable the CDC on SQL Server to publish the CDC events in the database.
-
Create a configuration file for the SQL Server connector.
When the connector starts, it will grab a consistent snapshot of the schemas in your SQL Server database and start streaming changes, producing events for every inserted, updated, and deleted row. You can also choose to produce events for a subset of the schemas and tables. Optionally ignore, mask, or truncate columns that are sensitive, too large, or not needed.
Following is an example of the configuration for a connector instance that monitors a SQL Server instance at port 1433 on 192.168.99.100, which we logically name fullfillment
.
Typically, you configure the Debezium SQL Server connector in a .json
file using the configuration properties available for the connector.
{
"name": "inventory-connector", (1)
"config": {
"connector.class": "io.debezium.connector.sqlserver.SqlServerConnector", (2)
"database.hostname": "192.168.99.100", (3)
"database.port": "1433", (4)
"database.user": "sa", (5)
"database.password": "Password!", (6)
"database.dbname": "testDB", (7)
"database.server.name": "fullfillment", (8)
"table.whitelist": "dbo.customers", (9)
"database.history.kafka.bootstrap.servers": "kafka:9092", (10)
"database.history.kafka.topic": "dbhistory.fullfillment" (11)
}
}
1 | The name of our connector when we register it with a Kafka Connect service. |
2 | The name of this SQL Server connector class. |
3 | The address of the SQL Server instance. |
4 | The port number of the SQL Server instance. |
5 | The name of the SQL Server user |
6 | The password for the SQL Server user |
7 | The name of the database to capture changes from. |
8 | The logical name of the SQL Server instance/cluster, which forms a namespace and is used in all the names of the Kafka topics to which the connector writes, the Kafka Connect schema names, and the namespaces of the corresponding Avro schema when the Avro Connector is used. |
9 | A list of all tables whose changes Debezium should capture. |
10 | The list of Kafka brokers that this connector will use to write and recover DDL statements to the database history topic. |
11 | The name of the database history topic where the connector will write and recover DDL statements. This topic is for internal use only and should not be used by consumers. |
See the complete list of connector properties that can be specified in these configurations.
This configuration can be sent via POST to a running Kafka Connect service, which will then record the configuration and start up the one connector task that will connect to the SQL Server database, read the transaction log, and record events to Kafka topics.
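For example, assuming the configuration above has been saved to a file named register-sqlserver.json (a name chosen here for illustration) and Kafka Connect listens on its default REST port 8083 on localhost, the connector could be registered like this:
curl -i -X POST -H "Accept: application/json" -H "Content-Type: application/json" \
  http://localhost:8083/connectors/ -d @register-sqlserver.json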
Monitoring
The Debezium SQL Server connector has three metric types in addition to the built-in support for JMX metrics that Zookeeper, Kafka, and Kafka Connect have.
-
snapshot metrics; for monitoring the connector when performing snapshots
-
streaming metrics; for monitoring the connector when reading CDC table data
-
schema history metrics; for monitoring the status of the connector’s schema history
Please refer to the monitoring documentation for details of how to expose these metrics via JMX.
Snapshot Metrics
The MBean is debezium.sql_server:type=connector-metrics,context=snapshot,server=<database.server.name>
.
Attribute Name |
Type |
Description |
|
|
The last snapshot event that the connector has read. |
|
|
The number of milliseconds since the connector has read and processed the most recent event. |
|
|
The total number of events that this connector has seen since last started or reset. |
|
|
The number of events that have been filtered by whitelist or blacklist filtering rules configured on the connector. |
|
|
The list of tables that are monitored by the connector. |
|
|
The length of the queue used to pass events between the snapshotter and the main Kafka Connect loop. |
|
|
The free capacity of the queue used to pass events between the snapshotter and the main Kafka Connect loop. |
|
|
The total number of tables that are being included in the snapshot. |
|
|
The number of tables that the snapshot has yet to copy. |
|
|
Whether the snapshot was started. |
|
|
Whether the snapshot was aborted. |
|
|
Whether the snapshot completed. |
|
|
The total number of seconds that the snapshot has taken so far, even if not complete. |
|
|
Map containing the number of rows scanned for each table in the snapshot. Tables are incrementally added to the Map during processing. Updates every 10,000 rows scanned and upon completing a table. |
Streaming Metrics
The MBean is debezium.sql_server:type=connector-metrics,context=streaming,server=<database.server.name>
.
Attribute Name |
Type |
Description |
|
|
The last streaming event that the connector has read. |
|
|
The number of milliseconds since the connector has read and processed the most recent event. |
|
|
The total number of events that this connector has seen since last started or reset. |
|
|
The number of events that have been filtered by whitelist or blacklist filtering rules configured on the connector. |
|
|
The list of tables that are monitored by the connector. |
|
|
The length of the queue used to pass events between the streamer and the main Kafka Connect loop. |
|
|
The free capacity of the queue used to pass events between the streamer and the main Kafka Connect loop. |
|
|
Flag that denotes whether the connector is currently connected to the database server. |
|
|
The number of milliseconds between the last change event’s timestamp and the connector processing it. The values will incorporate any differences between the clocks on the machines where the database server and the connector are running. |
|
|
The number of processed transactions that were committed. |
|
|
The coordinates of the last received event. |
|
|
Transaction identifier of the last processed transaction. |
Schema History Metrics
The MBean is debezium.sql_server:type=connector-metrics,context=schema-history,server=<database.server.name>
.
Attribute Name |
Type |
Description |
|
|
One of |
|
|
The time in epoch seconds at which recovery has started. |
|
|
The number of changes that were read during recovery phase. |
|
|
The total number of schema changes applied during recovery and runtime. |
|
|
The number of milliseconds that elapsed since the last change was recovered from the history store. |
|
|
The number of milliseconds that elapsed since the last change was applied. |
|
|
The string representation of the last change recovered from the history store. |
|
|
The string representation of the last applied change. |
Connector properties
The following configuration properties are required unless a default value is available.
Property |
Default |
Description |
Unique name for the connector. Attempting to register again with the same name will fail. (This property is required by all Kafka Connect connectors.) |
||
The name of the Java class for the connector. Always use a value of |
||
|
The maximum number of tasks that should be created for this connector. The SQL Server connector always uses a single task and therefore does not use this value, so the default is always acceptable. |
|
IP address or hostname of the SQL Server database server. |
||
|
Integer port number of the SQL Server database server. |
|
Username to use when connecting to the SQL Server database server. |
||
Password to use when connecting to the SQL Server database server. |
||
The name of the SQL Server database from which to stream the changes |
||
Logical name that identifies and provides a namespace for the particular SQL Server database server being monitored. The logical name should be unique across all other connectors, since it is used as a prefix for all Kafka topic names emanating from this connector. Only alphanumeric characters and underscores should be used. |
||
The full name of the Kafka topic where the connector will store the database schema history. |
||
A list of host/port pairs that the connector will use for establishing an initial connection to the Kafka cluster. This connection is used for retrieving database schema history previously stored by the connector, and for writing each DDL statement read from the source database. This should point to the same Kafka cluster used by the Kafka Connect process. |
||
An optional comma-separated list of regular expressions that match fully-qualified table identifiers for tables to be monitored; any table not included in the whitelist is excluded from monitoring. Each identifier is of the form schemaName.tableName. By default the connector will monitor every non-system table in each monitored schema. May not be used with |
||
An optional comma-separated list of regular expressions that match fully-qualified table identifiers for tables to be excluded from monitoring; any table not included in the blacklist is monitored.
Each identifier is of the form schemaName.tableName. May not be used with |
||
empty string |
An optional comma-separated list of regular expressions that match the fully-qualified names of columns that should be excluded from change event message values. Fully-qualified names for columns are of the form schemaName.tableName.columnName. Note that primary key columns are always included in the event’s key, also if blacklisted from the value. |
|
|
Time, date, and timestamps can be represented with different kinds of precision, including: |
|
|
Controls whether a tombstone event should be generated after a delete event. |
|
n/a |
An optional comma-separated list of regular expressions that match the fully-qualified names of character-based columns whose values should be truncated in the change event message values if the field values are longer than the specified number of characters. Multiple properties with different lengths can be used in a single configuration, although in each the length must be a positive integer. Fully-qualified names for columns are of the form schemaName.tableName.columnName. |
|
n/a |
An optional comma-separated list of regular expressions that match the fully-qualified names of character-based columns whose values should be replaced in the change event message values with a field value consisting of the specified number of asterisk ( |
|
n/a |
An optional comma-separated list of regular expressions that match the fully-qualified names of columns whose original type and length should be added as a parameter to the corresponding field schemas in the emitted change messages.
The schema parameters |
|
n/a |
An optional comma-separated list of regular expressions that match the database-specific data type name of columns whose original type and length should be added as a parameter to the corresponding field schemas in the emitted change messages.
The schema parameters |
|
empty string |
A semi-colon list of regular expressions that match fully-qualified tables and columns to map a primary key. |
The following advanced configuration properties have good defaults that will work in most situations and therefore rarely need to be specified in the connector’s configuration.
Property |
Default |
Description |
initial |
A mode for taking an initial snapshot of the structure and optionally data of captured tables.
Once the snapshot is complete, the connector will continue reading change events from the database’s redo logs. |
|
repeatable_read |
Mode to control which transaction isolation level is used and how long the connector locks the monitored tables.
There are five possible values: It is worth documenting that Another aspect is data consistency. Only |
|
|
Specifies how the connector should react to exceptions during processing of events.
|
|
|
Positive integer value that specifies the number of milliseconds the connector should wait during each iteration for new change events to appear. Defaults to 1000 milliseconds, or 1 second. |
|
|
Positive integer value that specifies the maximum size of the blocking queue into which change events read from the database log are placed before they are written to Kafka. This queue can provide backpressure to the CDC table reader when, for example, writes to Kafka are slower or if Kafka is not available. Events that appear in the queue are not included in the offsets periodically recorded by this connector. Defaults to 8192, and should always be larger than the maximum batch size specified in the |
|
|
Positive integer value that specifies the maximum size of each batch of events that should be processed during each iteration of this connector. Defaults to 2048. |
|
|
Controls how frequently heartbeat messages are sent. |
|
|
Controls the naming of the topic to which heartbeat messages are sent. |
|
An interval in milliseconds that the connector should wait before taking a snapshot after starting up; |
||
|
Specifies the maximum number of rows that should be read in one go from each table while taking a snapshot. The connector will read the table contents in multiple batches of this size. Defaults to 2000. |
|
|
An integer value that specifies the maximum amount of time (in milliseconds) to wait to obtain table locks when performing a snapshot. If table locks cannot be acquired in this time interval, the snapshot will fail (also see snapshots). |
|
Controls which rows from tables are included in snapshot. |
||
v2 |
Schema version for the |
|
|
Whether field names are sanitized to adhere to Avro naming requirements. See Avro naming for more details. |
|
Timezone of the server. This is used to define the timezone of the transaction timestamp (ts_ms) retrieved from the server (which is actually not zoned). Default value is unset. Should only be specified when running on SQL Server 2014 or older and using different timezones for the database server and the JVM running the Debezium connector. |
||
|
When set to See Transaction Metadata for additional details. |
The connector also supports pass-through configuration properties that are used when creating the Kafka producer and consumer. Specifically, all connector configuration properties that begin with the database.history.producer.
prefix are used (without the prefix) when creating the Kafka producer that writes to the database history, and all those that begin with the prefix database.history.consumer.
are used (without the prefix) when creating the Kafka consumer that reads the database history upon connector startup.
For example, the following connector configuration properties can be used to secure connections to the Kafka broker:
In addition to the pass-through to the Kafka producer and consumer, the properties starting with database.
, e.g. database.applicationName=debezium
are passed to the JDBC URL.
database.history.producer.security.protocol=SSL
database.history.producer.ssl.keystore.location=/var/private/ssl/kafka.server.keystore.jks
database.history.producer.ssl.keystore.password=test1234
database.history.producer.ssl.truststore.location=/var/private/ssl/kafka.server.truststore.jks
database.history.producer.ssl.truststore.password=test1234
database.history.producer.ssl.key.password=test1234
database.history.consumer.security.protocol=SSL
database.history.consumer.ssl.keystore.location=/var/private/ssl/kafka.server.keystore.jks
database.history.consumer.ssl.keystore.password=test1234
database.history.consumer.ssl.truststore.location=/var/private/ssl/kafka.server.truststore.jks
database.history.consumer.ssl.truststore.password=test1234
database.history.consumer.ssl.key.password=test1234
Be sure to consult the Kafka documentation for all of the configuration properties for Kafka producers and consumers. (The SQL Server connector does use the new consumer.)