Debezium Connector for Oracle

Debezium’s Oracle Connector can monitor and record all of the row-level changes in the databases on an Oracle server. This connector is at an early stage of development and is considered an incubating feature as of Debezium 0.8. It is not feature-complete, and the structure of emitted CDC messages may change in future revisions. Most notably, the connector does not yet support changes to the structure of captured tables (e.g. ALTER TABLE …) after the initial snapshot has been completed (see DBZ-718, scheduled for one of the upcoming 0.9.x releases). Capturing tables that are newly added while the connector is running is supported, though, provided the new table’s name matches the connector’s filter configuration.

Overview

As of Debezium 0.8, change events from Oracle are ingested using the XStream API. In order to use this API and hence this connector, you need to have a license for the GoldenGate product (though it’s not required that GoldenGate itself is installed). We are going to explore alternatives to XStream in future Debezium 0.9.x releases, e.g. based on LogMiner or other solutions. Please track the DBZ-137 JIRA issue and join the discussion if you are aware of other potential ways for ingesting change events from Oracle.

Setting up Oracle

The following steps need to be performed in order to prepare the database so the Debezium connector can be used. This assumes the multi-tenancy configuration (with a container database and at least one pluggable database); if you’re not using this model, adjust the steps accordingly.

You can find a template for setting up Oracle in a virtual machine (via Vagrant) in the oracle-vagrant-box/ repository.

Preparing the Database

Enable GoldenGate replication and archive log mode:

ORACLE_SID=ORCLCDB dbz_oracle sqlplus /nolog

CONNECT sys/top_secret AS SYSDBA
alter system set db_recovery_file_dest_size = 5G;
alter system set db_recovery_file_dest = '/opt/oracle/oradata/recovery_area' scope=spfile;
alter system set enable_goldengate_replication=true;
shutdown immediate
startup mount
alter database archivelog;
alter database open;
-- Should show "Database log mode: Archive Mode"
archive log list

exit;

Furthermore, in order to capture the before state of changed rows, supplemental logging must be enabled, either for the captured tables specifically or for the database in general. For example, for a specific table:

ALTER TABLE inventory.customers ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;
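
Alternatively, supplemental logging can be enabled database-wide. A sketch of the equivalent database-wide statement (whether table-level or database-wide logging is preferable depends on your environment and the volume of redo generated):

ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;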

Creating an XStream Admin User and a User For the Connector

Create an XStream admin user in the container database (used per Oracle’s recommendation for administering XStream):

sqlplus sys/top_secret@//localhost:1521/ORCLCDB as sysdba
CREATE TABLESPACE xstream_adm_tbs DATAFILE '/opt/oracle/oradata/ORCLCDB/xstream_adm_tbs.dbf'
  SIZE 25M REUSE AUTOEXTEND ON MAXSIZE UNLIMITED;
exit;
sqlplus sys/top_secret@//localhost:1521/ORCLPDB1 as sysdba
CREATE TABLESPACE xstream_adm_tbs DATAFILE '/opt/oracle/oradata/ORCLCDB/ORCLPDB1/xstream_adm_tbs.dbf'
  SIZE 25M REUSE AUTOEXTEND ON MAXSIZE UNLIMITED;
exit;
sqlplus sys/top_secret@//localhost:1521/ORCLCDB as sysdba

CREATE USER c##xstrmadmin IDENTIFIED BY xsa
  DEFAULT TABLESPACE xstream_adm_tbs
  QUOTA UNLIMITED ON xstream_adm_tbs
  CONTAINER=ALL;

GRANT CREATE SESSION, SET CONTAINER TO c##xstrmadmin CONTAINER=ALL;

BEGIN
   DBMS_XSTREAM_AUTH.GRANT_ADMIN_PRIVILEGE(
      grantee                 => 'c##xstrmadmin',
      privilege_type          => 'CAPTURE',
      grant_select_privileges => TRUE,
      container               => 'ALL'
   );
END;
/

exit;

Create the XStream user (used by the Debezium connector to connect to the XStream outbound server):

sqlplus sys/top_secret@//localhost:1521/ORCLCDB as sysdba
CREATE TABLESPACE xstream_tbs DATAFILE '/opt/oracle/oradata/ORCLCDB/xstream_tbs.dbf'
  SIZE 25M REUSE AUTOEXTEND ON MAXSIZE UNLIMITED;
exit;
sqlplus sys/top_secret@//localhost:1521/ORCLPDB1 as sysdba
CREATE TABLESPACE xstream_tbs DATAFILE '/opt/oracle/oradata/ORCLCDB/ORCLPDB1/xstream_tbs.dbf'
  SIZE 25M REUSE AUTOEXTEND ON MAXSIZE UNLIMITED;
exit;
sqlplus sys/top_secret@//localhost:1521/ORCLCDB as sysdba

CREATE USER c##xstrm IDENTIFIED BY xs
  DEFAULT TABLESPACE xstream_tbs
  QUOTA UNLIMITED ON xstream_tbs
  CONTAINER=ALL;

GRANT CREATE SESSION TO c##xstrm CONTAINER=ALL;
GRANT SET CONTAINER TO c##xstrm CONTAINER=ALL;
GRANT SELECT ON V_$DATABASE to c##xstrm CONTAINER=ALL;
GRANT FLASHBACK ANY TABLE TO c##xstrm CONTAINER=ALL;

exit;

Create an XStream Outbound Server

Create an XStream Outbound server (given the right privileges, this may be done automatically by the connector going forward, see DBZ-721):

sqlplus c##xstrmadmin/xsa@//localhost:1521/ORCLCDB

DECLARE
  tables  DBMS_UTILITY.UNCL_ARRAY;
  schemas DBMS_UTILITY.UNCL_ARRAY;
BEGIN
    tables(1)  := NULL;
    schemas(1) := 'debezium';
  DBMS_XSTREAM_ADM.CREATE_OUTBOUND(
    server_name     =>  'dbzxout',
    table_names     =>  tables,
    schema_names    =>  schemas);
END;
/

exit;

Alter the XStream outbound server to allow the c##xstrm user to connect to it:

sqlplus sys/top_secret@//localhost:1521/ORCLCDB as sysdba

BEGIN
  DBMS_XSTREAM_ADM.ALTER_OUTBOUND(
    server_name  => 'dbzxout',
    connect_user => 'c##xstrm');
END;
/

exit;
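
Optionally, you can verify the outbound server and its connect user; a sketch, assuming the DBA_XSTREAM_OUTBOUND view is accessible to the connecting user:

sqlplus sys/top_secret@//localhost:1521/ORCLCDB as sysdba

-- Should list 'DBZXOUT' with connect user 'C##XSTRM'
SELECT server_name, connect_user FROM dba_xstream_outbound;

exit;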

Note that a given outbound server must not be used by multiple connector instances at the same time. If you wish to set up multiple instances of the Debezium Oracle connector, a dedicated XStream outbound server is needed for each of them.

Supported Configurations

So far, the connector has been tested with the pluggable database set-up (CDB/PDB model). In this model, it monitors a single PDB. It should also work with traditional (non-CDB) set-ups, though this has not been tested yet.

How the Oracle Connector Works

Database Schema History

tbd.

Snapshots

Most Oracle servers are configured to not retain the complete history of the database in the redo logs, so the Debezium Oracle connector would be unable to see the entire history of the database by simply reading the logs. Therefore, by default (snapshotting mode initial), the connector will, upon first start-up, perform an initial consistent snapshot of the database, i.e. of the structure and data within any tables to be captured as per the connector’s filter configuration.

Each snapshot consists of the following steps:

  1. Determine the tables to be captured

  2. Obtain an IN EXCLUSIVE MODE lock on each of the monitored tables to ensure that no structural changes can occur to any of the tables.

  3. Read the current SCN ("system change number") position in the server’s redo log.

  4. Capture the structure of all relevant tables.

  5. Release the locks obtained in step 2, i.e. the locks are held only for a short period of time.

  6. Scan all of the relevant database tables and schemas as valid at the SCN position read in step 3 (SELECT * FROM … AS OF SCN 123), and generate a READ event for each row and write that event to the appropriate table-specific Kafka topic.

  7. Record the successful completion of the snapshot in the connector offsets.

If the connector fails, is rebalanced, or stops after step 1 begins but before step 7 completes, upon restart the connector will begin a new snapshot. Once the Oracle connector does complete its initial snapshot, it continues streaming from the position read during step 3, ensuring that it does not miss any updates that occurred while the snapshot was taken. If the connector stops again for any reason, upon restart it will simply continue streaming changes from where it previously left off.

A second snapshotting mode is initial_schema_only. In this case, step 6 of the snapshotting routine described above is skipped, i.e. the connector will still capture the structure of the relevant tables, but it won’t create any READ events representing the complete dataset at the point of connector start-up. This can be useful if you’re only interested in data changes from now onwards, but not in the complete current state of all records.
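
For example, this mode could be selected by setting the snapshot.mode property in the connector configuration (a sketch; see Example Configuration below for the remaining properties):

{
    "name": "inventory-connector",
    "config": {
        "snapshot.mode" : "initial_schema_only",
        ...
    }
}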

Reading the Redo Log

Upon first start-up, the connector takes a snapshot of the structure of the captured tables (DDL) and persists this information in its internal database history topic. It then proceeds to listen for change events right from the SCN at which the schema structure was captured. Processed SCNs are passed as offsets to Kafka Connect and regularly acknowledged with the database server (allowing it to discard older log files). After restart, the connector will resume from the offset (SCN) where it left off before.

Topic Names

The connector writes the change events for all insert, update, and delete operations on a single captured table to a single Kafka topic. The topic name is derived from the logical server name given in database.server.name, followed by the schema name and the table name of the captured table.

Schema Change Topic

The user-facing schema change topic is not implemented yet (see DBZ-753).

Events

All data change events produced by the Oracle connector have a key and a value, although the structure of the key and value depend on the table from which the change events originated (see Topic names).

The Debezium Oracle connector ensures that all Kafka Connect schema names are valid Avro schema names. This means that the logical server name must start with a Latin letter or an underscore (i.e. [a-z,A-Z,_]), and the remaining characters in the logical server name as well as all characters in the schema and table names must be Latin letters, digits, or underscores (i.e. [a-z,A-Z,0-9,_]). Any invalid characters will automatically be replaced with an underscore character.

This can lead to unexpected conflicts when the logical server name, schema names, and table names contain other characters, and the only distinguishing characters between table full names are invalid and thus replaced with underscores.
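
For example (hypothetical names), the tables DEBEZIUM.MY-TABLE and DEBEZIUM.MY_TABLE captured under the logical server name server1 would both be mapped to server1.DEBEZIUM.MY_TABLE, since the dash is replaced with an underscore.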

Debezium and Kafka Connect are designed around continuous streams of event messages, and the structure of these events may change over time. This could be difficult for consumers to deal with, so to make it easy Kafka Connect makes each event self-contained. Every message key and value has two parts: a schema and payload. The schema describes the structure of the payload, while the payload contains the actual data.

Change Event Keys

For a given table, the change event’s key will have a structure that contains a field for each column in the primary key (or unique key constraint) of the table at the time the event was created.

Consider a customers table defined in the inventory database schema:

CREATE TABLE customers (
  id NUMBER(9) GENERATED BY DEFAULT ON NULL AS IDENTITY (START WITH 1001) NOT NULL PRIMARY KEY,
  first_name VARCHAR2(255) NOT NULL,
  last_name VARCHAR2(255) NOT NULL,
  email VARCHAR2(255) NOT NULL UNIQUE
);

If the database.server.name configuration property has the value server1, every change event for the customers table while it has this definition will feature the same key structure, which in JSON looks like this:

{
    "schema": {
        "type": "struct",
        "fields": [
            {
                "type": "int32",
                "optional": false,
                "field": "ID"
            }
        ],
        "optional": false,
        "name": "server1.INVENTORY.CUSTOMERS.Key"
    },
    "payload": {
        "ID": 1004
    }
}

The schema portion of the key contains a Kafka Connect schema describing what is in the key portion, and in our case that means that the payload value is not optional, is a structure defined by the key schema shown above (server1.INVENTORY.CUSTOMERS.Key in this example), and has one required field named ID of type int32. If we look at the value of the key’s payload field, we’ll see that it is indeed a structure (which in JSON is just an object) with a single ID field, whose value is 1004.

Therefore, we interpret this key as describing the row in the inventory.customers table (output from the connector named server1) whose id primary key column had a value of 1004.

Change Event Values

Like the message key, the value of a change event message has a schema section and payload section. The payload section of every change event value produced by the Oracle connector has an envelope structure with the following fields:

  • op is a mandatory field that contains a string value describing the type of operation. Values for the Oracle connector are c for create (or insert), u for update, d for delete, and r for read (in the case of a snapshot).

  • before is an optional field that if present contains the state of the row before the event occurred. The structure will be described by the server1.INVENTORY.CUSTOMERS.Value Kafka Connect schema, which the server1 connector uses for all rows in the inventory.customers table.

Whether or not this field and its elements are available is highly dependent on the Supplemental Logging configuration applying to the table.

  • after is an optional field that if present contains the state of the row after the event occurred. The structure is described by the same server1.INVENTORY.CUSTOMERS.Value Kafka Connect schema used in before.

  • source is a mandatory field that contains a structure describing the source metadata for the event, which in the case of Oracle contains these fields: the Debezium version, the connector name, whether the event is part of an ongoing snapshot or not, the transaction id (not while snapshotting), the SCN of the change, and a timestamp representing the point in time when the record was changed in the source database (during snapshotting, it’ll be the point in time of snapshotting)

  • ts_ms is optional and if present contains the time (using the system clock in the JVM running the Kafka Connect task) at which the connector processed the event.

And of course, the schema portion of the event message’s value contains a schema that describes this envelope structure and the nested fields within it.

Create events

Let’s look at what a create event value might look like for our customers table:

{
    "schema": {
        "type": "struct",
        "fields": [
            {
                "type": "struct",
                "fields": [
                    {
                        "type": "int32",
                        "optional": false,
                        "field": "ID"
                    },
                    {
                        "type": "string",
                        "optional": false,
                        "field": "FIRST_NAME"
                    },
                    {
                        "type": "string",
                        "optional": false,
                        "field": "LAST_NAME"
                    },
                    {
                        "type": "string",
                        "optional": false,
                        "field": "EMAIL"
                    }
                ],
                "optional": true,
                "name": "server1.DEBEZIUM.CUSTOMERS.Value",
                "field": "before"
            },
            {
                "type": "struct",
                "fields": [
                    {
                        "type": "int32",
                        "optional": false,
                        "field": "ID"
                    },
                    {
                        "type": "string",
                        "optional": false,
                        "field": "FIRST_NAME"
                    },
                    {
                        "type": "string",
                        "optional": false,
                        "field": "LAST_NAME"
                    },
                    {
                        "type": "string",
                        "optional": false,
                        "field": "EMAIL"
                    }
                ],
                "optional": true,
                "name": "server1.DEBEZIUM.CUSTOMERS.Value",
                "field": "after"
            },
            {
                "type": "struct",
                "fields": [
                    {
                        "type": "string",
                        "optional": true,
                        "field": "version"
                    },
                    {
                        "type": "string",
                        "optional": false,
                        "field": "name"
                    },
                    {
                        "type": "int64",
                        "optional": true,
                        "field": "ts_ms"
                    },
                    {
                        "type": "string",
                        "optional": true,
                        "field": "txId"
                    },
                    {
                        "type": "int64",
                        "optional": true,
                        "field": "scn"
                    },
                    {
                        "type": "boolean",
                        "optional": true,
                        "field": "snapshot"
                    }
                ],
                "optional": false,
                "name": "io.debezium.connector.oracle.Source",
                "field": "source"
            },
            {
                "type": "string",
                "optional": false,
                "field": "op"
            },
            {
                "type": "int64",
                "optional": true,
                "field": "ts_ms"
            }
        ],
        "optional": false,
        "name": "server1.DEBEZIUM.CUSTOMERS.Envelope"
    },
    "payload": {
        "before": null,
        "after": {
            "ID": 1004,
            "FIRST_NAME": "Anne",
            "LAST_NAME": "Kretchmar",
            "EMAIL": "annek@noanswer.org"
        },
        "source": {
            "version": "0.9.0.Alpha1",
            "name": "server1",
            "ts_ms": 1520085154000,
            "txId": "6.28.807",
            "scn": 2122185,
            "snapshot": false
        },
        "op": "c",
        "ts_ms": 1532592105975
    }
}

If we look at the schema portion of this event’s value, we can see the schema for the envelope, the schema for the source structure (which is specific to the Oracle connector and reused across all events), and the table-specific schemas for the before and after fields.

The names of the schemas for the before and after fields are of the form logicalName.schemaName.tableName.Value, and thus are entirely independent from all other schemas for all other tables. This means that when using the Avro Converter, the resulting Avro schemas for each table in each logical source have their own evolution and history.

If we look at the payload portion of this event’s value, we can see the information in the event, namely that it is describing that the row was created (since op=c), and that the after field value contains the values of the newly inserted row’s ID, FIRST_NAME, LAST_NAME, and EMAIL columns.

It may appear that the JSON representations of the events are much larger than the rows they describe. This is true, because the JSON representation must include the schema and the payload portions of the message. It is possible and even recommended to use the Avro Converter to dramatically decrease the size of the actual messages written to the Kafka topics.
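
As a sketch, assuming the Confluent Avro converter and a Schema Registry reachable at a hypothetical address, the converters could be configured in the Kafka Connect worker configuration like so:

key.converter=io.confluent.connect.avro.AvroConverter
key.converter.schema.registry.url=http://schema-registry:8081
value.converter=io.confluent.connect.avro.AvroConverter
value.converter.schema.registry.url=http://schema-registry:8081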

Update events

The value of an update change event on this table will actually have the exact same schema, and its payload will be structured the same but will hold different values. Here’s an example:

{
    "schema": { ... },
    "payload": {
        "before": {
            "ID": 1004,
            "FIRST_NAME": "Anne",
            "LAST_NAME": "Kretchmar",
            "EMAIL": "annek@noanswer.org"
        },
        "after": {
            "ID": 1004,
            "FIRST_NAME": "Anne",
            "LAST_NAME": "Kretchmar",
            "EMAIL": "anne@example.com"
        },
        "source": {
            "version": "0.9.0.Alpha1",
            "name": "server1",
            "ts_ms": 1520085811000,
            "txId": "6.9.809",
            "scn": 2125544,
            "snapshot": false
        },
        "op": "u",
        "ts_ms": 1532592713485
    }
}

When we compare this to the value in the insert event, we see a couple of differences in the payload section:

  • The op field value is now u, signifying that this row changed because of an update

  • The before field now has the state of the row with the values before the database commit

  • The after field now has the updated state of the row, and here we can see that the EMAIL value is now anne@example.com.

  • The source field structure has the same fields as before, but the values are different since this event is from a different position in the redo log.

  • The ts_ms field shows the timestamp at which Debezium processed this event.

There are several things we can learn by just looking at this payload section. We can compare the before and after structures to determine what actually changed in this row because of the commit. The source structure tells us information about Oracle’s record of this change (providing traceability), but more importantly this has information we can compare to other events in this and other topics to know whether this event occurred before, after, or as part of the same Oracle commit as other events.

When the columns for a row’s primary/unique key are updated, the value of the row’s key has changed so Debezium will output three events: a DELETE event and a tombstone event with the old key for the row, followed by an INSERT event with the new key for the row.
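
For example (hypothetical values, continuing the customers example above), changing a customer’s ID from 1004 to 2004 would result in the following sequence of records on the topic:

  1. a delete event with key ID = 1004 and op = d

  2. a tombstone with key ID = 1004 and a null value

  3. a create event with key ID = 2004 and op = c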

Delete events

So far we’ve seen samples of create and update events. Now, let’s look at the value of a delete event for the same table. Once again, the schema portion of the value will be exactly the same as with the create and update events:

{
    "schema": { ... },
    "payload": {
        "before": {
            "ID": 1004,
            "FIRST_NAME": "Anne",
            "LAST_NAME": "Kretchmar",
            "EMAIL": "anne@example.com"
        },
        "after": null,
        "source": {
            "version": "0.9.0.Alpha1",
            "name": "server1",
            "ts_ms": 1520085153000,
            "txId": "6.28.807",
            "scn": 2122184,
            "snapshot": false
        },
        "op": "d",
        "ts_ms": 1532592105960
    }
}

If we look at the payload portion, we see a number of differences compared with the create or update event payloads:

  • The op field value is now d, signifying that this row was deleted

  • The before field now has the state of the row that was deleted with the database commit.

  • The after field is null, signifying that the row no longer exists

  • The source field structure has many of the same values as before, except the ts_ms, scn and txId fields have changed

  • The ts_ms field shows the timestamp at which Debezium processed this event.

This event gives a consumer all kinds of information that it can use to process the removal of this row.

The Oracle connector’s events are designed to work with Kafka log compaction, which allows for the removal of some older messages as long as at least the most recent message for every key is kept. This allows Kafka to reclaim storage space while ensuring the topic contains a complete dataset and can be used for reloading key-based state.

When a row is deleted, the delete event value listed above still works with log compaction, since Kafka can still remove all earlier messages with that same key. But only if the message value is null will Kafka know that it can remove all messages with that same key. To make this possible, Debezium’s Oracle connector always follows the delete event with a special tombstone event that has the same key but null value.
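
As a sketch (continuing the hypothetical example above), the tombstone following the delete event for ID 1004 is a Kafka record whose key is the usual key structure and whose value is literally null:

key:   { "schema": { ... }, "payload": { "ID": 1004 } }
value: null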

Data Types

As described above, the Debezium Oracle connector represents the changes to rows with events that are structured like the table in which the row exists. The event contains a field for each column value, and how that value is represented in the event depends on the Oracle data type of the column. This section describes the mapping from Oracle’s data types to a literal type and a semantic type within the events' fields.

Here, the literal type describes how the value is literally represented using Kafka Connect schema types, namely INT8, INT16, INT32, INT64, FLOAT32, FLOAT64, BOOLEAN, STRING, BYTES, ARRAY, MAP, and STRUCT.

The semantic type describes how the Kafka Connect schema captures the meaning of the field using the name of the Kafka Connect schema for the field.

Support for further data types will be added in subsequent releases. Please file a JIRA issue for any specific types you are missing.

Character Values

Oracle Data Type Literal type (schema type) Semantic type (schema name) Notes

CHAR[(M)]

STRING

n/a

NCHAR[(M)]

STRING

n/a

VARCHAR[(M)]

STRING

n/a

VARCHAR2[(M)]

STRING

n/a

NVARCHAR2[(M)]

STRING

n/a

Numeric Values

Oracle Data Type Literal type (schema type) Semantic type (schema name) Notes

NUMBER[(P[, *])]

STRUCT

io.debezium.data.VariableScaleDecimal

Contains a structure with two fields: scale of type INT32 that contains the scale of the transferred value and value of type BYTES containing the original value in an unscaled form.

NUMBER(P, S > 0)

BYTES

org.apache.kafka.connect.data.Decimal

NUMBER(P, S ⇐ 0)

INT8 / INT16 / INT32 / INT64

n/a

NUMBER columns with a scale of 0 represent integer numbers; a negative scale indicates rounding in Oracle, e.g. a scale of -2 will cause rounding to hundreds.
Depending on the precision and scale, a matching Kafka Connect integer type will be chosen: INT8 if P - S < 3, INT16 if P - S < 5, INT32 if P - S < 10 and INT64 if P - S < 19.
If P - S >= 19, the column will be mapped to BYTES (org.apache.kafka.connect.data.Decimal).

SMALLINT

BYTES

org.apache.kafka.connect.data.Decimal

SMALLINT is mapped in Oracle to NUMBER(38,0) and hence can hold values larger than any of the INT types could store

INTEGER, INT

BYTES

org.apache.kafka.connect.data.Decimal

INTEGER is mapped in Oracle to NUMBER(38,0) and hence can hold values larger than any of the INT types could store

NUMERIC[(P, S)]

BYTES / INT8 / INT16 / INT32 / INT64

org.apache.kafka.connect.data.Decimal if using BYTES

Handled equivalently to NUMBER (note that S defaults to 0 for NUMERIC).

DECIMAL[(P, S)]

BYTES / INT8 / INT16 / INT32 / INT64

org.apache.kafka.connect.data.Decimal if using BYTES

Handled equivalently to NUMBER (note that S defaults to 0 for DECIMAL).

BINARY_FLOAT

FLOAT32

n/a

BINARY_DOUBLE

FLOAT64

n/a

FLOAT[(P)]

STRUCT

io.debezium.data.VariableScaleDecimal

Contains a structure with two fields: scale of type INT32 that contains the scale of the transferred value and value of type BYTES containing the original value in an unscaled form.

DOUBLE PRECISION

STRUCT

io.debezium.data.VariableScaleDecimal

Contains a structure with two fields: scale of type INT32 that contains the scale of the transferred value and value of type BYTES containing the original value in an unscaled form.

REAL

STRUCT

io.debezium.data.VariableScaleDecimal

Contains a structure with two fields: scale of type INT32 that contains the scale of the transferred value and value of type BYTES containing the original value in an unscaled form.

Temporal Values

Oracle Data Type Literal type (schema type) Semantic type (schema name) Notes

DATE

INT64

io.debezium.time.Timestamp

Represents the number of milliseconds past epoch, and does not include timezone information.

TIMESTAMP(0 - 3)

INT64

io.debezium.time.Timestamp

Represents the number of milliseconds past epoch, and does not include timezone information.

TIMESTAMP, TIMESTAMP(4 - 6)

INT64

io.debezium.time.MicroTimestamp

Represents the number of microseconds past epoch, and does not include timezone information.

TIMESTAMP(7 - 9)

INT64

io.debezium.time.NanoTimestamp

Represents the number of nanoseconds past epoch, and does not include timezone information.

TIMESTAMP WITH TIME ZONE

STRING

io.debezium.time.ZonedTimestamp

A string representation of a timestamp with timezone information

INTERVAL

FLOAT64

io.debezium.time.MicroDuration

The number of microseconds for a time interval, using 365.25 / 12.0 as the average number of days per month.

Deploying a Connector

Due to licensing requirements, the Debezium Oracle Connector does not ship with the Oracle JDBC driver and the XStream API JAR. You can obtain them for free by downloading the Oracle Instant Client.

Extract the archive into a directory, e.g. /path/to/instant_client/. Copy the files ojdbc8.jar and xstreams.jar from the Instant Client into Kafka’s libs directory. Create the environment variable LD_LIBRARY_PATH, pointing to the Instant Client directory:

export LD_LIBRARY_PATH=/path/to/instant_client/
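
For example, the copy step could look like this, assuming a Kafka installation located at a hypothetical $KAFKA_HOME:

cp /path/to/instant_client/ojdbc8.jar /path/to/instant_client/xstreams.jar $KAFKA_HOME/libs/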

Example Configuration

The following shows an example JSON request for registering an instance of the Debezium Oracle connector:

{
    "name": "inventory-connector",
    "config": {
        "connector.class" : "io.debezium.connector.oracle.OracleConnector",
        "tasks.max" : "1",
        "database.server.name" : "server1",
        "database.hostname" : "<oracle ip>",
        "database.port" : "1521",
        "database.user" : "c##xstrm",
        "database.password" : "xsa",
        "database.dbname" : "ORCLCDB",
        "database.pdb.name" : "ORCLPDB1",
        "database.out.server.name" : "dbzxout",
        "database.history.kafka.bootstrap.servers" : "kafka:9092",
        "database.history.kafka.topic": "schema-changes.inventory"
    }
}
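
Assuming the Kafka Connect REST API listens on localhost:8083 and the JSON above has been saved to a file named register-oracle.json (both assumptions), the connector could then be registered like this:

curl -i -X POST -H "Accept:application/json" -H "Content-Type:application/json" \
  http://localhost:8083/connectors/ -d @register-oracle.json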

Monitoring

Kafka, Zookeeper, and Kafka Connect all have built-in support for JMX metrics. The Oracle connector also publishes a number of metrics about the connector’s activities that can be monitored through JMX. The connector has two types of metrics. Snapshot metrics help you monitor the snapshot activity and are available when the connector is performing a snapshot. Streaming metrics help you monitor the progress and activity while the connector reads XStream events.

Snapshot Metrics

MBean: debezium.oracle:type=connector-metrics,context=snapshot,server=<database.server.name>
Attribute Name Type Description

LastEvent

string

The last snapshot event that the connector has read.

MilliSecondsSinceLastEvent

long

The number of milliseconds since the connector has read and processed the most recent event.

TotalNumberOfEventsSeen

long

The total number of events that this connector has seen since last started or reset.

NumberOfEventsFiltered

long

The number of events that have been filtered by whitelist or blacklist filtering rules configured on the connector.

MonitoredTables

string[]

The list of tables that are monitored by the connector.

QueueTotalCapacity

int

The length of the queue used to pass events between the snapshotter and the main Kafka Connect loop.

QueueRemainingCapacity

int

The free capacity of the queue used to pass events between the snapshotter and the main Kafka Connect loop.

TotalTableCount

int

The total number of tables that are being included in the snapshot.

RemainingTableCount

int

The number of tables that the snapshot has yet to copy.

SnapshotRunning

boolean

Whether the snapshot was started.

SnapshotAborted

boolean

Whether the snapshot was aborted.

SnapshotCompleted

boolean

Whether the snapshot completed.

SnapshotDurationInSeconds

long

The total number of seconds that the snapshot has taken so far, even if not complete.

RowsScanned

Map<String, Long>

Map containing the number of rows scanned for each table in the snapshot. Tables are incrementally added to the Map during processing. Updates every 10,000 rows scanned and upon completing a table.

Streaming Metrics

MBean: debezium.oracle:type=connector-metrics,context=streaming,server=<database.server.name>
Attribute Name Type Description

LastEvent

string

The last streaming event that the connector has read.

MilliSecondsSinceLastEvent

long

The number of milliseconds since the connector has read and processed the most recent event.

TotalNumberOfEventsSeen

long

The total number of events that this connector has seen since last started or reset.

NumberOfEventsFiltered

long

The number of events that have been filtered by whitelist or blacklist filtering rules configured on the connector.

MonitoredTables

string[]

The list of tables that are monitored by the connector.

QueueTotalCapacity

int

The length of the queue used to pass events between the streamer and the main Kafka Connect loop.

QueueRemainingCapacity

int

The free capacity of the queue used to pass events between the streamer and the main Kafka Connect loop.

Connected

boolean

Flag that denotes whether the connector is currently connected to the database server.

MilliSecondsBehindSource

long

The number of milliseconds between the last change event’s timestamp and the connector processing it. The values will incorporate any differences between the clocks on the machines where the database server and the Debezium connector are running.

NumberOfCommittedTransactions

long

The number of processed transactions that were committed.

SourceEventPosition

map<string, string>

The coordinates of the last received event.

LastTransactionId

string

Transaction identifier of the last processed transaction.

Schema History Metrics

MBean: debezium.oracle:type=connector-metrics,context=schema-history,server=<database.server.name>
Attribute Name Type Description

Status

string

One of STOPPED, RECOVERING (recovering history from the storage), or RUNNING, describing the state of the database history.

RecoveryStartTime

long

The time in epoch seconds at which recovery has started.

ChangesRecovered

long

The number of changes that were read during the recovery phase.

ChangesApplied

long

The total number of schema changes applied during recovery and runtime.

MilliSecondsSinceLastRecoveredChange

long

The number of milliseconds that elapsed since the last change was recovered from the history store.

MilliSecondsSinceLastAppliedChange

long

The number of milliseconds that elapsed since the last change was applied.

LastRecoveredChange

string

The string representation of the last change recovered from the history store.

LastAppliedChange

string

The string representation of the last applied change.

Connector Properties

The following configuration properties are required unless a default value is available.

Property Default Description

name

Unique name for the connector. Attempting to register again with the same name will fail. (This property is required by all Kafka Connect connectors.)

connector.class

The name of the Java class for the connector. Always use a value of io.debezium.connector.oracle.OracleConnector for the Oracle connector.

tasks.max

1

The maximum number of tasks that should be created for this connector. The Oracle connector always uses a single task and therefore does not use this value, so the default is always acceptable.

database.hostname

IP address or hostname of the Oracle database server.

database.port

Integer port number of the Oracle database server.

database.user

Name of the user to use when connecting to the Oracle database server.

database.password

Password to use when connecting to the Oracle database server.

database.dbname

Name of the database to connect to. Must be the CDB name when working with the CDB + PDB model.

database.pdb.name

Name of the PDB to connect to, when working with the CDB + PDB model.

database.out.server.name

Name of the XStream outbound server configured in the database.

database.server.name

Logical name that identifies and provides a namespace for the particular Oracle database server being monitored. The logical name should be unique across all other connectors, since it is used as a prefix for all Kafka topic names emanating from this connector.

database.history.kafka.topic

The full name of the Kafka topic where the connector will store the database schema history.

database.history.kafka.bootstrap.servers

A list of host/port pairs that the connector will use for establishing an initial connection to the Kafka cluster. This connection will be used for retrieving database schema history previously stored by the connector, and for writing each DDL statement read from the source database. This should point to the same Kafka cluster used by the Kafka Connect process.

snapshot.mode

initial

A mode for taking an initial snapshot of the structure and optionally data of captured tables. Supported values are initial (will take a snapshot of structure and data of captured tables; useful if topics should be populated with a complete representation of the data from the captured tables) and initial_schema_only (will take a snapshot of the structure of captured tables only; useful if only changes happening from now onwards should be propagated to topics). Once the snapshot is complete, the connector will continue reading change events from the database’s redo logs.

table.whitelist

empty string

An optional comma-separated list of regular expressions that match fully-qualified table identifiers for tables to be monitored; any table not included in the whitelist will be excluded from monitoring. Each identifier is of the form databaseName.tableName. By default the connector will monitor every non-system table in each monitored database. May not be used with table.blacklist.

table.blacklist

empty string

An optional comma-separated list of regular expressions that match fully-qualified table identifiers for tables to be excluded from monitoring; any table not included in the blacklist will be monitored. Each identifier is of the form databaseName.tableName. May not be used with table.whitelist.

max.queue.size

8192

Positive integer value that specifies the maximum size of the blocking queue into which change events read from the database log are placed before they are written to Kafka. This queue can provide backpressure to the log reader when, for example, writes to Kafka are slower or if Kafka is not available. Events that appear in the queue are not included in the offsets periodically recorded by this connector. Defaults to 8192, and should always be larger than the maximum batch size specified in the max.batch.size property.

max.batch.size

2048

Positive integer value that specifies the maximum size of each batch of events that should be processed during each iteration of this connector. Defaults to 2048.

poll.interval.ms

1000

Positive integer value that specifies the number of milliseconds the connector should wait during each iteration for new change events to appear. Defaults to 1000 milliseconds, or 1 second.

tombstones.on.delete

true

Controls whether a tombstone event should be generated after a delete event.
When true the delete operations are represented by a delete event and a subsequent tombstone event. When false only a delete event is sent.
Emitting the tombstone event (the default behavior) allows Kafka to completely delete all events pertaining to the given key once the source record has been deleted.

message.key.columns

empty string

A semicolon-separated list of regular expressions that match fully-qualified tables and the columns to use as the message key.
Each item must be of the form <fully-qualified table>:<comma-separated list of columns>, where the table part is a regular expression and the listed columns make up the custom key.
Fully-qualified tables can be specified as DB_NAME.TABLE_NAME or SCHEMA_NAME.TABLE_NAME, depending on the specific connector.

column.propagate.source.type

n/a

An optional comma-separated list of regular expressions that match the fully-qualified names of columns whose original type and length should be added as a parameter to the corresponding field schemas in the emitted change messages. The schema parameters __debezium.source.column.type, __debezium.source.column.length and __debezium.source.column.scale will be used to propagate the original type name, length and scale (for variable-width types), respectively. Useful to properly size corresponding columns in sink databases. Fully-qualified names for columns are of the form databaseName.tableName.columnName, or databaseName.schemaName.tableName.columnName.

heartbeat.interval.ms

0

Controls how frequently heartbeat messages are sent.
This property contains an interval in milliseconds that defines how frequently the connector sends messages into a heartbeat topic. This can be used to monitor whether the connector is still receiving change events from the database. You should also leverage heartbeat messages in cases where only records in non-captured tables are changed for a longer period of time. In such a situation the connector would proceed to read the log from the database but never emit any change messages into Kafka, which in turn means that no offset updates will be committed to Kafka. This will cause the redo log files to be retained by the database longer than needed (as the connector has actually processed them already but never got a chance to flush the latest retrieved SCN to the database) and also may result in more change events being re-sent after a connector restart. Set this parameter to 0 to not send heartbeat messages at all.
Disabled by default.

heartbeat.topics.prefix

__debezium-heartbeat

Controls the naming of the topic to which heartbeat messages are sent.
The topic is named according to the pattern <heartbeat.topics.prefix>.<server.name>.

snapshot.delay.ms

An interval in milliseconds that the connector should wait before taking a snapshot after starting up.
Can be used to avoid snapshot interruptions when starting multiple connectors in a cluster, which may cause re-balancing of connectors.

snapshot.fetch.size

2000

Specifies the maximum number of rows that should be read in one go from each table while taking a snapshot. The connector will read the table contents in multiple batches of this size. Defaults to 2000.

sanitize.field.names

true when connector configuration explicitly specifies the key.converter or value.converter parameters to use Avro, otherwise defaults to false.

Whether field names will be sanitized to adhere to Avro naming requirements. See Avro naming for more details.