Debezium Connector for Oracle
This connector is currently in an incubating state; exact semantics, configuration options, and so on may change in future revisions based on the feedback we receive. Please let us know if you encounter any problems.
Debezium’s Oracle Connector can monitor and record all of the row-level changes in the databases on an Oracle server.
Most notably, the connector does not yet support changes to the structure of captured tables (e.g. ALTER TABLE…) after the initial snapshot has been completed (see DBZ-718).
Capturing tables that are newly added while the connector is running is supported, however, provided that the new table’s name matches the connector’s filter configuration.
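As an example, a filter that restricts the connector to two tables might look like the following fragment of a connector registration request, using the table.include.list property described under Connector Properties. The schema and table names here are hypothetical, and all other required connection properties are omitted for brevity:
{
    "name": "inventory-connector",
    "config": {
        "connector.class" : "io.debezium.connector.oracle.OracleConnector",
        "database.server.name" : "server1",
        "table.include.list" : "DEBEZIUM.CUSTOMERS,DEBEZIUM.ORDERS"
    }
}
A table created later is captured only if its name matches one of these expressions.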
Overview
Debezium ingests change events from Oracle using either the XStream API or directly via LogMiner. In order to use the XStream API, you need to have a license for the GoldenGate product (though it is not required that GoldenGate itself be installed).
How the Oracle Connector Works
To optimally configure and run a Debezium Oracle connector, it is helpful to understand how the connector performs snapshots, streams change events, determines Kafka topic names, and uses metadata.
Snapshots
Most Oracle servers are configured to not retain the complete history of the database in the redo logs,
so the Debezium Oracle connector would be unable to see the entire history of the database by simply reading the logs.
Consequently, the first time the connector starts, it performs an initial consistent snapshot of the database.
The default behavior for performing a snapshot consists of the following steps. You can change this behavior by setting the snapshot.mode connector configuration property to a value other than initial.
1. Determine the tables to be captured.
2. Obtain an IN EXCLUSIVE MODE lock on each of the monitored tables to ensure that no structural changes can occur to any of the tables.
3. Read the current SCN ("system change number") position in the server’s redo log.
4. Capture the structure of all relevant tables.
5. Release the locks obtained in step 2, i.e. the locks are held only for a short period of time.
6. Scan all of the relevant database tables and schemas as valid at the SCN position read in step 3 (SELECT * FROM … AS OF SCN 123), generate a READ event for each row, and write that event to the appropriate table-specific Kafka topic.
7. Record the successful completion of the snapshot in the connector offsets.
If the connector fails, is rebalanced, or stops after step 1 begins but before step 7 completes, upon restart the connector will begin a new snapshot. After the connector completes its initial snapshot, the Debezium connector continues streaming from the position that it read in step 3. This ensures that the connector does not miss any updates. If the connector stops again for any reason, upon restart, the connector continues streaming changes from where it previously left off.
Setting | Description |
---|---|
initial | The connector performs a database snapshot, after which it transitions to streaming changes. |
schema_only | The connector captures the structure of all relevant tables, performing all of the steps described above, except that it does not create any READ events for the table data. |
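For example, to capture only the structure of the tables during the snapshot and stream data changes from that point onward, the snapshot mode can be overridden when registering the connector. The fragment below is a sketch; the remaining connection properties are omitted:
{
    "name": "inventory-connector",
    "config": {
        "connector.class" : "io.debezium.connector.oracle.OracleConnector",
        "database.server.name" : "server1",
        "snapshot.mode" : "schema_only"
    }
}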
Schema Change Topic
The Debezium Oracle connector stores the history of schema changes in a database history topic.
This topic reflects an internal connector state and you should not use it directly.
Applications that require notifications about schema changes should obtain the information from the public schema change topic.
The connector writes all of these events to a Kafka topic named <serverName>, where serverName is the logical server name that is specified in the database.server.name configuration property.
The schema change topic message format is in an incubating state and may change without notice.
Debezium emits a new message to this topic whenever a new table is streamed from or when the structure of the table is altered (the schema evolution procedure must be followed). The message contains a logical representation of the table schema.
The following is an example of such a message:
{
"schema": {
...
},
"payload": {
"source": {
"version": "1.4.2.Final",
"connector": "oracle",
"name": "server1",
"ts_ms": 1588252618953,
"snapshot": "true",
"db": "ORCLPDB1",
"schema": "DEBEZIUM",
"table": "CUSTOMERS",
"txId" : null,
"scn" : "1513734",
"commit_scn": "1513734",
"lcr_position" : null
},
"databaseName": "ORCLPDB1", (1)
"schemaName": "DEBEZIUM", (1)
"ddl": "CREATE TABLE \"DEBEZIUM\".\"CUSTOMERS\" \n ( \"ID\" NUMBER(9,0) NOT NULL ENABLE, \n \"FIRST_NAME\" VARCHAR2(255), \n \"LAST_NAME\" VARCHAR2(255), \n \"EMAIL\" VARCHAR2(255), \n PRIMARY KEY (\"ID\") ENABLE, \n SUPPLEMENTAL LOG DATA (ALL) COLUMNS\n ) SEGMENT CREATION IMMEDIATE \n PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 \n NOCOMPRESS LOGGING\n STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645\n PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1\n BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)\n TABLESPACE \"USERS\" ", (2)
"tableChanges": [ (3)
{
"type": "CREATE", (4)
"id": "\"ORCLPDB1\".\"DEBEZIUM\".\"CUSTOMERS\"", (5)
"table": { (6)
"defaultCharsetName": null,
"primaryKeyColumnNames": [ (7)
"ID"
],
"columns": [ (8)
{
"name": "ID",
"jdbcType": 2,
"nativeType": null,
"typeName": "NUMBER",
"typeExpression": "NUMBER",
"charsetName": null,
"length": 9,
"scale": 0,
"position": 1,
"optional": false,
"autoIncremented": false,
"generated": false
},
{
"name": "FIRST_NAME",
"jdbcType": 12,
"nativeType": null,
"typeName": "VARCHAR2",
"typeExpression": "VARCHAR2",
"charsetName": null,
"length": 255,
"scale": null,
"position": 2,
"optional": false,
"autoIncremented": false,
"generated": false
},
{
"name": "LAST_NAME",
"jdbcType": 12,
"nativeType": null,
"typeName": "VARCHAR2",
"typeExpression": "VARCHAR2",
"charsetName": null,
"length": 255,
"scale": null,
"position": 3,
"optional": false,
"autoIncremented": false,
"generated": false
},
{
"name": "EMAIL",
"jdbcType": 12,
"nativeType": null,
"typeName": "VARCHAR2",
"typeExpression": "VARCHAR2",
"charsetName": null,
"length": 255,
"scale": null,
"position": 4,
"optional": false,
"autoIncremented": false,
"generated": false
}
]
}
}
]
}
}
Item | Field name | Description |
---|---|---|
1 | databaseName, schemaName | Identifies the database and the schema that contain the change. |
2 | ddl | This field contains the DDL responsible for the schema change. |
3 | tableChanges | An array of one or more items that contain the schema changes generated by a DDL command. |
4 | type | Describes the kind of change. The value is one of the following: CREATE, ALTER, or DROP. |
5 | id | Full identifier of the table that was created, altered, or dropped. |
6 | table | Represents table metadata after the applied change. |
7 | primaryKeyColumnNames | List of columns that compose the table’s primary key. |
8 | columns | Metadata for each column in the changed table. |
In messages that the connector sends to the schema change topic, the key is the name of the database that contains the schema change.
In the following example, the payload
field contains the key:
{
"schema": {
"type": "struct",
"fields": [
{
"type": "string",
"optional": false,
"field": "databaseName"
}
],
"optional": false,
"name": "io.debezium.connector.oracle.SchemaChangeKey"
},
"payload": {
"databaseName": "ORCLPDB1"
}
}
Transaction Metadata
Debezium can generate events that represent transaction boundaries and that enrich data change messages with transaction metadata.
Transaction boundaries
Debezium generates events for every transaction BEGIN
and END
.
Every event contains the following fields:
- status - BEGIN or END
- id - string representation of the unique transaction identifier
- event_count (for END events) - total number of events emitted by the transaction
- data_collections (for END events) - an array of pairs of data_collection and event_count that provides the number of events emitted by changes originating from the given data collection
Following is an example of what a message looks like:
{
"status": "BEGIN",
"id": "5.6.641",
"event_count": null,
"data_collections": null
}
{
"status": "END",
"id": "5.6.641",
"event_count": 2,
"data_collections": [
{
"data_collection": "ORCLPDB1.DEBEZIUM.CUSTOMER",
"event_count": 1
},
{
"data_collection": "ORCLPDB1.DEBEZIUM.ORDER",
"event_count": 1
}
]
}
The transaction events are written to the topic named <database.server.name>.transaction
.
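Transaction metadata events are only produced if the feature is enabled in the connector configuration. The sketch below uses the provide.transaction.metadata property, a standard Debezium connector option; all other required properties are omitted:
{
    "name": "inventory-connector",
    "config": {
        "connector.class" : "io.debezium.connector.oracle.OracleConnector",
        "database.server.name" : "server1",
        "provide.transaction.metadata" : "true"
    }
}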
Data events enrichment
When transaction metadata is enabled the data message Envelope
is enriched with a new transaction
field.
This field provides information about every event in the form of a composite of fields:
- id - string representation of the unique transaction identifier
- total_order - the absolute position of the event among all events generated by the transaction
- data_collection_order - the per-data-collection position of the event among all events that were emitted by the transaction
Following is an example of what a message looks like:
{
"before": null,
"after": {
"pk": "2",
"aa": "1"
},
"source": {
...
},
"op": "c",
"ts_ms": "1580390884335",
"transaction": {
"id": "5.6.641",
"total_order": "1",
"data_collection_order": "1"
}
}
Data change events
All data change events produced by the Oracle connector have a key and a value, although the structure of the key and value depend on the table from which the change events originated (see Topic names).
The Debezium Oracle connector ensures that all Kafka Connect schema names are valid Avro schema names. This means that the logical server name must start with a Latin letter or an underscore (e.g., [a-z,A-Z,_]), and the remaining characters in the logical server name and all characters in the schema and table names must be Latin letters, digits, or an underscore (e.g., [a-z,A-Z,0-9,_]). If not, then all invalid characters will automatically be replaced with an underscore character. This can lead to unexpected conflicts when the logical server name, schema names, and table names contain other characters, and the only distinguishing characters between table full names are invalid and thus replaced with underscores.
Debezium and Kafka Connect are designed around continuous streams of event messages, and the structure of these events may change over time. This could be difficult for consumers to deal with, so to make it easy Kafka Connect makes each event self-contained. Every message key and value has two parts: a schema and payload. The schema describes the structure of the payload, while the payload contains the actual data.
Change event keys
For a given table, the change event’s key will have a structure that contains a field for each column in the primary key (or unique key constraint) of the table at the time the event was created.
Consider a customers
table defined in the inventory
database schema:
CREATE TABLE customers (
id NUMBER(9) GENERATED BY DEFAULT ON NULL AS IDENTITY (START WITH 1001) NOT NULL PRIMARY KEY,
first_name VARCHAR2(255) NOT NULL,
last_name VARCHAR2(255) NOT NULL,
email VARCHAR2(255) NOT NULL UNIQUE
);
If the database.server.name
configuration property has the value server1
,
every change event for the customers
table while it has this definition will feature the same key structure, which in JSON looks like this:
{
"schema": {
"type": "struct",
"fields": [
{
"type": "int32",
"optional": false,
"field": "ID"
}
],
"optional": false,
"name": "server1.INVENTORY.CUSTOMERS.Key"
},
"payload": {
"ID": 1004
}
}
The schema portion of the key contains a Kafka Connect schema describing what is in the key portion; in this case, it means that the payload value is not optional, is a structure defined by a schema named server1.INVENTORY.CUSTOMERS.Key, and has one required field named ID of type int32.
If you look at the value of the key’s payload field, you can see that it is indeed a structure (which in JSON is just an object) with a single ID field, whose value is 1004.
Therefore, you can interpret this key as describing the row in the inventory.customers table (output from the connector named server1) whose ID primary key column had a value of 1004.
Change event values
Like the message key, the value of a change event message has a schema section and payload section. The payload section of every change event value produced by the Oracle connector has an envelope structure with the following fields:
- op is a mandatory field that contains a string value describing the type of operation. Values for the Oracle connector are c for create (or insert), u for update, d for delete, and r for read (in the case of a snapshot).
- before is an optional field that, if present, contains the state of the row before the event occurred. The structure is described by the server1.INVENTORY.CUSTOMERS.Value Kafka Connect schema, which the server1 connector uses for all rows in the inventory.customers table.
Whether or not this field and its elements are available is highly dependent on the Supplemental Logging configuration applying to the table.
- after is an optional field that, if present, contains the state of the row after the event occurred. The structure is described by the same server1.INVENTORY.CUSTOMERS.Value Kafka Connect schema used in before.
- source is a mandatory field that contains a structure describing the source metadata for the event, which in the case of Oracle contains these fields: the Debezium version, the connector name, whether the event is part of an ongoing snapshot or not, the transaction id (not while snapshotting), the SCN of the change, and a timestamp representing the point in time when the record was changed in the source database (during snapshotting, this is the point in time of snapshotting).
- ts_ms is optional and, if present, contains the time (using the system clock in the JVM running the Kafka Connect task) at which the connector processed the event.
And of course, the schema portion of the event message’s value contains a schema that describes this envelope structure and the nested fields within it.
Create events
Let’s look at what a create event value might look like for our customers
table:
{
"schema": {
"type": "struct",
"fields": [
{
"type": "struct",
"fields": [
{
"type": "int32",
"optional": false,
"field": "ID"
},
{
"type": "string",
"optional": false,
"field": "FIRST_NAME"
},
{
"type": "string",
"optional": false,
"field": "LAST_NAME"
},
{
"type": "string",
"optional": false,
"field": "EMAIL"
}
],
"optional": true,
"name": "server1.DEBEZIUM.CUSTOMERS.Value",
"field": "before"
},
{
"type": "struct",
"fields": [
{
"type": "int32",
"optional": false,
"field": "ID"
},
{
"type": "string",
"optional": false,
"field": "FIRST_NAME"
},
{
"type": "string",
"optional": false,
"field": "LAST_NAME"
},
{
"type": "string",
"optional": false,
"field": "EMAIL"
}
],
"optional": true,
"name": "server1.DEBEZIUM.CUSTOMERS.Value",
"field": "after"
},
{
"type": "struct",
"fields": [
{
"type": "string",
"optional": true,
"field": "version"
},
{
"type": "string",
"optional": false,
"field": "name"
},
{
"type": "int64",
"optional": true,
"field": "ts_ms"
},
{
"type": "string",
"optional": true,
"field": "txId"
},
{
"type": "int64",
"optional": true,
"field": "scn"
},
{
"type": "int64",
"optional": true,
"field": "commit_scn"
},
{
"type": "boolean",
"optional": true,
"field": "snapshot"
}
],
"optional": false,
"name": "io.debezium.connector.oracle.Source",
"field": "source"
},
{
"type": "string",
"optional": false,
"field": "op"
},
{
"type": "int64",
"optional": true,
"field": "ts_ms"
}
],
"optional": false,
"name": "server1.DEBEZIUM.CUSTOMERS.Envelope"
},
"payload": {
"before": null,
"after": {
"ID": 1004,
"FIRST_NAME": "Anne",
"LAST_NAME": "Kretchmar",
"EMAIL": "annek@noanswer.org"
},
"source": {
"version": "0.9.0.Alpha1",
"name": "server1",
"ts_ms": 1520085154000,
"txId": "6.28.807",
"scn": 2122185,
"commit_scn": 2122185,
"snapshot": false
},
"op": "c",
"ts_ms": 1532592105975
}
}
If we look at the schema
portion of this event’s value, we can see the schema for the envelope, the schema for the source
structure (which is specific to the Oracle connector and reused across all events), and the table-specific schemas for the before
and after
fields.
The names of the schemas for the before and after fields take the form <serverName>.<schemaName>.<tableName>.Value (here, server1.DEBEZIUM.CUSTOMERS.Value), and are therefore specific to the individual table.
If we look at the payload portion of this event’s value, we can see the information in the event, namely that it is describing that the row was created (since op=c), and that the after field value contains the values of the newly inserted row’s ID, FIRST_NAME, LAST_NAME, and EMAIL columns.
It may appear that the JSON representations of the events are much larger than the rows they describe. This is true, because the JSON representation must include the schema and the payload portions of the message. It is possible and even recommended to use the Avro Converter to dramatically decrease the size of the actual messages written to the Kafka topics.
Update events
The value of an update change event on this table will actually have the exact same schema, and its payload will be structured the same but will hold different values. Here’s an example:
{
"schema": { ... },
"payload": {
"before": {
"ID": 1004,
"FIRST_NAME": "Anne",
"LAST_NAME": "Kretchmar",
"EMAIL": "annek@noanswer.org"
},
"after": {
"ID": 1004,
"FIRST_NAME": "Anne",
"LAST_NAME": "Kretchmar",
"EMAIL": "anne@example.com"
},
"source": {
"version": "0.9.0.Alpha1",
"name": "server1",
"ts_ms": 1520085811000,
"txId": "6.9.809",
"scn": 2125544,
"commit_scn": 2125544,
"snapshot": false
},
"op": "u",
"ts_ms": 1532592713485
}
}
When we compare this to the value in the insert event, we see a couple of differences in the payload
section:
- The op field value is now u, signifying that this row changed because of an update.
- The before field now has the state of the row with the values before the database commit.
- The after field now has the updated state of the row, and here we can see that the EMAIL value is now anne@example.com.
- The source field structure has the same fields as before, but the values are different since this event is from a different position in the redo log.
- The ts_ms shows the timestamp at which Debezium processed this event.
There are several things we can learn by just looking at this payload
section. We can compare the before
and after
structures to determine what actually changed in this row because of the commit.
The source
structure tells us information about Oracle’s record of this change (providing traceability), but more importantly this has information we can compare to other events in this and other topics to know whether this event occurred before, after, or as part of the same Oracle commit as other events.
When the columns for a row’s primary/unique key are updated, the value of the row’s key has changed, so Debezium will output three events: a DELETE event and a tombstone event with the old key for the row, followed by a create event with the new key for the row.
Delete events
So far we’ve seen samples of create and update events.
Now, let’s look at the value of a delete event for the same table. Once again, the schema
portion of the value will be exactly the same as with the create and update events:
{
"schema": { ... },
"payload": {
"before": {
"ID": 1004,
"FIRST_NAME": "Anne",
"LAST_NAME": "Kretchmar",
"EMAIL": "anne@example.com"
},
"after": null,
"source": {
"version": "0.9.0.Alpha1",
"name": "server1",
"ts_ms": 1520085153000,
"txId": "6.28.807",
"scn": 2122184,
"commit_scn": 2122184,
"snapshot": false
},
"op": "d",
"ts_ms": 1532592105960
}
}
If we look at the payload
portion, we see a number of differences compared with the create or update event payloads:
- The op field value is now d, signifying that this row was deleted.
- The before field now has the state of the row that was deleted with the database commit.
- The after field is null, signifying that the row no longer exists.
- The source field structure has many of the same values as before, except that the ts_ms, scn and txId fields have changed.
- The ts_ms shows the timestamp at which Debezium processed this event.
This event gives a consumer all kinds of information that it can use to process the removal of this row.
The Oracle connector’s events are designed to work with Kafka log compaction, which allows for the removal of some older messages as long as at least the most recent message for every key is kept. This allows Kafka to reclaim storage space while ensuring the topic contains a complete dataset and can be used for reloading key-based state.
When a row is deleted, the delete event value listed above still works with log compaction, since Kafka can still remove all earlier messages with that same key.
But only if the message value is null
will Kafka know that it can remove all messages with that same key.
To make this possible, Debezium’s Oracle connector always follows the delete event with a special tombstone event that has the same key but null
value.
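Conceptually, a consumer of the customers topic therefore sees two records for a deletion. The sketch below abbreviates them as key/value pairs, omitting the schema and source portions shown in the full examples above; whether the trailing tombstone is emitted can typically be controlled with the tombstones.on.delete connector property:
[
    { "key": { "ID": 1004 }, "value": { "op": "d", "before": { "ID": 1004 }, "after": null } },
    { "key": { "ID": 1004 }, "value": null }
]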
Data Type mappings
The Oracle connector represents changes to rows with events that are structured like the table in which the row exists. The event contains a field for each column value. How that value is represented in the event depends on the Oracle data type of the column. The following sections describe how the connector maps Oracle data types to a literal type and a semantic type in event fields.
- The literal type describes how the value is literally represented, using Kafka Connect schema types: INT8, INT16, INT32, INT64, FLOAT32, FLOAT64, BOOLEAN, STRING, BYTES, ARRAY, MAP, and STRUCT.
- The semantic type describes how the Kafka Connect schema captures the meaning of the field, using the name of the Kafka Connect schema for the field.
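To illustrate the distinction, a single column entry in a change event’s fields array carries the literal type in its type attribute and, when applicable, the semantic type as its schema name. The fragment below is a hypothetical example for a timestamp-like column; the column name is invented, and io.debezium.time.Timestamp is one of Debezium’s standard temporal semantic types:
{
    "type": "int64",
    "optional": true,
    "name": "io.debezium.time.Timestamp",
    "field": "CREATED_ON"
}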
Support for further data types will be added in subsequent releases. Please file a JIRA issue for any specific types that may be missing.
Character types
The following table describes how the connector maps character types.
Oracle Data Type | Literal type (schema type) | Semantic type (schema name) and Notes |
---|---|---|
CHAR[(M)] | STRING | n/a |
NCHAR[(M)] | STRING | n/a |
NVARCHAR2[(M)] | STRING | n/a |
VARCHAR[(M)] | STRING | n/a |
VARCHAR2[(M)] | STRING | n/a |
Numeric types
The following table describes how the connector maps numeric types.
Oracle Data Type | Literal type (schema type) | Semantic type (schema name) and Notes |
---|---|---|
|
|
n/a |
|
|
n/a |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
n/a |
|
|
n/a |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Decimal types
The setting of the Oracle connector configuration property decimal.handling.mode determines how the connector maps decimal types.
When the decimal.handling.mode
property is set to precise
, the connector uses Kafka Connect org.apache.kafka.connect.data.Decimal
logical type for all DECIMAL
and NUMERIC
columns.
This is the default mode.
However, when the decimal.handling.mode
property is set to double
, the connector will represent the values as Java double values with schema type FLOAT64
.
The last possible setting for the decimal.handling.mode
configuration property is string
.
In this case, the connector represents DECIMAL
and NUMERIC
values as their formatted string representation with schema type STRING
.
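For example, to receive NUMBER and DECIMAL values as formatted strings rather than as Kafka Connect Decimal structures, the property can be set when registering the connector. The fragment below is a sketch; the other required properties are omitted:
{
    "name": "inventory-connector",
    "config": {
        "connector.class" : "io.debezium.connector.oracle.OracleConnector",
        "database.server.name" : "server1",
        "decimal.handling.mode" : "string"
    }
}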
Temporal types
The following table describes how the connector maps temporal types.
Oracle data type | Literal type (schema type) | Semantic type (schema name) and Notes |
---|---|---|
|
|
|
|
|
+
The number of micro seconds for a time interval using the |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Setting up Oracle
The following database setup steps are necessary in order to use the Debezium Oracle connector. These steps assume the use of the multitenancy configuration with a container database and at least one pluggable database. If you intend to use a non-multitenancy configuration, the following steps may require adjustment.
You can find a template for setting up Oracle in a virtual machine (via Vagrant) in the oracle-vagrant-box/ repository.
Preparing the Database
The database must be configured for archive logging, as shown in the following sequences. The first sequence also enables GoldenGate replication (enable_goldengate_replication=true), which is required when using the XStream adapter.
ORACLE_SID=ORCLCDB dbz_oracle sqlplus /nolog
CONNECT sys/top_secret AS SYSDBA
alter system set db_recovery_file_dest_size = 5G;
alter system set db_recovery_file_dest = '/opt/oracle/oradata/recovery_area' scope=spfile;
alter system set enable_goldengate_replication=true;
shutdown immediate
startup mount
alter database archivelog;
alter database open;
-- Should show "Database log mode: Archive Mode"
archive log list
exit;
ORACLE_SID=ORCLCDB dbz_oracle sqlplus /nolog
CONNECT sys/top_secret AS SYSDBA
alter system set db_recovery_file_dest_size = 10G;
alter system set db_recovery_file_dest = '/opt/oracle/oradata/recovery_area' scope=spfile;
shutdown immediate
startup mount
alter database archivelog;
alter database open;
-- Should now show "Database log mode: Archive Mode"
archive log list
exit;
In addition, supplemental logging must be enabled for the captured tables or for the database in order to capture the before state of changed database rows. The following illustrates how to configure this on a specific table, which is the ideal choice to minimize the amount of information captured in the Oracle redo logs.
ALTER TABLE inventory.customers ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;
Creating Users for the connector
The Debezium Oracle connector requires that user accounts be set up with specific permissions so that the connector can capture change events. The following briefly describes these user configurations using a multi-tenant database model. The first two sequences create the XStream administrator user (c##xstrmadmin) and the XStream connector user (c##xstrm); the final sequence creates the user for the LogMiner adapter (c##logminer).
sqlplus sys/top_secret@//localhost:1521/ORCLCDB as sysdba
CREATE TABLESPACE xstream_adm_tbs DATAFILE '/opt/oracle/oradata/ORCLCDB/xstream_adm_tbs.dbf'
SIZE 25M REUSE AUTOEXTEND ON MAXSIZE UNLIMITED;
exit;
sqlplus sys/top_secret@//localhost:1521/ORCLPDB1 as sysdba
CREATE TABLESPACE xstream_adm_tbs DATAFILE '/opt/oracle/oradata/ORCLCDB/ORCLPDB1/xstream_adm_tbs.dbf'
SIZE 25M REUSE AUTOEXTEND ON MAXSIZE UNLIMITED;
exit;
sqlplus sys/top_secret@//localhost:1521/ORCLCDB as sysdba
CREATE USER c##xstrmadmin IDENTIFIED BY xsa
DEFAULT TABLESPACE xstream_adm_tbs
QUOTA UNLIMITED ON xstream_adm_tbs
CONTAINER=ALL;
GRANT CREATE SESSION, SET CONTAINER TO c##xstrmadmin CONTAINER=ALL;
BEGIN
DBMS_XSTREAM_AUTH.GRANT_ADMIN_PRIVILEGE(
grantee => 'c##xstrmadmin',
privilege_type => 'CAPTURE',
grant_select_privileges => TRUE,
container => 'ALL'
);
END;
/
exit;
sqlplus sys/top_secret@//localhost:1521/ORCLCDB as sysdba
CREATE TABLESPACE xstream_tbs DATAFILE '/opt/oracle/oradata/ORCLCDB/xstream_tbs.dbf'
SIZE 25M REUSE AUTOEXTEND ON MAXSIZE UNLIMITED;
exit;
sqlplus sys/top_secret@//localhost:1521/ORCLPDB1 as sysdba
CREATE TABLESPACE xstream_tbs DATAFILE '/opt/oracle/oradata/ORCLCDB/ORCLPDB1/xstream_tbs.dbf'
SIZE 25M REUSE AUTOEXTEND ON MAXSIZE UNLIMITED;
exit;
sqlplus sys/top_secret@//localhost:1521/ORCLCDB as sysdba
CREATE USER c##xstrm IDENTIFIED BY xs
DEFAULT TABLESPACE xstream_tbs
QUOTA UNLIMITED ON xstream_tbs
CONTAINER=ALL;
GRANT CREATE SESSION TO c##xstrm CONTAINER=ALL;
GRANT SET CONTAINER TO c##xstrm CONTAINER=ALL;
GRANT SELECT ON V_$DATABASE to c##xstrm CONTAINER=ALL;
GRANT FLASHBACK ANY TABLE TO c##xstrm CONTAINER=ALL;
GRANT SELECT_CATALOG_ROLE TO c##xstrm CONTAINER=ALL;
GRANT EXECUTE_CATALOG_ROLE TO c##xstrm CONTAINER=ALL;
exit;
sqlplus sys/top_secret@//localhost:1521/ORCLCDB as sysdba
CREATE TABLESPACE logminer_tbs DATAFILE '/opt/oracle/oradata/ORCLCDB/logminer_tbs.dbf'
SIZE 25M REUSE AUTOEXTEND ON MAXSIZE UNLIMITED;
exit;
sqlplus sys/top_secret@//localhost:1521/ORCLPDB1 as sysdba
CREATE TABLESPACE logminer_tbs DATAFILE '/opt/oracle/oradata/ORCLCDB/ORCLPDB1/logminer_tbs.dbf'
SIZE 25M REUSE AUTOEXTEND ON MAXSIZE UNLIMITED;
exit;
sqlplus sys/top_secret@//localhost:1521/ORCLCDB as sysdba
CREATE USER c##logminer IDENTIFIED BY lm
DEFAULT TABLESPACE logminer_tbs
QUOTA UNLIMITED ON logminer_tbs
CONTAINER=ALL;
GRANT CREATE SESSION TO c##logminer CONTAINER=ALL;
GRANT SET CONTAINER TO c##logminer CONTAINER=ALL;
GRANT SELECT ON V_$DATABASE to c##logminer CONTAINER=ALL;
GRANT FLASHBACK ANY TABLE TO c##logminer CONTAINER=ALL;
GRANT SELECT ANY TABLE TO c##logminer CONTAINER=ALL;
GRANT SELECT_CATALOG_ROLE TO c##logminer CONTAINER=ALL;
GRANT EXECUTE_CATALOG_ROLE TO c##logminer CONTAINER=ALL;
GRANT SELECT ANY TRANSACTION TO c##logminer CONTAINER=ALL;
GRANT LOGMINING TO c##logminer CONTAINER=ALL;
GRANT CREATE TABLE TO c##logminer CONTAINER=ALL;
GRANT LOCK ANY TABLE TO c##logminer CONTAINER=ALL;
GRANT ALTER ANY TABLE TO c##logminer CONTAINER=ALL;
GRANT CREATE SEQUENCE TO c##logminer CONTAINER=ALL;
GRANT EXECUTE ON DBMS_LOGMNR TO c##logminer CONTAINER=ALL;
GRANT EXECUTE ON DBMS_LOGMNR_D TO c##logminer CONTAINER=ALL;
GRANT SELECT ON V_$LOG TO c##logminer CONTAINER=ALL;
GRANT SELECT ON V_$LOG_HISTORY TO c##logminer CONTAINER=ALL;
GRANT SELECT ON V_$LOGMNR_LOGS TO c##logminer CONTAINER=ALL;
GRANT SELECT ON V_$LOGMNR_CONTENTS TO c##logminer CONTAINER=ALL;
GRANT SELECT ON V_$LOGMNR_PARAMETERS TO c##logminer CONTAINER=ALL;
GRANT SELECT ON V_$LOGFILE TO c##logminer CONTAINER=ALL;
GRANT SELECT ON V_$ARCHIVED_LOG TO c##logminer CONTAINER=ALL;
GRANT SELECT ON V_$ARCHIVE_DEST_STATUS TO c##logminer CONTAINER=ALL;
exit;
Create an XStream Outbound Server
If you’re using the LogMiner implementation, this step is not necessary.
Create an XStream Outbound server (given the right privileges, this may be done automatically by the connector going forward, see DBZ-721):
sqlplus c##xstrmadmin/xsa@//localhost:1521/ORCLCDB
DECLARE
tables DBMS_UTILITY.UNCL_ARRAY;
schemas DBMS_UTILITY.UNCL_ARRAY;
BEGIN
tables(1) := NULL;
schemas(1) := 'debezium';
DBMS_XSTREAM_ADM.CREATE_OUTBOUND(
server_name => 'dbzxout',
table_names => tables,
schema_names => schemas);
END;
/
exit;
sqlplus sys/top_secret@//localhost:1521/ORCLCDB as sysdba
BEGIN
DBMS_XSTREAM_ADM.ALTER_OUTBOUND(
server_name => 'dbzxout',
connect_user => 'c##xstrm');
END;
/
exit;
A single XStream Outbound server cannot be shared by multiple Debezium Oracle connectors. Each connector requires its own, uniquely named XStream Outbound server.
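For example, two connectors capturing from the same database would each reference their own outbound server via database.out.server.name. The fragment below is a sketch with hypothetical outbound server names; each outbound server must also be created in the database as shown above, and the remaining connection properties are omitted:
[
    {
        "name": "inventory-connector-a",
        "config": {
            "connector.class" : "io.debezium.connector.oracle.OracleConnector",
            "database.server.name" : "server1",
            "database.out.server.name" : "dbzxout1"
        }
    },
    {
        "name": "inventory-connector-b",
        "config": {
            "connector.class" : "io.debezium.connector.oracle.OracleConnector",
            "database.server.name" : "server2",
            "database.out.server.name" : "dbzxout2"
        }
    }
]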
Deploying a Connector
Due to licensing requirements, the Debezium Oracle Connector does not ship with the Oracle JDBC driver and the XStream API JAR. You can obtain them for free by downloading the Oracle Instant Client.
Extract the archive and navigate to the Instant Client directory.
Copy ojdbc<version>.jar
and xstreams.jar
into Kafka’s libs directory.
Lastly, create an environment variable, LD_LIBRARY_PATH
, that points to the instant client directory, as shown below:
LD_LIBRARY_PATH=/path/to/instant_client/
Example Configuration
The following shows an example JSON request for registering an instance of the Debezium Oracle connector:
{
"name": "inventory-connector",
"config": {
"connector.class" : "io.debezium.connector.oracle.OracleConnector",
"tasks.max" : "1",
"database.server.name" : "server1",
"database.hostname" : "<oracle ip>",
"database.port" : "1521",
"database.user" : "c##xstrm",
"database.password" : "xs",
"database.dbname" : "ORCLCDB",
"database.pdb.name" : "ORCLPDB1",
"database.out.server.name" : "dbzxout",
"database.history.kafka.bootstrap.servers" : "kafka:9092",
"database.history.kafka.topic": "schema-changes.inventory"
}
}
When using a more complex Oracle deployment, or when TNS names are needed, a raw JDBC URL can be provided instead of a single hostname and port pair. Here is a similar example that passes the raw JDBC URL instead:
{
    "name": "inventory-connector",
    "config": {
        "connector.class" : "io.debezium.connector.oracle.OracleConnector",
        "tasks.max" : "1",
        "database.server.name" : "server1",
        "database.user" : "c##xstrm",
        "database.password" : "xs",
        "database.url": "jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS_LIST=(LOAD_BALANCE=OFF)(FAILOVER=ON)(ADDRESS=(PROTOCOL=TCP)(HOST=<oracle ip 1>)(PORT=1521))(ADDRESS=(PROTOCOL=TCP)(HOST=<oracle ip 2>)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=)(SERVER=DEDICATED)))",
        "database.dbname" : "ORCLCDB",
        "database.pdb.name" : "ORCLPDB1",
        "database.out.server.name" : "dbzxout",
        "database.history.kafka.bootstrap.servers" : "kafka:9092",
        "database.history.kafka.topic": "schema-changes.inventory"
    }
}
Pluggable vs Non-Pluggable databases
The Debezium Oracle connector supports deployments with pluggable databases (CDB mode) as well as non-pluggable databases (non-CDB mode). The first example below shows a non-CDB configuration; the second shows a CDB configuration with a pluggable database.
{
"config": {
"connector.class" : "io.debezium.connector.oracle.OracleConnector",
"tasks.max" : "1",
"database.server.name" : "server1",
"database.hostname" : "<oracle ip>",
"database.port" : "1521",
"database.user" : "c##xstrm",
"database.password" : "xs",
"database.dbname" : "ORCLCDB",
"database.out.server.name" : "dbzxout",
"database.history.kafka.bootstrap.servers" : "kafka:9092",
"database.history.kafka.topic": "schema-changes.inventory"
}
}
{
"config": {
"connector.class" : "io.debezium.connector.oracle.OracleConnector",
"tasks.max" : "1",
"database.server.name" : "server1",
"database.hostname" : "<oracle ip>",
"database.port" : "1521",
"database.user" : "c##xstrm",
"database.password" : "xs",
"database.dbname" : "ORCLCDB",
"database.pdb.name" : "ORCLPDB1",
"database.out.server.name" : "dbzxout",
"database.history.kafka.bootstrap.servers" : "kafka:9092",
"database.history.kafka.topic": "schema-changes.inventory"
}
}
When using CDB installations, specify the database.pdb.name property; when using non-CDB installations, do not specify the database.pdb.name property.
Selecting the adapter
Debezium provides multiple ways to ingest change events from Oracle. By default Debezium uses the XStream API, but this is not applicable to every installation.
The following example configuration illustrates that by adding the database.connection.adapter property, the connector can be switched to use the LogMiner implementation.
{
    "name": "inventory-connector",
    "config": {
        "connector.class" : "io.debezium.connector.oracle.OracleConnector",
        "tasks.max" : "1",
        "database.server.name" : "server1",
        "database.hostname" : "<oracle ip>",
        "database.port" : "1521",
        "database.user" : "c##xstrm",
        "database.password" : "xs",
        "database.dbname" : "ORCLCDB",
        "database.pdb.name" : "ORCLPDB1",
        "database.out.server.name" : "dbzxout",
        "database.history.kafka.bootstrap.servers" : "kafka:9092",
        "database.history.kafka.topic": "schema-changes.inventory",
        "database.connection.adapter": "logminer"
    }
}
We encourage the use of the LogMiner adapter for testing and feedback purposes, but we do not yet recommend it for production use as it is still under active development.
Connector Properties
The following configuration properties are required unless a default value is available.
Property |
Default |
Description |
Unique name for the connector. Attempting to register again with the same name will fail. (This property is required by all Kafka Connect connectors.) |
||
The name of the Java class for the connector. Always use a value of |
||
|
The maximum number of tasks that should be created for this connector. The Oracle connector always uses a single task and therefore does not use this value, so the default is always acceptable. |
|
IP address or hostname of the Oracle database server. |
||
Integer port number of the Oracle database server. |
||
Name of the user to use when connecting to the Oracle database server. |
||
Password to use when connecting to the Oracle database server. |
||
Name of the database to connect to. Must be the CDB name when working with the CDB + PDB model. |
||
Raw database jdbc url. This property can be used when more flexibility is needed and can support raw TNS names or RAC connection strings. |
||
|
Enable support for case insensitive table names; set to |
|
|
Specifies how to decode the Oracle SCN values.
|
|
Name of the PDB to connect to, when working with the CDB + PDB model. |
||
Name of the XStream outbound server configured in the database. |
||
Logical name that identifies and provides a namespace for the particular Oracle database server being monitored. The logical name should be unique across all other connectors, since it is used as a prefix for all Kafka topic names emanating from this connector. Only alphanumeric characters and underscores should be used. |
||
|
The adapter implementation to use.
|
|
A comma-separated list of RAC node host names or addresses. This field is required to enable Oracle RAC support. |
||
The full name of the Kafka topic where the connector will store the database schema history. |
||
A list of host/port pairs that the connector will use for establishing an initial connection to the Kafka cluster. This connection will be used for retrieving database schema history previously stored by the connector, and for writing each DDL statement read from the source database. This should point to the same Kafka cluster used by the Kafka Connect process. |
||
initial |
A mode for taking an initial snapshot of the structure and optionally data of captured tables. Supported values are initial (will take a snapshot of structure and data of captured tables; useful if topics should be populated with a complete representation of the data from the captured tables) and schema_only (will take a snapshot of the structure of captured tables only; useful if only changes happening from now onwards should be propagated to topics). Once the snapshot is complete, the connector will continue reading change events from the database’s redo logs. |
|
An optional, comma-separated list of regular expressions that match names of schemas for which you want to capture changes. Any schema name not included in |
||
An optional, comma-separated list of regular expressions that match names of schemas for which you do not want to capture changes. Any schema whose name is not included in |
||
empty string |
An optional comma-separated list of regular expressions that match fully-qualified table identifiers for tables to be monitored; any table not included in the include list will be excluded from monitoring. Each identifier is of the form schemaName.tableName. By default the connector will monitor every non-system table in each monitored database. May not be used with |
|
empty string |
An optional comma-separated list of regular expressions that match fully-qualified table identifiers for tables to be excluded from monitoring; any table not included in the exclude list will be monitored. Each identifier is of the form schemaName.tableName. May not be used with |
|
empty string |
An optional comma-separated list of regular expressions that match the fully-qualified names of columns that should be included in the change event message values.
Fully-qualified names for columns are of the form schemaName.tableName.columnName.
Note that primary key columns are always included in the event’s key, even if not included in the value.
Do not also set the |
|
empty string |
An optional comma-separated list of regular expressions that match the fully-qualified names of columns that should be excluded from change event message values.
Fully-qualified names for columns are of the form schemaName.tableName.columnName.
Note that primary key columns are always included in the event’s key, also if excluded from the value.
Do not also set the |
|
n/a |
An optional comma-separated list of regular expressions that match the fully-qualified names of character-based columns whose values should be pseudonyms in the change event message values with a field value consisting of the hashed value using the algorithm |
|
|
The minimum SCN interval size that this connector will try to read from redo/archive logs. The active batch size will also be increased/decreased by this amount for tuning connector throughput when needed. |
|
|
The maximum SCN interval size that this connector will use when reading from redo/archive logs. |
|
|
The starting SCN interval size that the connector will use for reading data from redo/archive logs. |
|
|
The minimum amount of time that the connector will sleep after reading data from redo/archive logs and before starting reading data again. Value is in milliseconds. |
|
|
The maximum amount of time that the connector will sleep after reading data from redo/archive logs and before starting reading data again. Value is in milliseconds. |
|
|
The starting amount of time that the connector will sleep after reading data from redo/archive logs and before starting reading data again. Value is in milliseconds. |
|
|
The maximum amount of time up or down that the connector will use to tune the optimal sleep time when reading data from logminer. Value is in milliseconds. |
|
|
The number of content records that will be fetched from the log miner content view. Multiple properties with different lengths can be used in a single configuration, although in each the length must be a positive integer or zero. Fully-qualified names for columns are of the form pdbName.schemaName.tableName.columnName. Example: column.mask.hash.SHA-256.with.salt.CzQMA0cB5K = inventory.orders.customerName, inventory.shipment.customerName where Note: Depending on the |
|
|
Specifies how the connector should handle floating point values for |
|
|
Specifies how the connector should react to exceptions during processing of events.
|
|
|
Positive integer value that specifies the maximum size of the blocking queue into which change events read from the database log are placed before they are written to Kafka. This queue can provide backpressure to the log reader when, for example, writes to Kafka are slower or if Kafka is not available. Events that appear in the queue are not included in the offsets periodically recorded by this connector. Defaults to 8192, and should always be larger than the maximum batch size specified in the |
|
|
Positive integer value that specifies the maximum size of each batch of events that should be processed during each iteration of this connector. Defaults to 2048. |
|
|
Long value for the maximum size in bytes of the blocking queue. The feature is disabled by default; it is activated by setting a positive long value. |
|
|
Positive integer value that specifies the number of milliseconds the connector should wait during each iteration for new change events to appear. Defaults to 1000 milliseconds, or 1 second. |
|
|
Controls whether a tombstone event should be generated after a delete event. |
|
n/a |
A semicolon-separated list of regular expressions that match fully-qualified tables and columns to map a primary key. |
|
n/a |
An optional comma-separated list of regular expressions that match the fully-qualified names of character-based columns whose values should be truncated in the change event message values if the field values are longer than the specified number of characters. Multiple properties with different lengths can be used in a single configuration, although in each the length must be a positive integer. Fully-qualified names for columns are of the form pdbName.schemaName.tableName.columnName. |
|
n/a |
An optional comma-separated list of regular expressions that match the fully-qualified names of character-based columns whose values should be replaced in the change event message values with a field value consisting of the specified number of asterisk ( |
|
n/a |
An optional comma-separated list of regular expressions that match the fully-qualified names of columns whose original type and length should be added as a parameter to the corresponding field schemas in the emitted change messages.
The schema parameters |
|
n/a |
An optional comma-separated list of regular expressions that match the database-specific data type name of columns whose original type and length should be added as a parameter to the corresponding field schemas in the emitted change messages.
The schema parameters |
|
|
Controls how frequently heartbeat messages are sent. |
|
|
Controls the naming of the topic to which heartbeat messages are sent. |
|
An interval in milliseconds that the connector should wait before taking a snapshot after starting up; |
||
|
Specifies the maximum number of rows that should be read in one go from each table while taking a snapshot. The connector will read the table contents in multiple batches of this size. Defaults to 2000. |
|
|
Whether field names will be sanitized to adhere to Avro naming requirements. See Avro naming for more details. |
|
|
When set to See Transaction Metadata for additional details. |
|
The fully qualified class name to an implementation of |
||
|
The number of hours in the past from SYSDATE to mine archive logs.
Using the default |
|
|
The number of hours to retain entries in log mining history table.
When set to |
|
|
Positive integer value that specifies the number of hours to retain long running transactions between redo log switches. The LogMiner adapter maintains an in-memory buffer of all running transactions. As all DML operations that are part of a transaction will be buffered until a commit or rollback is detected, long-running transactions should be avoided in order to not overflow that buffer. Any transaction that exceeds this configured value will be discarded entirely and no messages emitted for the operations that were part of the transaction. While this option allows the behavior to be configured on a case-by-case basis, we have plans to enhance this behavior in a future release by means of adding a scalable transaction buffer, (see DBZ-3123). |
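To make the options above concrete, the sketch below combines several of the optional properties described in this table in a single registration request, based on the example configurations shown earlier. The values are purely illustrative, and the names tombstones.on.delete and heartbeat.interval.ms are assumed from Debezium’s common connector conventions:
{
    "name": "inventory-connector",
    "config": {
        "connector.class" : "io.debezium.connector.oracle.OracleConnector",
        "tasks.max" : "1",
        "database.server.name" : "server1",
        "database.hostname" : "<oracle ip>",
        "database.port" : "1521",
        "database.user" : "c##xstrm",
        "database.password" : "xs",
        "database.dbname" : "ORCLCDB",
        "database.pdb.name" : "ORCLPDB1",
        "database.out.server.name" : "dbzxout",
        "database.history.kafka.bootstrap.servers" : "kafka:9092",
        "database.history.kafka.topic" : "schema-changes.inventory",
        "snapshot.mode" : "schema_only",
        "decimal.handling.mode" : "string",
        "table.include.list" : "DEBEZIUM.CUSTOMERS,DEBEZIUM.ORDERS",
        "tombstones.on.delete" : "false",
        "heartbeat.interval.ms" : "10000"
    }
}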
Monitoring
The Debezium Oracle connector provides the following types of metrics, in addition to the built-in support for JMX metrics that Zookeeper, Kafka, and Kafka Connect have.
- snapshot metrics, for monitoring the connector when performing snapshots
- streaming metrics, for monitoring the connector when processing change events
- LogMiner metrics, additional metrics captured when using the LogMiner adapter to process change events
- schema history metrics, for monitoring the status of the connector’s schema history
Please refer to the monitoring documentation for details of how to expose these metrics via JMX.
Snapshot Metrics
The MBean is debezium.oracle:type=connector-metrics,context=snapshot,server=<database.server.name>
.
Attributes | Type | Description |
---|---|---|
|
The last snapshot event that the connector has read. |
|
|
The number of milliseconds since the connector has read and processed the most recent event. |
|
|
The total number of events that this connector has seen since last started or reset. |
|
|
The number of events that have been filtered by include/exclude list filtering rules configured on the connector. |
|
|
The list of tables that are monitored by the connector. |
|
|
The length of the queue used to pass events between the snapshotter and the main Kafka Connect loop. |
|
|
The free capacity of the queue used to pass events between the snapshotter and the main Kafka Connect loop. |
|
|
The total number of tables that are being included in the snapshot. |
|
|
The number of tables that the snapshot has yet to copy. |
|
|
Whether the snapshot was started. |
|
|
Whether the snapshot was aborted. |
|
|
Whether the snapshot completed. |
|
|
The total number of seconds that the snapshot has taken so far, even if not complete. |
|
|
Map containing the number of rows scanned for each table in the snapshot. Tables are incrementally added to the Map during processing. Updates every 10,000 rows scanned and upon completing a table. |
|
|
The maximum buffer of the queue in bytes. It will be enabled if |
|
|
The current volume, in bytes, of records in the queue.
Streaming Metrics
The MBean is debezium.oracle:type=connector-metrics,context=streaming,server=<database.server.name>
.
Attributes | Type | Description |
---|---|---|
|
The last streaming event that the connector has read. |
|
|
The number of milliseconds since the connector has read and processed the most recent event. |
|
|
The total number of events that this connector has seen since last started or reset. |
|
|
The number of events that have been filtered by include/exclude list filtering rules configured on the connector. |
|
|
The list of tables that are monitored by the connector. |
|
|
The length of the queue used to pass events between the streamer and the main Kafka Connect loop. |
|
|
The free capacity of the queue used to pass events between the streamer and the main Kafka Connect loop. |
|
|
Flag that denotes whether the connector is currently connected to the database server. |
|
|
The number of milliseconds between the last change event’s timestamp and the connector processing it. The values will incorporate any differences between the clocks on the machines where the database server and the connector are running. |
|
|
The number of processed transactions that were committed. |
|
|
The coordinates of the last received event. |
|
|
Transaction identifier of the last processed transaction. |
|
|
The maximum buffer of the queue in bytes. |
|
|
The current volume, in bytes, of records in the queue. |
LogMiner Metrics
The MBean is debezium.oracle:type=connector-metrics,context=log-miner,server=<database.server.name>
.
Attributes | Type | Description |
---|---|---|
|
The most recent SCN that has been processed. |
|
|
Array of the log files that are currently mined. |
|
|
The minimum number of logs specified for any LogMiner session. |
|
|
The maximum number of logs specified for any LogMiner session. |
|
|
Array of the current state for each mined logfile with the format |
|
|
The number of times the database has performed a log switch for the last day. |
|
|
The number of DML operations observed in the last LogMiner session query. |
|
|
The maximum number of DML operations observed while processing a single LogMiner session query. |
|
|
The total number of DML operations observed. |
|
|
The total number of LogMiner session queries (aka batches) performed. |
|
|
The duration of the last LogMiner session query’s fetch in milliseconds. |
|
|
The maximum duration of any LogMiner session query’s fetch in milliseconds. |
|
|
The duration for processing the last LogMiner query batch results in milliseconds. |
|
|
The time in milliseconds spent parsing DML event SQL statements. |
|
|
The duration in milliseconds to start the last LogMiner session. |
|
|
The longest duration in milliseconds to start a LogMiner session. |
|
|
The total duration in milliseconds spent by the connector starting LogMiner sessions. |
|
|
The minimum duration in milliseconds spent processing results from a single LogMiner session. |
|
|
The maximum duration in milliseconds spent processing results from a single LogMiner session. |
|
|
The total duration in milliseconds spent processing results from LogMiner sessions. |
|
|
The total duration in milliseconds spent by the JDBC driver fetching the next row to be processed from the log mining view. |
|
|
The total number of rows processed from the log mining view across all sessions. |
|
|
The number of entries fetched by the log mining query per database round-trip. |
|
|
The number of milliseconds the connector sleeps before fetching another batch of results from the log mining view. |
|
|
The maximum number of rows/second processed from the log mining view. |
|
|
The average number of rows/second processed from the log mining view. |
|
|
The average number of rows/second processed from the log mining view for the last batch. |
|
|
The number of connection problems detected. |
|
|
The number of hours that transactions will be retained by the connector’s in-memory buffer without being committed or rolled back before being discarded.
See |
|
|
Flag that indicates if the mining results are being recorded.
See |
Schema History Metrics
The MBean is debezium.oracle:type=connector-metrics,context=schema-history,server=<database.server.name>
.
Attributes | Type | Description |
---|---|---|
|
One of |
|
|
The time in epoch seconds at which recovery started. |
|
|
The number of changes that were read during recovery phase. |
|
|
The total number of schema changes applied during recovery and runtime. |
|
|
The number of milliseconds that elapsed since the last change was recovered from the history store. |
|
|
The number of milliseconds that elapsed since the last change was applied. |
|
|
The string representation of the last change recovered from the history store. |
|
|
The string representation of the last applied change. |
Behavior when things go wrong
Debezium is a distributed system that captures all changes in multiple upstream databases; it never misses or loses an event. When the system is operating normally or being managed carefully then Debezium provides exactly once delivery of every change event record.
If a fault does happen then the system does not lose any events. However, while it is recovering from the fault, it might repeat some change events. In these abnormal situations, Debezium, like Kafka, provides at least once delivery of change events.
The rest of this section describes how Debezium handles various kinds of faults and problems.
ORA-25191 - Cannot reference overflow table of an index-organized table
Oracle may issue this error during the snapshot phase when encountering an index-organized table (IOT). This error means that the connector has attempted to execute an operation that must be executed against the parent index-organized table that contains the specified overflow table.
To resolve this, the IOT name used in the SQL operation should be replaced with the parent index-organized table name. To determine the parent index-organized table name, use the following SQL:
SELECT IOT_NAME
FROM DBA_TABLES
WHERE OWNER='<tablespace-owner>'
AND TABLE_NAME='<iot-table-name-that-failed>'
The connector’s table.include.list or table.exclude.list configuration options should then be adjusted to explicitly include or exclude the appropriate tables, so that the connector does not attempt to capture changes from the child index-organized table.
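For example, assuming a hypothetical overflow table named DEBEZIUM.SYS_IOT_OVER_12345, the connector could be configured to exclude it explicitly. The fragment below is a sketch; the table name is made up and the other required properties are omitted:
{
    "name": "inventory-connector",
    "config": {
        "connector.class" : "io.debezium.connector.oracle.OracleConnector",
        "database.server.name" : "server1",
        "table.exclude.list" : "DEBEZIUM.SYS_IOT_OVER_12345"
    }
}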