
MongoDB New Document State Extraction

This single message transformation (SMT) is under active development right now, so the emitted message structure and other details may still change as development progresses. Please see below for a description of the known limitations of this transformation.

This SMT is supported only for the MongoDB connector. See here for the relational database equivalent to this SMT.

0.9.0 backwards-incompatible changes

Breaking changes were introduced together with the deletion handling features: the previous default behavior was to keep deletion messages, while the new default behavior is to remove them. To change this setting, please refer to the delete.handling.mode and drop.tombstones options.
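
For instance, to restore the previous behavior of keeping deletion messages in the stream, the transformation could be configured as follows:

transforms=unwrap,...
transforms.unwrap.type=io.debezium.connector.mongodb.transforms.UnwrapFromMongoDbEnvelope
transforms.unwrap.drop.tombstones=false
transforms.unwrap.delete.handling.mode=none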

The Debezium MongoDB connector generates data in the form of a complex message structure. The message consists of two parts:

  • operation and metadata

  • for inserts, the whole document after the insert has been executed; for updates, a patch element describing the altered fields

The after and patch elements are Strings containing JSON representations of the inserted/altered data. E.g. the general message structure for an insert event looks like this:

{
  "op": "c",
  "after": "{\"field1\":\"newvalue1\",\"field2\":\"newvalue2\"}",
  "source": { ... }
}
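
For an update, the after field is null and a patch field carries the idempotent update statement instead. A sketch of such a message (field values assumed for illustration) looks like this:

{
  "op": "u",
  "after": null,
  "patch": "{\"$set\":{\"field2\":\"newvalue2\"}}",
  "source": { ... }
}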

More details about the message structure are provided in the documentation of the MongoDB connector.

While this structure is a good fit for representing changes to MongoDB’s schemaless collections, it is not understood by existing sink connectors such as the Confluent JDBC sink connector.

Therefore Debezium provides a single message transformation (SMT) which converts the after/patch information from the MongoDB CDC events into a structure suitable for consumption by existing sink connectors. To do so, the SMT parses the JSON strings and reconstructs properly typed Kafka Connect records from them (comprising the correct message payload and schema), which can then be consumed by connectors such as the JDBC sink connector.

Using JSON to visualize the emitted record structure, the event from above would look like this:

{
	"field1" : "newvalue1",
	"field2" : "newvalue2"
}

The SMT should be applied to a sink connector.

Configuration

The SMT is configured as part of the sink connector configuration and is expressed as a set of properties:

transforms=unwrap,...
transforms.unwrap.type=io.debezium.connector.mongodb.transforms.UnwrapFromMongoDbEnvelope
transforms.unwrap.drop.tombstones=false
transforms.unwrap.delete.handling.mode=drop
transforms.unwrap.operation.header=true
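
For illustration, a complete registration request for a hypothetical Confluent JDBC sink connector instance using this SMT might look like the following (the connector name, topic, connection URL, and JDBC sink settings are assumptions):

{
    "name": "mongodb-jdbc-sink",
    "config": {
        "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
        "tasks.max": "1",
        "topics": "dbserver1.inventory.customers",
        "connection.url": "jdbc:postgresql://postgres:5432/inventory?user=postgresuser&password=postgrespw",
        "transforms": "unwrap",
        "transforms.unwrap.type": "io.debezium.connector.mongodb.transforms.UnwrapFromMongoDbEnvelope",
        "transforms.unwrap.drop.tombstones": "false",
        "transforms.unwrap.delete.handling.mode": "drop",
        "transforms.unwrap.operation.header": "true",
        "auto.create": "true",
        "insert.mode": "upsert",
        "pk.fields": "id",
        "pk.mode": "record_value"
    }
}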

Array encoding

The SMT converts MongoDB arrays into arrays as defined by the Kafka Connect (or Apache Avro) schema. The problem is that such arrays must contain elements of the same type, while MongoDB allows the user to store elements of heterogeneous types in the same array. To bypass this impedance mismatch, the array can be encoded in two different ways, using the array.encoding configuration option.

transforms=unwrap,...
transforms.unwrap.type=io.debezium.connector.mongodb.transforms.UnwrapFromMongoDbEnvelope
transforms.unwrap.array.encoding=<array|document>

The value array (the default) will encode arrays as the array datatype. It is the user’s responsibility to ensure that all elements of a given array instance are of the same type. This option is more restrictive, but it allows easy processing of arrays by downstream clients.
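
For example, a source document containing a homogeneous array, such as (example data assumed)

{
    "_id": 1,
    "a1": [1, 2, 3]
}

passes through as a regular Connect array. A heterogeneous array such as the one shown below cannot be represented this way.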

The value document will convert the array into a struct of structs, in a similar way to BSON serialization. The main struct contains fields named _0, _1, _2 etc., where each name represents the index of an element in the array. Every element is then passed as the value of the given field.

Let’s suppose an example source MongoDB document with an array containing elements of heterogeneous types

{
    "_id": 1,
    "a1": [
        {
            "a": 1,
            "b": "none"
        },
        {
            "a": "c",
            "d": "something"
        }
    ]
}

This document will be encoded as

{
    "_id": 1,
    "a1": {
        "_0": {
            "a": 1,
            "b": "none"
        },
        "_1": {
            "a": "c",
            "d": "something"
        }
    }
}

This option allows you to process arbitrary arrays, but the consumer needs to know how to handle them properly.

Note: The underscore in the index names is present because Avro encoding requires field names not to start with a digit.

Nested structure flattening

When a MongoDB document contains a nested document (structure), it is faithfully encoded as a nested structure field. If the sink connector supports only flat message structures, it is possible to flatten the internal structure into a flat one with consistent field naming. To enable this feature, the option flatten.struct must be set to true.

transforms=unwrap,...
transforms.unwrap.type=io.debezium.connector.mongodb.transforms.UnwrapFromMongoDbEnvelope
transforms.unwrap.flatten.struct=<true|false>
transforms.unwrap.flatten.struct.delimiter=<string>

The resulting flat document will consist of fields whose names are created by joining the name of the parent field and the name of the fields in the nested document. These components are separated by the string defined by the flatten.struct.delimiter option, which defaults to the underscore.

Let’s suppose an example source MongoDB document with a field containing a nested document

{
    "_id": 1,
    "a": {
        "b": 1,
        "c": "none"
    },
    "d": 100
}

Such a document will be encoded as

{
    "_id": 1,
    "a_b": 1,
    "a_c": "none",
    "d": 100
}

This option allows you to convert a hierarchical document into a flat structure suitable for table-like storage.
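
For instance, with a (hypothetical) double underscore configured as the delimiter

transforms.unwrap.flatten.struct=true
transforms.unwrap.flatten.struct.delimiter=__

the same document would be encoded as

{
    "_id": 1,
    "a__b": 1,
    "a__c": "none",
    "d": 100
}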

MongoDB $unset handling

MongoDB allows $unset operations, which remove a certain field from a document. Since the collections are schemaless, it is hard to tell consumers/sinks that a field is now missing. The approach Debezium uses is to set the removed field to a null value.

Given the operation

{
    "after":null,
    "patch":"{\"$unset\" : {\"a\" : true}}"
}

The final encoding will look like

{
    "id": 1,
    "a": null
}

Note that other MongoDB operations might cause an $unset internally; $rename is one example.
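
For instance, renaming field a to a2 is recorded in the oplog, and thus in the patch field, as a combined set/unset. A sketch of such an operation (exact form assumed) looks like

{
    "after": null,
    "patch": "{\"$set\" : {\"a2\" : 1}, \"$unset\" : {\"a\" : true}}"
}

and the emitted record would then contain both "a": null and the new "a2" value.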

Determine original operation

When a message is flattened, the final result won’t show whether it was an insert, an update, or a first read. (Deletions can be detected via tombstones or rewrites; see Configuration options.)

To solve this problem, Debezium offers an option to propagate the original operation via a header added to the message. To enable this feature, the option operation.header must be set to true.

transforms=unwrap,...
transforms.unwrap.type=io.debezium.connector.mongodb.transforms.UnwrapFromMongoDbEnvelope
transforms.unwrap.operation.header=true

The possible values are the ones from the op field of MongoDB connector change events (e.g. c for inserts, u for updates, and r for reads during the initial snapshot).

Configuration options

array.encoding
Default: array
Controls how MongoDB arrays are encoded: array uses the Kafka Connect (or Apache Avro) array schema type, while document encodes arrays as structs of index-named fields. See Array encoding above.

flatten.struct
Default: false
The SMT flattens structs by concatenating their fields into plain properties, using a configurable delimiter.

flatten.struct.delimiter
Default: _
Delimiter to insert between field names from the input record when generating field names for the output record. Only applies when flatten.struct is set to true.

operation.header
Default: false
The SMT adds the event operation as a message header.

drop.tombstones
Default: true
The SMT removes the tombstone generated by Debezium from the stream.

delete.handling.mode
Default: drop
The SMT can drop delete records (drop), rewrite them (rewrite), or pass them through unchanged (none). The rewrite mode adds a __deleted field set to true or false, depending on the represented operation.
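
As an illustration of the rewrite mode (a sketch; the exact field set depends on the event), the deletion of the document with id 1 might be emitted as

{
    "id": 1,
    "__deleted": true
}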

Known limitations

  • Feeding data changes from a schemaless store such as MongoDB into strictly schema-based datastores such as a relational database can by definition only work within certain limits. Specifically, all fields with a given name across the documents of one collection must be of the same type; otherwise, no consistent column definition can be derived in the target database.

  • Arrays will be restored correctly in the emitted Kafka Connect record, but they are not supported by sink connectors that just expect a "flat" message structure.