diff --git a/connectors/debezium-mysql-source/3.0.1.6/debezium-mysql-source.md b/connectors/debezium-mysql-source/3.0.1.6/debezium-mysql-source.md
index 4db851013..60c38785b 100644
--- a/connectors/debezium-mysql-source/3.0.1.6/debezium-mysql-source.md
+++ b/connectors/debezium-mysql-source/3.0.1.6/debezium-mysql-source.md
@@ -30,7 +30,7 @@ The MySQL source connector pulls messages from MySQL and persists the messages t
 
 ### Prerequisites
 
-The prerequisites for connecting an Debezium MySQL source connector to external systems include:
+The prerequisites for connecting a Debezium MySQL source connector to external systems include:
 
 1. Create a MySQL service: This connector uses the debezium v1.9, Please refer to this [document](https://debezium.io/releases/1.9/) to see the compatible MySQL versions.
 2. Prepare MySQL Database: Please refer to this [document](https://debezium.io/documentation/reference/1.9/connectors/mysql.html#setting-up-mysql) to complete the prepare steps on MySQL.
@@ -77,7 +77,7 @@ configs:
   database.server.name: "mydbserver"
 ```
 
-> * The configuration structure varies depending on how you create the AWS Kinesis sink connector.
+> * The configuration structure varies depending on how you create the Debezium MySQL source connector.
 > For example, some are **JSON**, some are **YAML**, and some are **Kubernetes YAML**. You need to adapt the configs to the corresponding format.
 >
 > * If you want to configure more parameters, see [Configuration Properties](#configuration-properties) for reference.
@@ -139,7 +139,7 @@ The configuration of Debezium source connector has the following properties.
   - org.apache.kafka.connect.json.JsonConverter
 
     The`json-with-envelope` config is valid only for the JsonConverter. By default, the value is set to false. When the `json-with-envelope` value is set to false, the consumer uses the schema `Schema.KeyValue(Schema.AUTO_CONSUME(), Schema.AUTO_CONSUME(), KeyValueEncodingType.SEPARATED)`, and the message only consists of the payload.
-    When the `json-with-envelope` value is set to true, the consumer uses the schema `Schema.KeyValue(Schema.BYTES, Schema.BYTES`, and the message consists of the schema and the payload.
+    When the `json-with-envelope` value is set to true, the consumer uses the schema `Schema.KeyValue(Schema.BYTES, Schema.BYTES)`, and the message consists of the schema and the payload.
 
   - org.apache.pulsar.kafka.shade.io.confluent.connect.avro.AvroConverter
 
@@ -152,5 +152,5 @@ Currently, the destination topic (specified by the `destination-topic-name` opti
 - One topic for storing the database metadata messages. It is named with the database server name ( `database.server.name`), like `public/default/database.server.name`.
 - One topic (`offset.storage.topic`) for storing the offset metadata messages. The connector saves the last successfully-committed offsets on this topic.
 - (Option) One topic (`database.history.pulsar.topic`) for storing the database history information. The connector writes and recovers DDL statements on this topic.
-- One per-table topic. The connector writes change events for all operations that occur in a table to a single Pulsar topic that is specific to that table.
+- One per-table topic. The connector writes change events for all operations that occur in a table to a single Pulsar topic that is specific to that table. For example: `public/default/mydbserver.public.io-test`.
   If automatic topic creation is disabled on the Pulsar broker, you need to manually create these 4 types of topics and the destination topic.
\ No newline at end of file
diff --git a/connectors/debezium-mysql-source/3.1.0.5/debezium-mysql-source.md b/connectors/debezium-mysql-source/3.1.0.5/debezium-mysql-source.md
index 19803aa60..f6f1e8cf4 100644
--- a/connectors/debezium-mysql-source/3.1.0.5/debezium-mysql-source.md
+++ b/connectors/debezium-mysql-source/3.1.0.5/debezium-mysql-source.md
@@ -30,7 +30,7 @@ The MySQL source connector pulls messages from MySQL and persists the messages t
 
 ### Prerequisites
 
-The prerequisites for connecting an Debezium MySQL source connector to external systems include:
+The prerequisites for connecting a Debezium MySQL source connector to external systems include:
 
 1. Create a MySQL service: This connector uses the debezium v1.9, Please refer to this [document](https://debezium.io/releases/1.9/) to see the compatible MySQL versions.
 2. Prepare MySQL Database: Please refer to this [document](https://debezium.io/documentation/reference/1.9/connectors/mysql.html#setting-up-mysql) to complete the prepare steps on MySQL.
@@ -40,7 +40,7 @@ If you are using AWS MySQL service, you need to use the [params group](https://d
 
 {% /callout %}
 
-### 1. Create a table on MySQL 
+### 1. Create a table on MySQL
 
 Run the following SQL command on your MySQL. If you don't require the `before` data, you can disregard the configuration of `REPLICA IDENTITY`.
 ```sql
@@ -77,7 +77,7 @@ configs:
   database.server.name: "mydbserver"
 ```
 
-> * The configuration structure varies depending on how you create the AWS Kinesis sink connector.
+> * The configuration structure varies depending on how you create the Debezium MySQL source connector.
 > For example, some are **JSON**, some are **YAML**, and some are **Kubernetes YAML**. You need to adapt the configs to the corresponding format.
 >
 > * If you want to configure more parameters, see [Configuration Properties](#configuration-properties) for reference.
@@ -139,7 +139,7 @@ The configuration of Debezium source connector has the following properties.
   - org.apache.kafka.connect.json.JsonConverter
 
     The`json-with-envelope` config is valid only for the JsonConverter. By default, the value is set to false. When the `json-with-envelope` value is set to false, the consumer uses the schema `Schema.KeyValue(Schema.AUTO_CONSUME(), Schema.AUTO_CONSUME(), KeyValueEncodingType.SEPARATED)`, and the message only consists of the payload.
-    When the `json-with-envelope` value is set to true, the consumer uses the schema `Schema.KeyValue(Schema.BYTES, Schema.BYTES`, and the message consists of the schema and the payload.
+    When the `json-with-envelope` value is set to true, the consumer uses the schema `Schema.KeyValue(Schema.BYTES, Schema.BYTES)`, and the message consists of the schema and the payload.
 
   - org.apache.pulsar.kafka.shade.io.confluent.connect.avro.AvroConverter
 
@@ -152,5 +152,5 @@ Currently, the destination topic (specified by the `destination-topic-name` opti
 - One topic for storing the database metadata messages. It is named with the database server name ( `database.server.name`), like `public/default/database.server.name`.
 - One topic (`offset.storage.topic`) for storing the offset metadata messages. The connector saves the last successfully-committed offsets on this topic.
 - (Option) One topic (`database.history.pulsar.topic`) for storing the database history information. The connector writes and recovers DDL statements on this topic.
-- One per-table topic. The connector writes change events for all operations that occur in a table to a single Pulsar topic that is specific to that table. 
-  If automatic topic creation is disabled on the Pulsar broker, you need to manually create these 4 types of topics and the destination topic.
\ No newline at end of file
+- One per-table topic. The connector writes change events for all operations that occur in a table to a single Pulsar topic that is specific to that table. For example: `public/default/mydbserver.public.io-test`.
  If automatic topic creation is disabled on the Pulsar broker, you need to manually create these 4 types of topics and the destination topic.
\ No newline at end of file
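As an illustration of how the per-table topic added in the last hunk is typically read: a minimal Pulsar consumer sketch, assuming a broker at `pulsar://localhost:6650`, the example topic `public/default/mydbserver.public.io-test`, a hypothetical subscription name `debezium-reader`, and the default `json-with-envelope=false`, in which case each change event decodes with the `Schema.KeyValue(Schema.AUTO_CONSUME(), Schema.AUTO_CONSUME(), KeyValueEncodingType.SEPARATED)` schema described in the converter notes above.

```java
import org.apache.pulsar.client.api.Consumer;
import org.apache.pulsar.client.api.Message;
import org.apache.pulsar.client.api.PulsarClient;
import org.apache.pulsar.client.api.Schema;
import org.apache.pulsar.client.api.schema.GenericRecord;
import org.apache.pulsar.common.schema.KeyValue;
import org.apache.pulsar.common.schema.KeyValueEncodingType;

public class MySqlChangeEventReader {
    public static void main(String[] args) throws Exception {
        // Broker URL, topic, and subscription name are illustrative assumptions.
        PulsarClient client = PulsarClient.builder()
                .serviceUrl("pulsar://localhost:6650")
                .build();

        // With json-with-envelope=false (the default), each change event arrives as a
        // KeyValue payload whose key and value can be decoded with AUTO_CONSUME.
        Consumer<KeyValue<GenericRecord, GenericRecord>> consumer = client
                .newConsumer(Schema.KeyValue(
                        Schema.AUTO_CONSUME(), Schema.AUTO_CONSUME(), KeyValueEncodingType.SEPARATED))
                .topic("public/default/mydbserver.public.io-test")
                .subscriptionName("debezium-reader")
                .subscribe();

        while (true) {
            Message<KeyValue<GenericRecord, GenericRecord>> msg = consumer.receive();
            KeyValue<GenericRecord, GenericRecord> event = msg.getValue();
            // The key holds the row's primary key; the value holds the change payload.
            System.out.println("key = " + event.getKey().getNativeObject()
                    + ", value = " + event.getValue().getNativeObject());
            consumer.acknowledge(msg);
        }
    }
}
```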