Please see the etc directory for sample configuration files.
a2.jdbc.url
- JDBC connection URL. The following URL formats are supported:
- EZConnect Format
jdbc:oracle:thin:@[[protocol:]//]host1[,host2,host3][:port1][,host4:port2][/service_name][:server_mode][/instance_name][?connection properties]
- TNS URL Format
jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=<protocol>)(HOST=<dbhost>)(PORT=<dbport>))(CONNECT_DATA=(SERVICE_NAME=<service-name>)))
- TNS Alias Format
jdbc:oracle:thin:@<alias_name>
For more information and examples of the JDBC URL formats, please see Oracle® Database JDBC Java API Reference, Release 23c
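For example, the three forms might look like the following (host, port, service, and alias names are hypothetical):
a2.jdbc.url=jdbc:oracle:thin:@//dbhost.example.com:1521/TESTPDB
a2.jdbc.url=jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=dbhost.example.com)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=TESTPDB)))
a2.jdbc.url=jdbc:oracle:thin:@TESTPDB_ALIAS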
a2.wallet.location
- Location of Oracle Wallet/External Password Store. Not required when a2.jdbc.url, a2.jdbc.username, and a2.jdbc.password are set
a2.jdbc.username
- JDBC connection username. Not required when using Oracle Wallet/External Password Store, i.e. when a2.wallet.location is set to a proper value
a2.jdbc.password
- JDBC connection password. Not required when using Oracle Wallet/External Password Store, i.e. when a2.wallet.location is set to a proper value
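For example, the two mutually exclusive ways of supplying credentials might look like this (paths, URL, and credentials are hypothetical):
# Explicit credentials
a2.jdbc.url=jdbc:oracle:thin:@//dbhost.example.com:1521/TESTPDB
a2.jdbc.username=SCOTT
a2.jdbc.password=tiger
# Or: Oracle Wallet/External Password Store instead
a2.wallet.location=/u01/oracle/wallet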
a2.schema.type
- Source Connector only: default kafka. This parameter tells oracdc which schema style and which key and value converters to use.
When set to kafka, oracdc produces separate schemas for the key and value fields of each message.
When set to single, oracdc produces a single schema for all fields.
When set to debezium, oracdc produces Debezium-like messages. Messages in this mode can be consumed with the internal oracdc sink connector.
a2.topic.prefix
- Source Connector only: default prefix to prepend to table names when generating Kafka topic names. This parameter is used when oracdc is configured with a2.schema.type=kafka
a2.kafka.topic
- Source Connector only: topic to send data to, default oracdc-topic. This parameter is used when oracdc is configured with a2.schema.type=debezium
a2.topic.partition
- Kafka topic partition to write data to. Default - 0.
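For example, a Debezium-style setup that routes all changes to a single topic might look like this (topic name is hypothetical):
a2.schema.type=debezium
a2.kafka.topic=oracdc-scott-topic
a2.topic.partition=0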
a2.batch.size
- default 1000; the maximum number of rows to include in a single batch when polling for new data in the Source Connector or consuming in the Sink Connector
a2.poll.interval
- Source Connector only: interval in milliseconds to poll for new data in each materialized view log, default 1000
a2.exclude
- Source Connector only: comma-separated list of table names or table names with schema name (<SCHEMA_NAME>.<TABLE_NAME>) to exclude from oracdc processing. To exclude all schema objects from oracdc processing use <SCHEMA_NAME>.* or <SCHEMA_NAME>.%
a2.include
- Source Connector only: comma-separated list of table names or table names with schema name (<SCHEMA_NAME>.<TABLE_NAME>) to include in oracdc processing. To include all schema objects in oracdc processing use <SCHEMA_NAME>.* or <SCHEMA_NAME>.%
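For example, to capture a whole schema except one table (names are hypothetical; this sketch assumes a2.exclude takes precedence for objects matched by both parameters):
a2.include=SCOTT.%
a2.exclude=SCOTT.AUDIT_LOG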
a2.autocreate
- Sink Connector only: default false; when set to true oracdc creates missing tables automatically
a2.protobuf.schema.naming
- Source Connector only: Default - false. When set to true oracdc generates schema names as valid Protocol Buffers identifiers using underscore as separator. When set to false (default) oracdc generates schema names using dot as separator.
a2.redo.count
- Quantity of archived logs to process during each DBMS_LOGMNR.START_LOGMNR call, default 1
a2.redo.size
- Minimal size of archived logs to process during each DBMS_LOGMNR.START_LOGMNR call. When set, the value of a2.redo.count is ignored
a2.first.change
- When set, DBMS_LOGMNR.START_LOGMNR will start mining from this SCN. When not set, min(FIRST_CHANGE#) from V$ARCHIVED_LOG will be used. Overrides the SCN value stored in the offset file.
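For example, a sketch that mines two archived logs per LogMiner call and starts from a known SCN (values are hypothetical):
a2.redo.count=2
a2.first.change=27113426543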
a2.tmpdir
- Temporary directory for off-heap storage. Default - value of java.io.tmpdir JVM property
a2.persistent.state.file
- Name of the file used to store oracdc state between restarts. Default - $TMPDIR/oracdc.state. Not used when a2.resiliency.type is set to fault-tolerant (the default since v1.0.0)
a2.oracdc.schemas
- Use oracdc schemas (solutions.a2.cdc.oracle.data.OraNumber and solutions.a2.cdc.oracle.data.OraTimestamp) for Oracle datatypes (NUMBER, TIMESTAMP WITH [LOCAL] TIMEZONE). Default false.
a2.dictionary.file
- File with stored columns data type mapping. For more details contact us at oracle@a2-solutions.eu. This file can be prepared using Schema Editor GUI (solutions.a2.cdc.oracle.schema.TableSchemaEditor)
a2.initial.load
- A mode for performing initial load of data from tables. When set to EXECUTE, oracdc performs the initial load and records its successful completion in the offset file. Default value - IGNORE.
a2.topic.name.style
- Kafka topic naming convention when a2.schema.type=kafka. Valid values - TABLE (default), SCHEMA_TABLE, PDB_SCHEMA_TABLE.
a2.topic.name.delimiter
- Kafka topic name delimiter when a2.schema.type=kafka and a2.topic.name.style is set to SCHEMA_TABLE or PDB_SCHEMA_TABLE. Valid values - _ (default), -, and . (dot).
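For example, assuming the SCHEMA_TABLE style joins the schema and table names with the chosen delimiter, changes to the hypothetical table SCOTT.DEPT would be sent to a topic named SCOTT_DEPT with the following settings:
a2.schema.type=kafka
a2.topic.name.style=SCHEMA_TABLE
a2.topic.name.delimiter=_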
a2.table.list.style
- When set to static (default), oracdc reads the list of tables and partitions to process only at startup, according to the values of the a2.include and a2.exclude parameters. When set to dynamic, oracdc builds the list of objects to process on the fly
a2.process.lobs
- process Oracle BLOB, CLOB, NCLOB, and XMLType columns. Default - false
a2.lob.transformation.class
- name of a class that implements the solutions.a2.cdc.oracle.data.OraCdcLobTransformationsIntf interface. Default - solutions.a2.cdc.oracle.data.OraCdcDefaultLobTransformationsImpl, which simply passes the metadata and values of BLOB/CLOB/NCLOB/XMLTYPE columns to Kafka Connect without performing any additional transformation
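For example, a minimal sketch that enables LOB processing (note that, per a2.transaction.implementation below, LOB processing requires the ChronicleQueue implementation):
a2.process.lobs=true
a2.transaction.implementation=ChronicleQueue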
a2.connection.backoff
- Backoff time in milliseconds between reconnection attempts. Default - 30000ms
a2.archived.log.catalog
- name of a class that implements the solutions.a2.cdc.oracle.OraLogMiner interface. Default - solutions.a2.cdc.oracle.OraCdcV$ArchivedLogImpl, which reads archived log information and information about the next available archived redo log from the V$ARCHIVED_LOG fixed view
a2.fetch.size
- number of rows fetched with each RDBMS round trip for accessing V$LOGMNR_CONTENTS fixed view. Default 32
a2.logminer.trace
- trace LogMiner calls with 'event 10046 level 8'? Default - false. When tracing is enabled, the following statements are executed in the RDBMS session:
alter session set max_dump_file_size=unlimited;
alter session set tracefile_identifier='oracdc';
alter session set events '10046 trace name context forever, level 8';
a2.resiliency.type
- How restarts and crashes are handled. In legacy mode, all information is stored in the file system; delivery of all changes is guaranteed with exactly-once semantics, but this mode does not protect against file system failures. When set to fault-tolerant (the default since v1.0.0), all restart data is stored in Kafka topics and the connector depends only on the Kafka cluster; if an error occurs in the middle of sending an Oracle transaction to the Kafka broker, that transaction is re-read from archived redo and sending to Kafka continues after the last successfully processed record, maintaining exactly-once semantics
a2.use.rac
- When set to true, oracdc first tries to detect whether it is connected to an Oracle RAC by querying the fixed table V$ACTIVE_INSTANCES. If the database is not a RAC, only a warning message is printed. If oracdc is connected to an Oracle RAC, additional checks are performed and oracdc starts a separate task for each redo thread/RAC instance. Changes for the same table from different redo threads/RAC instances are delivered to the same topic but to different partitions, where
<KAFKA_PARTITION_NUMBER> = <THREAD#> - 1
Default - false.
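For example, on a hypothetical two-node RAC:
a2.use.rac=true
# THREAD# 1 -> partition 0, THREAD# 2 -> partition 1 of the same topic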
a2.transaction.implementation
- Queue implementation for processing SQL statements within transactions. Allowed values - ChronicleQueue and ArrayList. Default - ChronicleQueue. LOB processing is only possible when a2.transaction.implementation is set to ChronicleQueue.
a2.print.invalid.hex.value.warning
- When set to true, oracdc prints information about invalid hex values (like a single-byte value for DATE/TIMESTAMP/TIMESTAMPTZ) in the log. Default - false.
a2.process.online.redo.logs
- When set to true, oracdc processes online redo logs. Default - false.
a2.scn.query.interval.ms
- Minimum time in milliseconds to determine the current SCN during online redo log processing. Used when a2.process.online.redo.logs is set to true. Default - 60_000.
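For example, a sketch that processes the online redo log and queries the current SCN at most every 30 seconds:
a2.process.online.redo.logs=true
a2.scn.query.interval.ms=30000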
a2.incomplete.redo.tolerance
- Connector behavior when processing an incomplete redo record. Allowed values are error, skip, and restore. Default - error. When set to:
- error, oracdc prints information about the incomplete redo record and stops the connector.
- skip, oracdc prints information about the incomplete redo record and continues processing.
- restore, oracdc tries to restore the missing information from the actual row incarnation in the table, using the ROWID from the redo record.
a2.print.all.online.scn.ranges
- If set to true, oracdc prints detailed information about SCN ranges when working with the online log, at every time interval specified by the a2.scn.query.interval.ms parameter. If set to false, oracdc prints information about the current online redo log only when SEQUENCE# changes.
Default - true.
a2.log.miner.reconnect.ms
- The time interval in milliseconds after which a reconnection to LogMiner occurs, including re-creation of the Oracle connection. Unix/Linux only; on Windows oracdc creates a new LogMiner session and re-creates the database connection every time DBMS_LOGMNR.START_LOGMNR is called. Default - Long.MAX_VALUE
a2.pk.type
- When set to well_defined, the key fields are the table's primary key columns or, if the table does not have a primary key, the table's unique key columns in which all columns are NOT NULL. If there are no appropriate keys in the table, oracdc consults the a2.use.rowid.as.key parameter and either generates a pseudo key based on the row's ROWID or generates a schema without any key fields.
When set to any_unique, and the table does not have a primary key or a unique key with all NOT NULL columns, the key fields will be the unique key columns, which may include NULL columns. If there are no appropriate keys in the table, oracdc consults the a2.use.rowid.as.key parameter and either generates a pseudo key based on the row's ROWID or generates a schema without any key fields. Default - well_defined.
a2.use.rowid.as.key
- When set to true and the table does not have an appropriate primary or unique key, oracdc adds a surrogate key based on the ROWID. When set to false and the table does not have an appropriate primary or unique key, oracdc generates the schema for the table without any key fields and key schema. Default - true.
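For example, a sketch that accepts unique keys containing NULLable columns and falls back to a ROWID-based pseudo key when no suitable key exists:
a2.pk.type=any_unique
a2.use.rowid.as.key=true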
a2.use.all.columns.on.delete
- When set to false (default) oracdc reads and processes only the primary key columns from the redo record and sends only the key fields to the Apache Kafka topic. When set to true oracdc reads and processes all table columns from the redo record. Default - false.
a2.topic.mapper
- The fully-qualified class name of the class that specifies which Kafka topic the data from the tables should be sent to. If the value of the parameter a2.schema.type is set to debezium, the default OraCdcDefaultTopicNameMapper uses the value of the a2.kafka.topic parameter as the Kafka topic name; otherwise it constructs the topic name according to the values of the parameters a2.topic.prefix, a2.topic.name.style, and a2.topic.name.delimiter, as well as the table name, table owner, and PDB name.
Default - solutions.a2.cdc.oracle.OraCdcDefaultTopicNameMapper
a2.standby.activate
- activate running LogMiner at a physical standby database. Default - false.
a2.standby.wallet.location
- Location of Oracle Wallet/External Password Store for connecting to a physical standby database with V$DATABASE.OPEN_MODE = MOUNTED
a2.standby.jdbc.url
- JDBC connection URL to connect to the Physical Standby Database. For information about the syntax, please see the description of the parameter a2.jdbc.url above
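For example, a sketch for running LogMiner at a mounted physical standby (URL and wallet path are hypothetical):
a2.standby.activate=true
a2.standby.wallet.location=/u01/oracle/standby-wallet
a2.standby.jdbc.url=jdbc:oracle:thin:@//standby.example.com:1521/STBY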
a2.distributed.activate
- Use oracdc in a distributed configuration (redo logs are generated at the source RDBMS server and then transferred to a compatible target RDBMS server for processing with LogMiner). For a description of this configuration please see Figure 25-1 in Using LogMiner to Analyze Redo Log Files. Default - false
a2.distributed.wallet.location
- Location of Oracle Wallet/External Password Store for connecting to the target database in distributed mode
a2.distributed.jdbc.url
- JDBC connection URL to connect to the Mining Database. For information about the syntax, please see the description of the parameter a2.jdbc.url above
a2.distributed.target.host
- hostname of the target database (where DBMS_LOGMNR runs) on which the shipping agent is running
a2.distributed.target.port
- port number on which the shipping agent listens for requests
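For example, a sketch of the distributed-mode parameters (host names, port, URL, and wallet path are hypothetical):
a2.distributed.activate=true
a2.distributed.wallet.location=/u01/oracle/mining-wallet
a2.distributed.jdbc.url=jdbc:oracle:thin:@//mining.example.com:1521/MINING
a2.distributed.target.host=mining.example.com
a2.distributed.target.port=21521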