Releases: averemee-si/oracdc
v2.5.2 (SEP-2024)
- SMT converters for solutions.a2.cdc.oracle.data.OraNumber, solutions.a2.cdc.oracle.data.OraIntervalYM, and solutions.a2.cdc.oracle.data.OraIntervalDS (oracle.sql.NUMBER, oracle.sql.INTERVALYM, and oracle.sql.INTERVALDS)
- Dockerfile enhancements (Schema Registry client updated to Confluent 7.7.1) and Dockerfile.snowflake to quickly create a data delivery pipeline between transactional Oracle and analytical Snowflake

Nota bene: using the Confluent Schema Registry client v7.7.x requires setting the source connector parameter a2.protobuf.schema.naming to true (fragment below).
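A minimal source-connector fragment illustrating this setting; the connector class shown is the oracdc LogMiner source connector as documented in the project, so verify both lines against KAFKA-CONNECT.md:

```properties
# Fragment for images built with the Confluent 7.7.x Schema Registry client.
# Assumption: solutions.a2.cdc.oracle.OraCdcLogMinerConnector is the
# LogMiner source connector class; confirm against KAFKA-CONNECT.md.
connector.class=solutions.a2.cdc.oracle.OraCdcLogMinerConnector
# Required with Schema Registry client v7.7.x (see the note above).
a2.protobuf.schema.naming=true
```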
v2.5.1 (AUG-2024)
- Handling/binding of suspicious transactions, i.e. those whose XID always ends with FFFFFFFF (a wrong transaction ID sequence number) and which always start with a partial rollback operation
- New parameter and additional pseudocolumn a2.pseudocolumn.ora_xid (sketch below)
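A hedged sketch: by analogy with the other a2.pseudocolumn.* parameters, the value is assumed to name the field added to each record; KAFKA-CONNECT.md is authoritative.

```properties
# Assumption: the value names the pseudocolumn (record field) that carries
# the transaction XID; see KAFKA-CONNECT.md for the exact semantics.
a2.pseudocolumn.ora_xid=ORA_XID
```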
v2.5.0 (AUG-2024)
LogMiner Connector
- Improved processing of transactions containing partial rollback (with ROLLBACK=1) statements
- JMX: LastProcessedSequence metric. For more information please read LOGMINER-METRICS.md
- Obsoleted and removed parameters: a2.resiliency.type, a2.persistent.state.file, a2.redo.count, and a2.redo.size
- New parameter a2.key.override to control the selection of the database table columns used to create the key fields of a Kafka Connect record (example below). For more information please read KAFKA-CONNECT.md
- New parameter a2.last.sequence.notifier to add notifications about the last processed redo sequence (example below). For more information please read KAFKA-CONNECT.md
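A sketch of the two new parameters with purely illustrative values; the accepted formats are defined in KAFKA-CONNECT.md, not here:

```properties
# Hypothetical value: override of the columns forming the record key.
# The real format is documented in KAFKA-CONNECT.md.
a2.key.override=SCOTT.DEPT=DEPTNO
# Hypothetical value: destination/type of last-processed-sequence notifications.
a2.last.sequence.notifier=file
```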
Sink Connector
New parameter a2.connection.init.sql to set SQL statement(s) that will be executed for every new connection when it is created (example below)
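An illustrative fragment; the statement is just an example of connection-initialization SQL (PostgreSQL syntax here):

```properties
# Illustrative value: run once for each newly created sink connection.
a2.connection.init.sql=SET TIME ZONE 'UTC'
```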
v2.4.0 (MAY-2024)
LogMiner Connector
- Oracle Active DataGuard support in the Oracle Database settings check utility
- Fix for Oracle DataGuard when V$STANDBY_LOG does not contain rows
- Fix for ORA-310/ORA-334 under heavy RDBMS load
- New parameters to support pseudocolumns: a2.pseudocolumn.ora_rowscn, a2.pseudocolumn.ora_commitscn, a2.pseudocolumn.ora_rowts, and a2.pseudocolumn.ora_operation (example below). For more information please read KAFKA-CONNECT.md
- New parameters to support audit pseudocolumns: a2.pseudocolumn.ora_username, a2.pseudocolumn.ora_osusername, a2.pseudocolumn.ora_hostname, a2.pseudocolumn.ora_audit_session_id, a2.pseudocolumn.ora_session_info, and a2.pseudocolumn.ora_client_id (example below). For more information please read KAFKA-CONNECT.md
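A hedged fragment enabling a few of these pseudocolumns; as in the v2.5.1 sketch above, each value is assumed to name the field added to the record:

```properties
# Assumed semantics: each value is the name of the pseudocolumn added to
# the record; KAFKA-CONNECT.md has the authoritative description.
a2.pseudocolumn.ora_rowscn=ORA_ROWSCN
a2.pseudocolumn.ora_operation=ORA_OPERATION
a2.pseudocolumn.ora_username=ORA_USERNAME
a2.pseudocolumn.ora_client_id=ORA_CLIENT_ID
```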
Sink Connector
New parameters: a2.table.mapper, a2.table.name.prefix, and a2.table.name.suffix (example below)
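A sketch with hypothetical values for the two name-decoration parameters (a2.table.mapper appears in the v2.2.0 fragment below):

```properties
# Hypothetical values: presumably prepended/appended to the destination
# table name; confirm the semantics in the connector documentation.
a2.table.name.prefix=CDC_
a2.table.name.suffix=_RAW
```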
v2.3.1 (APR-2024)
Simplified configuration for Oracle Active DataGuard: the same configuration is now used for Oracle Active DataGuard as for a primary database
v2.3.0 (APR-2024)
### LogMiner Connector
- New parameter a2.stop.on.ora.1284 to manage the connector behavior on ORA-1284 (example below). For more information please read KAFKA-CONNECT.md
- Checking the number of non-zero columns returned from a redo record, for greater reliability
- Handling of partial rollback records in RDBMS 19.13, i.e. when the redo record with ROLLBACK=1 comes before the redo record with ROLLBACK=0
- Processing of DELETE operations for tables with a ROWID pseudo key
- New parameter a2.print.unable.to.delete.warning to manage the connector's log output for DELETE operations over tables without a PK (example below). For more information please read KAFKA-CONNECT.md
- New parameter a2.schema.name.mapper to manage schema name generation. For more information please read KAFKA-CONNECT.md
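A hedged fragment for the two switches above; both are assumed to be booleans based on their descriptions, and the values are illustrative:

```properties
# Assumption: boolean switches; confirm names and defaults in KAFKA-CONNECT.md.
# Keep the connector running (rather than stopping) on ORA-1284.
a2.stop.on.ora.1284=false
# Warn in the log about DELETEs against tables without a PK.
a2.print.unable.to.delete.warning=true
```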
### Docker image
Rehosted the Confluent Schema Registry clients (Avro/Protobuf/JSON Schema) and bumped the version to 7.5.3
v2.2.0 (MAR-2024)
LogMiner Connector
- Enhanced handling of partial rollback redo records (ROLLBACK=1). For additional information about these redo records please read ROLLBACK INTERNALS starting with the sentence "The interesting thing is with partial rollback."
- New parameter a2.topic.mapper to manage the name of the Kafka topic to which data will be sent (example below). For more information please read KAFKA-CONNECT.md
- Oracle Database settings check utility
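A sketch assuming a2.topic.mapper accepts the fully qualified class name of a topic-name mapper implementation; the class below is hypothetical:

```properties
# Hypothetical custom mapper (assumption: the parameter takes a fully
# qualified class name; see KAFKA-CONNECT.md for the actual contract).
a2.topic.mapper=com.example.cdc.MyTopicNameMapper
```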
Sink Connector
- Connector classes were refactored, and the Sink Connector itself was renamed from solutions.a2.cdc.oracle.OraCdcJdbcSinkConnector to solutions.a2.kafka.sink.JdbcSinkConnector (fragment below)
- New parameter a2.table.mapper to manage the table into which the data is sunk
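Existing sink configurations must switch to the new class name; a minimal fragment (the a2.table.mapper value is a hypothetical custom implementation):

```properties
# Renamed in this release from solutions.a2.cdc.oracle.OraCdcJdbcSinkConnector.
connector.class=solutions.a2.kafka.sink.JdbcSinkConnector
# Hypothetical mapper controlling the destination table (assumption: the
# parameter takes a fully qualified class name).
a2.table.mapper=com.example.cdc.MyTableNameMapper
```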
v2.1.0 (FEB-2024)
ServiceLoader manifest files; for more information please read KIP-898: Modernize Connect plugin discovery
LogMiner Connector
- oracdc now also checks for the first available SCN in V$LOG
- Reduced output about scale differences between redo and dictionary
- Separate first available SCN detection for primary and standby databases
New parameters (example below):
- a2.incomplete.redo.tolerance to manage connector behavior when processing an incomplete redo record. For more information please read KAFKA-CONNECT.md
- a2.print.all.online.scn.ranges to control output when processing online redo logs. For more information please read KAFKA-CONNECT.md
- a2.log.miner.reconnect.ms to manage the LogMiner reconnect interval on Unix/Linux. For more information please read KAFKA-CONNECT.md
- a2.pk.type to manage behavior when choosing the key fields in a table's schema. For more information please read KAFKA-CONNECT.md
- a2.use.rowid.as.key to manage behavior when a table does not have appropriate PK/unique columns for key fields. For more information please read KAFKA-CONNECT.md
- a2.use.all.columns.on.delete to manage behavior when reading and processing a redo record for DELETE. For more information please read KAFKA-CONNECT.md
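A hedged fragment covering all six parameters; every value below is illustrative, and the accepted values and defaults live in KAFKA-CONNECT.md:

```properties
# Illustrative values only; consult KAFKA-CONNECT.md before use.
a2.incomplete.redo.tolerance=error
a2.print.all.online.scn.ranges=false
a2.log.miner.reconnect.ms=3600000
a2.pk.type=well_defined
a2.use.rowid.as.key=true
a2.use.all.columns.on.delete=false
```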
Sink Connector
- Fix for SQL statement creation when the first statement in a topic is "incomplete" (a delete operation, for instance)
- Exponential back-off for the sink's getConnection()
- Support for key-less schemas
- PostgreSQL: support for implicitly defined primary keys
v2.0.0 - online redo logs processing and more...
Online redo logs processing:
Online redo logs are processed when the parameter a2.process.online.redo.logs is set to true (default: false). To control the lag between data processing in Oracle and the connector, the parameter a2.scn.query.interval.ms sets this lag in milliseconds for processing data in the online logs (fragment below). This expands the connector's range of use and makes it applicable where minimal and managed latency is required.
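The two parameters named above in a minimal fragment (the interval value is illustrative):

```properties
# Enable processing of online redo logs (default is false).
a2.process.online.redo.logs=true
# Illustrative lag: read online-log data 500 ms behind the database.
a2.scn.query.interval.ms=500
```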
Default values:
Column default values are now part of the table schema
19c enhancements:
- HEX('59') (and some other single-byte values) for DATE/TIMESTAMP/TIMESTAMPTZ is treated as NULL
- HEX('787b0b06113b0d')/HEX('787b0b0612013a')/HEX('787b0b0612090c')/etc. (2.109041558E-115, 2.1090416E-115, 2.109041608E-115) for NUMBER(N)/NUMBER(P,S) are treated as NULL

Information about such values is not printed in the log by default; to print these messages, set the parameter a2.print.invalid.hex.value.warning to true (fragment below).
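To surface these otherwise-silent substitutions in the log:

```properties
# Off by default; set to true to log the HEX values that oracdc
# treats as NULL (see the 19c notes above).
a2.print.invalid.hex.value.warning=true
```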
Solution for incomplete redo information:
A fix for the problem described in LogMiner REDO_SQL missing WHERE clause and LogMiner Redo SQL w/o WHERE-clause
v1.6.0 (OCT-2023)
- Support for INTERVALYM/INTERVALDS
- TIMESTAMP enhancements
- SDU hint in log