
Releases: averemee-si/oracdc

v2.5.2 (SEP-2024)

16 Sep 08:54
  1. SMT converters for solutions.a2.cdc.oracle.data.OraNumber, solutions.a2.cdc.oracle.data.OraIntervalYM, and solutions.a2.cdc.oracle.data.OraIntervalDS (oracle.sql.NUMBER, oracle.sql.INTERVALYM, and oracle.sql.INTERVALDS respectively)
  2. Dockerfile enhancements (Schema Registry client updated to Confluent 7.7.1) and a new Dockerfile.snowflake to quickly create a data delivery pipeline between a transactional Oracle database and analytical Snowflake

Nota bene: using the Confluent Schema Registry client v7.7.x requires setting the source connector parameter a2.protobuf.schema.naming to true
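A minimal source-connector fragment illustrating the requirement; only a2.protobuf.schema.naming comes from this release note, while the converter lines are shown purely as typical surrounding context:

```properties
# Required when running against Confluent Schema Registry client v7.7.x
a2.protobuf.schema.naming=true
# Typical surrounding context only (not prescribed by this release)
value.converter=io.confluent.connect.protobuf.ProtobufConverter
value.converter.schema.registry.url=http://schema-registry:8081
```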

v2.5.1 (AUG-2024)

21 Aug 13:12
  1. Handling/binding of suspicious transactions, i.e. transactions whose XID ends with FFFFFFFF (a wrong transaction ID sequence number) and which always start with a partial rollback operation
  2. New parameter a2.pseudocolumn.ora_xid and the corresponding additional pseudocolumn
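An illustrative fragment for the new parameter; it assumes the value is the name of the field to be added to each record, which should be verified against KAFKA-CONNECT.md:

```properties
# Hypothetical field name - the exact value format is documented in KAFKA-CONNECT.md
a2.pseudocolumn.ora_xid=ORA_XID
```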

v2.5.0 (AUG-2024)

19 Aug 06:19

LogMiner Connector

  1. Improved processing of transactions containing partial rollback (with ROLLBACK=1) statements
  2. JMX: LastProcessedSequence metric. For more information please read LOGMINER-METRICS.md
  3. Obsoleted and removed parameters: a2.resiliency.type, a2.persistent.state.file, a2.redo.count, a2.redo.size
  4. New parameter a2.key.override to control which database table columns are used to create the key fields of a Kafka Connect record. For more information please read KAFKA-CONNECT.md.
  5. New parameter a2.last.sequence.notifier to add notifications about the last processed redo sequence. For more information please read KAFKA-CONNECT.md.

Sink Connector

New parameter a2.connection.init.sql to set SQL statement(s) that are executed for every new connection when it is created
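An illustrative sink fragment; the session statement below is only a placeholder for whatever SQL the target database requires:

```properties
# Executed for every new JDBC connection when it is created (placeholder statement)
a2.connection.init.sql=ALTER SESSION SET NLS_DATE_FORMAT='YYYY-MM-DD HH24:MI:SS'
```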

v2.4.0 (MAY-2024)

09 May 20:36

LogMiner Connector

  1. Oracle Active DataGuard support for Oracle Database settings check utility
  2. Fix for Oracle DataGuard when V$STANDBY_LOG does not contain rows
  3. Fix ORA-310/ORA-334 under heavy RDBMS load
  4. New parameters to support pseudo columns - a2.pseudocolumn.ora_rowscn, a2.pseudocolumn.ora_commitscn, a2.pseudocolumn.ora_rowts, & a2.pseudocolumn.ora_operation. For more information please read KAFKA-CONNECT.md
  5. New parameters to support audit pseudo columns: a2.pseudocolumn.ora_username, a2.pseudocolumn.ora_osusername, a2.pseudocolumn.ora_hostname, a2.pseudocolumn.ora_audit_session_id, a2.pseudocolumn.ora_session_info, & a2.pseudocolumn.ora_client_id. For more information please read KAFKA-CONNECT.md
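A sketch of how the pseudocolumn parameters from items 4 and 5 might appear in a source-connector configuration; the field names on the right-hand side are assumptions about the value format, so consult KAFKA-CONNECT.md for the authoritative description:

```properties
# Assumed value format: the name of the field added to every record
a2.pseudocolumn.ora_rowscn=ORA_ROWSCN
a2.pseudocolumn.ora_commitscn=ORA_COMMITSCN
a2.pseudocolumn.ora_operation=ORA_OPERATION
a2.pseudocolumn.ora_username=ORA_USERNAME
```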

Sink Connector

New parameters: a2.table.mapper, a2.table.name.prefix, and a2.table.name.suffix
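An illustrative sink fragment; the mapper class name is a placeholder (these notes do not name an implementation), while the prefix and suffix are assumed to be plain strings placed around the derived table name:

```properties
# Placeholder class name - see KAFKA-CONNECT.md for the real default/allowed values
a2.table.mapper=solutions.a2.kafka.sink.TableNameMapper
# Assumed semantics: strings prepended/appended to the target table name
a2.table.name.prefix=STG_
a2.table.name.suffix=_CDC
```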

v2.3.1 (APR-2024)

04 Apr 09:33

Simplification of configuration for Oracle Active DataGuard - the same configuration is now used for Oracle Active DataGuard as for the primary database

v2.3.0 (APR-2024)

03 Apr 08:12

LogMiner Connector

  1. New parameter - a2.stop.on.ora.1284 to manage connector behavior on ORA-1284 (see the configuration sketch after this list). For more information please read KAFKA-CONNECT.md
  2. Checking the number of non-zero columns returned from a redo record for greater reliability
  3. Handling of partial rollback records in RDBMS 19.13, i.e. when a redo record with ROLLBACK=1 precedes the redo record with ROLLBACK=0
  4. Processing of DELETE operations for tables with a ROWID pseudo key
  5. New parameter - a2.print.unable.to.delete.warning to manage the connector's log output for DELETE operations over tables without a PK. For more information please read KAFKA-CONNECT.md
  6. New parameter - a2.schema.name.mapper to manage schema name generation. For more information please read KAFKA-CONNECT.md
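A hedged sketch for the two parameters above whose names imply boolean values; a2.schema.name.mapper is omitted because its value format is only described in KAFKA-CONNECT.md:

```properties
# Assumed boolean semantics: keep the connector running when ORA-1284 is raised
a2.stop.on.ora.1284=false
# Assumed boolean semantics: log a warning for DELETEs against tables without a PK
a2.print.unable.to.delete.warning=true
```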

Docker image
Rehost Confluent schema registry clients (Avro/Protobuf/JSON Schema) and bump version to 7.5.3

v2.2.0 (MAR-2024)

01 Mar 06:18

LogMiner Connector

  1. Enhanced handling of partial rollback redo records (ROLLBACK=1). For additional information about these redo records please read ROLLBACK INTERNALS starting with the sentence "The interesting thing is with partial rollback."
  2. New parameter a2.topic.mapper to manage the name of the Kafka topic to which data will be sent. For more information please read KAFKA-CONNECT.md
  3. Oracle Database settings check utility

Sink Connector

  1. Connector classes were refactored and the Sink Connector itself was renamed from solutions.a2.cdc.oracle.OraCdcJdbcSinkConnector to solutions.a2.kafka.sink.JdbcSinkConnector (see the fragment after this list)
  2. New parameter - a2.table.mapper to manage the table in which to sink the data.
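Existing sink configurations therefore need the new class name; a minimal fragment:

```properties
# Previous value: solutions.a2.cdc.oracle.OraCdcJdbcSinkConnector
connector.class=solutions.a2.kafka.sink.JdbcSinkConnector
```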

v2.1.0 (FEB-2024)

01 Feb 21:23

ServiceLoader manifest files; for more information please read KIP-898: Modernize Connect plugin discovery

LogMiner Connector

  1. oracdc now also checks for the first available SCN in V$LOG
  2. Reduced output about scale differences between redo and dictionary
  3. Separate first available SCN detection for primary and standby

New parameters

a2.incomplete.redo.tolerance - to manage connector behavior when processing an incomplete redo record. For more information please read KAFKA-CONNECT.md
a2.print.all.online.scn.ranges - to control output when processing online redo logs. For more information please read KAFKA-CONNECT.md
a2.log.miner.reconnect.ms - to manage reconnect interval for LogMiner for Unix/Linux. For more information please read KAFKA-CONNECT.md
a2.pk.type - to manage behavior when choosing key fields in the table schema. For more information please read KAFKA-CONNECT.md
a2.use.rowid.as.key - to manage behavior when the table does not have appropriate PK/unique columns for key fields. For more information please read KAFKA-CONNECT.md
a2.use.all.columns.on.delete - to manage behavior when reading and processing a redo record for DELETE. For more information please read KAFKA-CONNECT.md
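A sketch covering the parameters whose value formats are implied by their names (booleans and a millisecond interval); a2.incomplete.redo.tolerance and a2.pk.type take values documented in KAFKA-CONNECT.md and are omitted here. Values shown are illustrative:

```properties
# Assumed boolean semantics based on the parameter names
a2.print.all.online.scn.ranges=false
a2.use.rowid.as.key=true
a2.use.all.columns.on.delete=true
# LogMiner reconnect interval for Unix/Linux, in milliseconds (illustrative value)
a2.log.miner.reconnect.ms=21600000
```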

Sink Connector

  1. Fix for SQL statement creation when the first statement in a topic is "incomplete" (a delete operation, for instance)
  2. Exponential back-off added for sink getConnection()
  3. Support for key-less schemas
  4. PostgreSQL: support for implicitly defined primary keys

v2.0.0 - online redo logs processing and more...

01 Dec 07:03

Online redo logs processing:
Online redo logs are processed when the parameter a2.process.online.redo.logs is set to true (default: false). The parameter a2.scn.query.interval.ms controls the lag, in milliseconds, between data changes in Oracle and their processing from the online logs.
This expands the range of connector use cases and makes the connector usable where minimal, managed latency is required.
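A configuration fragment grounded in the description above; the one-second interval is only an example:

```properties
# Enable processing of online redo logs (default is false)
a2.process.online.redo.logs=true
# Lag, in milliseconds, applied when processing data from the online logs
a2.scn.query.interval.ms=1000
```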

Default values:
Column default values are now part of the table schema

19c enhancements:

  1. HEX('59') (and some other single byte values) for DATE/TIMESTAMP/TIMESTAMPTZ are treated as NULL
  2. HEX('787b0b06113b0d')/HEX('787b0b0612013a')/HEX('787b0b0612090c')/etc (2.109041558E-115,2.1090416E-115,2.109041608E-115) for NUMBER(N)/NUMBER(P,S) are treated as NULL
    information about such values is not printed in the log by default; to print messages, set the parameter a2.print.invalid.hex.value.warning to true
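To log a message for every such value, the warning switch described above can be enabled:

```properties
# Off by default; when true, a message is printed for each invalid HEX value encountered
a2.print.invalid.hex.value.warning=true
```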

Solution for incomplete redo information:
Solution for the problem described in LogMiner REDO_SQL missing WHERE clause and LogMiner Redo SQL w/o WHERE-clause

v1.6.0 (OCT-2023)

03 Oct 17:28

Support for INTERVALYM/INTERVALDS
TIMESTAMP enhancements
SDU hint in log