Releases: averemee-si/oracdc

v1.5.0 - AUG-2023

03 Aug 07:56
  • New a2.pk.string.length parameter for the Sink Connector, plus other Sink Connector enhancements
  • New a2.transaction.implementation parameter for the LogMiner Source Connector: when set to ChronicleQueue (the default), oracdc stores information about the SQL statements in an Oracle transaction in a Chronicle Queue, which uses off-heap memory and requires disk space for memory-mapped files; when set to ArrayList, oracdc stores that information in an ArrayList on the JVM heap, so no disk space is needed (see the sketch after this list)
  • Fix for ORA-17002 while querying the data dictionary
  • Better handling of SQLRecoverableException while querying the data dictionary
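
A minimal sketch of the new parameter, assuming an otherwise complete LogMiner Source Connector configuration (connection and topic settings omitted; the values shown are illustrative):

    # Store per-transaction SQL statements in off-heap Chronicle Queue
    # memory-mapped files (the default); needs disk space for these files.
    a2.transaction.implementation=ChronicleQueue
    # Alternative: keep per-transaction SQL statements on the JVM heap;
    # no disk space needed, but large transactions consume heap memory.
    #a2.transaction.implementation=ArrayList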

v1.4.2 - JUN-2023

25 Jun 17:28

Fix for unhandled ORA-17410 when running against 12c on Windows, plus stricter checks for supplemental logging settings

v1.4.1 - a2.schema.type=single

19 May 17:41

New a2.schema.type=single: a schema type that stores all columns of a database row in a single message with only a value schema (no separate key schema). A sample configuration follows.
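
A minimal sketch, assuming an otherwise complete oracdc connector configuration (all other settings omitted):

    # Put every column of the database row into one message with only a
    # value schema, rather than separate key and value schemas.
    a2.schema.type=single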

v1.4.0 - Oracle 23c readiness, supplemental logging checks, fixes for Oracle RDBMS on Microsoft Windows

03 May 13:48

Oracle 23c readiness:

  • documentation updated with links to the 23c docs (where available)
  • BOOLEAN and JSON datatype information

supplemental logging checks:

  • checks and log output added at the database and table level

fixes for Oracle RDBMS on Microsoft Windows:

  • fixed an issue where the LogMiner call kept archived logs locked and prevented RMAN from processing these files (Windows-specific)

v1.3.3.2 - 'a2.protobuf.schema.naming' parameter

25 Mar 20:44

Added the 'a2.protobuf.schema.naming' parameter to fix the following error in an Amazon MSK environment with the AWS Glue Schema Registry:

org.apache.kafka.connect.errors.ConnectException: Tolerance exceeded in error handler
	at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:206)
	at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execute(RetryWithToleranceOperator.java:132)
	at org.apache.kafka.connect.runtime.WorkerSourceTask.convertTransformedRecord(WorkerSourceTask.java:326)
	at org.apache.kafka.connect.runtime.WorkerSourceTask.sendRecords(WorkerSourceTask.java:355)
	at org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:257)
	at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:188)
	at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:243)
	at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
	at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: com.google.protobuf.Descriptors$DescriptorValidationException: com.amazonaws.services.schemaregistry.kafkaconnect.autogenerated.SCOTT.DEPT.Key.SCOTT.DEPT.Key: "SCOTT.DEPT.Key" is not a valid identifier.
	at com.google.protobuf.Descriptors$DescriptorPool.validateSymbolName(Descriptors.java:2703)
	at com.google.protobuf.Descriptors$DescriptorPool.addSymbol(Descriptors.java:2575)
	at com.google.protobuf.Descriptors$Descriptor.<init>(Descriptors.java:971)
	at com.google.protobuf.Descriptors$Descriptor.<init>(Descriptors.java:648)
	at com.google.protobuf.Descriptors$FileDescriptor.<init>(Descriptors.java:548)
	at com.google.protobuf.Descriptors$FileDescriptor.buildFrom(Descriptors.java:319)
	at com.google.protobuf.Descriptors$FileDescriptor.buildFrom(Descriptors.java:290)
	at com.amazonaws.services.schemaregistry.kafkaconnect.protobuf.fromconnectschema.ConnectSchemaToProtobufSchemaConverter.buildFileDescriptor(ConnectSchemaToProtobufSchemaConverter.java:88)
	at com.amazonaws.services.schemaregistry.kafkaconnect.protobuf.fromconnectschema.ConnectSchemaToProtobufSchemaConverter.convert(ConnectSchemaToProtobufSchemaConverter.java:43)
	at com.amazonaws.services.schemaregistry.kafkaconnect.protobuf.ProtobufSchemaConverter.fromConnectData(ProtobufSchemaConverter.java:105)
	at org.apache.kafka.connect.storage.Converter.fromConnectData(Converter.java:63)
	at org.apache.kafka.connect.runtime.WorkerSourceTask.lambda$convertTransformedRecord$2(WorkerSourceTask.java:326)
	at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndRetry(RetryWithToleranceOperator.java:156)
	at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:190)
	... 11 more
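
A minimal sketch of the fix in use. The failure above comes from the default schema name ("SCOTT.DEPT.Key"), whose dots are not valid in Protobuf identifiers; assuming the parameter takes a boolean that switches to Protobuf-safe naming (an assumption, not spelled out in these notes), the relevant fragment is:

    # Assumption: when enabled, oracdc generates schema names that are
    # valid Protobuf identifiers, so the AWS Glue Schema Registry
    # converter can build the file descriptor without the
    # DescriptorValidationException shown above.
    a2.protobuf.schema.naming=true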

v1.3.3.1 - fix temporary dir check error when a2.tmpdir is not specified

21 Mar 13:36

Fix for a temporary directory check error when a2.tmpdir is not specified.
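
On affected versions the check can also be sidestepped by setting the directory explicitly; a minimal sketch, where the path is a placeholder:

    # Point a2.tmpdir at an existing, writable directory so the
    # temporary-directory check has an explicit value to validate.
    a2.tmpdir=/u01/oracdc/tmp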

v1.3.3 - techstack updates

12 Mar 09:32

v1.3.3: tech stack/dependency version updates (JUnit, commons-cli, OJDBC, SLF4J)

v1.3.2 - FEB-2023 fixes

15 Feb 11:15

Fix for #40 and Jackson library update

v1.3.1 - add OCI DBCS product name fix

10 Jan 08:03

Add OCI DBCS product name fix:
the usual on-premises product name is 'Oracle Database 19c Enterprise Edition' and the like, while OCI DBCS adds other variants such as 'Oracle Database 19c EE Extreme Perf' and 'Oracle Database 19c EE High Perf'.

v1.3.0 - Single instance physical standby for Oracle RAC

12 Dec 13:47

When running against a single-instance physical standby for Oracle RAC, the connector automatically detects the opened redo threads and starts the required number of connector tasks (the tasks.max parameter must be equal to or greater than the number of redo threads).
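
A minimal sketch, assuming a standby with two opened redo threads (the value shown is illustrative; only the rule that tasks.max must cover the thread count comes from the note above):

    # One connector task is started per opened redo thread, so tasks.max
    # must be >= the number of redo threads on the standby.
    tasks.max=2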