Near real time: is it possible?

averemee-si edited this page Aug 10, 2023 · 2 revisions

oracdc is designed to minimize the impact on the primary transactional Oracle Database instance. oracdc calls the DBMS_LOGMNR package and performs change-data capture on an Oracle instance whose V$DATABASE.OPEN_MODE is READ WRITE, READ ONLY, or even MOUNTED. This is why, when running against an additional instance, oracdc queries the V$ARCHIVED_LOG dynamic performance view (or uses a directory watcher) to obtain information about archived redo logs. The result is
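As a rough illustration only (the actual query used by oracdc may differ), newly archived redo logs can be discovered from V$ARCHIVED_LOG with a query like the following, where `:last_processed_first_change` is a hypothetical bind variable holding the highest FIRST_CHANGE# already mined:

```sql
-- Hedged sketch: list archived redo logs not yet processed.
-- :last_processed_first_change is a hypothetical bind variable, not
-- an oracdc identifier.
SELECT name, sequence#, first_change#, next_change#
  FROM v$archived_log
 WHERE first_change# > :last_processed_first_change
   AND name IS NOT NULL
   AND deleted = 'NO'
 ORDER BY first_change#;
```

Because this touches only a dynamic performance view, it places essentially no load on the transactional workload.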

  1. minimal impact on the transactional instance, or on an additional Oracle instance where changed data are mined using DBMS_LOGMNR,
  2. prevention of the ORA-4036 error associated with excessive PGA use by the DBMS_LOGMNR utility,
  3. no additional CPU cores required on the database side for CDC.

But what is the price of minimizing the impact on the primary transactional Oracle Database instance? Changes are captured only after the redo log containing them has been archived. In other words, a COMMIT alone is not enough for changes to reach the archived log: either the ARCn process must copy the online redo log to an archived redo log, or you must execute ALTER SYSTEM ARCHIVE LOG CURRENT or ALTER SYSTEM SWITCH LOGFILE. However,

  1. the behavior of the ARCn process cannot be managed directly, and
  2. ALTER SYSTEM ARCHIVE LOG CURRENT and ALTER SYSTEM SWITCH LOGFILE are suitable only for debugging.
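For debugging or testing only, a log switch can be forced manually and the result verified, for example:

```sql
-- For debugging/testing only: archive the current online redo log now.
ALTER SYSTEM ARCHIVE LOG CURRENT;

-- Alternative: switch the log file (archiving then proceeds asynchronously).
ALTER SYSTEM SWITCH LOGFILE;

-- Verify: the previous log group should no longer be CURRENT,
-- and the archived copy should appear in V$ARCHIVED_LOG.
SELECT group#, sequence#, status FROM v$log ORDER BY sequence#;
```

Do not script such switches in production; they inflate the number of archived logs and are no substitute for proper instance configuration.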

What can be done? Consider setting the ARCHIVE_LAG_TARGET initialization parameter after testing in your environment: it forces a log switch at a fixed interval and can help achieve a predictable capture lag when using oracdc.
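For example, the following forces a log switch at least every 300 seconds (the value is illustrative; the right setting depends on your redo volume and should be validated by testing):

```sql
-- Illustrative value: force a log switch at least every 300 seconds.
-- Valid nonzero range is 60 to 7200 seconds; 0 (the default) disables it.
ALTER SYSTEM SET ARCHIVE_LAG_TARGET = 300 SCOPE = BOTH;
```

A lower value gives fresher data in oracdc at the cost of more, smaller archived redo logs; pick the largest interval that still meets your latency requirement.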