Using Eventuate with the Postgres WAL configuration results in WAL files not being recycled and storage growing indefinitely.
Looking into `PostgresWalClient`, I see that `eventuate_slot` is used and its `restart_lsn` is being updated, but not for `eventuate_slot2`.
Selecting from `pg_replication_slots` shows that the `restart_lsn` for `eventuate_slot2` is much older than that of `eventuate_slot`, which would explain why all the WAL files up to `0/18765A0` are not deleted.
| slot_name       | database  | temporary | active | active_pid | restart_lsn | confirmed_flush_lsn |
|-----------------|-----------|-----------|--------|------------|-------------|---------------------|
| eventuate_slot  | eventuate | false     | true   | 40         | 0/76E5CB0   | 0/76CE780           |
| eventuate_slot2 | eventuate | false     | false  | null       | 0/18765A0   | 0/18765D8           |
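To quantify how much WAL the stale slot pins, the two `restart_lsn` values from the table above can be compared by decoding the `high/low` hex LSN format into an absolute byte position. A minimal sketch (the helper name is my own, not from Eventuate or Postgres):

```python
def lsn_to_bytes(lsn: str) -> int:
    """Decode a Postgres LSN like '0/76E5CB0' into an absolute byte position."""
    high, low = lsn.split("/")
    return (int(high, 16) << 32) | int(low, 16)

# restart_lsn values from the pg_replication_slots output above
lag = lsn_to_bytes("0/76E5CB0") - lsn_to_bytes("0/18765A0")
print(lag)  # 99022608 bytes (~94 MB) of WAL that eventuate_slot2 forces Postgres to retain
```

Because Postgres must keep every WAL segment at or after the oldest `restart_lsn` of any slot, this gap only grows while `eventuate_slot2` stays behind.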
Is there any configuration I am missing that could explain this problem, or does the second replication slot need updating as well?
I can see that before calling `saveOffsetOfLastProcessedEvent()` all the streamed changes are filtered by `.filter(change -> change.getKind().equals("insert"))`. This would mean that updates to the `cdc_monitoring` table would not advance the `restart_lsn` of the replication slot.
Under a busy system load I think this would not be an issue, but if no new saga instances are created, the WAL files would stack up indefinitely until a new insert is made.
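The effect of that filter can be sketched as follows. This is a hypothetical simplification of the offset-saving logic described above, not the actual `PostgresWalClient` code: only `"insert"` changes survive the filter, so an update-only stream (e.g. `cdc_monitoring` heartbeats) never advances the saved offset.

```python
def advance_offset(changes, saved_offset):
    """Keep only inserts, then save the offset of the last processed change."""
    inserts = [c for c in changes if c["kind"] == "insert"]
    if inserts:
        saved_offset = inserts[-1]["lsn"]  # analogue of saveOffsetOfLastProcessedEvent()
    return saved_offset

offset = "0/18765A0"  # stale restart_lsn

# cdc_monitoring heartbeats arrive as updates, so they are filtered out:
offset = advance_offset([{"kind": "update", "lsn": "0/76E5CB0"}], offset)
assert offset == "0/18765A0"  # offset (and hence restart_lsn) never moves

# only a new insert advances it:
offset = advance_offset([{"kind": "insert", "lsn": "0/76E5CB0"}], offset)
assert offset == "0/76E5CB0"
```

This matches the symptom in the slot table: `eventuate_slot2` sits idle at an old `restart_lsn` until insert traffic resumes.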