Sink Connector Light for MySQL falls behind after a while #598
Comments
@meysammeisam please try those values with CH; it is best to use fewer insert threads doing more work :)
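This advice maps to the batching knobs in the connector's `config.yml`. A minimal sketch of what "fewer threads, bigger batches" could look like; the key names below are assumptions based on the sink-connector-lightweight documentation and may differ between releases, so verify them against your version:

```yaml
# Hypothetical tuning sketch -- key names are assumptions, check your release's docs.
thread.pool.size: 2          # fewer concurrent insert threads
buffer.max.records: 100000   # let each thread accumulate a larger batch
buffer.flush.time.ms: 2000   # flush less often, so each INSERT carries more rows
```

ClickHouse generally performs better with a few large INSERTs than many small concurrent ones, which is the rationale behind reducing the thread count.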
Also, one sink connector should be able to handle multiple databases. #531
@aadant Unfortunately it didn't help!

```
May 22 16:03:12 ch altinity-sync[933389]: 2024-05-22 16:03:12.236 INFO - *************** EXECUTED BATCH Successfully Records: 212************** task(0) Thread ID: Sink Connector thread-pool-3 Result: [I@2acf6b76
May 22 16:03:12 ch altinity-sync[933389]: [JdbcOffsetBackingStore-1] WARN com.clickhouse.jdbc.internal.ClickHouseConnectionImpl - [JDBC Compliant Mode] Transaction is not supported. You may change jdbcCompliant to false to throw SQLException instead.
May 22 16:03:12 ch altinity-sync[933389]: [JdbcOffsetBackingStore-1] WARN com.clickhouse.jdbc.internal.ClickHouseConnectionImpl - [JDBC Compliant Mode] Transaction [9635f6b5-3f74-4a72-a72b-7b9d9b16ab85] (4 queries & 0 savepoints) is committed.
May 22 16:03:12 ch altinity-sync[933389]: 2024-05-22 16:03:12.264 INFO - ***** BATCH marked as processed to debezium ****Binlog file:mysql-bin-changelog.030965 Binlog position: 2413173 GTID: 2144281366 Sequence Number: 1716393793068007070 Debezium Timestamp: 1716393792068
```

Any hints on how to debug and find the bottleneck are highly appreciated.
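One way to see how far the sink is behind is to compare the `Debezium Timestamp` in the "BATCH marked as processed" log lines against wall-clock time. A small sketch (the log format is taken from the lines above; the helper name and parsing regex are mine, not part of the connector):

```python
import re
import time
from typing import Optional

# Sample "BATCH marked" line, copied from the connector log in this issue.
LINE = ("***** BATCH marked as processed to debezium ****"
        "Binlog file:mysql-bin-changelog.030965 Binlog position: 2413173 "
        "GTID: 2144281366 Sequence Number: 1716393793068007070 "
        "Debezium Timestamp: 1716393792068")

def debezium_lag_ms(line: str, now_ms: Optional[int] = None) -> Optional[int]:
    """Return (now - Debezium Timestamp) in ms, or None if the line has no timestamp."""
    m = re.search(r"Debezium Timestamp:\s*(\d+)", line)
    if not m:
        return None
    if now_ms is None:
        now_ms = int(time.time() * 1000)
    return now_ms - int(m.group(1))

# The line above was logged at 2024-05-22 16:03:12.264 UTC (~1716393792264 ms),
# so at flush time this batch was about 196 ms behind the source event.
print(debezium_lag_ms(LINE, now_ms=1716393792264))  # -> 196
```

Running this periodically over the tail of the log (instead of passing a fixed `now_ms`) would show whether the lag grows monotonically or only during write bursts.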
We managed to fix the issue by reducing the upsert rate on the source DB, so it now produces fewer binlogs and Altinity Sink Connector Light is able to catch up.
We use the light version to replicate our RDS MySQL data to CH. In most cases it works fine, but in some cases where the source DB is relatively large with a high write rate, it falls behind after a few hours. We have tried the following, but none seems to be a proper solution:

- `OPTIMIZE TABLE <ABC> FINAL` on CH

The config file looks like:
As captured by the Prometheus/Grafana dashboard, the lag (source DB -> CH) stays at zero for a while, then starts growing, and no service logs are emitted. It seems the connector stops working entirely until we restart or reinitialize it.
Are we missing anything in our configuration? Any hints are highly appreciated.
Notes: