The skip count gets incredibly high in a specific situation with high volume.
The setup is a set of clusters with cascading replication enabled and realtime replication running between them.
Puts are normally made to cluster A. The realtime queue from cluster C to cluster B will usually see those puts as skips, because B is already in the routed-clusters list. Until something is actually delivered, the skip count is not used.
The first time something is put to cluster C, it gets replicated to B without a problem. However, after a significant number of puts are sent from A -> B -> C without any objects taking the reverse path (C -> B -> A), the next object sent from C -> B encounters the incredibly large skip count that has built up.
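The accumulation described above can be modeled with a minimal sketch. This is an illustrative model, not Riak's internals: the class name, fields, and the idea of "consuming" the counter on delivery are all hypothetical, but they capture why the C -> B queue, which only ever sees already-routed objects, builds an enormous skip count that surfaces as latency on the next real delivery.

```python
# Hypothetical model of realtime-queue skip counting (not actual Riak code).

class RealtimeQueue:
    """One directed realtime replication queue between two clusters."""

    def __init__(self, source, sink):
        self.source = source
        self.sink = sink
        self.skips = 0  # accumulates while nothing is deliverable

    def offer(self, routed_clusters):
        """Offer one replicated object, tagged with the clusters it has visited."""
        if self.sink in routed_clusters:
            # The sink already saw this object via another path: count a skip.
            self.skips += 1
            return "skipped"
        # A deliverable object finally arrives; the accumulated backlog is
        # processed here, which is where the observed latency comes from.
        consumed = self.skips
        self.skips = 0
        return f"delivered after {consumed} skips"


# Traffic normally flows A -> B -> C, so the C -> B queue only sees skips.
q_cb = RealtimeQueue("C", "B")
for _ in range(1_000_000):          # high-volume puts to A, cascaded onward
    q_cb.offer({"A", "B", "C"})     # B already in the routed list: skip
print(q_cb.skips)                   # -> 1000000
print(q_cb.offer({"C"}))            # first put to C drains the backlog
```

Under this model, a restart clears the counter simply because the queue process (and its state) is recreated, which matches the workarounds reported below.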
We saw a customer with a skip count of just over 73 million objects. In attempts to reproduce the issue, we were able to see skip counts become elevated into the single millions; however, this was not enough to create the latency observed by the customer.
A restart cleared the skip count and the latency returned to normal in the customer cluster.
Basho-JIRA changed the title from "skip count can become extremely elevated" to "skip count can become extremely elevated [JIRA: RIAK-2076]" on Aug 5, 2015.
This has been seen at another customer, this time without cascading replication. The customer saw the issue in RiakKV 2.0.6. The cause of the elevated skip count in this case is not known, but symptoms did manifest when live traffic was switched from cluster A to cluster B.
More information on the specifics can be found in Zendesk:
Unfortunately tracing is not permitted in the customer's production environment, and efforts to reproduce the issue in a test environment have so far failed.
Note that in this case the workaround for the customer was not a restart, but rather a kill of the process involved:
erlang:exit(list_to_pid("<Rogue process PID goes here>"), kill).