fixing Cassandra shutdown example to avoid data corruption #39199
Conversation
Hi @deimosfr. Thanks for your PR. I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with `@k8s-bot ok to test`. If you have questions or suggestions related to this bot's behavior, please file an issue against the kubernetes/test-infra repository.
@k8s-bot ok to test
LGTM. Thanks!
Jenkins verification failed for commit 22b936649d8acdfe9c9a42a07f81d07daed3445b. Full PR test history. The magic incantation to run this job again is …
@deimosfr ugh, looks like the switch from 2016 to 2017 confused some automation. Can you rebase? thanks (and sorry!) --brendan |
@brendandburns done, please let me know if it's ok for you now |
@deimosfr this needs another rebase. It seems you have pulled a bunch of preexisting commits somehow. |
@jonashuckestein you guys may be interested in this. |
@Kargakis is something missing to merge?
@chrislovecnm can you also have a look? I would expect termination signaling to be handled by the kubelets and a higher terminationGracePeriod in the StatefulSet would make more sense to me but I am not familiar with c*. |
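For illustration, a minimal sketch of the alternative mentioned above, assuming a StatefulSet pod template; the value 1800 is an arbitrary example, not a recommendation from this thread:

```yaml
# Hypothetical excerpt of a StatefulSet pod template; the point is
# only the grace period field.
spec:
  template:
    spec:
      # How long the kubelet waits after sending SIGTERM before it
      # sends SIGKILL (the default is 30 seconds).
      terminationGracePeriodSeconds: 1800
```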
/lgtm Sorry for the delay. |
Jenkins Kubemark GCE e2e failed for commit c165e90. Full PR test history. cc @deimosfr, your PR dashboard. The magic incantation to run this job again is … Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
@k8s-bot kubemark e2e test this
@k8s-bot test this [submit-queue is verifying that this PR is safe to merge]
Automatic merge from submit-queue (batch tested with PRs 39199, 37273, 29183, 39638, 40199)
```yaml
lifecycle:
  preStop:
    exec:
      command: ["/bin/sh", "-c", "PID=$(pidof java) && kill $PID && while ps -p $PID > /dev/null; do sleep 1; done"]
```
This was merged ages ago, but isn't it better to call nodetool drain? It should stop accepting new data and flush it to disk.
Good idea. nodetool drain is better for ensuring all data has been flushed.
I have opened #49618 to fix this.
…p-drain Automatic merge from submit-queue (batch tested with PRs 47724, 49984, 49785, 49803, 49618) Cassandra example, use nodetool drain in preStop Related to kubernetes#39199 (comment)
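For reference, a minimal sketch of what the drain-based hook could look like; the exact command merged in #49618 may differ, and this assumes nodetool is on the container's PATH:

```yaml
lifecycle:
  preStop:
    exec:
      # nodetool drain stops the node from accepting new writes and
      # flushes memtables to disk before the container receives SIGTERM.
      command: ["/bin/sh", "-c", "nodetool drain"]
```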
Hi,
I was playing with the Cassandra example stored in the Kubernetes project and encountered issues on shutdown (not every time). After checking, it looks like the shutdown of a node is brutal, and data corruption may occur during a flush to disk. To avoid that, I'm suggesting a hook to gracefully shut down Cassandra before stopping the container.
Here are logs of corruption after a pod delete:
It works well for me now, and I no longer see data corruption.
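For context, a sketch of where the proposed hook sits in the example's container spec; the container name and image tag here are illustrative, not the exact manifest from the PR:

```yaml
containers:
- name: cassandra          # illustrative name
  image: cassandra:latest  # illustrative image
  lifecycle:
    preStop:
      exec:
        # Send SIGTERM to the Cassandra JVM, then wait for the
        # process to exit so in-flight flushes can complete before
        # the container is torn down.
        command: ["/bin/sh", "-c", "PID=$(pidof java) && kill $PID && while ps -p $PID > /dev/null; do sleep 1; done"]
```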