
Improve clickhouse-keeper manifests #1234

Merged
merged 17 commits into from
Nov 23, 2023
Conversation

@Slach (Collaborator) commented Sep 4, 2023

Depends on ClickHouse/ClickHouse#53481

This will help maintainers a lot in adopting your Pull Request.

Important items to consider before making a Pull Request

Please check that the PR complies with these items:

  • All commits in the PR are squashed.
  • The PR is made into the dedicated next-release branch, not into the master branch¹.
  • The PR is signed.

--

¹ If you feel your PR does not affect any Go code or any testable functionality (for example, the PR contains only docs or supplementary materials), it can be made into the master branch, but this has to be confirmed by a project maintainer.

Signed-off-by: Slach <bloodjazman@gmail.com>
…omething wrong with tearDown.sh, wrong NuRaft quorum state after replicas: 0 -> replicas: 3)

Signed-off-by: Slach <bloodjazman@gmail.com>
@Slach Slach changed the base branch from master to 0.22.0 September 4, 2023 18:16
Signed-off-by: Slach <bloodjazman@gmail.com>
…(23.8)

Signed-off-by: Slach <bloodjazman@gmail.com>
@Slach Slach changed the title [WIP] Improve clickhouse-keeper manifests Improve clickhouse-keeper manifests Sep 5, 2023
@Slach Slach changed the title Improve clickhouse-keeper manifests [wip] Improve clickhouse-keeper manifests Sep 5, 2023
@Slach (Collaborator, Author) commented Sep 5, 2023

Waiting for ClickHouse/ClickHouse#53481 and ClickHouse/ClickHouse#54129 to be resolved.

@bputt-e commented Sep 13, 2023

Does this fix the issue where force-recovery was always set on the first pod when running a 3-node clickhouse-keeper cluster? It seems odd that force-recovery was always set on the first pod: if that pod were restarted or offline for a period of time, it would effectively tell the rest of the clickhouse-keeper instances that they must use the first pod's state.

@Slach (Collaborator, Author) commented Sep 13, 2023

@bputt-e
Using --force-recovery in the old manifests was required to give the XML configs priority over the internal NuRaft state stored in /var/lib/clickhouse-keeper; without this option, even changing the <raft_configuration> section of the XML config has no effect.

In the new approach, we are trying to use the reconfig command instead, which is also not yet fully operable.
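For context, the cluster membership being discussed lives in the <raft_configuration> block of the keeper config. A minimal sketch of such a config follows; the hostnames, ports, and server IDs are illustrative assumptions, not taken from this PR's actual manifests:

```xml
<!-- Illustrative clickhouse-keeper config sketch; hostnames/ports/IDs are
     assumptions, not taken from this PR's manifests. -->
<clickhouse>
    <keeper_server>
        <!-- Must match one of the <id> entries below on each pod. -->
        <server_id>1</server_id>
        <raft_configuration>
            <server>
                <id>1</id>
                <hostname>clickhouse-keeper-0</hostname>
                <port>9234</port>
            </server>
            <server>
                <id>2</id>
                <hostname>clickhouse-keeper-1</hostname>
                <port>9234</port>
            </server>
            <server>
                <id>3</id>
                <hostname>clickhouse-keeper-2</hostname>
                <port>9234</port>
            </server>
        </raft_configuration>
    </keeper_server>
</clickhouse>
```

Without --force-recovery, edits to this block are ignored in favor of the NuRaft state persisted under /var/lib/clickhouse-keeper; the reconfig command referenced above is the dynamic alternative for changing membership without forcing recovery from one pod's state.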

@Slach Slach changed the base branch from 0.22.0 to 0.23.0 November 10, 2023 14:48
@Slach Slach changed the title [wip] Improve clickhouse-keeper manifests Improve clickhouse-keeper manifests Nov 23, 2023
@Slach Slach merged commit cd9b7fa into 0.23.0 Nov 23, 2023