Remove the process manager #1199

Closed
pebrc opened this issue Jul 5, 2019 · 0 comments · Fixed by #1249
pebrc commented Jul 5, 2019

We think we should remove the process manager in two steps. The justification for that follows below.

What are the uses of the process manager, and are there alternative solutions for each?

  • It could act as an API proxy for the operator (e.g. if IP filtering would otherwise restrict access)

    • Workarounds exist
      • A sidecar could handle that as well
      • In the case of IP filtering, just dynamically add the operator's IP as an exception (see the cluster-settings sketch after this list)
  • It runs the keystore updater to configure secure settings in Elasticsearch at runtime:

    • Is it enough to run keystore updates once in an init container? (see the init-container sketch after this list)
      • Actually it is the only way to get consistent behaviour, because some settings are not reloadable
      • Currently we don't know when, or whether, the ES nodes have picked up the secure settings, and the way we implemented this makes it impossible to coordinate settings reloads
      • A sidecar could reinstate the existing behaviour if we think it is necessary later
    • Why did we move it into the process manager in the first place?
      • It is hard to come up with resource requests for a sidecar
      • It was easier?
    • Is kubectl exec an option?
      • It is not a good option: it is seen as hacky/intrusive, and a security risk
  • Do we need the ability to restart a process inside the pod, now that we are moving to StatefulSets?

    • To optimize rolling restarts/upgrades
      • Only useful for config changes that are not reflected in the pod template, e.g. ES configuration
    • To make emptyDir use cases work efficiently
      • We would have to fight the StatefulSet mechanism to make this work for anything that is in the pod spec
      • We would still migrate data away from the node to be terminated
    • For debugging ES: explicit shutdown, keeping the container running, and fiddling around with the filesystem to fix an issue
      • As of Kubernetes 1.12 we have PID namespace sharing, which would allow us to use SIGSTOP to suspend the ES process and do any debugging (to be tested and verified; see the pod spec sketch after this list)
      • This feature does not exist anywhere else; it would be nice to have, but it is not a reason to keep the process manager
  • Are we happy to ignore the zombie reaping problem?

    • It only becomes a problem if you start other processes in the container, e.g. via kubectl exec
    • No longer an issue if PID namespace sharing is turned on
    • Should probably be handled in the official images
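
To make the IP-filtering workaround above concrete: Elasticsearch's `xpack.security.http.filter.allow` setting is dynamically updatable, so the operator could whitelist its own IP through the cluster settings API. The sketch below is a minimal Go illustration; the URL, the placeholder IP, the transient-settings choice, and the absence of authentication are all assumptions, not how the operator actually does (or would do) this.

```go
package main

import (
	"bytes"
	"fmt"
	"net/http"
)

// allowOperatorIP adds the operator's IP as an exception to Elasticsearch's
// HTTP-layer IP filtering via the cluster settings API. Endpoint, auth
// handling, and the transient/persistent choice are illustrative assumptions.
func allowOperatorIP(esURL, operatorIP string) error {
	body := fmt.Sprintf(
		`{"transient": {"xpack.security.http.filter.allow": [%q]}}`, operatorIP)
	req, err := http.NewRequest(http.MethodPut, esURL+"/_cluster/settings",
		bytes.NewBufferString(body))
	if err != nil {
		return err
	}
	req.Header.Set("Content-Type", "application/json")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("cluster settings update failed: %s", resp.Status)
	}
	return nil
}

func main() {
	// Placeholder values for illustration only.
	if err := allowOperatorIP("http://localhost:9200", "10.0.0.5"); err != nil {
		fmt.Println(err)
	}
}
```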
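The init-container idea from the keystore section could look roughly like the following sketch, which builds the keystore exactly once at pod startup from secure settings mounted out of a Secret. Container names, mount paths, and the use of `add-file` for every setting are simplifying assumptions (string settings would go through `elasticsearch-keystore add --stdin`); the config volume is assumed to be shared with the main Elasticsearch container so the keystore survives into it.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// keystoreInitContainer sketches running the keystore updater once, as an
// init container, instead of continuously inside the process manager.
func keystoreInitContainer() corev1.Container {
	return corev1.Container{
		Name:  "init-keystore", // hypothetical name
		Image: "docker.elastic.co/elasticsearch/elasticsearch:7.2.0",
		Command: []string{"bash", "-c", `
set -e
# Start from a clean keystore, then add each mounted secure setting.
rm -f /usr/share/elasticsearch/config/elasticsearch.keystore
elasticsearch-keystore create
for f in /mnt/secure-settings/*; do
  elasticsearch-keystore add-file "$(basename "$f")" "$f"
done
`},
		VolumeMounts: []corev1.VolumeMount{
			// Secure settings projected from a Kubernetes Secret.
			{Name: "secure-settings", MountPath: "/mnt/secure-settings"},
			// Config volume shared with the main ES container.
			{Name: "es-config", MountPath: "/usr/share/elasticsearch/config"},
		},
	}
}

func main() {
	fmt.Println(keystoreInitContainer().Name)
}
```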
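And the pod spec change behind the PID namespace sharing points is a single field, available in beta since Kubernetes 1.12. A hedged sketch, with placeholder pod and container names:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// sharedPIDPod sketches enabling PID namespace sharing. All containers then
// share one PID namespace, the pause process runs as PID 1 and reaps zombies
// (covering the zombie-reaping concern above), and processes in one container
// can be signalled from another, e.g. via kubectl exec.
func sharedPIDPod() *corev1.Pod {
	shared := true
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "es-debug-example"}, // placeholder
		Spec: corev1.PodSpec{
			ShareProcessNamespace: &shared,
			Containers: []corev1.Container{{
				Name:  "elasticsearch",
				Image: "docker.elastic.co/elasticsearch/elasticsearch:7.2.0",
			}},
		},
	}
}

func main() {
	fmt.Println(sharedPIDPod().Name)
}
```

With this in place, the SIGSTOP debugging idea would amount to something like `kubectl exec es-debug-example -- kill -STOP <es-pid>` to suspend Elasticsearch and `kill -CONT <es-pid>` to resume; as noted above, this still needs to be tested and verified.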

Benefits of removing

  • Less complexity
  • Removes the anti-pattern of mutating a Docker container
  • Reduces the attack surface
  • No need to pass the operator image as an argument to the operator process; less friction in the dev process