Replies: 1 comment
-
Hi @avo-sepp, I think what you're trying to do can be accomplished with the manual key management feature.
With this, you directly manage the certificate the controller uses to encrypt/decrypt. It won't prevent the automatic creation of a sealing certificate, though; but I think what you really want to achieve is to share the same certificate among your clusters. Let us know if this helps!
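A minimal sketch of that manual key management flow, assuming the controller runs in `kube-system` and picks up secrets carrying the `sealedsecrets.bitnami.com/sealed-secrets-key` label (the secret name, file paths, and deployment name below are illustrative, not fixed by the project):

```shell
# Generate one key pair to share across clusters (paths are illustrative).
openssl req -x509 -nodes -newkey rsa:4096 \
  -keyout tls.key -out tls.crt -subj "/CN=sealed-secret/"

# Import the same pair into each cluster; the label marks it as an
# active sealing key for the controller.
kubectl -n kube-system create secret tls shared-sealing-key \
  --cert=tls.crt --key=tls.key
kubectl -n kube-system label secret shared-sealing-key \
  sealedsecrets.bitnami.com/sealed-secrets-key=active

# Restart the controller so it reloads its key set.
kubectl -n kube-system rollout restart deployment sealed-secrets-controller
```

Note this doesn't stop the controller from also generating its own key on first start; it only ensures the shared key is present and usable in every cluster.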
-
Howdy all,
I'm hoping to accomplish a use case which may go slightly against the core ideology, but which I think is valuable to us.
We have many Kubernetes clusters that use GitOps to install from a centralized repository. To accomplish this we share the operator's signing key among all the clusters, so we can encrypt once and deploy to many locations.
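The encrypt-once side of this can be done entirely offline against the shared public certificate, so developers never need a cluster context to seal (filenames below are illustrative):

```shell
# Export the public cert once from a cluster holding the shared key.
kubeseal --fetch-cert > shared-cert.pem

# Seal against that cert directly; no kubernetes context is consulted,
# so there is no "wrong cluster" to accidentally point at.
kubeseal --cert shared-cert.pem --format yaml \
  < my-secret.yaml > my-sealedsecret.yaml
```

Checking `shared-cert.pem` into the centralized repository would make this the only certificate developers ever seal against.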
When the Sealed-Secrets operator is installed, it creates a new signing key. Even if the key rotation period is set to 0, it always initializes with a key. This presents a possible avenue for user error.
We have many devs who may not be as familiar with Kubernetes, so it's not always clear to them which k8s cluster they're using to seal their secrets. This becomes a problem when many sealed-secrets instances exist with different initialization keys: devs may seal their secrets with the wrong certificate if their kubernetes context isn't set to the cluster that holds the shared signing key.
What I'm hoping to find is a setting that forces the operator NOT to create any new keys on initialization, and instead use the key it finds. If it doesn't find a key, I'd prefer it fail the kubernetes liveness probe and restart until it does. This would help bubble up production errors where a sealed-secrets operator was not configured properly.
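In the absence of such a setting, one possible workaround is an initContainer that blocks controller startup until the shared key secret exists, which gets close to the desired "crash until configured" behavior. This is only a sketch, not a supported option; the secret name, image, and deployment layout are assumptions:

```yaml
# Patch for the controller Deployment: refuse to start the controller
# until the shared sealing key is present in kube-system.
spec:
  template:
    spec:
      initContainers:
        - name: wait-for-sealing-key
          image: bitnami/kubectl:latest
          command:
            - sh
            - -c
            - |
              until kubectl -n kube-system get secret shared-sealing-key; do
                echo "waiting for shared sealing key..."
                sleep 5
              done
```

This assumes the pod's service account is allowed to read secrets in `kube-system`, which may require an extra Role/RoleBinding.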