With cross-account resource management (CARM), ACK service controllers gained the ability to manage the lifecycle of AWS resources in multiple AWS accounts. The way CARM works is that the ACK service controller, upon startup, creates a SharedInformer watch on all Namespace objects as well as a SharedInformer watch on ConfigMaps, looking for a ConfigMap called "ack-role-account-map".
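For context, a cluster-scoped ConfigMap watch of this kind is typically built along the following lines with client-go. This is an illustrative sketch, not the ACK controller's actual code; the `clientset` parameter is assumed to be an already-configured client.

```go
package carm

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
)

// newCARMInformer sketches a cluster-scoped SharedInformer that only delivers
// events for ConfigMaps named "ack-role-account-map". Even with the field
// selector, listing and watching ConfigMaps across all Namespaces still
// requires cluster-wide RBAC, which is the problem described below.
func newCARMInformer(clientset kubernetes.Interface) cache.SharedIndexInformer {
	factory := informers.NewSharedInformerFactoryWithOptions(
		clientset,
		10*time.Minute,
		informers.WithTweakListOptions(func(opts *metav1.ListOptions) {
			opts.FieldSelector = "metadata.name=ack-role-account-map"
		}),
	)
	inf := factory.Core().V1().ConfigMaps().Informer()
	inf.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			cm := obj.(*corev1.ConfigMap)
			fmt.Printf("observed account map in namespace %s\n", cm.Namespace)
		},
	})
	return inf
}
```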
This cluster-scoped lookup of all Namespaces and ConfigMaps is problematic for deployments of ACK controllers in environments where an ACK service controller should only manage resources in certain Namespaces. The Kubernetes RBAC permissions associated with the Service Account that the ACK controller runs as need to be scoped to only allow reading certain Namespaces and specific ConfigMaps.
We need a mode where an ACK service controller, deployed into a specific "management" K8s Namespace, can be configured to only monitor and manage CRs in a specific set of K8s Namespaces (the "tenant" or "user-owned" namespaces).
I think the cleanest approach to this might actually be to create a number of core ACK Custom Resource Definitions that allow a controller to be configured, instead of adding yet more CLI flags.
Something like this might work:
```go
type ControllerConfigSpec struct {
	// Namespaces is the list of Kubernetes Namespaces that the controller
	// will monitor for changes in annotations and will allow custom resources
	// to be created inside. If empty, the controller will not be able to manage
	// the lifecycle of any resources.
	Namespaces []string `json:"namespaces"`
}
```
The controller runtime would be updated to place a watch on ControllerConfigSpec CRs created in its management Namespace.
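A minimal sketch of what such a watch could look like with controller-runtime is below. The proposed ControllerConfig CRD does not exist yet, so the object is passed in as a parameter here; the builder and predicate calls shown are standard controller-runtime APIs, though exact signatures vary by release.

```go
package carm

import (
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/predicate"
	"sigs.k8s.io/controller-runtime/pkg/reconcile"
)

// setupControllerConfigWatch wires a reconciler to the proposed
// ControllerConfig CRs, filtering events so that only objects created in the
// controller's own "management" Namespace are reconciled.
// controllerConfigObj would be e.g. &ackv1alpha1.ControllerConfig{} once the
// proposed CRD exists.
func setupControllerConfigWatch(
	mgr ctrl.Manager,
	mgmtNamespace string,
	controllerConfigObj client.Object,
	r reconcile.Reconciler,
) error {
	inMgmtNamespace := predicate.NewPredicateFuncs(func(obj client.Object) bool {
		return obj.GetNamespace() == mgmtNamespace
	})
	return ctrl.NewControllerManagedBy(mgr).
		For(controllerConfigObj).
		WithEventFilter(inMgmtNamespace).
		Complete(r)
}
```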
How should controllers behave when multiple ControllerConfigSpec CRs are created? I'm thinking it might be simpler to have one specific ConfigMap to watch, similar to the "ack-role-account-map" ConfigMap we already have for CARM.
We ended up going with the upstream solution to this, which is to pass a single Namespace to the controller-runtime when starting up the controller manager.
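For reference, controller-runtime releases up to roughly v0.14 expose this as the manager's Namespace option (later releases express the same restriction through the cache configuration instead). A minimal sketch, assuming the namespace is supplied by a flag:

```go
package carm

import (
	ctrl "sigs.k8s.io/controller-runtime"
)

// newNamespacedManager builds a controller-runtime Manager whose caches and
// watches are restricted to a single Namespace. Note: the Namespace field
// shown here is the pre-v0.15 controller-runtime API; newer releases use
// cache.Options.DefaultNamespaces for the same effect.
func newNamespacedManager(namespace string) (ctrl.Manager, error) {
	return ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
		Namespace: namespace,
	})
}
```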