This repository has been archived by the owner on Nov 1, 2022. It is now read-only.

Add --k8s-namespace-whitelist setting that specifies namespaces to watch. #1184

Merged
merged 4 commits into fluxcd:master on Jul 4, 2018

Conversation

oliviabarrick
Contributor

@oliviabarrick oliviabarrick commented Jul 2, 2018

Add --k8s-namespace-whitelist setting that specifies namespaces to watch.

Fixes #1181

Currently, Flux expects to have access to all namespaces: even if no manifests
in the repository reference another namespace, it will check all namespaces
for controllers to update.

This change adds a --k8s-namespace-whitelist setting which, if set, will restrict
Flux to only watch the specified namespaces and ignore all others.

Intended for clusters with large numbers of namespaces or restrictive RBAC
policies. If provided, Flux will only monitor workloads in the given namespaces,
which significantly cuts the number of API calls made.

An empty list (i.e. not provided) yields the usual behaviour.

@mwhittington21

It seems we've basically implemented the same thing, except my version allows for multiple namespaces. See #1186. I do like your test code better, though; I'm new to Kube API testing.

@squaremo
Member

squaremo commented Jul 2, 2018

Great minds think alike! Having looked at both PRs, may I suggest that this PR is adapted:

  • to admit multiple namespaces
  • to use the flag name --k8s-namespace-whitelist
  • to include the bit of documentation from the other PR for that flag

I think this will be the easier route -- but what you both decide between yourselves matters more, since you did the work in the first place.

I have one concern that applies to both PRs: should this also limit which resources are synced, and which resources can be automated or otherwise have new images released?

@mwhittington21

I think that limiting those things you mentioned according to the whitelisted namespaces would be ideal, as letting the user shoot themselves in the foot with this is not good. It would also provide some protection and isolation if you wanted to run multiple Flux daemons in the same cluster.

@oliviabarrick
Contributor Author

Very cool! I’ll fix up the PR tonight (though if @mwhittington21 wants to do it in the meantime, I have no issue with that).

@oliviabarrick oliviabarrick changed the title from "Add --flux-namespace setting that specifies namespaces to watch." to "Add --k8s-namespace-whitelist setting that specifies namespaces to watch." on Jul 3, 2018
@oliviabarrick
Contributor Author

> I have one concern that applies to both PRs: should this also limit which resources are synced, and which resources can be automated or otherwise have new images released?

This seems wise, but I'm not sure of the best place to implement it. Can you give me some pointers?

return nsList, err
}

for _, namespace := range c.nsWhitelist {


@mwhittington21

mwhittington21 commented Jul 3, 2018

After a quick look around I think the best place to stop Flux syncing resources that are not in specific namespaces would be to place some code in sync/sync.go in prepareSyncApply. It should be possible to inspect namespaces here and only add resources to the apply action if they match the namespace.

We need to be careful about un-namespaced manifests counting as "default", in case that's an edge case. Would this be sufficient, @squaremo?
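
A minimal, standalone sketch of that kind of filtering (the resource type and filterByNamespace helper are hypothetical names for illustration, not the actual sync/sync.go API; the "default" fallback mirrors the edge case mentioned above):

```go
package main

import "fmt"

// resource is a stand-in for a manifest parsed from the git repo.
type resource struct {
	Namespace string // empty for manifests that don't set a namespace
	Name      string
}

// filterByNamespace keeps only resources whose (effective) namespace is in
// the whitelist. An empty whitelist means no restriction; un-namespaced
// manifests are treated as "default", the edge case noted above.
func filterByNamespace(resources []resource, whitelist []string) []resource {
	if len(whitelist) == 0 {
		return resources
	}
	allowed := map[string]bool{}
	for _, ns := range whitelist {
		allowed[ns] = true
	}
	var kept []resource
	for _, r := range resources {
		ns := r.Namespace
		if ns == "" {
			ns = "default"
		}
		if allowed[ns] {
			kept = append(kept, r)
		}
	}
	return kept
}

func main() {
	manifests := []resource{{"team-a", "helloworld"}, {"kube-system", "coredns"}, {"", "unnamespaced"}}
	fmt.Println(filterByNamespace(manifests, []string{"team-a", "default"}))
	// prints: [{team-a helloworld} { unnamespaced}]
}
```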

@mwhittington21

mwhittington21 commented Jul 3, 2018

I've given this a go over at https://github.com/mwhittington21/flux/tree/1181-continued-add-sync-restriction-on-ns. Feel free to take the work and update this PR with it @justinbarrick. I based it off your latest master at the time.

@squaremo
Member

squaremo commented Jul 3, 2018

> After a quick look around I think the best place to stop Flux syncing resources that are not in specific namespaces would be to place some code in sync/sync.go in prepareSyncApply. It should be possible to inspect namespaces here and only add resources to the apply action if they match the namespace.

That would work (demonstrably -- you've already done it!). My inclination is to keep the sync package independent of the cluster implementation, though granted there's only one cluster implementation.

This would mean sync remains ignorant of the structure of resource IDs and namespaces, and the filtering happens in kubernetes.*Cluster.Sync -- which already has the whitelist. It could report non-whitelisted resources as SyncErrors (meaning they can get surfaced in notifications; and in future versions, by the CLI).

It could also make defaulted namespace handling a bit cleaner, since that's pretty particular to Kubernetes. It will entail looking up the default namespace from the config, though, which is a bit more work.
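
A hedged sketch of that alternative: filtering inside the cluster implementation and reporting skipped resources rather than dropping them silently. The syncError and applyWhitelisted names are illustrative, not the actual flux cluster package types, and defaultNS stands in for the namespace looked up from the kube config:

```go
package main

import "fmt"

type resource struct {
	Namespace string // empty when the manifest doesn't set one
	ID        string
}

// syncError pairs a resource with the reason it wasn't applied, so the skip
// can be surfaced in notifications instead of disappearing.
type syncError struct {
	ResourceID string
	Reason     string
}

// applyWhitelisted applies only whitelisted resources; everything else is
// recorded as a syncError rather than silently ignored.
func applyWhitelisted(resources []resource, whitelist map[string]bool, defaultNS string, apply func(resource) error) []syncError {
	var errs []syncError
	for _, r := range resources {
		ns := r.Namespace
		if ns == "" {
			ns = defaultNS // defaulted namespace, e.g. from the kube config
		}
		if len(whitelist) > 0 && !whitelist[ns] {
			errs = append(errs, syncError{r.ID, "namespace " + ns + " is not in the whitelist"})
			continue
		}
		if err := apply(r); err != nil {
			errs = append(errs, syncError{r.ID, err.Error()})
		}
	}
	return errs
}

func main() {
	whitelist := map[string]bool{"team-a": true}
	errs := applyWhitelisted(
		[]resource{{"team-a", "team-a:deployment/helloworld"}, {"kube-system", "kube-system:deployment/coredns"}},
		whitelist, "default",
		func(resource) error { return nil }, // apply is stubbed out here
	)
	fmt.Println(errs) // the kube-system resource is reported, not applied
}
```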

@mwhittington21

mwhittington21 commented Jul 3, 2018

Would it be possible to split the Sync restrictions into a future PR and just merge this one in now? Reducing API calls is the most important thing to my current use case.

EDIT: One other thing that might be nice is printing out the whitelisted namespaces on startup, if provided, like the daemon does for a few of its other flags. I had an issue where, if I surrounded the list with quotes, it would be turned into one long string still containing ",", so logging the parsed list could help the end user debug problems like that.
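
For reference, a small standalone sketch of how a StringSlice flag splits a comma-separated value, assuming spf13/pflag (which the fs.StringSlice call in the diff suggests); logging the parsed slice at startup would make a mis-quoted value easy to spot:

```go
package main

import (
	"fmt"

	"github.com/spf13/pflag"
)

func main() {
	fs := pflag.NewFlagSet("example", pflag.ContinueOnError)
	namespaces := fs.StringSlice("k8s-namespace-whitelist", []string{}, "namespaces to monitor")

	// A comma-separated value is split into separate entries...
	if err := fs.Parse([]string{"--k8s-namespace-whitelist=team-a,team-b"}); err != nil {
		panic(err)
	}
	fmt.Println(*namespaces) // [team-a team-b]

	// ...so printing the slice on startup shows immediately whether the
	// value arrived as two namespaces or as one mangled string.
}
```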

@squaremo
Member

squaremo commented Jul 3, 2018

> Would it be possible to split the Sync restrictions into a future PR and just merge this one in now? Reducing API calls is the most important thing to my current use case.

Yes. Let's label this as experimental (in the help text), and I'll do a quick review.

Member

@squaremo squaremo left a comment


For completeness, I'd expect SomeControllers to elide resources in namespaces that aren't whitelisted. But if we can mark the flag as experimental, I'm fine with incrementally improving it. (i.e., marking it as experimental is the only blocker)

@@ -114,6 +114,7 @@ func main() {
token = fs.String("token", "", "Authentication token for upstream service")

dockerConfig = fs.String("docker-config", "", "path to a docker config to use for image registry credentials")
k8sNamespaceWhitelist = fs.StringSlice("k8s-namespace-whitelist", []string{}, "Optional, comma separated list of namespaces to monitor for workloads")


// It returns a list of all namespaces unless a namespace whitelist has been set on the Cluster
// instance, in which case it returns a list containing the namespaces from the whitelist
// that exist in the cluster.
func (c *Cluster) getNamespaces() ([]apiv1.Namespace, error) {


func (c *Cluster) getNamespaces() ([]apiv1.Namespace, error) {
nsList := []apiv1.Namespace{}

namespaces, err := c.client.Namespaces().List(meta_v1.ListOptions{})

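Pieced together from the fragments above, a hedged sketch of what the full getNamespaces might look like, written against the same package context as the diff (the apiv1 and meta_v1 aliases imported there); the exact error handling and anything beyond the fields already shown are assumptions:

```go
// getNamespaces returns all namespaces in the cluster, unless a whitelist
// has been configured, in which case it returns only the whitelisted
// namespaces that actually exist in the cluster.
func (c *Cluster) getNamespaces() ([]apiv1.Namespace, error) {
	nsList := []apiv1.Namespace{}

	namespaces, err := c.client.Namespaces().List(meta_v1.ListOptions{})
	if err != nil {
		return nsList, err
	}

	// No whitelist configured: the usual behaviour, watch everything.
	if len(c.nsWhitelist) == 0 {
		return namespaces.Items, nil
	}

	// Keep only whitelisted namespaces that exist in the cluster.
	for _, namespace := range c.nsWhitelist {
		for _, existing := range namespaces.Items {
			if existing.ObjectMeta.Name == namespace {
				nsList = append(nsList, existing)
				break
			}
		}
	}
	return nsList, nil
}
```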

@oliviabarrick
Contributor Author

Thanks for the help! I've marked this experimental so we can get this out. I'll likely submit a follow-up for the other issues discussed in the next week or so.

@squaremo squaremo merged commit 34e4f93 into fluxcd:master Jul 4, 2018
@squaremo
Member

squaremo commented Jul 4, 2018

Good stuff, thanks @justinbarrick and @mwhittington21. Assuming it looks OK in our dev cluster (which won't use the flag, but that's fine, it's experimental), this should appear in a release pretty soon.
