
Service Selectors #4481

Closed · grampelberg opened this issue May 26, 2020 · 4 comments · Fixed by #4795
grampelberg (Contributor) commented May 26, 2020

What problem are you trying to solve?

Currently, when you'd like a service exported to another cluster, you add an annotation to it. The source service-mirror watches all services and then filters based on the annotation (an illustrative example follows the list below). This has a couple of downsides:

  • There's no way for a source cluster to filter which services it would like in the local cluster. This has implications around cluster IPs and slowing down iptables updates, and it could be too much for a small dev cluster.
  • There's an implicit assumption that exporting a service is what actually grants access. Currently, access is based on the trust anchor and technically allows full access to services that haven't been exported.
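For context, here is a minimal sketch of the current target-side export, assuming the annotation-based mechanism described above; the annotation keys are shown for illustration only and may not match the actual ones:

apiVersion: v1
kind: Service
metadata:
  name: my-service
  namespace: default
  annotations:
    # illustrative keys only: the gateway (and its namespace) are named on the
    # service at export time, which is what ties a service to a single gateway
    mirror.linkerd.io/gateway-name: linkerd-gateway
    mirror.linkerd.io/gateway-ns: linkerd-multicluster
spec:
  ports:
    - port: 8080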

How should the problem be solved?

Instead of using an annotation on the target side, let's switch ownership of which services are mirrored to the source side. This removes the assumption that exporting services is related to policy and simultaneously gives a source cluster the tools required to manage what ends up on its cluster.

The requirements for this are:

  • A default selector is added at install time.
  • Both matchLabels and matchExpressions are supported.
  • Multiple selector sets can be added as an OR.
  • Users can update the selector list at any time.
  • Users can introspect what services are being matched.

Note that this effectively removes the export-service command, as kubectl label does the same thing.

ConfigMap

Following the pattern set by Prometheus and Grafana, the selectors should be stored as a ConfigMap:

apiVersion: v1
kind: ConfigMap
metadata:
  name: service-selector
  namespace: linkerd-multicluster
data:
  selectorConfiguration: |-
    - matchLabels:
        config.linkerd.io/multicluster: "true"
    - matchExpressions:
        - key: config.linkerd.io/multicluster
          operator: NotIn
          values:
            - "true"
    - matchLabels:
        foo: bar
      matchExpressions:
        - {key: environment, operator: NotIn, values: [dev]}

We should mount this as a volume and reload the service-mirror on change. The loaded configuration should show up in the logs, and any parsing errors should cause the service-mirror to log the error and crash.
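As a rough sketch of the volume approach (the Deployment name, labels, image, and mount path below are assumptions for illustration, not the actual chart output):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: linkerd-service-mirror          # illustrative name
  namespace: linkerd-multicluster
spec:
  selector:
    matchLabels:
      component: service-mirror
  template:
    metadata:
      labels:
        component: service-mirror
    spec:
      containers:
        - name: service-mirror
          image: service-mirror:dev     # placeholder image
          volumeMounts:
            - name: service-selector
              # selectorConfiguration shows up as a file under this path
              mountPath: /var/run/linkerd/service-selector
              readOnly: true
      volumes:
        - name: service-selector
          configMap:
            name: service-selector

Alternatively, as discussed further down the thread, the service-mirror could watch the ConfigMap through the API and skip the mount entirely.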

RBAC

This solution primarily provides control to the source side of the equation. If the target cluster would like to restrict which services are made available, it can do so by changing the RBAC output by linkerd multicluster allow.
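For example, the cluster-wide grant could be swapped for a namespace-scoped Role so that only services in chosen namespaces are visible to remote mirrors; the resource and ServiceAccount names below are illustrative, not what the command emits:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: service-mirror-remote-access    # illustrative name
  namespace: exported-apps              # only this namespace is visible to remote mirrors
rules:
  - apiGroups: [""]
    resources: ["services", "endpoints"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: service-mirror-remote-access
  namespace: exported-apps
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: service-mirror-remote-access
subjects:
  - kind: ServiceAccount
    # illustrative: the account whose credentials are handed to source clusters
    name: service-mirror-remote-access
    namespace: linkerd-multicluster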

Open Questions

  • Should there be implicit labels like cluster name and namespace added automatically?

How would users interact with this feature?

On the source cluster:

kubectl -n linkerd-multicluster edit configmap service-selector

On the target cluster:

kubectl -n default label svc my-service arbitrary-label=foobar

olix0r (Member) commented May 27, 2020

As I looked into this, I noticed that we currently allow a service to be exposed by at most one gateway, which is an undesirable constraint from my point of view. This is because the gateway namespace value is currently set on a service at export time, rather than being something configured by the service-mirror.

Should the configuration support (optional) parameters to bind results to alternate gateway services?

    - clusterName: east
      gatewayNamespace: linkerd-multicluster-prod
      matchLabels:
        foo: bar
      matchExpressions:
        - {key: environment, operator: NotIn, values: [dev]}

If clusterName is not specified, the entry applies to all clusters the mirror is configured with. If gatewayNamespace is not specified, linkerd-multicluster is assumed.

I don't think we even have to implement this for 2.8 -- these extensions would be fine to follow from my point of view. The important thing is that we eliminate the export-time annotation so that services have an M:N relationship with gateways.
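Concretely, a selector list exercising those defaults might look like this (the values are illustrative):

- clusterName: east                              # applies only to the "east" cluster
  gatewayNamespace: linkerd-multicluster-prod    # non-default gateway namespace
  matchLabels:
    foo: bar
- matchLabels:                                   # no clusterName/gatewayNamespace, so this
    config.linkerd.io/multicluster: "true"       # applies to all clusters via linkerd-multicluster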

zaharidichev (Member) commented:

We should mount this as a volume and reload the service-mirror on change.

@grampelberg why not simply watch the config map for changes? Also, what do you mean by "reload"?

As I looked into this, I noticed that we currently allow a service to be exposed by at most one gateway, which is an undesirable constraint from my point of view.

What I do not understand is how a service could be exposed by more than one gateway. Namely, if a service is exposed by 5 gateways at the same time and you hit the mirror of that service, which gateway do you send the request to?

zaharidichev (Member) commented:

Also, I do not quite get why this is so important (multiple gateways per service). The fact that you set the gateway at export time does not mean that it is fixed forever. If you want, you can simply change the annotations to alternative gateway/gateway-ns values, and the endpoints etc. will all get updated accordingly (or blanked if the new gateway does not exist...).

grampelberg (Contributor, Author) commented:

why not simply watch the config map for changes? Also, what do you mean by "reload"?

WFM, I was just thinking of the naive way to do this with something like watchexec.

What I do not understand is how a service could be exposed by more than one gateway. Namely, if a service is exposed by 5 gateways at the same time and you hit the mirror of that service, which gateway do you send the request to?

There shouldn't be multiple gateways per service; there should be multiple gateways per cluster. Each gateway should be in charge of a specific set of services. It isn't a hard requirement right now, but it'll be important once we get to policy.
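For instance, building on the gatewayNamespace extension sketched above, one cluster could run two gateways that each own a disjoint set of services (all names below are illustrative):

- gatewayNamespace: linkerd-multicluster-frontend
  matchLabels:
    tier: frontend
- gatewayNamespace: linkerd-multicluster-backend
  matchLabels:
    tier: backend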
