Describe how policies should attach to a Pod #1812
Comments
Attaching to a Service seems more reasonable. In Kubernetes, a Service is a group of endpoints that serve the same functionality.
Server-side policies like Authorization Policy are generally inappropriate, IMO, to attach to a Service. For example, what if I have Service A and Service B selecting the same pod? I apply the rule to Service A, and it now impacts Service B as well. Or I have a workload that isn't exposed as a Service, or a port I want to apply policy to that isn't part of the Service, etc. Istio used to do this and replaced it with workload-based selectors for this reason.
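As a minimal sketch of the overlap described in the comment above (the names and labels are invented for illustration): two Services can select the same pods, so a server-side policy attached to one Service would effectively also govern traffic that reaches those pods through the other.

```yaml
# Two Services selecting the same pods via an identical label selector.
# A server-side policy scoped to service-a would in practice also apply to
# traffic arriving through service-b, because at the workload there is only
# one set of endpoints. All names here are illustrative.
apiVersion: v1
kind: Service
metadata:
  name: service-a
spec:
  selector:
    app: my-workload        # same pods as service-b
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: service-b
spec:
  selector:
    app: my-workload        # same pods as service-a
  ports:
    - port: 80
      targetPort: 8080
```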
I admit there is such a case, but it's not common for most people. There could be a compromise: the prerequisite is to have an associated Service, which is acceptable for the end user. About defining multiple Services for the same pod: if they share the same port, at L7 we can distinguish them by hostname, but at L4 we cannot. That cannot be resolved by attaching a rule to a pod either.
I tried to find the original discussion about this in Istio: istio/istio#8990 was the best I could find.
AFAIK all other service meshes have come to the same conclusion as well, so I would be very, very cautious about applying workload policies to Services.
In #1565, I've tried to make it clearer that it's fine and expected for implementations to use hierarchies other than the GatewayClass -> Gateway -> Route -> Backend one. For workloads, I think it's fine to use a Deployment, or even to have the same Policy object affect all Deployments in a Namespace. I agree that using a Service to select Pods is a mistake: a Service is really selecting Endpoints (or EndpointSlices) anyway (although they're derived from the Pods via the pod selector, so this part is confusing, like many things about Service).
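A rough sketch of what attaching directly to a workload resource could look like. The ExampleTimeoutPolicy kind, its API group, and its spec fields are hypothetical; only the targetRef shape mirrors the Gateway API policy attachment pattern, here pointing at a Deployment instead of a Service.

```yaml
# Hypothetical policy kind, shown only to illustrate targeting a Deployment
# rather than a Service; the group, kind, and spec fields are invented.
apiVersion: policy.example.io/v1alpha1
kind: ExampleTimeoutPolicy
metadata:
  name: checkout-timeouts
  namespace: shop
spec:
  targetRef:                # Gateway API-style policy attachment reference
    group: apps
    kind: Deployment
    name: checkout
  default:
    requestTimeout: 5s      # invented field for illustration
```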
I think this is a very fair assessment.
The specifics and clarification being requested here seem reasonable, though several questions remain. We can accept this with the contingency that whoever takes it on needs to tease out the conversation before we actually aim for any deliverables:
/triage accepted
That said, this doesn't appear to be something we need to work on for GA (correct me if you think I'm wrong), so we don't expect this to have any priority prior to that:
/priority backlog
Currently, we can attach policies to API resources. Unfortunately, it's hard to talk about a workload.
This is a continuation of https://kubernetes.slack.com/archives/CR0H13KGA/p1676648817307159 copied into an issue.
There are a few options:
- Attach to the Pod: This is terrible, as pods are ephemeral, randomly named, and large in number. But it is the closest to what we want conceptually.
- Attach to the Deployment (or another pod-creating resource): This is pretty reasonable. Usually, users do not need application more granular than a Deployment, and if they do they can always split their Deployment. However, this is not ideal, since pods can be created by a number of things (DaemonSet, Knative Service, etc.), making it limited. An implementation could have logic for all of the first-party higher-level types, but there is an endless number of third-party types that are impossible to handle this way.
- Attach to the ServiceAccount: All pods have a service account, and a service account avoids the problem of "ephemeral, random, and large in number". This actually may make sense for some policies. For example, authorization policies probably make sense bound to a ServiceAccount, as both are in the security domain. A TimeoutPolicy bound to a ServiceAccount would probably be awkward, though.
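To make the ServiceAccount option concrete, here is a hedged sketch. The ExampleAuthzPolicy kind, its API group, and its rule fields are invented for illustration; only the targetRef shape mirrors Gateway API policy attachment.

```yaml
# Hypothetical authorization-style policy bound to a ServiceAccount.
# All pods run as some ServiceAccount, so this selects the workload
# without enumerating ephemeral, randomly named pods.
apiVersion: policy.example.io/v1alpha1
kind: ExampleAuthzPolicy
metadata:
  name: allow-frontend
  namespace: shop
spec:
  targetRef:
    group: ""               # core API group
    kind: ServiceAccount
    name: checkout-sa
  rules:
    - from:
        - serviceAccount: shop/frontend-sa   # invented field for illustration
```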