
Describe how policies should attach to a Pod #1812

Open
howardjohn opened this issue Mar 13, 2023 · 7 comments
Labels
kind/feature Categorizes issue or PR as related to a new feature. kind/gep PRs related to Gateway Enhancement Proposal(GEP) lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. priority/backlog Higher priority than priority/awaiting-more-evidence. triage/accepted Indicates an issue or PR is ready to be actively worked on.

Comments

@howardjohn
Contributor

Currently, we can attach policies to API resources. Unfortunately, it's hard to talk about a workload.

This is a continuation of https://kubernetes.slack.com/archives/CR0H13KGA/p1676648817307159 copied into an issue.

There are a few options:

  1. Attach to Pod

This is terrible, as pods are ephemeral, randomly named, and large in number. But this is the closest to what we want conceptually.

  2. Attach to Deployment

This is pretty reasonable. Usually, users do not need attachment more granular than a Deployment, and if they do, they can always split their Deployment. However, this is not ideal, since pods can be created by a number of things - DaemonSet, KnativeService, etc. - making this limited.

An implementation could have logic for all of the first party higher-level types, but there are an endless number of third-party types that are impossible to handle like this.
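As a sketch, option 2 might look like the following, using the Gateway API policy-attachment `targetRef` convention. The policy kind, API group, and field names here are hypothetical, purely to illustrate the shape of attaching a policy to a Deployment:

```yaml
# Hypothetical policy resource; the targetRef shape follows the
# Gateway API policy-attachment pattern, but the kind, group, and
# "timeout" field are illustrative only.
apiVersion: example.gateway.networking.k8s.io/v1alpha1
kind: TimeoutPolicy
metadata:
  name: backend-timeouts
  namespace: default
spec:
  targetRef:
    group: apps
    kind: Deployment
    name: backend   # only pods owned by this Deployment are affected
  timeout: 5s
```

This illustrates the limitation described above: the `targetRef` must name a concrete workload kind, so third-party pod-creating types would each need explicit support.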

  3. Attach to ServiceAccount

All pods have a service account, and a service account avoids the problem of "ephemeral, random, and large in number".

This actually may make sense for some policies. For example, authorization policies probably make sense bound to a service account, as both are in the security domain. A TimeoutPolicy bound to a ServiceAccount would probably be awkward, though.
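For comparison, option 3 might look like this. Again, the policy kind, API group, and rule syntax are hypothetical; only the idea of a `targetRef` pointing at a core ServiceAccount comes from the discussion above:

```yaml
# Hypothetical authorization policy bound to a ServiceAccount rather
# than to a Pod or Deployment. Kind, group, and rule fields are
# illustrative, not a real API.
apiVersion: example.policy.k8s.io/v1alpha1
kind: AuthorizationPolicy
metadata:
  name: allow-frontend
  namespace: default
spec:
  targetRef:
    group: ""              # core API group
    kind: ServiceAccount
    name: backend-sa       # applies to all pods running as this identity
  rules:
    - from:
        - serviceAccount: frontend-sa
```

Because every pod runs as exactly one service account, this gives a stable, small-cardinality attachment point for identity-oriented policies.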

@shaneutt shaneutt added kind/feature Categorizes issue or PR as related to a new feature. triage/needs-information Indicates an issue needs more information in order to work on it. kind/gep PRs related to Gateway Enhancement Proposal(GEP) labels Mar 13, 2023
@hzxuzhonghu
Member

Attaching to a Service seems more reasonable. In Kubernetes, a Service is a group of endpoints that serve the same functionality.
I have to admit that sometimes we need to attach a policy to a subset of a Service's endpoints; to satisfy this, we may need to add a subset selector.
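The subset-selector idea might be sketched like this: a `targetRef` to a Service, narrowed by a pod-label selector. The `subsetSelector` field is hypothetical, named here only to make the proposal concrete:

```yaml
# Hypothetical: attach to a Service, but narrow the affected endpoints
# with a pod-label subset selector. "subsetSelector" is not a real
# Gateway API field.
spec:
  targetRef:
    group: ""              # core API group
    kind: Service
    name: reviews
  subsetSelector:          # hypothetical field
    matchLabels:
      version: v2          # only endpoints backed by v2 pods
```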

@howardjohn
Contributor Author

Server-side policies like AuthorizationPolicy are generally inappropriate, IMO, to attach to a Service. For example, what if I have Service A and Service B selecting the same pod? I apply the rule to Service A; it will now impact Service B as well. Or I have a workload that isn't exposed as a Service, or a port I want to apply policy to that isn't part of the Service, etc. Istio used to do this and replaced it with workload-based selectors for this reason.
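The "two Services, one pod" problem described above is easy to reproduce. These are ordinary core-API Service manifests (names and labels are illustrative):

```yaml
# Two Services selecting the same pods via the same label selector.
# A server-side policy attached to Service "a" would also affect
# traffic reaching those pods through Service "b".
apiVersion: v1
kind: Service
metadata:
  name: a
spec:
  selector:
    app: web             # same pods as Service "b"
  ports:
    - port: 80
---
apiVersion: v1
kind: Service
metadata:
  name: b
spec:
  selector:
    app: web             # same pods as Service "a"
  ports:
    - port: 8080
```

Since enforcement ultimately happens at the pod, the Service a request arrived through is not a reliable policy boundary for inbound rules.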

@hzxuzhonghu
Member

I admit there is such a case, but it is not common for most people. There could be a compromise: the prerequisite is to have an associated Service, which is acceptable for end users.

As for defining multiple Services for the same pod: if they share the same port, at L7 we can distinguish them by hostname, but at L4 we cannot. That cannot be resolved by attaching a rule to a Pod either.

@howardjohn
Contributor Author

howardjohn commented Mar 16, 2023 via email

@youngnick
Contributor

In #1565, I've tried to make it clearer that it's fine and expected that implementations can use hierarchies that aren't the GatewayClass -> Gateway -> Route -> Backend one.

For workloads, I think that it's fine to use a Deployment, or even to have the same Policy object affect all Deployments in a Namespace.

I agree that using a Service to select Pods is a mistake - a Service is really selecting Endpoints (or EndpointSlices) anyway, although they're derived from the Pods via the pod selector, so this part is confusing, like many things about Service.

@hzxuzhonghu
Member

istio/istio#8990 (comment)

IMO, it makes sense to be workload-centric for things that affect inbound configs. However, for outbound, a service (or host) could be acceptable.

I think this is a fair split.

@shaneutt
Member

The specifics and clarification being requested here seem reasonable, though several questions remain. We can accept this with the contingency that whoever takes it on needs to tease out the conversation before we actually aim for any deliverables:

/triage accepted

That said, this doesn't appear to be something we need to work on for GA (correct me if you think I'm wrong), so we don't expect this to have any priority prior to that:

/priority backlog

@k8s-ci-robot k8s-ci-robot added triage/accepted Indicates an issue or PR is ready to be actively worked on. priority/backlog Higher priority than priority/awaiting-more-evidence. labels Apr 28, 2023
@shaneutt shaneutt removed the triage/needs-information Indicates an issue needs more information in order to work on it. label Apr 28, 2023
@shaneutt shaneutt added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Mar 12, 2024