[EKS] [Fargate] [request]: Make Fargate look like a normal k8s Node/Nodes

Community Note
Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
Please do not leave "+1" or "me too" comments; they generate extra noise for issue followers and do not help prioritize the request
If you are interested in working on this issue or have submitted a pull request, please leave a comment
Tell us about your request
I'd like Fargate to appear as a Node or Nodes in my EKS cluster, and hence interact natively with things like nodeSelector, taints and tolerations, podAffinity, metrics, Pod Topology Spread Constraints, etc.
In other words, the closer I can come to having Fargate feel like a special NodeGroup (or really, a Managed NodeGroup, once the various missing features there are ironed out), the more natural and easy it is to use in a cluster.
As an example, if a Fargate Profile (without the selector) was exposed as a tainted and labelled Node, then allowing Pods to run on Fargate would only require the same tolerations used to allow a Pod onto any specialised Nodes in Kubernetes, and forcing a Pod to run on Fargate would only require the same nodeSelector used to lock a Pod to any specialised Nodes in Kubernetes; i.e. the existing Kubernetes documentation "just works".
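For concreteness, a minimal sketch of what that could look like, assuming a hypothetical eks.amazonaws.com/compute-type=fargate label and taint on the Fargate Node (the actual names would be whatever EKS chooses):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: on-fargate
spec:
  # Hypothetical label on the Fargate Node; a standard nodeSelector
  # forces the Pod onto Fargate, exactly as for any specialised Node.
  nodeSelector:
    eks.amazonaws.com/compute-type: fargate
  # Hypothetical taint on the Fargate Node; tolerating it merely
  # allows (rather than forces) scheduling there.
  tolerations:
    - key: eks.amazonaws.com/compute-type
      operator: Equal
      value: fargate
      effect: NoSchedule
  containers:
    - name: app
      image: nginx:stable
```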
It seems like the virtual-kubelet project is intended to offer exactly that, but the AWS Fargate provider is abandoned, despite an interesting blog post demonstrating it with a non-EKS AWS environment in 2018.
A brief poke at Virtual Kubelet suggests that the provider-specific implementation would need an Admission Controller to ensure that things like DaemonSets, Pods with hostPath volumes, or other things not supported by EKS/Fargate, don't run on the Fargate Nodes.
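A rough sketch of how such a guard could be registered (all names and the Service are hypothetical; the actual rejection logic, e.g. refusing Pods with hostPath volumes or DaemonSet owner references, would live in the webhook server itself):

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: fargate-node-guard            # hypothetical name
webhooks:
  - name: guard.fargate.example.com   # hypothetical name
    admissionReviewVersions: ["v1"]
    sideEffects: None
    failurePolicy: Fail
    rules:
      # Intercept Pod creation and binding so unsupported Pods
      # can be rejected before they land on a Fargate Node.
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["pods", "pods/binding"]
    clientConfig:
      service:
        namespace: kube-system
        name: fargate-node-guard      # hypothetical Service
        path: /validate
```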
Which service(s) is this request for?
EKS/Fargate
Tell us about the problem you're trying to solve. What are you trying to do, and why is it hard?
Right now, setting up EKS/Fargate requires a small amount of AWS-side configuration, and it inverts the usual relationship from "Pods are scheduled by their labels" to "a set of labels and namespaces is defined elsewhere to pull Pods out of the scheduler and onto Fargate". That is added complexity on top of already passably-complex setups involving nodeSelectors and taints/tolerations, as the sketch below illustrates.
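A sketch of today's model in eksctl syntax (cluster name and label are placeholders): the Fargate Profile, not the Pod spec, decides what lands on Fargate:

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: my-cluster      # placeholder
  region: us-east-1     # placeholder
fargateProfiles:
  - name: fp-default
    selectors:
      # Any Pod in this namespace carrying this label is pulled
      # onto Fargate; the scheduling decision lives here, not in
      # the Pod's own spec.
      - namespace: default
        labels:
          run-on: fargate   # placeholder label
```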
Are you currently working around this issue?
Not using Fargate at this point. That hasn't exactly encouraged me to experiment with it, since I can't mess about with the scheduler and experiment from kubectl as easily as I can with other k8s systems like the Cluster Autoscaler.
Additional context
Anything else we should know?
Attachments
If you think you might have additional information that you'd like to include via an attachment, please do - we'll take a look. (Remember to remove any personally-identifiable information.)