[EKS] [Fargate] [request]: Make Fargate look like a normal k8s Node/Nodes #998

Status: Open
Labels: EKS (Amazon Elastic Kubernetes Service), Fargate (AWS Fargate), Proposed (Community submitted issue)

TBBle opened this issue Jul 29, 2020 · 0 comments

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Tell us about your request

I'd like Fargate to appear as a Node or Nodes in my EKS cluster, and hence interact natively with things like nodeSelector, taints and tolerations, podAffinity, metrics, Pod Topology Spread Constraints, etc.

i.e. the closer Fargate comes to feeling like a special NodeGroup (or really, a Managed NodeGroup, once the various missing features there are ironed out), the more natural and easy it is to use in a cluster.

As an exemplar: if a Fargate Profile (without the selector) were exposed as a tainted and labelled Node, then allowing Pods to run on Fargate would require only the same tolerations used to let a Pod run on any specialised Node in Kubernetes, and forcing a Pod onto Fargate would require only the same nodeSelector used to pin a Pod to any specialised Node in Kubernetes; i.e. the existing Kubernetes documentation "just works".
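A sketch of what that could look like from the Pod's side, assuming a hypothetical label/taint key `eks.amazonaws.com/compute-type=fargate` on the exposed Fargate Node (the key name here is an assumption, not an existing EKS behaviour):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: fargate-example
spec:
  # Hypothetical nodeSelector pinning this Pod to the Fargate "Node"
  nodeSelector:
    eks.amazonaws.com/compute-type: fargate
  # Toleration for the hypothetical taint keeping other Pods off Fargate
  tolerations:
    - key: eks.amazonaws.com/compute-type
      operator: Equal
      value: fargate
      effect: NoSchedule
  containers:
    - name: app
      image: nginx
```

This is exactly the standard taints/tolerations and nodeSelector pattern the upstream Kubernetes docs describe for any specialised Node pool.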

It seems like the virtual-kubelet project is intended to offer exactly that, but the AWS Fargate provider is abandoned, despite an interesting blog post demonstrating it with a non-EKS AWS environment in 2018.

A brief poke at Virtual Kubelet suggests that the provider-specific implementation would need an Admission Controller to ensure that things like Daemonsets, Pods with hostPath volumes, or other things not supported by EKS/Fargate, don't run on the Fargate Nodes.
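For comparison, Virtual Kubelet already registers its synthetic Node with a default taint (key `virtual-kubelet.io/provider`, effect `NoSchedule`), so under that model a Pod would opt in with a toleration along these lines (sketch based on the project's default taint; the exact value depends on provider configuration):

```yaml
tolerations:
  - key: virtual-kubelet.io/provider
    operator: Exists
    effect: NoSchedule
```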

Which service(s) is this request for?
EKS/Fargate

Tell us about the problem you're trying to solve. What are you trying to do, and why is it hard?

Right now, setting up EKS/Fargate requires a small amount of AWS-side configuration, and it inverts the usual relationship: instead of "Pods are scheduled by their labels", a set of labels and namespaces is defined elsewhere (in the Fargate Profile) to pull Pods out of the scheduler and onto Fargate. That is added complexity when you are already running passably complex setups involving nodeSelectors and taints/tolerations.
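To illustrate the inversion: today the namespace/label selector lives in AWS-side configuration rather than on the Pod, e.g. a Fargate Profile declared via an eksctl ClusterConfig (sketch; cluster name, region, and label values are placeholders):

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: my-cluster      # placeholder
  region: us-west-2     # placeholder
fargateProfiles:
  - name: fp-default
    selectors:
      # Any Pod in this namespace carrying this label is pulled onto
      # Fargate by the profile, regardless of its own scheduling spec.
      - namespace: default
        labels:
          compute: fargate
```

With the Node-based model requested above, this selection would instead be expressed in the Pod spec itself, like any other node-pool targeting.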

Are you currently working around this issue?

Not using Fargate at this point. This hasn't exactly encouraged me to experiment with it, since I can't as easily mess about with the scheduler and experiment from kubectl, as I can with other k8s systems like the Cluster Autoscaler.

Additional context
Anything else we should know?

Attachments
If you think you might have additional information that you'd like to include via an attachment, please do - we'll take a look. (Remember to remove any personally-identifiable information.)

TBBle added the Proposed (Community submitted issue) label Jul 29, 2020
TBBle changed the title from [EKS/Fargate] [request]: Make Fargate look like a normal k8s Node/Nodes to [EKS] [Fargate] [request]: Make Fargate look like a normal k8s Node/Nodes Jul 29, 2020
mikestef9 added the EKS (Amazon Elastic Kubernetes Service) and Fargate (AWS Fargate) labels Jul 29, 2020