
Set initial node_labels for default workers and worker pools #550

Merged · 1 commit · Sep 28, 2019

Conversation

valer-cara (Contributor)

High-level description of the change:

  • Add node_labels variable to kubernetes module (currently only for aws/container-linux)

Testing

Launched a new cluster with extra labels that were successfully applied on the worker nodes.


This is a quick fix I needed for dedicated worker pools and nodeSelector. I guess that for this to be merged it'd require uniformity across all cloud/distro variants.

I'm submitting it just to see if this would be an acceptable feature. If so, I can add it to the rest of the platforms/distros.

Thanks!

@rochacon

👍 I think this is useful, likewise, a similar feature to support taints (e.g. --register-with-taints) would be nice.

@@ -61,6 +61,9 @@ systemd:
--lock-file=/var/run/lock/kubelet.lock \
--network-plugin=cni \
--node-labels=node.kubernetes.io/node \
%{ for label in split(";", node_labels) }


Since var.node_labels is of type list, can't we get rid of the join/split here?

(Sorry if useless.)

valer-cara (Contributor, Author)

That was my initial take too, but Terraform only accepts strings as template vars
(hashicorp/terraform#9368, etc.).
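For context, a minimal sketch of that join/split workaround, with illustrative variable and file names (not the exact module code):

```hcl
# Sketch of the workaround: template_file vars must be strings, so the
# list is serialized before rendering and split again inside the
# template. Names and the "," separator here are illustrative.

variable "node_labels" {
  type        = list(string)
  default     = []
  description = "Initial Kubelet node labels"
}

data "template_file" "worker-config" {
  template = file("${path.module}/cl/worker.yaml.tmpl")

  vars = {
    # Lists aren't accepted here, only strings.
    node_labels = join(",", var.node_labels)
  }
}

# Inside worker.yaml.tmpl, the string is split back into kubelet flags:
#   %{ for label in split(",", node_labels) ~}
#   --node-labels=${label} \
#   %{ endfor ~}
```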

@valer-cara (Contributor, Author)

Yup, I also had taints in mind; it's a similar pattern. That could work as a separate PR, I guess (if acceptable).

@JordanP (Contributor) commented Sep 25, 2019

I think this is acceptable because it's already been reported and supported here: #429

I, too, second this feature.

@valer-cara (Contributor, Author)

There's one more, slightly related thing. While Typhoon aims to keep things simple, my specific use case requires cluster autoscaling for a one-time workload that happens only on initialization.

In order to fully support the cluster-autoscaler, I'd also need to be able to add tags to the workers' autoscaling group (it uses the k8s.io/cluster-autoscaler/node-template* tags to figure out some details). Would that be OK as part of this PR, or at all?
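For reference, the node-template tags the cluster-autoscaler reads for scale-from-zero look roughly like this (an illustrative fragment, not part of this PR):

```hcl
# Illustrative fragment: the cluster-autoscaler reads these ASG tags to
# infer labels/taints for pools scaled from zero. Key names follow the
# k8s.io/cluster-autoscaler/node-template scheme; values are examples,
# and other required ASG arguments are omitted.
resource "aws_autoscaling_group" "workers" {
  # ... min_size, max_size, launch configuration, etc. omitted ...

  tag {
    key                 = "k8s.io/cluster-autoscaler/node-template/label/pool"
    value               = "batch"
    propagate_at_launch = false
  }

  tag {
    key                 = "k8s.io/cluster-autoscaler/node-template/taint/pool"
    value               = "batch:NoSchedule"
    propagate_at_launch = false
  }
}
```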

@dghubble (Member)

Yes, I'll take a node-labels PR per #429. Please set them for platforms with worker pools only (AWS, GCP, Azure), leaving out bare-metal and DO which have separate considerations. The inner worker modules can have a node_labels variable. The kubernetes module should use worker_node_labels (or possibly don't provide it yet since this primarily concerns worker pools).

No changes to tags.

@@ -86,6 +86,7 @@ data "template_file" "worker-config" {
cluster_dns_service_ip = cidrhost(var.service_cidr, 10)
cluster_domain_suffix = var.cluster_domain_suffix
cgroup_driver = local.flavor == "flatcar" && local.channel == "edge" ? "systemd" : "cgroupfs"
node_labels = join(";", var.node_labels)
dghubble (Member)

Let's at least use a "," separator so this is less weird.

@valer-cara (Contributor, Author)

@dghubble - I've made the requested changes. Added node labels to aws (cl + fedora), gcp, azure.

I've only tested the aws/cl setup so far.

…` to workers

- aws (cl, fedora)
- google-cloud
- azure
@dghubble (Member)

Looks good. It'll need updates to docs and tutorials, but it's probably simpler if I just add that.

@dghubble dghubble merged commit 99ab81f into poseidon:master Sep 28, 2019
@dghubble dghubble changed the title Add node_labels variable to aws/container-linux/kubernetes/ Set initial node_labels for default workers and worker pools Sep 28, 2019
@dghubble (Member)

Thanks 👍

dghubble added a commit that referenced this pull request Apr 11, 2021
* Add `node_taints` variable to worker modules to set custom
initial node taints on cloud platforms that support auto-scaling
worker pools of heterogeneous nodes (i.e. AWS, Azure, GCP)
* Worker pools could use custom `node_labels` to allow workloads
to select among differentiated nodes, while custom `node_taints`
allows a worker pool's nodes to be tainted as special to prevent
scheduling, except by workloads that explicitly tolerate the
taint
* Expose `daemonset_tolerations` in AWS, Azure, and GCP kubernetes
cluster modules, to determine whether `kube-system` components
should tolerate the custom taint (advanced use covered in docs)

Rel: #550, #663
Closes #429
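The commit above can be exercised roughly as follows (a sketch: the module ref, pool name, and label/taint values are illustrative):

```hcl
# Sketch of a dedicated worker pool using node_labels and node_taints.
# Module ref and values are illustrative, not from this PR.
module "batch-workers" {
  source = "git::https://github.com/poseidon/typhoon//aws/container-linux/kubernetes/workers?ref=..."

  # ... cluster wiring (vpc_id, subnet_ids, kubeconfig, ssh key) omitted ...

  name         = "batch"
  worker_count = 2

  # Workloads can target this pool via a matching nodeSelector.
  node_labels = ["pool=batch"]

  # Keep ordinary workloads off these nodes unless they tolerate the taint.
  node_taints = ["pool=batch:NoSchedule"]
}
```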
4 participants