# Environment Variable Substitution or CLI Flags #2893
Seems like a decent feature to me. @jonathan-innis, this would solve our local dev story, too.
@johnnyhuy I'm missing the use-case here. Why not just use …?
Yes, we could use that. Good question, we can put it into a scenario.

**Scenario**

As an end-user, I don't know values like a unique cluster name.

Here's an example using the External Secrets controller to do just that. It fetches the cluster name from the SSM parameter:

```yaml
apiVersion: external-secrets.io/v1alpha1
kind: ExternalSecret
metadata:
  name: example-secret
spec:
  refreshInterval: 5m
  secretStoreRef:
    name: my-ssm-secret-store
    kind: ClusterSecretStore
  target:
    creationPolicy: Owner
  data:
    - secretKey: clusterName
      remoteRef:
        key: /my/auto-generated/clusterName
```

In return, the controller creates a Secret:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: example-secret
type: Opaque
stringData:
  clusterName: my-cluster-abdw32109d
```

We could then hypothetically do either of these to load values into the Karpenter pod:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: karpenter
spec:
  ...
  template:
    ...
    spec:
      ...
      containers:
        - name: controller
          image: public.ecr.aws/karpenter/controller:v0.18.1
          env:
            - name: KARPENTER_CLUSTER_NAME
              valueFrom:
                secretKeyRef:
                  name: example-secret
                  key: clusterName
---
# Or this
apiVersion: apps/v1
kind: Deployment
metadata:
  name: karpenter
spec:
  ...
  template:
    ...
    spec:
      ...
      containers:
        - name: controller
          image: public.ecr.aws/karpenter/controller:v0.18.1
          args:
            - --cluster-name=$(CLUSTER_NAME)
          env:
            - name: CLUSTER_NAME
              valueFrom:
                secretKeyRef:
                  name: example-secret
                  key: clusterName
```

**Final thoughts**

As we can see, the end-user does not need to know exactly what the auto-generated cluster name is; they'd refer to the SSM parameter instead to give us the value. This is one scenario; another would be a different team deploying an EKS cluster and creating a Secret or ConfigMap with the cluster name in it. We could either imperatively build a script to deploy Karpenter or declaratively use environment variables or CLI flags on the Karpenter pod.
We also follow the approach where we fill in a ConfigMap with infra details and let other deployments fetch those details.
@jonathan-innis, it strikes me that cluster name and endpoint aren't the same as other dynamic global settings. Do they make more sense as CLI args?
Can someone elaborate on the decision to move all of the env vars and CLI args to a ConfigMap? Also, the current implementation of Karpenter reading all of its configuration from ConfigMaps directly via the Kubernetes API, rather than mounting the ConfigMaps into files, is very opinionated and seems like it was inspired by Argo CD.
Open-minded about this. Curious why it's easier to configure an env var instead of a ConfigMap. The ConfigMap is nice because it's dynamic, but I could see static overrides as well. We originally decided to make just one mechanism because it reduces the chance for users to set both and get confused.
@ellistarn I think a ConfigMap is necessary when you have deeply nested configuration files, for example the Prometheus config file. The downside of using a ConfigMap is that you have to configure your application to watch the mounted ConfigMap files for changes and reload itself when that happens, and if your ConfigMap file is corrupted this might cause downtime. Specifically with Karpenter, the main issue that bothers me is that Karpenter does not mount the ConfigMap into a file on disk but watches the actual ConfigMap Kubernetes object for changes, and the object name is hardcoded, so we must use that exact name for the ConfigMap. So to summarize:

- ConfigMaps are a good fit for deeply nested configuration, but they require watch-and-reload logic and risk downtime if the file is corrupted.
- Karpenter watches the ConfigMap object through the Kubernetes API instead of mounting it as a file, and the object name is hardcoded.
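For contrast, here is a minimal sketch of the mount-as-file approach described in the comment above. The ConfigMap name matches the one used in this thread; the mount path and the abbreviated Deployment fields are hypothetical:

```yaml
# Sketch: mount the settings ConfigMap as files instead of watching the API object.
# The application would then watch the (hypothetical) /etc/karpenter/settings
# directory for changes and reload itself.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: karpenter
spec:
  ...
  template:
    ...
    spec:
      containers:
        - name: controller
          volumeMounts:
            - name: settings
              mountPath: /etc/karpenter/settings
              readOnly: true
      volumes:
        - name: settings
          configMap:
            name: karpenter-global-settings
```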
As a workaround we can do the following. Create a Job that runs `envsubst` over a templated manifest and applies the result.

**Job**

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: karpenter-global-settings
  annotations:
    helm.sh/hook: post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
    helm.sh/hook-weight: "1"
spec:
  backoffLimit: 4
  template:
    metadata:
      labels:
        app.kubernetes.io/name: karpenter-global-settings
    spec:
      restartPolicy: OnFailure
      serviceAccountName: karpenter-global-settings
      containers:
        - name: k8s
          image: alpine/k8s:1.25.13
          command:
            - /bin/sh
            - -c
          args:
            - envsubst < /tmp/config/manifest.yaml | kubectl apply -f -
          volumeMounts:
            - name: config
              mountPath: /tmp/config
          env:
            - name: CLUSTER_NAME
              valueFrom:
                secretKeyRef:
                  key: clusterName
                  name: karpenter-env
                  optional: false
            - name: CLUSTER_ENDPOINT
              valueFrom:
                secretKeyRef:
                  key: clusterEndpoint
                  name: karpenter-env
                  optional: false
            - name: AWS_DEFAULT_INSTANCE_PROFILE
              valueFrom:
                secretKeyRef:
                  key: awsDefaultInstanceProfile
                  name: karpenter-env
                  optional: false
      volumes:
        - name: config
          configMap:
            name: karpenter-global-settings-temp
```

**ConfigMap**

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: karpenter-global-settings-temp
data:
  manifest.yaml: |
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: karpenter-global-settings
    data:
      "aws.defaultInstanceProfile": "${AWS_DEFAULT_INSTANCE_PROFILE}"
      "aws.clusterName": "${CLUSTER_NAME}"
      "aws.clusterEndpoint": "${CLUSTER_ENDPOINT}"
      "aws.enableENILimitedPodDensity": "true"
      "aws.enablePodENI": "false"
      "aws.isolatedVPC": "true"
      "aws.nodeNameConvention": "resource-name"
      "aws.vmMemoryOverheadPercent": "0.075"
      "batchIdleDuration": "1s"
      "batchMaxDuration": "10s"
      "featureGates.driftEnabled": "false"
```

**RBAC**

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: karpenter-global-settings
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: karpenter-global-settings
rules:
  - apiGroups: [""]
    resources: ["configmaps"]
    # "create" is needed the first time kubectl apply creates the ConfigMap
    verbs: ["get", "list", "create", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: karpenter-global-settings
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: karpenter-global-settings
subjects:
  - kind: ServiceAccount
    name: karpenter-global-settings
    namespace: kube-system
```
We are having the same issue currently: we manage e.g. the instance profile and ServiceAccount with Terraform and want to deploy Karpenter itself with the Helm chart in ArgoCD (we're currently using …). However, we currently can't set the … I was surprised by this, expecting every setting to be configurable via an environment variable instead of being loaded from a ConfigMap with a hard-coded name.

As a mid-term workaround, I'll be opening a PR for the chart to be able to disable the deployment of the ConfigMap, so that we can manage the ConfigMap in Terraform for now. In the long term, I'd prefer for Karpenter to be fully configurable via env vars only; this would benefit everyone in multiple ways. Using environment variables would also be easier than implementing environment variable substitution with the ConfigMap.
@morremeyer We pass the instance profile generated in Terraform into a Git YAML file that is used by the Argo CD deployment of Karpenter, with zero issues.
@FernandoMiguel Thank you! This does not work for us since our Terraform runs are not able to commit to the repository.
Since the PR name has room for improvement, I'll leave a brief blurb here before closing. As part of Karpenter's graduation to beta, we've removed the `karpenter-global-settings` ConfigMap in favor of configuring Karpenter through environment variables and CLI flags.
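For illustration, a minimal sketch of what the env-var style can look like after that change. The variable names follow the examples earlier in this thread and are assumptions; check the chart for the authoritative set:

```yaml
# Sketch: post-beta, settings flow in as plain env vars on the controller
# container, so they can come from a Secret or ConfigMap as shown earlier.
# CLUSTER_NAME / CLUSTER_ENDPOINT are assumed names.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: karpenter
spec:
  ...
  template:
    ...
    spec:
      containers:
        - name: controller
          env:
            - name: CLUSTER_NAME
              valueFrom:
                secretKeyRef:
                  name: karpenter-env
                  key: clusterName
            - name: CLUSTER_ENDPOINT
              valueFrom:
                secretKeyRef:
                  name: karpenter-env
                  key: clusterEndpoint
```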
**Tell us about your request**
Have the ability to use environment variable substitution on the ConfigMap with the usual `$` syntax, similar to Java's Spring Boot configuration. Or dynamically expose deterministic environment variable/CLI flag names, e.g. `aws.clusterName` = `AWS_CLUSTER_NAME`, similar to other controllers like External DNS.
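For illustration, the first option might look something like this hypothetical sketch; Karpenter does not support this syntax, which is exactly what the request asks for:

```yaml
# Hypothetical: the requested ${VAR} substitution applied to the
# karpenter-global-settings ConfigMap, resolved from the controller's
# environment at load time.
apiVersion: v1
kind: ConfigMap
metadata:
  name: karpenter-global-settings
data:
  "aws.clusterName": "${AWS_CLUSTER_NAME}"
  "aws.clusterEndpoint": "${AWS_CLUSTER_ENDPOINT}"
```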
**Tell us about the problem you're trying to solve. What are you trying to do, and why is it hard?**

We've lost the ability to use environment variables to substitute configurations like cluster name and cluster endpoint.
This is useful when we want to handle these values elsewhere, like a separate configmap/secret in a different namespace. There are also advanced scenarios like using External Secrets to fetch values from SSM to upsert a Kubernetes Secret.
**Are you currently working around this issue?**
Using older versions before #2746; otherwise, we'd have to manually paste dynamic values like the cluster endpoint into our Helm chart to configure Karpenter correctly.