Change etcd logging level in kops #7859
Comments
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/remove-lifecycle stale
Looks like we're in the same boat: an increase in logs. Any idea how to enable log rotation and deletion?
Still no clue for the 1.13.0 version. I recently upgraded to 1.14.10, but haven't checked what's going on with the logging level in that version.
This can now be achieved in Kops 1.18 by setting env vars via #8402:

```yaml
etcdClusters:
- cpuRequest: 200m
  etcdMembers:
  - instanceGroup: master-us-test-1a
    name: us-test-1a
  manager:
    env:
    - name: ETCD_LOG_LEVEL
      value: "warn"
```
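For reference, a minimal sketch of how a cluster-spec change like this is typically rolled out with the kops CLI (the cluster name below is a placeholder, and the exact rollout steps may vary with your setup):

```sh
# Hypothetical cluster name; substitute your own.
export NAME=mycluster.example.com

# Add the manager.env block shown above under the relevant etcdClusters entry.
kops edit cluster $NAME

# Apply the change, then roll the control-plane nodes so the
# etcd-manager pods pick up the new environment variable.
kops update cluster $NAME --yes
kops rolling-update cluster $NAME --yes
```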
@rifelpet thanks for the info. Is this only available in 1.18 and not in lower versions, as far as you know?
Correct, it will be in the next 1.18 alpha release. We're hoping to get 1.17 out the door soon and then we'll be able to focus on releasing 1.18 :) |
Closing this issue, since it's confirmed above that this will only be part of future kops releases (1.18 onward, not 1.17) and not the previous ones. Thanks @rifelpet for the confirmation.
1. What `kops` version are you running? The command `kops version` will display this information.
1.13.0
2. What Kubernetes version are you running? `kubectl version` will print the version if a cluster is running or provide the Kubernetes version specified as a `kops` flag.
v1.13.0
3. What cloud provider are you using?
AWS
4. What commands did you run? What is the simplest way to reproduce this issue?
Upgraded from Kube 1.12.8 --> 1.13.0
5. What happened after the commands executed?
New pods came up for etcd namely
etcd-manager-events-ip
andetcd-manager-main-ip
as part of etcd V36. What did you expect to happen?
The above scenario (two new pod types for etcd in the kube-system namespace) is expected as part of the etcd upgrade, but the amount of logging, which has increased significantly, is putting stress on our ES clusters.
7. Anything else we need to know?
This is basically a query, not a bug. The amount of logging has increased in comparison to older Kube versions. I have referred to this link, but it only mentions the logging level of kube-apiserver (see the sketch below).
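As a point of comparison, and not something stated in this thread: the kube-apiserver verbosity that the linked page discusses is exposed directly in the kops cluster spec. A minimal sketch, assuming the `kubeAPIServer.logLevel` field; the value is illustrative:

```yaml
# Sketch: lowering kube-apiserver log verbosity via the kops cluster spec.
# This affects only the API server's --v flag, not etcd's log level.
spec:
  kubeAPIServer:
    logLevel: 2
```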
Moreover, I tried changing the `main.yaml` and `events.yaml` manifests under manifests --> etcd. The following entry was changed for testing:

Old:
```
--v=6 --volume-name-tag=k8s.io/etcd/events --volume-provider=aws --volume-tag=k8s.io/etcd/events
```

New:
```
--v=2 --volume-name-tag=k8s.io/etcd/events --volume-provider=aws --volume-tag=k8s.io/etcd/events
```

but this had no effect on the amount of logs generated (after doing a kops cluster upgrade). Can anyone advise on the best way to change the logging level in etcd to `warn` or something similar? I referred to this link, which states that we can change the logging level in etcd, but I'm not sure how to do it via kops. Any pointers would be helpful here.