
Change etcd logging level in kops #7859

Closed
grv231 opened this issue Oct 31, 2019 · 8 comments

grv231 commented Oct 31, 2019

1. What kops version are you running? The command kops version will display this information.

1.13.0

2. What Kubernetes version are you running? kubectl version will print the
version if a cluster is running or provide the Kubernetes version specified as
a kops flag.

v1.13.0

3. What cloud provider are you using?
AWS

4. What commands did you run? What is the simplest way to reproduce this issue?
Upgraded from Kube 1.12.8 --> 1.13.0

5. What happened after the commands executed?
New pods came up for etcd, namely etcd-manager-events-ip and etcd-manager-main-ip, as part of etcd v3.

6. What did you expect to happen?
The above scenario (two new pod types for etcd in the kube-system namespace) is correct as per the etcd upgrade, but the amount of logging (which has increased significantly) is putting stress on our ES clusters.

7. Anything else we need to know?
This is basically a query, not a bug. The amount of logging has increased compared to the older Kubernetes version. I have referred to this link, but it only mentions the logging level of kube-apiserver.

Moreover, I tried changing the manifests under the manifests/etcd folder (main.yaml and events.yaml). The following entry was changed for testing:

Old
--v=6 --volume-name-tag=k8s.io/etcd/events --volume-provider=aws --volume-tag=k8s.io/etcd/events

New
--v=2 --volume-name-tag=k8s.io/etcd/events --volume-provider=aws --volume-tag=k8s.io/etcd/events

but this had no effect on the number of logs generated (after doing a kops cluster upgrade). Can anyone advise on the best way to change the etcd logging level to warn or similar? I referred to this link, which states that the logging level in etcd can be changed, but I am not sure how to do it via kops.
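
For reference, this is roughly where that --v flag sits in the generated manifest (an illustrative excerpt only; the real etcd-manager manifests contain additional flags and fields that vary per cluster):

  # manifests/etcd/events.yaml (illustrative excerpt, not a complete manifest)
  spec:
    containers:
    - name: etcd-manager
      command:
      - /bin/sh
      - -c
      - exec /etcd-manager --v=2 --volume-name-tag=k8s.io/etcd/events --volume-provider=aws --volume-tag=k8s.io/etcd/events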

Any pointers would be helpful here.

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

k8s-ci-robot added the lifecycle/stale label on Jan 29, 2020

grv231 commented Jan 29, 2020

/remove-lifecycle stale

k8s-ci-robot removed the lifecycle/stale label on Jan 29, 2020

major7x commented Apr 7, 2020

Looks like we're in the same boat: a big increase in logs. Any idea how to enable log rotation and deletion?


grv231 commented Apr 7, 2020

Still no clue on 1.13.0. I recently upgraded to 1.14.10, but haven't checked what's going on with the logging level in that version yet.

rifelpet (Member) commented

This can now be achieved in kops 1.18 by setting env vars via #8402:

  etcdClusters:
  - cpuRequest: 200m
    etcdMembers:
    - instanceGroup: master-us-test-1a
      name: us-test-1a
    manager:
      env:
      - name: ETCD_LOG_LEVEL
        value: "warn"


grv231 commented Apr 18, 2020

@rifelpet thanks for the info. Is this only available in 1.18, and not in lower versions, as far as you know?

rifelpet (Member) commented

Correct, it will be in the next 1.18 alpha release. We're hoping to get 1.17 out the door soon and then we'll be able to focus on releasing 1.18 :)


grv231 commented Apr 27, 2020

Closing this issue, since it's confirmed above that this will only be part of future kops releases (1.18 onward, not 1.17) and not previous ones. Thanks @rifelpet for the confirmation.
