This repository has been archived by the owner on Feb 22, 2022. It is now read-only.

[pull-charts-e2e] CI needs to work well with stateful applications #1724

Closed
dhilipkumars opened this issue Aug 13, 2017 · 4 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

dhilipkumars (Contributor) commented Aug 13, 2017

Is this a request for help?:


Is this a BUG REPORT or FEATURE REQUEST? (choose one): Feature Request

Version of Helm and Kubernetes:

Which chart:
Could impact all stateful set charts

What happened:
StatefulSet charts with multiple replicas tend to start their pods one after the other, and some StatefulSets also have initContainers configured. The e2e CI job fails for these kinds of workloads, which may need a longer duration to come up. Currently the CI waits for 3 minutes.

W0812 11:24:00.672] + COUNT=18
W0812 11:24:00.672] + kubectl get pods --no-headers --namespace pr-1721-2537
I0812 11:24:00.772] etcd-2537-etcd-0   0/1       Pending   0         3m
I0812 11:24:00.773] INFO: Sleeping waiting for containers to be ready
I0812 11:24:01.011] etcd-2537-etcd-0   0/1       Pending   0         3m
W0812 11:24:01.112] + sleep 10
I0812 11:24:11.019] ERROR: Some containers failed to reach the ready state
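
For reference, here is a minimal sketch of what the e2e wait loop appears to do, reconstructed from the log above. The 18 × 10s budget and the kubectl invocation match the log; the script itself, variable names, and the readiness check are assumptions, not the actual test-infra code.

```bash
#!/usr/bin/env bash
# Hypothetical reconstruction of the e2e readiness check; not the actual
# test-infra script. NAMESPACE and the 18 x 10s budget mirror the log above.
set -euo pipefail

NAMESPACE="${NAMESPACE:-pr-1721-2537}"

for COUNT in $(seq 1 18); do   # 18 iterations * 10s sleep = the 3-minute budget
  # List pods whose ready count does not match the total, or that are not Running.
  NOT_READY=$(kubectl get pods --no-headers --namespace "${NAMESPACE}" \
    | awk '{split($2, r, "/"); if (r[1] != r[2] || $3 != "Running") print $1}')
  if [[ -z "${NOT_READY}" ]]; then
    echo "INFO: All containers are ready"
    exit 0
  fi
  echo "INFO: Sleeping waiting for containers to be ready"
  sleep 10
done

echo "ERROR: Some containers failed to reach the ready state"
exit 1
```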

What you expected to happen:
We should either allow a much longer timeout, such as 10 minutes, for stateful applications to come up,

or

find a better way to test stateful applications in the CI.
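
One possible direction (a sketch only, not a tested proposal): instead of polling all pods with a fixed 3-minute budget, the CI could block on the workload's own rollout or readiness with a longer, per-chart timeout. Assuming a reasonably recent kubectl; the StatefulSet name, namespace, and 10-minute value below are illustrative.

```bash
# Hypothetical alternative to the fixed 3-minute poll: block on the workload's
# own rollout/readiness with a longer, per-chart timeout. Names and values
# below are illustrative only.
NAMESPACE="pr-1721-2537"
STS="etcd-2537-etcd"

# Wait for the StatefulSet's ordered rollout to finish (RollingUpdate strategy).
kubectl rollout status statefulset/"${STS}" --namespace "${NAMESPACE}" --timeout=10m

# Or, with a newer kubectl that has `kubectl wait`, block on pod readiness directly.
kubectl wait --for=condition=Ready pod --all --namespace "${NAMESPACE}" --timeout=600s
```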

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know:
cc: #1721
The CI used to wait for 2 minutes; we recently increased it to 3 minutes, but that still doesn't help.

@dhilipkumars dhilipkumars changed the title CI needs to work well with stateful applications [pull-charts-e2e] CI needs to work well with stateful applications Aug 13, 2017
dhilipkumars (Contributor, Author) commented:

cc: @kubernetes/charts-maintainers

fejta-bot commented:

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with an /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 2, 2018
fejta-bot commented:

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle rotten
/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Feb 7, 2018
fejta-bot commented:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/close
