
e2e tests sometimes fail because we try to create clusterrolebinding before the etcd server is ready #117

Closed
wallrj opened this issue Nov 9, 2017 · 1 comment


wallrj commented Nov 9, 2017

Test failed in #116

+kubectl get nodes
The connection to the server 127.0.0.1:8443 was refused - did you specify the right host or port?
++date +%s
+local current_time=1510242196
+local remaining_time=288
+[[ 288 -lt 0 ]]
+local sleep_time=10
+[[ 288 -lt 10 ]]
+sleep 10
+true
+kubectl get nodes
No resources found.
+return 0
+kubectl create clusterrolebinding cluster-admin:kube-system --clusterrole=cluster-admin --serviceaccount=kube-system:default
Error from server: client: etcd member http://0.0.0.0:2379 has no leader

/kind bug

Clearly the kubectl get nodes check isn't sufficient on its own.
We either need some other way to know that the API server is ready to accept configuration changes, or we should simply retry the configuration change until it succeeds.
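
A minimal sketch of the retry approach, reusing the kubectl invocation that fails in the trace above; the attempt count and sleep interval are illustrative placeholders, not what was eventually merged:

```bash
# Keep retrying the clusterrolebinding creation until the API server (and the etcd
# behind it) can actually serve writes. 30 attempts x 10s is an arbitrary ~5 min budget.
created=false
for attempt in $(seq 1 30); do
    if kubectl create clusterrolebinding cluster-admin:kube-system \
            --clusterrole=cluster-admin \
            --serviceaccount=kube-system:default; then
        created=true
        break
    fi
    echo "attempt ${attempt}: API server not ready to accept the clusterrolebinding, retrying in 10s"
    sleep 10
done
if ! ${created}; then
    echo "gave up waiting for the API server" >&2
    exit 1
fi
```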


munnerz commented Nov 9, 2017

Nice find.

I think retrying this with a roughly one-minute timeout will work for now.
Alternatively, kubectl get componentstatus should report etcd as Healthy, but reliably parsing that command's output may be difficult.
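
For completeness, a rough sketch of that componentstatus check; the awk-based parsing of the default table output is exactly the brittle part noted above, and the etcd-* row names are assumed from kubectl's usual output:

```bash
# Succeeds only when `kubectl get componentstatus` lists at least one etcd-* row
# and every such row reports Healthy. Parsing the human-readable table is fragile.
etcd_healthy() {
    kubectl get componentstatus 2>/dev/null | awk '
        /^etcd/ { seen = 1; if ($2 != "Healthy") bad = 1 }
        END     { exit (seen && !bad) ? 0 : 1 }
    '
}

until etcd_healthy; do
    echo "etcd not reported Healthy yet, waiting 5s"
    sleep 5
done
```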

jetstack-bot added a commit that referenced this issue Nov 13, 2017
Automatic merge from submit-queue.

Retry the kube-system RBAC clusterrolebinding fix

This should prevent intermittent E2E test failures when the Minikube API server is
not yet ready to accept configuration changes.

Inspired by: kubernetes/minikube#1904

Fixes: #117
**Release note**:
```release-note
NONE
```
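
The merged change itself lives in the referenced PR; a deadline-based variant, in the spirit of the one-minute-timeout suggestion and of the remaining_time/sleep_time wait helper visible in the failing trace, might look roughly like this:

```bash
# Retry the clusterrolebinding until a wall-clock deadline passes. The 60s budget
# and the 10s sleep are placeholders, not the values used by the actual fix.
deadline=$(( $(date +%s) + 60 ))
until kubectl create clusterrolebinding cluster-admin:kube-system \
        --clusterrole=cluster-admin \
        --serviceaccount=kube-system:default; do
    if [[ $(date +%s) -ge ${deadline} ]]; then
        echo "timed out waiting for the API server to accept the clusterrolebinding" >&2
        exit 1
    fi
    sleep 10
done
```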