
Waiting for ingress reached timeout #10231

Closed · kostiamol opened this issue Jun 29, 2018 · 21 comments
Labels
kind/question Questions that haven't been identified as being feature requests or bugs.

Comments

@kostiamol

I'm deploying Che in multi-user mode as a helm chart. During workspace loading I get this error:

(screenshot)

Based on previous experience, I suspect it's again a problem with my private cluster, but this is already the second cluster I've tried. At first the chart worked on the first cluster, then after a few weeks I started getting errors during Che loading, so I switched clusters. The same chart then worked on the second cluster for two days, and now I'm seeing this error again. My admins have no idea what's going on. It's extremely weird. So my question is:
How can I find out what's going on? For now I see only 3 pods and their containers. Should I check the workspace containers, and if so, how?
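
For reference, a typical way to inspect workspace containers from the command line; the namespace and resource names below are placeholders and depend on your configuration:

    # find the namespace Che created for the workspace
    kubectl get namespaces
    # list the workspace pods and inspect the failing one
    kubectl get pods -n <workspace-namespace>
    kubectl describe pod <pod-name> -n <workspace-namespace>
    kubectl logs <pod-name> -c <container-name> -n <workspace-namespace>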

@ghost added the kind/question label Jun 29, 2018

ghost commented Jun 29, 2018

@kostiamol when a workspace starts, you should see a new namespace created in your k8s cluster (this is the default behavior).

Also, anything suspicious in server logs?
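
For example, assuming the Che server was deployed into a namespace named che (adjust to your release):

    # find the Che server pod and follow its logs
    kubectl get pods -n che
    kubectl logs -f <che-pod-name> -n che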

kostiamol (author) commented Jul 2, 2018

@eivantsov Yes, I have a new namespace:

(screenshot)

I've also checked the logs of the che container in the che pod, but there were only [INFO] messages.

The thing is that something fails at the very beginning of workspace init, because even the "Agent for command execution" is not active.

(screenshot)

@kostiamol (author)

@eivantsov Could the cause of the problem be a lack of permissions? If so, how can I elevate them?

(screenshot)
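
If permissions are the problem, one quick (and deliberately over-broad) test is to grant the Che service account cluster-admin; the namespace and service-account names below are assumptions, and this is for debugging only, not production:

    # debugging only: cluster-admin is far too broad for production use
    kubectl create clusterrolebinding che-debug-admin \
      --clusterrole=cluster-admin \
      --serviceaccount=che:che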


ghost commented Sep 10, 2018

@kostiamol is this still an issue for you?

@kostiamol (author)

It's hard to say for sure, but I think it was again a proxy issue.


cq-z commented Nov 26, 2018

@kostiamol I also encountered the same problem. Do you have a solution?

@kostiamol (author)

@cq-z Nope ;(


ghost commented Nov 26, 2018

@kostiamol do you see ingresses created? Anything suspicious in k8s events?
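
For example (namespace name is hypothetical):

    # list ingresses created for the workspace
    kubectl get ingress -n <workspace-namespace>
    # recent events in that namespace, oldest first
    kubectl get events -n <workspace-namespace> --sort-by=.metadata.creationTimestamp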

@kostiamol (author)

@eivantsov I didn't see anything suspicious. I guess it was an internal cluster error.


ghost commented Nov 26, 2018

So, k8s wasn't able to create ingresses.

What strategy did you use? Single host? Default host? Multi host? It's set in the helm chart.
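
For reference, the strategy is usually chosen at install time; a sketch, assuming the chart exposes it as global.serverStrategy (the chart path and key are assumptions, so check the chart's values.yaml for the exact name):

    helm upgrade --install che ./che --namespace che \
      --set global.serverStrategy=multi-host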


cq-z commented Nov 26, 2018

Neither default host nor multi host works!


ghost commented Nov 26, 2018

@cq-z do you see created ingresses?


cq-z commented Nov 26, 2018

Yes, they were created:

(screenshot)


ghost commented Nov 26, 2018

@cq-z I cannot read Chinese, but it looks like the ingresses did not receive their endpoints.
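
One way to confirm that, assuming access to the workspace namespace:

    # check whether the ingress has an address and whether its backend service has endpoints
    kubectl describe ingress <ingress-name> -n <workspace-namespace>
    kubectl get endpoints -n <workspace-namespace>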


cq-z commented Nov 26, 2018

2018-11-26 10:56:03,479[io-8080-exec-10]  [INFO ] [o.e.c.a.w.s.WorkspaceRuntimes 322]   - Starting workspace 'admin/wksp-7crm' with id 'workspacend2cqkoke5xfsjy5' by user 'admin'
2018-11-26 10:56:04,078[aceSharedPool-0]  [WARN ] [i.f.k.c.i.VersionUsageUtils 55]      - The client is using resource type 'ingresses' with unstable version 'v1beta1'
2018-11-26 11:01:04,270[aceSharedPool-0]  [WARN ] [.i.k.KubernetesInternalRuntime 228]  - Failed to start Kubernetes runtime of workspace workspacend2cqkoke5xfsjy5. Cause: Waiting for ingress 'ingressnf1ac8jz' reached timeout
2018-11-26 11:01:05,076[aceSharedPool-0]  [INFO ] [o.e.c.a.w.s.WorkspaceRuntimes 383]   - Workspace 'admin:wksp-7crm' with id 'workspacend2cqkoke5xfsjy5' start failed
2018-11-26 11:03:51,085[io-8080-exec-12]  [INFO ] [o.e.c.a.w.s.WorkspaceRuntimes 322]   - Starting workspace 'admin/wksp-7crm' with id 'workspacend2cqkoke5xfsjy5' by user 'admin'
2018-11-26 11:08:51,347[aceSharedPool-1]  [WARN ] [.i.k.KubernetesInternalRuntime 228]  - Failed to start Kubernetes runtime of workspace workspacend2cqkoke5xfsjy5. Cause: Waiting for ingress 'ingress036ybeia' reached timeout
2018-11-26 11:08:51,533[aceSharedPool-1]  [INFO ] [o.e.c.a.w.s.WorkspaceRuntimes 383]   - Workspace 'admin:wksp-7crm' with id 'workspacend2cqkoke5xfsjy5' start failed
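
Note that both failures above come exactly five minutes after the corresponding start, which looks like a fixed ingress-wait timeout on the Che server side. One way to see what happens to the ingress during that window (namespace is hypothetical):

    # watch the ingress; if it never gets an ADDRESS, the ingress
    # controller is not picking it up
    kubectl get ingress -n <workspace-namespace> -w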


cq-z commented Nov 26, 2018

ingress036ybeia has been created, but it is invalid.


cq-z commented Nov 27, 2018

Hello @eivantsov, is there any solution?

@xuchenhao001

Got the same issue. Whenever I create a workspace, there are always errors like this:

Error: Failed to run the workspace: "Waiting for ingress 'ingressh0t7poya' reached timeout"

I have checked: 4 ingresses, 1 service, and 1 PVC are successfully created in the new namespace when creating a workspace.
However, there are no error messages in the che-server pod.
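
When the objects all exist but the workspace still times out, the ingress controller itself is worth checking; the controller's namespace and pod name below are assumptions:

    # confirm the ingress controller is running, then read its logs
    kubectl get pods -n nginx-ingress
    kubectl logs -n nginx-ingress <ingress-controller-pod-name>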


cq-z commented Feb 12, 2019

@xuchenhao001 I switched to reinstalling the nginx-ingress controller via helm. It's OK now!

@xuchenhao001

@cq-z Congratulations! So this bug was caused by the NodePort ingress? Could you please give me some links/docs about reinstalling the ingress via helm? Thanks!


cq-z commented Feb 12, 2019

helm upgrade --install nginx-ingress --namespace nginx-ingress -f values.yaml stable/nginx-ingress

https://github.com/helm/charts/tree/master/stable/nginx-ingress
https://docs.helm.sh/using_helm/
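
A minimal sketch of what that reinstall might look like; the values shown (host networking, NodePort service) are assumptions to adapt to your cluster:

    # hypothetical values.yaml for the stable/nginx-ingress chart
    cat > values.yaml <<'EOF'
    controller:
      hostNetwork: true   # bind the controller directly to the node's network
      service:
        type: NodePort
    EOF
    helm upgrade --install nginx-ingress --namespace nginx-ingress \
      -f values.yaml stable/nginx-ingress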
