Upstream ExternalName services - proxy not working #1600
@mooperd two things:
Hey Manuel, the service has been updated to:
Still getting 503s. Cheers, Andrew
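The updated manifest itself is not shown above. For reference, a minimal sketch of an ExternalName Service of the kind being discussed looks roughly like this (the name and hostname are placeholders, not values from this thread):

```yaml
# Illustrative only: an ExternalName Service. It has no endpoints; cluster DNS
# answers queries for this Service with a CNAME to the external host, so the
# ingress controller has to resolve that name itself.
apiVersion: v1
kind: Service
metadata:
  name: my-external-svc
  namespace: default
spec:
  type: ExternalName
  externalName: backend.example.com
```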
@mooperd please update the image to …
@aledbf I'm afraid it persists. The only relevant logs that I can find are in the access log.
@mooperd please check the DNS. This is my test and it works as expected:
Hi @aledbf, upon switching the image I saw that the proxy_pass directive in the nginx conf was replaced with:
Although this remained in the config.
The previously available proxy at … With some poking I got the … I'm unsure what I should be looking for in DNS other than checking if ta…
@mooperd are you connected to the Kubernetes Slack channel?
@aledbf Is this because services with …
@mooperd Try setting the …
@chrismoos That annotation did not seem to change anything. In the nginx conf I noticed the following, which looks a bit suspicious:
For the record, the ingress now looks like:
The relevant service:
Here is the full nginx.conf:
I tested this and unfortunately it won't work out of the box with …
OK, first please update the image to … This is the service:
and this is the ingress:
The most important part here is the … This is the output:
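The exact manifests and output from this comment are not preserved above. Purely as an illustration of the general shape being discussed (not necessarily the configuration aledbf posted; names and hosts are placeholders), a matching Ingress for an ExternalName Service could look like:

```yaml
# Illustrative only: an Ingress whose backend is the ExternalName Service
# sketched earlier. Because the Service has no endpoints, nginx proxies to the
# external hostname and relies on its resolver to keep the address current.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: external-proxy
  namespace: default
spec:
  rules:
  - host: proxy.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: my-external-svc
          servicePort: 80
```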
Great! It's working!
@aledbf Does your solution just put the external name in the …
More info here: https://www.nginx.com/blog/dns-service-discovery-nginx-plus/
@chrismoos please check the generated nginx.conf. The resolver caches the DNS responses for only 30 seconds.
I am having this issue with chart version 3.34.0. I'm passing traffic to an … I've tried using these annotations:
Lots of these errors in the logs:
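The annotations tried in the comment above are not preserved. For an HTTPS ExternalName backend on recent chart versions, the annotations usually involved are along these lines; hosts and names below are placeholders, and this is a sketch rather than the commenter's actual configuration:

```yaml
# Illustrative only: Ingress for an HTTPS ExternalName backend.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: external-https-proxy
  annotations:
    # Proxy to the upstream over HTTPS instead of plain HTTP.
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    # Send the external hostname as the Host header so the upstream accepts the request.
    nginx.ingress.kubernetes.io/upstream-vhost: "search.example.com"
spec:
  rules:
  - host: proxy.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-https-external-svc
            port:
              number: 443
```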
I think I also have the same problem as #1332: I am seeing a 503 error in the nginx-controller log.
Here is the service, which points to an AWS Elasticsearch available over HTTPS:
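The manifest itself is trimmed here. A minimal sketch of an ExternalName Service for an external HTTPS endpoint of this kind, with a placeholder hostname, might be:

```yaml
# Illustrative only: ExternalName Service fronting an external HTTPS endpoint.
# The hostname is a placeholder, not the reporter's actual Elasticsearch domain.
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch-external
spec:
  type: ExternalName
  externalName: vpc-example.us-east-1.es.amazonaws.com
  ports:
  - port: 443
```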
Here is my ingress config dumped from kubectl; I trimmed out some of it for readability:
The nginx config from the ingress pod:
From the nginx pod, the service does indeed seem to be available.
Here is an access log entry: