
Why is the nginx controller trying to connect to the application pod, not the service? #8079

Closed
tholvoleak opened this issue Dec 28, 2021 · 11 comments
Labels
needs-kind Indicates a PR lacks a `kind/foo` label and requires one. needs-priority needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one.

Comments

@tholvoleak

tholvoleak commented Dec 28, 2021

Hi, I have set up an RKE Kubernetes cluster and tried to deploy an application and create an Ingress to expose it for external access, but I got a "502 Bad Gateway" error.

cat nginx-app.yml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-app
spec:
  selector:
    matchLabels:
      run: nginx-app
  replicas: 3 
  template:
    metadata:
      labels:
        run: nginx-app
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80

cat nginx-service.yml

apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 80
  selector:
    run: nginx-app

cat nginx-ingress.yml

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
spec:
  rules:
  - http:
      paths:
      - path: /demo
        pathType: Prefix
        backend:
          service: 
             name: nginx-service
             port:
               number: 8080

kubectl get pod -o wide

NAME                                   READY   STATUS      RESTARTS   AGE    IP           NODE             NOMINATED NODE   READINESS GATES
nginx-app-744fc45d8f-drnml             1/1     Running     0          14m    10.42.0.16   10.*.*.207   <none>           <none>
nginx-app-744fc45d8f-lc9zn             1/1     Running     0          14m    10.42.0.15   10.*.*.207   <none>           <none>
nginx-app-744fc45d8f-njjkr             1/1     Running     0          14m    10.42.0.14   10.*.*.207   <none>           <none>

kubectl get svc -o wide

NAME                                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE    SELECTOR
nginx-service                        ClusterIP   10.43.89.106   <none>        8080/TCP   8m2s   run=nginx-app

kubectl get ingress -o wide

NAME            CLASS    HOSTS   ADDRESS          PORTS   AGE
nginx-ingress   <none>   *       10.*.*.207   80      22m

curl http://10.*.*.207/demo

<html>
<head><title>502 Bad Gateway</title></head>
<body>
<center><h1>502 Bad Gateway</h1></center>
<hr><center>nginx</center>
</body>
</html>

Error logs of pod nginx-ingress controller

2021/12/28 06:17:41 [error] 3256#3256: *411627 connect() failed (113: Host is unreachable) while connecting to upstream, client: 10.*.*.207, server: _, request: "GET /demo HTTP/1.1", upstream: "http://10.42.0.16:80/demo", host: "10.*.*.207"
2021/12/28 06:17:42 [error] 3256#3256: *411627 connect() failed (113: Host is unreachable) while connecting to upstream, client: 10.*.*.207, server: _, request: "GET /demo HTTP/1.1", upstream: "http://10.42.0.14:80/demo", host: "10.*.*.207"
2021/12/28 06:17:43 [error] 3256#3256: *411627 connect() failed (113: Host is unreachable) while connecting to upstream, client: 10.*.*.207, server: _, request: "GET /demo HTTP/1.1", upstream: "http://10.42.0.15:80/demo", host: "10.*.*.207"
10.*.*.207 - - [28/Dec/2021:06:17:43 +0000] "GET /demo HTTP/1.1" 502 150 "-" "curl/7.61.1" 82 3.068 [ingress-nginx-nginx-service-8080] [] 10.42.0.16:80, 10.42.0.14:80, 10.42.0.15:80 0, 0, 0 1.020, 1.024, 1.024 502, 502, 502 93cf678d8d8710e02845a378cd59ed20

I wonder why the nginx controller is trying to connect to the application pod nginx-app (upstream: "http://10.42.0.16:80/demo") and not the service nginx-service?

@k8s-ci-robot
Contributor

@tholvoleak: This issue is currently awaiting triage.

If Ingress contributors determines this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot added needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. needs-kind Indicates a PR lacks a `kind/foo` label and requires one. needs-priority labels Dec 28, 2021
@longwuyuan
Contributor

longwuyuan commented Dec 28, 2021 via email

@longwuyuan
Contributor

/close

@tholvoleak
Author

> Change the service web to --type clusterIP. Thanks, Long

I have changed it, but it still doesn't work. I updated the info above.

@tholvoleak tholvoleak reopened this Dec 28, 2021
@longwuyuan
Contributor

Hi, this is basic functionality of the ingress-nginx-controller, so it's not a bug; it seems like you are asking for support. Please discuss in the ingress-nginx-users channel at kubernetes.slack.com. You can register, if required, at slack.k8s.io.

Later, if you find a bug or a problem, you can reopen this issue, so I will close it for now. Thanks.

/close

@k8s-ci-robot
Contributor

@longwuyuan: Closing this issue.

In response to this:

> Hi, this is a basic functionality of the ingress-nginx-controller. So its not a bug and it seems like you are asking for support. Please discuss in the ingress-nginx-users channel at kubernetes.slack.com. You can register if required at slack.k8s.io.
>
> Later if you find a bug or a problem, you can reopen this issue. So i will close for now. Thanks.
>
> /close


@tholvoleak
Author

tholvoleak commented Dec 28, 2021

> Hi, this is a basic functionality of the ingress-nginx-controller. So its not a bug and it seems like you are asking for support. Please discuss in the ingress-nginx-users channel at kubernetes.slack.com. You can register if required at slack.k8s.io.
>
> Later if you find a bug or a problem, you can reopen this issue. So i will close for now. Thanks.
>
> /close

Hi brother,

Since this is basic functionality of the ingress-nginx-controller, how do I allow the ingress-nginx-controller to reach the application pod? Right now it's unreachable.

I thought the flow was ingress-nginx-controller -> service -> pod.

@longwuyuan
Contributor

Please discuss in the ingress-nginx-users channel at kubernetes.slack.com. You can register if required at slack.k8s.io .

@tholvoleak tholvoleak changed the title 45875 connect() failed (113: Host is unreachable) while connecting to upstream Why nginx controller is trying to connect the application pod not service ? Dec 28, 2021
@uniuuu

uniuuu commented Apr 10, 2023

> [quotes tholvoleak's original issue report in full; see above]

I've got this issue in a bare-metal (Fedora) microk8s setup.
The root cause was that firewalld was running. Once I disabled it and ensured that legacy iptables was in use (not nftables), the Ingress was able to reach the pod by its IP and returned the requested page.
firewalld cannot create some of the rules required by Kubernetes, and this also shows up in the firewalld logs.

To troubleshoot your issue, you can post the output of the following commands and upload the log here:

  1. `kubectl get all -A && echo && kubectl get nodes && echo && kubectl cluster-info`
  2. Delete your deployment: `kubectl delete -f nginx-app.yml`, `kubectl delete -f nginx-service.yml`, `kubectl delete -f nginx-ingress.yml`
  3. Run `journalctl -f > journalctl.log`
  4. Recreate your deployment: `kubectl apply -f nginx-app.yml`, `kubectl apply -f nginx-service.yml`, `kubectl apply -f nginx-ingress.yml`
  5. Press Ctrl+C to stop the `journalctl` command and upload the log.
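The firewalld check described above can be sketched as follows (a sketch assuming a systemd host; in production, adding the rules Kubernetes needs to firewalld may be preferable to disabling it outright):

```shell
# Check whether firewalld is active; it can silently drop pod-network traffic
systemctl is-active firewalld

# Check which iptables backend is in use: "nf_tables" in the version
# string means nftables, "legacy" means the legacy backend
iptables --version

# If firewalld turns out to be the culprit, stop and disable it
sudo systemctl disable --now firewalld
```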

@cleanet

cleanet commented May 2, 2024

The logs:

2021/12/28 06:17:41 [error] 3256#3256: *411627 connect() failed (113: Host is unreachable) while connecting to upstream, client: 10.*.*.207, server: _, request: "GET /demo HTTP/1.1", upstream: "http://10.42.0.16:80/demo", host: "10.*.*.207"
2021/12/28 06:17:42 [error] 3256#3256: *411627 connect() failed (113: Host is unreachable) while connecting to upstream, client: 10.*.*.207, server: _, request: "GET /demo HTTP/1.1", upstream: "http://10.42.0.14:80/demo", host: "10.*.*.207"
2021/12/28 06:17:43 [error] 3256#3256: *411627 connect() failed (113: Host is unreachable) while connecting to upstream, client: 10.*.*.207, server: _, request: "GET /demo HTTP/1.1", upstream: "http://10.42.0.15:80/demo", host: "10.*.*.207"
10.*.*.207 - - [28/Dec/2021:06:17:43 +0000] "GET /demo HTTP/1.1" 502 150 "-" "curl/7.61.1" 82 3.068 [ingress-nginx-nginx-service-8080] [] 10.42.0.16:80, 10.42.0.14:80, 10.42.0.15:80 0, 0, 0 1.020, 1.024, 1.024 502, 502, 502 93cf678d8d8710e02845a378cd59ed20

mean that nginx is reaching the application directly via the endpoint 10.42.0.15:80 (and the other pod IPs).

These sockets are the endpoints of your Service. You can see them with:

kubectl get endpoints nginx-service

In this case, they are the endpoints of the service nginx-service. But given the 502 Bad Gateway and the logs, the ingress controller is trying to access the service via its endpoints (trying each endpoint in turn), and the ingress controller's pod cannot reach them.

To test this, exec into the ingress controller pod and check the connection:

$ kubectl exec -it pod/ingress-nginx-controller-57ff8464d9-pvjpc -- bash
ingress-nginx-controller-57ff8464d9-pvjpc:/etc/nginx$ nc -zv 10.42.0.16 80
nc: 10.85.0.12 (10.85.0.12:8080): Host is unreachable
ingress-nginx-controller-57ff8464d9-pvjpc:/etc/nginx$ 

As we can see, the pod IP cannot be reached.

Then look up the ClusterIP of the nginx-service Service and try accessing that instead:

$ kubectl describe service
$ kubectl exec -it pod/ingress-nginx-controller-57ff8464d9-pvjpc -- bash
ingress-nginx-controller-57ff8464d9-pvjpc:/etc/nginx$ nc -zv 10.43.89.106 8080
10.43.89.106 (10.43.89.106:8080) open

And as we can see, the pod does have access via the Service's ClusterIP and port.

So one solution would be the following: tell the Ingress to use the Service's ClusterIP:port instead of the ingress controller's endpoint list.

To do this, edit the Ingress resource and add the following annotation:

nginx.ingress.kubernetes.io/service-upstream: "true"

FYI

Service Upstream

By default the Ingress-Nginx Controller uses a list of all endpoints (Pod IP/port) in the NGINX upstream configuration.

The nginx.ingress.kubernetes.io/service-upstream annotation disables that behavior and instead uses a single upstream in NGINX, the service's Cluster IP and port.

This can be desirable for things like zero-downtime deployments. See issue #257.

Known Issues

If the service-upstream annotation is specified the following things should be taken into consideration:

  • Sticky Sessions will not work as only round-robin load balancing is supported.
  • The proxy_next_upstream directive will not have any effect meaning on error the request will not be dispatched to another upstream.
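Applied to the manifests from the original report, the annotated Ingress would look like the following (a sketch; the annotations block is the only change):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
  annotations:
    # Proxy to the Service's ClusterIP:port instead of the pod endpoints
    nginx.ingress.kubernetes.io/service-upstream: "true"
spec:
  rules:
  - http:
      paths:
      - path: /demo
        pathType: Prefix
        backend:
          service:
            name: nginx-service
            port:
              number: 8080
```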

@rome-legacy

> [quotes tholvoleak's original issue report and uniuuu's troubleshooting comment in full; see above]

Thank you.
I had the same issue on Debian and could reproduce the analysis steps...
Just disabling firewalld.service made the 502 go away instantly.

6 participants