Does it support a second jump? #657

Closed
linydquantil opened this issue Mar 1, 2019 · 9 comments

Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@linydquantil commented Mar 1, 2019

Our back-end service is a Consul cluster, and I deployed an nginx service in front of it.
When I visit it, the request is redirected to /ui as expected, but the page that comes back is "default server - 404".
When I instead expose the web service with a LoadBalancer, the back-end Consul UI is reachable normally.
This is how I deployed Consul: https://github.com/kelseyhightower/consul-on-kubernetes
and here is my nginx service:

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-consul-conf
data:
  nginx.conf: |
    user nginx;
    worker_processes  2;
    error_log  /var/log/nginx/error.log;
    events {
      worker_connections  10240;
    }
    http {
      server_tokens off;
      log_format  main
              'remote_addr:$remote_addr\t'
              'time_local:$time_local\t'
              'method:$request_method\t'
              'uri:$request_uri\t'
              'host:$host\t'
              'status:$status\t'
              'bytes_sent:$body_bytes_sent\t'
              'referer:$http_referer\t'
              'useragent:$http_user_agent\t'
              'forwardedfor:$http_x_forwarded_for\t'
              'request_time:$request_time';
      access_log    /var/log/nginx/access.log main;
      server {
          listen       18500;
          server_name  _;
          location / {
              root   html;
              index  index.html index.htm;
          }
      }
      include /etc/nginx/virtualhost/virtualhost.conf;
    }
  htpasswd: |
    consul_access:$apr1$2JLE03xxxxxxU6.
  virtualhost.conf: |
    upstream consul {
      server consul-0.consul.xxx.svc.cluster.local:8500;
      server consul-1.consul.xxx.svc.cluster.local:8500;
      server consul-2.consul.xxx.svc.cluster.local:8500;
    }
    server {
      listen 18500 default_server;
      server_name _;
      access_log /var/log/nginx/consul.access.log main;
      error_log /var/log/nginx/consul.error.log;
      location / {
        proxy_http_version 1.1;
        proxy_pass http://consul;
        proxy_read_timeout 300;
        proxy_connect_timeout 300;
        proxy_redirect off;
        auth_basic "Restricted";
        auth_basic_user_file /etc/nginx/htpasswd;
      }
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-consul-ui
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-consul-ui
  template:
    metadata:
      labels:
        app: nginx-consul-ui
    spec:
      containers:
      - name: nginx
        image: nginx:1.14
        imagePullPolicy: Always
        ports:
        - containerPort: 18500
        volumeMounts:
        - mountPath: /etc/nginx # mount the nginx-conf volume at /etc/nginx
          readOnly: true
          name: nginx-conf
        - mountPath: /var/log/nginx
          name: log
        resources:
          limits:
            cpu: "512m"
            memory: 512Mi
          requests:
            cpu: "256m"
            memory: 256Mi
      volumes:
      - name: nginx-conf
        configMap:
          name: nginx-consul-conf # place ConfigMap `nginx-conf` on /etc/nginx
          items:
            - key: nginx.conf
              path: nginx.conf
            - key: htpasswd
              path: htpasswd
            - key: virtualhost.conf
              path: virtualhost/virtualhost.conf # place under virtualhost/ so the include in nginx.conf finds it
      - name: log
        emptyDir: {}
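
For reference, the location / block in the virtualhost.conf above proxies to the Consul upstream without forwarding the original Host header or any X-Forwarded-* headers. The following is not part of the reported setup; it is only a sketch of what a header-forwarding variant of that block could look like:

    location / {
      proxy_http_version 1.1;
      proxy_set_header Host $host;                                   # pass through the Host the client (or load balancer) sent
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;   # preserve the original client address
      proxy_set_header X-Forwarded-Proto $scheme;                    # tell the upstream which scheme the client used
      proxy_pass http://consul;
      proxy_read_timeout 300;
      proxy_connect_timeout 300;
      proxy_redirect off;
      auth_basic "Restricted";
      auth_basic_user_file /etc/nginx/htpasswd;
    }

Whether the missing headers are actually related to the 404 seen through the Ingress is not confirmed anywhere in this thread.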
@rramkumar1 (Contributor) commented Mar 4, 2019

@linydquantil I think I know what you are trying to say, but just so I am clear, can you elaborate on what the ask is?

@linydquantil (Author)

@rramkumar1 My problem is that when I use a LoadBalancer I can access the site normally, but when I use the Ingress I can't access it properly.
My understanding is that when the request passes through the Ingress it goes through a redirect (the "jump"), and then the request goes through the nginx reverse proxy, so it returns "default server - 404".
Is this expected behavior in this scenario, or is it a bug? If I want to keep using the Ingress, what adjustments do I need to make?
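
The Service and Ingress manifests are not shown in this issue, so the exact Ingress setup is unknown. As a rough sketch only, a GKE Ingress in front of the nginx Deployment above would usually look something like the following (the names, ports, and the extensions/v1beta1 API version in use around early 2019 are assumptions, not taken from the issue):

apiVersion: v1
kind: Service
metadata:
  name: nginx-consul-ui          # hypothetical Service name
spec:
  type: NodePort                 # GKE Ingress backends must be NodePort (or NEG-annotated) Services
  selector:
    app: nginx-consul-ui
  ports:
  - port: 18500
    targetPort: 18500
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: consul-ui                # hypothetical Ingress name
spec:
  backend:
    serviceName: nginx-consul-ui
    servicePort: 18500

For context, the GCE load balancer that such an Ingress creates health-checks the backend Service on / and expects a 200 by default; whether that interacts with the auth_basic configuration above was not confirmed in this thread.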

@bowei (Member) commented Mar 5, 2019

Are you using the GKE ingress or the nginx ingress controller?
Can you see requests reaching your nginx server?
If you are using GKE, what are you seeing in the Stackdriver logging for the LB?

@linydquantil (Author) commented Mar 5, 2019

I use the Ingress on GKE, @bowei. OK, I'll collect the log information later.

@linydquantil (Author) commented Mar 6, 2019

pantsel/konga#348 Has anyone else had a problem like that? I hit the same issue there as well: when I use a LoadBalancer, konga works fine, but when I go through the GKE ingress, it goes wrong.

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

k8s-ci-robot added the lifecycle/stale label (Denotes an issue or PR has remained open with no activity and has become stale.) on Jun 4, 2019
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

k8s-ci-robot added the lifecycle/rotten label (Denotes an issue or PR that has aged beyond stale and will be auto-closed.) and removed the lifecycle/stale label on Jul 4, 2019
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot (Contributor)

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
