
Whitelist not working #2096

Closed · mvineza opened this issue Feb 15, 2018 · 26 comments

Labels: lifecycle/rotten (Denotes an issue or PR that has aged beyond stale and will be auto-closed.)


mvineza commented Feb 15, 2018

Is this a request for help? (If yes, you should use our troubleshooting guide and community support channels, see https://kubernetes.io/docs/tasks/debug-application-cluster/troubleshooting/.): yes

What keywords did you search in NGINX Ingress controller issues before filing this one? (If you have found any duplicates, you should instead reply there.): whitelist


Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG REPORT

NGINX Ingress controller version:
quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.10.2

Kubernetes version (use kubectl version):

Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.1", GitCommit:"3a1c9449a956b6026f075fa3134ff92f7d55f812", GitTreeState:"clean", BuildDate:"2018-01-04T11:52:23Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"8+", GitVersion:"v1.8.3-rancher3", GitCommit:"772c4c54e1f4ae7fc6f63a8e1ecd9fe616268e16", GitTreeState:"clean", BuildDate:"2017-11-27T19:51:43Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}

Environment:

  • Cloud provider or hardware configuration:

We are running Rancher v1.6.12 locally with 3 virtual machine nodes.

  • OS (e.g. from /etc/os-release):

Here is the configuration of the nodes:

NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"

CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"
  • Kernel (e.g. uname -a):
    3.10.0-693.5.2.el7.x86_64 - Install tools:
  • Others:

What happened:
I added a whitelist on our Ingress resource using the following YAML file:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: testing
  namespace: testing
  annotations:
    nginx.ingress.kubernetes.io/whitelist-source-range: "1.1.1.1/8"
spec:
  rules:
  - host: testing.com
    http:
      paths:
      - path: /
        backend:
          serviceName: nginx
          servicePort: 80

I tried curl on the page and I was still able to access it.

some.ip.here - - [16/Feb/2018:01:37:45 +0000] "GET / HTTP/1.1" 200 58 "-" "curl/7.53.1" "my.ip.is.here"
What you expected to happen:
I should not be able to access it since I'm on a different IP.

How to reproduce it (as minimally and precisely as possible):
curl http://test.com

Anything else we need to know:


aledbf commented Feb 15, 2018

@mvineza please use the issue template to provide context.
Check the logs to make sure you see the real source IP address of the clients.


azweb76 commented Feb 16, 2018

@aledbf Looks like if --ssl-passthrough is enabled, the nginx controller uses proxy protocol for HTTPS, and use-proxy-protocol must be enabled for nginx to unwrap the IP for use in the whitelist. When proxy protocol is enabled, it is enabled for both 80 and 443, so with --ssl-passthrough enabled the whitelist does not work unless use-proxy-protocol: "true" is set. The problem for us is that our load balancer does not support proxy protocol, so port 80 requests fail with curl: (52) Empty reply from server. @mvineza, can you confirm whether --ssl-passthrough is enabled?

Access log: 127.0.0.1 - [127.0.0.1] - - [16/Feb/2018:00:16:32 +0000] "GET / HTTP/1.1" 403 169 "-" "curl/7.58.0" 91 0.000

FYI... if --ssl-passthrough is enabled, the nginx controller handles sending HTTPS traffic to nginx over 442, whereas HTTP traffic is handled by nginx directly.
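
For reference, use-proxy-protocol is a key in the controller's ConfigMap rather than an Ingress annotation. A minimal sketch, assuming the nginx-configuration ConfigMap name and ingress-nginx namespace from the upstream deploy manifests (adjust both to match the --configmap flag on your controller):

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration   # assumed; must match the controller's --configmap flag
  namespace: ingress-nginx    # assumed namespace
data:
  use-proxy-protocol: "true"  # NGINX reads the real client IP from the PROXY protocol header

As noted above, this only helps if the load balancer in front of the controller also speaks proxy protocol; otherwise plain requests will fail.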


aledbf commented Feb 16, 2018

FYI... if --ssl-passthrough is enabled, the nginx controller handles sending HTTPS traffic to the pods, whereas HTTP traffic is handled by nginx.

Correct, we just pipe the TCP connection to the backend.


azweb76 commented Feb 16, 2018

Do you think we can expose an option to not enable proxy protocol for HTTP? @aledbf


azweb76 commented Feb 16, 2018

Basically, our load balancer does not use proxy protocol, and it's only --ssl-passthrough that requires proxy protocol from nginx. HTTPS is fine; HTTP fails.


azweb76 commented Feb 16, 2018

Or we need to have the controller handle port 80, then forward to maybe 81 and wrap it with proxy protocol if --ssl-passthrough is enabled.


mvineza commented Feb 16, 2018

@aledbf Done updating the issue using the template. I confirm that the IP shown in the nginx logs is the IP I am connecting from, which is my laptop.


mvineza commented Feb 16, 2018

@azweb76 It is not enabled. Here are the args from the deploy/nginx-ingress-controller deployment:

  - args:
      - /nginx-ingress-controller
      - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
      - --configmap=$(POD_NAMESPACE)/nginx-configuration
      - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
      - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
      - --annotations-prefix=nginx.ingress.kubernetes.io


fripoli commented Mar 14, 2018

This is also a problem for me.

I created the nginx ingress using helm and have a simple ingress like this:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: grafana-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    ingress.kubernetes.io/whitelist-source-range: "xxx.xx.xxx.x/xx"
spec:
  rules:
  - host: grafana.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: grafana
          servicePort: 3000

The ingress itself works, all good, but no whitelisting is kicking in. I was expecting the nginx-controller pod to be reloaded with a deny config, but there's nothing there. How does that work?


ghost commented Mar 14, 2018

@fripoli Unless you are starting the controller with the flag --annotations-prefix=ingress.kubernetes.io, please change the whitelist annotation to: nginx.ingress.kubernetes.io/whitelist-source-range
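
In other words, with the default controller flags the annotation prefix in the Ingress above needs to change to nginx.ingress.kubernetes.io. A corrected metadata block, everything else unchanged:

metadata:
  name: grafana-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/whitelist-source-range: "xxx.xx.xxx.x/xx"  # default prefix unless --annotations-prefix says otherwise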


fripoli commented Mar 14, 2018

thanks, that was the issue :)


grebois commented May 28, 2018

Not working in 0.15.0

@YvonneArnoldus

I'm using quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.14.0

I'm always getting a 403 when I add nginx.ingress.kubernetes.io/whitelist-source-range: "x.x.x.x", where x.x.x.x is the IP I get from https://whatismyipaddress.com.


antoineco commented Jun 12, 2018

/assign @antoineco

@idirouhab

If I use a ConfigMap like this one, it works:

apiVersion: v1
data:
  enable-vts-status: "false"
  whitelist-source-range: "1.2.3.4"
kind: ConfigMap
metadata:
  labels:
    app: nginx-ingress
    chart: nginx-ingress-0.13.2
    component: controller
    heritage: Tiller
    release: nginx-ingress
  name: nginx-ingress-controller
  namespace: default


idirouhab commented Jun 12, 2018

I can confirm this; in my case it was my mistake.
I didn't add the nginx prefix.

@antoineco

@grebois could you confirm this is happening with the latest version when using the correct annotation prefix? (nginx.ingress.kubernetes.io/)
Also please provide more information about your environment, and make sure the NGINX access logs do display the expected external IP.

@YvonneArnoldus that's most likely because NGINX interprets the incoming traffic as coming from a load balancer IP instead of your own IP, same comment as above.
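
When the controller sits behind a proxy or load balancer that forwards the client address in the X-Forwarded-For header rather than via proxy protocol, the ConfigMap keys use-forwarded-headers and proxy-real-ip-cidr can restore the real client IP for the whitelist. A sketch with placeholder values (only trust X-Forwarded-For if the hop in front is trusted):

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration       # assumed; match the controller's --configmap flag
  namespace: ingress-nginx        # assumed namespace
data:
  use-forwarded-headers: "true"   # trust X-Forwarded-For set by the load balancer
  proxy-real-ip-cidr: "10.0.0.0/8"  # placeholder: CIDR the load balancer connects from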

@antoineco

related: #2567

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Sep 11, 2018

grebois commented Sep 14, 2018

@antoineco I wasn't able to test whether it works anymore; now I only get the IP of the load balancer, so that's a blocker, but I will follow up on this as soon as possible. Currently using 0.19.0.


borqosky commented Oct 9, 2018

@grebois: What IP do you see in the ingress logs (public or private)?
If private, it is probably the IP of the LB node. If you installed it with Helm, try upgrading the ingress with helm upgrade <release-name> stable/nginx-ingress --set controller.service.externalTrafficPolicy=Local
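
For installs not managed by Helm, the equivalent is setting externalTrafficPolicy: Local directly on the controller Service, which keeps the client source IP at the cost of only routing traffic to nodes that run a controller pod. A sketch with assumed names and labels:

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx           # assumed controller Service name
  namespace: ingress-nginx      # assumed namespace
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local  # preserve the client source IP instead of SNATing it
  selector:
    app: ingress-nginx          # assumed controller pod label
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443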


Foxsa commented Oct 9, 2018

Having the same issue.

In the logs there is a private address (behind NAT):
192.168.0.35 - [192.168.0.35] - - [09/Oct/2018:10:04:32 +0000] "GET / HTTP/1.1" 403 177 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Ubuntu Chromium/69.0.3497.81 Chrome/69.0.3497.81 Safari/537.36" 729 0.000 [monitoring-prometheus-k8s-web] - - - - 9a59fbcb47e8f4092e709fa60333503d

@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Nov 11, 2018
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.


MrWnn commented Nov 15, 2019

Thank God, this helped a lot.
