server_names_hash_bucket_size is too low when longest hostname has 46 characters #4992

Closed
mmueller90 opened this issue Jan 31, 2020 · 2 comments · Fixed by #4993
Labels
kind/bug (Categorizes issue or PR as related to a bug.)

Comments

@mmueller90

NGINX Ingress controller version: 0.24.1
complete image url: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.24.1

Kubernetes version (use kubectl version):

Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.2", GitCommit:"59603c6e503c87169aea6106f57b9f242f64df89", GitTreeState:"clean", BuildDate:"2020-01-23T14:21:54Z", GoVersion:"go1.13.6", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.6", GitCommit:"7015f71e75f670eb9e7ebd4b5749639d42e20079", GitTreeState:"clean", BuildDate:"2019-11-13T11:11:50Z", GoVersion:"go1.12.12", Compiler:"gc", Platform:"linux/amd64"}

What happened:
When creating an ingress whose hostname is exactly 46 characters long, the ingress controller logs an error message:

2020/01/31 10:47:48 [emerg] 4436#4436: could not build server_names_hash, you should increase server_names_hash_bucket_size: 64
nginx: [emerg] could not build server_names_hash, you should increase server_names_hash_bucket_size: 64
nginx: configuration file /tmp/nginx-cfg525869296 test failed

This only happens when the longest hostname across all ingress rules in the cluster has exactly 46 characters.

What you expected to happen:
The ingress is deployed successfully and the nginx-ingress-controller does not log an error message.

I assume that the server_names_hash_bucket_size calculation algorithm computes a wrong size for edge-case server name lengths. One example of such an edge case is a length of 46; see the tests in 322be61.

How to reproduce it:
Precondition: a running Kubernetes cluster and an installed ingress-controller

Create a default http backend:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: default-http-backend
  name: default-http-backend
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: default-http-backend
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: default-http-backend
    spec:
      containers:
      - image: k8s.gcr.io/defaultbackend:1.4
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 30
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 5
        name: default-http-backend
        ports:
        - containerPort: 8080
          protocol: TCP
        resources:
          limits:
            cpu: 10m
            memory: 20Mi
          requests:
            cpu: 10m
            memory: 20Mi
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 60
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: default-http-backend
  name: default-http-backend
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: default-http-backend
  sessionAffinity: None
  type: ClusterIP

Create a test-ingress

The test-ingress has a host with exactly 46 characters:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/auth-realm: auth required
    nginx.ingress.kubernetes.io/from-to-www-redirect: "true"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "300"
    nginx.ingress.kubernetes.io/whitelist-source-range: 0.0.0.0/0
  generation: 1
  labels:
    app: test-app
  name: test-ingress
spec:
  rules:
  - host: 01234567891234567890123456789012345678.test.de
    http:
      paths:
      - backend:
          serviceName: default-http-backend
          servicePort: 80
        path: /

Anything else we need to know:
A workaround for this is to configure the ingress controller via ConfigMap, as described at nginxinc/kubernetes-ingress#34 (comment)

/kind bug

mmueller90 added the kind/bug label on Jan 31, 2020
@aledbf
Member

aledbf commented Jan 31, 2020

A workaround for that is to configure the ingress controller via configMap

You can already do that using https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#server-name-hash-bucket-size
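
For reference, a minimal sketch of that override. It assumes the ConfigMap name and namespace created by the 0.28.0 mandatory.yaml manifest (nginx-configuration in ingress-nginx); adjust both to match your installation, and pick a value large enough for your longest server name:

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
data:
  # raises nginx's server_names_hash_bucket_size above the computed default
  server-name-hash-bucket-size: "128"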

@aledbf
Member

aledbf commented Jan 31, 2020

I can reproduce the issue by running the following steps:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.28.0/deploy/static/mandatory.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.28.0/deploy/static/provider/baremetal/service-nodeport.yaml

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.28.0/docs/examples/http-svc.yaml

echo '
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/auth-realm: auth required
    nginx.ingress.kubernetes.io/from-to-www-redirect: "true"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "300"
    nginx.ingress.kubernetes.io/whitelist-source-range: 0.0.0.0/0
  generation: 1
  labels:
    app: test-app
  name: test-ingress
spec:
  rules:
  - host: 01234567891234567890123456789012345678.test.de
    http:
      paths:
      - backend:
          serviceName: http-svc
          servicePort: 80
        path: /
' | kubectl apply -f -

POD_NAME=$(kubectl get pods -n ingress-nginx -l app.kubernetes.io/name=ingress-nginx -o name)

kubectl exec -n ingress-nginx $POD_NAME -- cat /tmp/nginx-cfg844427337 | grep server_names_hash_bucket_size
	server_names_hash_bucket_size   64;

The issue here is nginx.ingress.kubernetes.io/from-to-www-redirect: "true": the additional hostname www.01234567891234567890123456789012345678.test.de generated by the redirect is not taken into account in the call to nginxHashBucketSize.
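
To illustrate why the 46-character host sits right at the boundary, here is a rough sketch of the sizing arithmetic, not the controller's actual code: nginx stores each server name as two pointer-sized words plus the name padded to word alignment, and the bucket size is rounded up to the next power of two. The function and constant names below are illustrative assumptions.

package main

import "fmt"

// approxBucketSize estimates the bucket size needed for the longest server name
// on a 64-bit platform: two 8-byte words of bookkeeping plus the name (padded
// to 8-byte alignment), rounded up to a power of two. A rough sketch only.
func approxBucketSize(longestName int) int {
	const wordSize = 8
	aligned := (longestName + 2 + wordSize - 1) &^ (wordSize - 1)
	return nextPowerOfTwo(wordSize + wordSize + aligned)
}

func nextPowerOfTwo(v int) int {
	p := 1
	for p < v {
		p <<= 1
	}
	return p
}

func main() {
	// The 46-character host from the ingress rule fits exactly into 64 bytes...
	fmt.Println(approxBucketSize(46)) // 64
	// ...but the 50-character www-prefixed host created by from-to-www-redirect
	// does not, so the bucket size of 64 computed from the original host is too small.
	fmt.Println(approxBucketSize(50)) // 128
}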
