helm install failed #7615

Closed
ytianxia6 opened this issue Sep 9, 2021 · 6 comments
Labels: kind/bug, needs-priority, needs-triage

ytianxia6 commented Sep 9, 2021

NGINX Ingress controller version: v1.0.0

Kubernetes version (use kubectl version): 1.22.0

Environment:

NAME="Ubuntu"
VERSION="20.04.3 LTS (Focal Fossa)"
PRETTY_NAME="Ubuntu 20.04.3 LTS"
VERSION_ID="20.04"

  • Kernel (e.g. uname -a):
    Linux k8s-master 5.4.0-81-generic #91-Ubuntu SMP Thu Jul 15 19:09:17 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux

  • Install tools:

  • kubeadm

  • Basic cluster related info:

    • kubectl version: 1.22
    • one cluster and one node
  • How was the ingress-nginx-controller installed (helm ls shows the release in failed state):
    ingress-nginx ingress-nginx 1 2021-09-09 11:31:22.590858719 +0800 CST failed ingress-nginx-4.0.1 1.0.0

  • Current State of the controller:

NAMESPACE              NAME                                             READY   STATUS    RESTARTS       AGE     IP               NODE         NOMINATED NODE   READINESS GATES
default                pod/my-nginx-89c886496-6qbkm                     1/1     Running   0              16h     10.111.156.109   k8s-node1    <none>           <none>
ingress-nginx          pod/ingress-nginx-controller-7944657cb4-vpvnx    1/1     Running   0              15m     192.168.79.22    k8s-node1    <none>           <none>
kube-system            pod/calico-kube-controllers-58497c65d5-5qgzn     1/1     Running   7 (20m ago)    19d     10.108.82.200    k8s-master   <none>           <none>
kube-system            pod/calico-node-56h4t                            1/1     Running   5 (19m ago)    19d     192.168.79.21    k8s-master   <none>           <none>
kube-system            pod/calico-node-nrtqr                            1/1     Running   1 (19h ago)    19d     192.168.79.22    k8s-node1    <none>           <none>
kube-system            pod/coredns-8449c98c7d-7vc75                     1/1     Running   2 (19d ago)    20d     10.108.82.199    k8s-master   <none>           <none>
kube-system            pod/coredns-8449c98c7d-jz4vn                     1/1     Running   2 (19d ago)    20d     10.108.82.198    k8s-master   <none>           <none>
kube-system            pod/etcd-k8s-master                              1/1     Running   2 (19d ago)    20d     192.168.79.21    k8s-master   <none>           <none>
kube-system            pod/kube-apiserver-k8s-master                    1/1     Running   4 (5d6h ago)   20d     192.168.79.21    k8s-master   <none>           <none>
kube-system            pod/kube-controller-manager-k8s-master           1/1     Running   15 (19m ago)   20d     192.168.79.21    k8s-master   <none>           <none>
kube-system            pod/kube-proxy-9djr9                             1/1     Running   2 (19d ago)    20d     192.168.79.21    k8s-master   <none>           <none>
kube-system            pod/kube-proxy-dgmhs                             1/1     Running   1 (19h ago)    20d     192.168.79.22    k8s-node1    <none>           <none>
kube-system            pod/kube-scheduler-k8s-master                    1/1     Running   14 (19m ago)   20d     192.168.79.21    k8s-master   <none>           <none>
kubernetes-dashboard   pod/dashboard-metrics-scraper-856586f554-pjhv5   1/1     Running   0              6d18h   10.108.82.207    k8s-master   <none>           <none>
kubernetes-dashboard   pod/kubernetes-dashboard-67484c44f6-9vjsv        1/1     Running   6 (21m ago)    6d18h   10.108.82.208    k8s-master   <none>           <none>
tigera-operator        pod/tigera-operator-698876cbb5-g5r9g             1/1     Running   15 (19m ago)   20d     192.168.79.22    k8s-node1    <none>           <none>

NAMESPACE              NAME                                         TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE     SELECTOR
default                service/kubernetes                           ClusterIP      10.96.0.1        <none>        443/TCP                      20d     <none>
default                service/my-nginx                             ClusterIP      10.98.133.0      <none>        80/TCP                       16h     app=my-nginx
ingress-nginx          service/ingress-nginx-controller             LoadBalancer   10.107.213.231   <pending>     80:30666/TCP,443:32145/TCP   15m     app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
ingress-nginx          service/ingress-nginx-controller-admission   ClusterIP      10.105.21.74     <none>        443/TCP                      15m     app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
kube-system            service/kube-dns                             ClusterIP      10.96.0.10       <none>        53/UDP,53/TCP,9153/TCP       20d     k8s-app=kube-dns
kube-system            service/kubernetes-dashboard                 ClusterIP      10.111.214.25    <none>        443/TCP                      20d     k8s-app=kubernetes-dashboard
kubernetes-dashboard   service/dashboard-metrics-scraper            ClusterIP      10.108.154.37    <none>        8000/TCP                     6d18h   k8s-app=dashboard-metrics-scraper
kubernetes-dashboard   service/kubernetes-dashboard                 NodePort       10.103.68.152    <none>        443:32001/TCP                6d18h   k8s-app=kubernetes-dashboard

NAMESPACE     NAME                         DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE   CONTAINERS    IMAGES                                    SELECTOR
kube-system   daemonset.apps/calico-node   2         2         2       2            2           kubernetes.io/os=linux   19d   calico-node   docker.io/calico/node:v3.20.0             k8s-app=calico-node
kube-system   daemonset.apps/kube-proxy    2         2         2       2            2           kubernetes.io/os=linux   20d   kube-proxy    192.168.79.112:49153/kube-proxy:v1.22.0   k8s-app=kube-proxy

NAMESPACE              NAME                                        READY   UP-TO-DATE   AVAILABLE   AGE     CONTAINERS                  IMAGES                                                                       SELECTOR
default                deployment.apps/my-nginx                    1/1     1            1           16h     my-nginx                    nginx:1.7.9                                                                  app=my-nginx
ingress-nginx          deployment.apps/ingress-nginx-controller    1/1     1            1           15m     controller                  k8y8z7nh.mirror.aliyuncs.com/willdockerhub/ingress-nginx-controller:v1.0.0   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
kube-system            deployment.apps/calico-kube-controllers     1/1     1            1           19d     calico-kube-controllers     docker.io/calico/kube-controllers:v3.20.0                                    k8s-app=calico-kube-controllers
kube-system            deployment.apps/coredns                     2/2     2            2           20d     coredns                     192.168.79.112:49153/coredns:v1.8.4                                          k8s-app=kube-dns
kubernetes-dashboard   deployment.apps/dashboard-metrics-scraper   1/1     1            1           6d18h   dashboard-metrics-scraper   kubernetesui/metrics-scraper:v1.0.6                                          k8s-app=dashboard-metrics-scraper
kubernetes-dashboard   deployment.apps/kubernetes-dashboard        1/1     1            1           6d18h   kubernetes-dashboard        kubernetesui/dashboard:v2.3.1                                                k8s-app=kubernetes-dashboard
tigera-operator        deployment.apps/tigera-operator             1/1     1            1           20d     tigera-operator             quay.io/tigera/operator:v1.20.0                                              name=tigera-operator

NAMESPACE              NAME                                                   DESIRED   CURRENT   READY   AGE     CONTAINERS                  IMAGES                                                                       SELECTOR
default                replicaset.apps/my-nginx-89c886496                     1         1         1       16h     my-nginx                    nginx:1.7.9                                                                  app=my-nginx,pod-template-hash=89c886496
ingress-nginx          replicaset.apps/ingress-nginx-controller-7944657cb4    1         1         1       15m     controller                  k8y8z7nh.mirror.aliyuncs.com/willdockerhub/ingress-nginx-controller:v1.0.0   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx,pod-template-hash=7944657cb4
kube-system            replicaset.apps/calico-kube-controllers-58497c65d5     1         1         1       19d     calico-kube-controllers     docker.io/calico/kube-controllers:v3.20.0                                    k8s-app=calico-kube-controllers,pod-template-hash=58497c65d5
kube-system            replicaset.apps/coredns-8449c98c7d                     2         2         2       20d     coredns                     192.168.79.112:49153/coredns:v1.8.4                                          k8s-app=kube-dns,pod-template-hash=8449c98c7d
kubernetes-dashboard   replicaset.apps/dashboard-metrics-scraper-856586f554   1         1         1       6d18h   dashboard-metrics-scraper   kubernetesui/metrics-scraper:v1.0.6                                          k8s-app=dashboard-metrics-scraper,pod-template-hash=856586f554
kubernetes-dashboard   replicaset.apps/kubernetes-dashboard-67484c44f6        1         1         1       6d18h   kubernetes-dashboard        kubernetesui/dashboard:v2.3.1                                                k8s-app=kubernetes-dashboard,pod-template-hash=67484c44f6
tigera-operator        replicaset.apps/tigera-operator-698876cbb5             1         1         1       20d     tigera-operator             quay.io/tigera/operator:v1.20.0                                              name=tigera-operator,pod-template-hash=698876cbb5

NAMESPACE       NAME                                      COMPLETIONS   DURATION   AGE   CONTAINERS   IMAGES                                                             SELECTOR
ingress-nginx   job.batch/ingress-nginx-admission-patch   0/1           14m        14m   patch        k8y8z7nh.mirror.aliyuncs.com/jettech/kube-webhook-certgen:v1.0.0   controller-uid=42355fa9-fc35-48e4-80b1-ba51fddde877
  • Current state of ingress object, if applicable:

  • Others:

I used the helm pull ingress-nginx/ingress-nginx command to get the helm package, and I modified the values.yaml.
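For reference, a minimal sketch of that workflow (chart version taken from the helm ls output above; the repo URL is the standard ingress-nginx chart repository):

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm pull ingress-nginx/ingress-nginx --version 4.0.1 --untar
cd ingress-nginx
# edit values.yaml as shown below, then install from the local chart directory
helm install ingress-nginx . -n ingress-nginx

The modified values.yaml: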

## nginx configuration
## Ref: https://github.com/kubernetes/ingress-nginx/blob/main/docs/user-guide/nginx-configuration/index.md
##

## Overrides for generated resource names
# See templates/_helpers.tpl
# nameOverride:
# fullnameOverride:

controller:
  name: controller
  image:
    registry: k8y8z7nh.mirror.aliyuncs.com
    image: willdockerhub/ingress-nginx-controller
    # for backwards compatibility consider setting the full image url via the repository value below
    # use *either* the current default registry/image format *or* the repository format; otherwise installing the chart by providing this values.yaml will fail
    # repository:
    tag: "v1.0.0"
    #digest: sha256:0851b34f69f69352bf168e6ccf30e1e20714a264ab1ecd1933e4d8c0fc3215c6
    pullPolicy: IfNotPresent
    # www-data -> uid 101
    runAsUser: 101
    allowPrivilegeEscalation: true

  # Use an existing PSP instead of creating one
  existingPsp: ""

  # Configures the controller container name
  containerName: controller

  # Configures the ports the nginx-controller listens on
  containerPort:
    http: 80
    https: 443
    ssh: 22

  # Will add custom configuration options to Nginx https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/
  config: {}

  ## Annotations to be added to the controller config configuration configmap
  ##
  configAnnotations: {}

  # Will add custom headers before sending traffic to backends according to https://github.com/kubernetes/ingress-nginx/tree/main/docs/examples/customization/custom-headers
  proxySetHeaders: {}

  # Will add custom headers before sending response traffic to the client according to: https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#add-headers
  addHeaders: {}

  # Optionally customize the pod dnsConfig.
  dnsConfig: {}

  # Optionally customize the pod hostname.
  hostname: {}

  # Optionally change this to ClusterFirstWithHostNet in case you have 'hostNetwork: true'.
  # By default, while using host network, name resolution uses the host's DNS. If you wish nginx-controller
  # to keep resolving names inside the k8s network, use ClusterFirstWithHostNet.
  dnsPolicy: ClusterFirst

  # Bare-metal considerations via the host network https://kubernetes.github.io/ingress-nginx/deploy/baremetal/#via-the-host-network
  # Ingress status was blank because there is no Service exposing the NGINX Ingress controller in a configuration using the host network, the default --publish-service flag used in standard cloud setups does not apply
  reportNodeInternalIp: false

  # Process Ingress objects without ingressClass annotation/ingressClassName field
  # Overrides value for --watch-ingress-without-class flag of the controller binary
  # Defaults to false
  watchIngressWithoutClass: false

  # Required for use with CNI based kubernetes installations (such as ones set up by kubeadm),
  # since CNI and hostport don't mix yet. Can be deprecated once https://github.com/kubernetes/kubernetes/issues/23920
  # is merged
  hostNetwork: true

  ## Use host ports 80 and 443
  ## Disabled by default
  ##
  hostPort:
    enabled: false
    ports:
      http: 80
      https: 443
      ssh: 22

  ## Election ID to use for status update
  ##
  electionID: ingress-controller-leader

  # This section refers to the creation of the IngressClass resource
  # IngressClass resources are supported since k8s >= 1.18 and required since k8s >= 1.19
  ingressClassResource:
    name: nginx
    enabled: true
    default: false
    controllerValue: "k8s.io/ingress-nginx"

    # Parameters is a link to a custom resource containing additional
    # configuration for the controller. This is optional if the controller
    # does not require extra parameters.
    parameters: {}

  # labels to add to the pod container metadata
  podLabels: {}
  #  key: value

  ## Security Context policies for controller pods
  ##
  podSecurityContext: {}

  ## See https://kubernetes.io/docs/tasks/administer-cluster/sysctl-cluster/ for
  ## notes on enabling and using sysctls
  ###
  sysctls: {}
  # sysctls:
  #   "net.core.somaxconn": "8192"

  ## Allows customization of the source of the IP address or FQDN to report
  ## in the ingress status field. By default, it reads the information provided
  ## by the service. If disabled, the status field reports the IP address of the
  ## node or nodes where an ingress controller pod is running.
  publishService:
    enabled: true
    ## Allows overriding of the publish service to bind to
    ## Must be <namespace>/<service_name>
    ##
    pathOverride: ""

  ## Limit the scope of the controller
  ##
  scope:
    enabled: false
    namespace: ""   # defaults to $(POD_NAMESPACE)

  ## Allows customization of the configmap / nginx-configmap namespace
  ##
  configMapNamespace: ""   # defaults to $(POD_NAMESPACE)

  ## Allows customization of the tcp-services-configmap
  ##
  tcp:
    configMapNamespace: ""   # defaults to $(POD_NAMESPACE)
    ## Annotations to be added to the tcp config configmap
    annotations: {}

  ## Allows customization of the udp-services-configmap
  ##
  udp:
    configMapNamespace: ""   # defaults to $(POD_NAMESPACE)
    ## Annotations to be added to the udp config configmap
    annotations: {}

  # Maxmind license key to download GeoLite2 Databases
  # https://blog.maxmind.com/2019/12/18/significant-changes-to-accessing-and-using-geolite2-databases
  maxmindLicenseKey: ""

  ## Additional command line arguments to pass to nginx-ingress-controller
  ## E.g. to specify the default SSL certificate you can use
  ## extraArgs:
  ##   default-ssl-certificate: "<namespace>/<secret_name>"
  extraArgs: {}

  ## Additional environment variables to set
  extraEnvs: []
  # extraEnvs:
  #   - name: FOO
  #     valueFrom:
  #       secretKeyRef:
  #         key: FOO
  #         name: secret-resource

  ## DaemonSet or Deployment
  ##
  kind: Deployment

  ## Annotations to be added to the controller Deployment or DaemonSet
  ##
  annotations: {}
  #  keel.sh/pollSchedule: "@every 60m"

  ## Labels to be added to the controller Deployment or DaemonSet
  ##
  labels: {}
  #  keel.sh/policy: patch
  #  keel.sh/trigger: poll


  # The update strategy to apply to the Deployment or DaemonSet
  ##
  updateStrategy: {}
  #  rollingUpdate:
  #    maxUnavailable: 1
  #  type: RollingUpdate

  # minReadySeconds to avoid killing pods before we are ready
  ##
  minReadySeconds: 0


  ## Node tolerations for server scheduling to nodes with taints
  ## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
  ##
  tolerations: []
  #  - key: "key"
  #    operator: "Equal|Exists"
  #    value: "value"
  #    effect: "NoSchedule|PreferNoSchedule|NoExecute(1.6 only)"

  ## Affinity and anti-affinity
  ## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
  ##
  affinity: {}
    # # An example of preferred pod anti-affinity, weight is in the range 1-100
    # podAntiAffinity:
    #   preferredDuringSchedulingIgnoredDuringExecution:
    #   - weight: 100
    #     podAffinityTerm:
    #       labelSelector:
    #         matchExpressions:
    #         - key: app.kubernetes.io/name
    #           operator: In
    #           values:
    #           - ingress-nginx
    #         - key: app.kubernetes.io/instance
    #           operator: In
    #           values:
    #           - ingress-nginx
    #         - key: app.kubernetes.io/component
    #           operator: In
    #           values:
    #           - controller
    #       topologyKey: kubernetes.io/hostname

    # # An example of required pod anti-affinity
    # podAntiAffinity:
    #   requiredDuringSchedulingIgnoredDuringExecution:
    #   - labelSelector:
    #       matchExpressions:
    #       - key: app.kubernetes.io/name
    #         operator: In
    #         values:
    #         - ingress-nginx
    #       - key: app.kubernetes.io/instance
    #         operator: In
    #         values:
    #         - ingress-nginx
    #       - key: app.kubernetes.io/component
    #         operator: In
    #         values:
    #         - controller
    #     topologyKey: "kubernetes.io/hostname"

  ## Topology spread constraints rely on node labels to identify the topology domain(s) that each Node is in.
  ## Ref: https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/
  ##
  topologySpreadConstraints: []
    # - maxSkew: 1
    #   topologyKey: failure-domain.beta.kubernetes.io/zone
    #   whenUnsatisfiable: DoNotSchedule
    #   labelSelector:
    #     matchLabels:
    #       app.kubernetes.io/instance: ingress-nginx-internal

  ## terminationGracePeriodSeconds
  ## wait up to five minutes for the drain of connections
  ##
  terminationGracePeriodSeconds: 300

  ## Node labels for controller pod assignment
  ## Ref: https://kubernetes.io/docs/user-guide/node-selection/
  ##
  nodeSelector:
    kubernetes.io/os: linux

  ## Liveness and readiness probe values
  ## Ref: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes
  ##
  # startupProbe:
  #   httpGet:
  #     # should match container.healthCheckPath
  #     path: "/healthz"
  #     port: 10254
  #     scheme: HTTP
  #   initialDelaySeconds: 5
  #   periodSeconds: 5
  #   timeoutSeconds: 2
  #   successThreshold: 1
  #   failureThreshold: 5
  livenessProbe:
    httpGet:
      # should match container.healthCheckPath
      path: "/healthz"
      port: 10254
      scheme: HTTP
    initialDelaySeconds: 10
    periodSeconds: 10
    timeoutSeconds: 1
    successThreshold: 1
    failureThreshold: 5
  readinessProbe:
    httpGet:
      # should match container.healthCheckPath
      path: "/healthz"
      port: 10254
      scheme: HTTP
    initialDelaySeconds: 10
    periodSeconds: 10
    timeoutSeconds: 1
    successThreshold: 1
    failureThreshold: 3


  # Path of the health check endpoint. All requests received on the port defined by
  # the healthz-port parameter are forwarded internally to this path.
  healthCheckPath: "/healthz"

  ## Annotations to be added to controller pods
  ##
  podAnnotations: {}

  replicaCount: 1

  minAvailable: 1

  # Define requests resources to avoid probe issues due to CPU utilization in busy nodes
  # ref: https://github.com/kubernetes/ingress-nginx/issues/4735#issuecomment-551204903
  # Ideally, there should be no limits.
  # https://engineering.indeedblog.com/blog/2019/12/cpu-throttling-regression-fix/
  resources:
  #  limits:
  #    cpu: 100m
  #    memory: 90Mi
    requests:
      cpu: 100m
      memory: 90Mi

  # Mutually exclusive with keda autoscaling
  autoscaling:
    enabled: false
    minReplicas: 1
    maxReplicas: 11
    targetCPUUtilizationPercentage: 50
    targetMemoryUtilizationPercentage: 50
    behavior: {}
      # scaleDown:
      #   stabilizationWindowSeconds: 300
      #  policies:
      #   - type: Pods
      #     value: 1
      #     periodSeconds: 180
      # scaleUp:
      #   stabilizationWindowSeconds: 300
      #   policies:
      #   - type: Pods
      #     value: 2
      #     periodSeconds: 60

  autoscalingTemplate: []
  # Custom or additional autoscaling metrics
  # ref: https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#support-for-custom-metrics
  # - type: Pods
  #   pods:
  #     metric:
  #       name: nginx_ingress_controller_nginx_process_requests_total
  #     target:
  #       type: AverageValue
  #       averageValue: 10000m

  # Mutually exclusive with hpa autoscaling
  keda:
    apiVersion: "keda.sh/v1alpha1"
  # apiVersion changes with keda 1.x vs 2.x
  # 2.x = keda.sh/v1alpha1
  # 1.x = keda.k8s.io/v1alpha1
    enabled: false
    minReplicas: 1
    maxReplicas: 11
    pollingInterval: 30
    cooldownPeriod: 300
    restoreToOriginalReplicaCount: false
    scaledObject:
      annotations: {}
      # Custom annotations for ScaledObject resource
      #  annotations:
      # key: value
    triggers: []
 #     - type: prometheus
 #       metadata:
 #         serverAddress: http://<prometheus-host>:9090
 #         metricName: http_requests_total
 #         threshold: '100'
 #         query: sum(rate(http_requests_total{deployment="my-deployment"}[2m]))

    behavior: {}
 #     scaleDown:
 #       stabilizationWindowSeconds: 300
 #       policies:
 #       - type: Pods
 #         value: 1
 #         periodSeconds: 180
 #     scaleUp:
 #       stabilizationWindowSeconds: 300
 #       policies:
 #       - type: Pods
 #         value: 2
 #         periodSeconds: 60

  ## Enable mimalloc as a drop-in replacement for malloc.
  ## ref: https://github.com/microsoft/mimalloc
  ##
  enableMimalloc: true

  ## Override NGINX template
  customTemplate:
    configMapName: ""
    configMapKey: ""

  service:
    enabled: true

    annotations: {}
    labels: {}
    # clusterIP: ""

    ## List of IP addresses at which the controller services are available
    ## Ref: https://kubernetes.io/docs/user-guide/services/#external-ips
    ##
    externalIPs: []

    # loadBalancerIP: ""
    loadBalancerSourceRanges: []

    enableHttp: true
    enableHttps: true

    ## Set external traffic policy to: "Local" to preserve source IP on
    ## providers supporting it
    ## Ref: https://kubernetes.io/docs/tutorials/services/source-ip/#source-ip-for-services-with-typeloadbalancer
    # externalTrafficPolicy: ""

    # Must be either "None" or "ClientIP" if set. Kubernetes will default to "None".
    # Ref: https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies
    # sessionAffinity: ""

    # specifies the health check node port (numeric port number) for the service. If healthCheckNodePort isn’t specified,
    # the service controller allocates a port from your cluster’s NodePort range.
    # Ref: https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip
    # healthCheckNodePort: 0

    ports:
      http: 80
      https: 443


    targetPorts:
      http: http
      https: https


    type: LoadBalancer

    # type: NodePort
    # nodePorts:
    #   http: 32080
    #   https: 32443
    #   tcp:
    #     8080: 32808
    nodePorts:
      http: ""
      https: ""
      tcp: {}
      udp: {}

    ## Enables an additional internal load balancer (besides the external one).
    ## Annotations are mandatory for the load balancer to come up. Varies with the cloud service.
    internal:
      enabled: false
      annotations: {}

      # loadBalancerIP: ""

      ## Restrict access For LoadBalancer service. Defaults to 0.0.0.0/0.
      loadBalancerSourceRanges: []

      ## Set external traffic policy to: "Local" to preserve source IP on
      ## providers supporting it
      ## Ref: https://kubernetes.io/docs/tutorials/services/source-ip/#source-ip-for-services-with-typeloadbalancer
      # externalTrafficPolicy: ""

  extraContainers: []
  ## Additional containers to be added to the controller pod.
  ## See https://github.com/lemonldap-ng-controller/lemonldap-ng-controller as example.
  #  - name: my-sidecar
  #    image: nginx:latest
  #  - name: lemonldap-ng-controller
  #    image: lemonldapng/lemonldap-ng-controller:0.2.0
  #    args:
  #      - /lemonldap-ng-controller
  #      - --alsologtostderr
  #      - --configmap=$(POD_NAMESPACE)/lemonldap-ng-configuration
  #    env:
  #      - name: POD_NAME
  #        valueFrom:
  #          fieldRef:
  #            fieldPath: metadata.name
  #      - name: POD_NAMESPACE
  #        valueFrom:
  #          fieldRef:
  #            fieldPath: metadata.namespace
  #    volumeMounts:
  #    - name: copy-portal-skins
  #      mountPath: /srv/var/lib/lemonldap-ng/portal/skins

  extraVolumeMounts: []
  ## Additional volumeMounts to the controller main container.
  #  - name: copy-portal-skins
  #   mountPath: /var/lib/lemonldap-ng/portal/skins

  extraVolumes: []
  ## Additional volumes to the controller pod.
  #  - name: copy-portal-skins
  #    emptyDir: {}

  extraInitContainers: []
  ## Containers, which are run before the app containers are started.
  # - name: init-myservice
  #   image: busybox
  #   command: ['sh', '-c', 'until nslookup myservice; do echo waiting for myservice; sleep 2; done;']

  admissionWebhooks:
    annotations: {}
    enabled: true
    failurePolicy: Fail
    # timeoutSeconds: 10
    port: 8443
    certificate: "/usr/local/certificates/cert"
    key: "/usr/local/certificates/key"
    namespaceSelector: {}
    objectSelector: {}

    # Use an existing PSP instead of creating one
    existingPsp: ""

    service:
      annotations: {}
      # clusterIP: ""
      externalIPs: []
      # loadBalancerIP: ""
      loadBalancerSourceRanges: []
      servicePort: 443
      type: ClusterIP

    createSecretJob:
      resources: {}
        # limits:
        #   cpu: 10m
        #   memory: 20Mi
        # requests:
        #   cpu: 10m
        #   memory: 20Mi

    patchWebhookJob:
      resources: {}

    patch:
      enabled: true
      image:
        registry: k8y8z7nh.mirror.aliyuncs.com
        image: jettech/kube-webhook-certgen
        # for backwards compatibility consider setting the full image url via the repository value below
        # use *either* the current default registry/image format *or* the repository format; otherwise installing the chart by providing this values.yaml will fail
        # repository:
        tag: v1.0.0
        #digest: sha256:f3b6b39a6062328c095337b4cadcefd1612348fdd5190b1dcbcb9b9e90bd8068
        pullPolicy: IfNotPresent
      ## Provide a priority class name to the webhook patching job
      ##
      priorityClassName: ""
      podAnnotations: {}
      nodeSelector:
        kubernetes.io/os: linux
      tolerations: []
      runAsUser: 2000

  metrics:
    port: 10254
    # if this port is changed, change healthz-port: in extraArgs: accordingly
    enabled: false

    service:
      annotations: {}
      # prometheus.io/scrape: "true"
      # prometheus.io/port: "10254"

      # clusterIP: ""

      ## List of IP addresses at which the stats-exporter service is available
      ## Ref: https://kubernetes.io/docs/user-guide/services/#external-ips
      ##
      externalIPs: []

      # loadBalancerIP: ""
      loadBalancerSourceRanges: []
      servicePort: 10254
      type: ClusterIP
      # externalTrafficPolicy: ""
      # nodePort: ""

    serviceMonitor:
      enabled: false
      additionalLabels: {}
      # The label to use to retrieve the job name from.
      # jobLabel: "app.kubernetes.io/name"
      namespace: ""
      namespaceSelector: {}
      # Default: scrape .Release.Namespace only
      # To scrape all, use the following:
      # namespaceSelector:
      #   any: true
      scrapeInterval: 30s
      # honorLabels: true
      targetLabels: []
      metricRelabelings: []

    prometheusRule:
      enabled: false
      additionalLabels: {}
      # namespace: ""
      rules: []
        # # These are just examples rules, please adapt them to your needs
        # - alert: NGINXConfigFailed
        #   expr: count(nginx_ingress_controller_config_last_reload_successful == 0) > 0
        #   for: 1s
        #   labels:
        #     severity: critical
        #   annotations:
        #     description: bad ingress config - nginx config test failed
        #     summary: uninstall the latest ingress changes to allow config reloads to resume
        # - alert: NGINXCertificateExpiry
        #   expr: (avg(nginx_ingress_controller_ssl_expire_time_seconds) by (host) - time()) < 604800
        #   for: 1s
        #   labels:
        #     severity: critical
        #   annotations:
        #     description: ssl certificate(s) will expire in less than a week
        #     summary: renew expiring certificates to avoid downtime
        # - alert: NGINXTooMany500s
        #   expr: 100 * ( sum( nginx_ingress_controller_requests{status=~"5.+"} ) / sum(nginx_ingress_controller_requests) ) > 5
        #   for: 1m
        #   labels:
        #     severity: warning
        #   annotations:
        #     description: Too many 5XXs
        #     summary: More than 5% of all requests returned 5XX, this requires your attention
        # - alert: NGINXTooMany400s
        #   expr: 100 * ( sum( nginx_ingress_controller_requests{status=~"4.+"} ) / sum(nginx_ingress_controller_requests) ) > 5
        #   for: 1m
        #   labels:
        #     severity: warning
        #   annotations:
        #     description: Too many 4XXs
        #     summary: More than 5% of all requests returned 4XX, this requires your attention

  ## Improve connection draining when ingress controller pod is deleted using a lifecycle hook:
  ## With this new hook, we increased the default terminationGracePeriodSeconds from 30 seconds
  ## to 300, allowing the draining of connections up to five minutes.
  ## If the active connections end before that, the pod will terminate gracefully at that time.
  ## To effectively take advantage of this feature, the Configmap feature
  ## worker-shutdown-timeout new value is 240s instead of 10s.
  ##
  lifecycle:
    preStop:
      exec:
        command:
          - /wait-shutdown

  priorityClassName: ""

## Rollback limit
##
revisionHistoryLimit: 10

## Default 404 backend
##
defaultBackend:
  ##
  enabled: false

  name: defaultbackend
  image:
    registry: k8y8z7nh.mirror.aliyuncs.com
    image: mirrorgooglecontainers/defaultbackend-amd64
    # for backwards compatibility consider setting the full image url via the repository value below
    # use *either* the current default registry/image format *or* the repository format; otherwise installing the chart by providing this values.yaml will fail
    # repository:
    tag: "1.5"
    pullPolicy: IfNotPresent
    # nobody user -> uid 65534
    runAsUser: 65534
    runAsNonRoot: true
    readOnlyRootFilesystem: true
    allowPrivilegeEscalation: false

  # Use an existing PSP instead of creating one
  existingPsp: ""

  extraArgs: {}

  serviceAccount:
    create: true
    name: ""
    automountServiceAccountToken: true
  ## Additional environment variables to set for defaultBackend pods
  extraEnvs: []

  port: 8080

  ## Readiness and liveness probes for default backend
  ## Ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/
  ##
  livenessProbe:
    failureThreshold: 3
    initialDelaySeconds: 30
    periodSeconds: 10
    successThreshold: 1
    timeoutSeconds: 5
  readinessProbe:
    failureThreshold: 6
    initialDelaySeconds: 0
    periodSeconds: 5
    successThreshold: 1
    timeoutSeconds: 5

  ## Node tolerations for server scheduling to nodes with taints
  ## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
  ##
  tolerations: []
  #  - key: "key"
  #    operator: "Equal|Exists"
  #    value: "value"
  #    effect: "NoSchedule|PreferNoSchedule|NoExecute(1.6 only)"

  affinity: {}

  ## Security Context policies for controller pods
  ## See https://kubernetes.io/docs/tasks/administer-cluster/sysctl-cluster/ for
  ## notes on enabling and using sysctls
  ##
  podSecurityContext: {}

  # labels to add to the pod container metadata
  podLabels: {}
  #  key: value

  ## Node labels for default backend pod assignment
  ## Ref: https://kubernetes.io/docs/user-guide/node-selection/
  ##
  nodeSelector:
    kubernetes.io/os: linux

  ## Annotations to be added to default backend pods
  ##
  podAnnotations: {}

  replicaCount: 1

  minAvailable: 1

  resources: {}
  # limits:
  #   cpu: 10m
  #   memory: 20Mi
  # requests:
  #   cpu: 10m
  #   memory: 20Mi

  extraVolumeMounts: []
  ## Additional volumeMounts to the default backend container.
  #  - name: copy-portal-skins
  #   mountPath: /var/lib/lemonldap-ng/portal/skins

  extraVolumes: []
  ## Additional volumes to the default backend pod.
  #  - name: copy-portal-skins
  #    emptyDir: {}

  autoscaling:
    annotations: {}
    enabled: false
    minReplicas: 1
    maxReplicas: 2
    targetCPUUtilizationPercentage: 50
    targetMemoryUtilizationPercentage: 50

  service:
    annotations: {}

    # clusterIP: ""

    ## List of IP addresses at which the default backend service is available
    ## Ref: https://kubernetes.io/docs/user-guide/services/#external-ips
    ##
    externalIPs: []

    # loadBalancerIP: ""
    loadBalancerSourceRanges: []
    servicePort: 80
    type: ClusterIP

  priorityClassName: ""

## Enable RBAC as per https://github.com/kubernetes/ingress-nginx/blob/main/docs/deploy/rbac.md and https://github.com/kubernetes/ingress-nginx/issues/266
rbac:
  create: true
  scope: false

# If true, create & use Pod Security Policy resources
# https://kubernetes.io/docs/concepts/policy/pod-security-policy/
podSecurityPolicy:
  enabled: false

serviceAccount:
  create: true
  name: ""
  automountServiceAccountToken: true

## Optional array of imagePullSecrets containing private registry credentials
## Ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
imagePullSecrets: []
# - name: secretName

# TCP service key:value pairs
# Ref: https://github.com/kubernetes/ingress-nginx/blob/main/docs/user-guide/exposing-tcp-udp-services.md
##
tcp: {}
#  8080: "default/example-tcp-svc:9000"

# UDP service key:value pairs
# Ref: https://github.com/kubernetes/ingress-nginx/blob/main/docs/user-guide/exposing-tcp-udp-services.md
##
udp: {}
#  53: "kube-system/kube-dns:53"

# A base64ed Diffie-Hellman parameter
# This can be generated with: openssl dhparam 4096 2> /dev/null | base64
# Ref: https://github.com/kubernetes/ingress-nginx/tree/main/docs/examples/customization/ssl-dh-param
dhParam:

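As an aside: helm merges override files onto the chart defaults, so instead of carrying the whole values.yaml, only the modified image keys need to be supplied. A sketch, with my-values.yaml as a hypothetical file name:

# write only the overridden keys (taken from the file above) to a small file
cat > my-values.yaml <<'EOF'
controller:
  image:
    registry: k8y8z7nh.mirror.aliyuncs.com
    image: willdockerhub/ingress-nginx-controller
    tag: "v1.0.0"
  admissionWebhooks:
    patch:
      image:
        registry: k8y8z7nh.mirror.aliyuncs.com
        image: jettech/kube-webhook-certgen
        tag: "v1.0.0"
EOF
# install against the upstream chart, applying the overrides
helm install ingress-nginx ingress-nginx/ingress-nginx -n ingress-nginx -f my-values.yaml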
What happened:

When I run helm install ingress-nginx . -n ingress-nginx, the job ingress-nginx-admission-create errors, and the pod ingress-nginx-admission-patch--1-ff9mv logs this error:

{"level":"info","msg":"patching webhook configurations 'ingress-nginx-admission' mutating=false, validating=true, failurePolicy=Fail","source":"k8s/k8s.go:38","time":"2021-09-09T03:31:47Z"}
{"err":"the server could not find the requested resource","level":"fatal","msg":"failed getting validating webhook","source":"k8s/k8s.go:47","time":"2021-09-09T03:31:47Z"}
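This "the server could not find the requested resource" error usually means the client requested an API group/version the API server no longer serves. Kubernetes 1.22 removed admissionregistration.k8s.io/v1beta1, and the jettech builds of kube-webhook-certgen reportedly still target that version, which would explain the failure here (see the resolution at the end of the thread). One way to check what the cluster serves:

# list the admissionregistration API versions the cluster still serves
kubectl api-versions | grep admissionregistration
# on 1.22 this prints only admissionregistration.k8s.io/v1 -- v1beta1 is gone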

k8s-ci-robot (Contributor) commented:

@ytianxia6: This issue is currently awaiting triage.

If Ingress contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.


shuaizi commented Sep 9, 2021

It looks like the job named ingress-nginx-admission-create is missing; it creates a secret "ingress-nginx-admission" first.

You can see how it works from https://github.com/kubernetes/ingress-nginx/blob/main/charts/ingress-nginx/templates/admission-webhooks/job-patch/job-createSecret.yaml
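A quick way to check both halves of that (job and secret names as used in this thread):

# confirm the create job completed and the secret it produces exists
kubectl -n ingress-nginx get job ingress-nginx-admission-create
kubectl -n ingress-nginx get secret ingress-nginx-admission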

ytianxia6 (Author) commented:

The create job has finished and the secret is created, but the job ingress-nginx-admission-patch continues to get the error Back-off restarting failed container until the helm install fails.

shuaizi commented Sep 9, 2021

You can find more detailed error info for the job's pods with kubectl get pods ingress-nginx-admission-patch-xxxxx -n ingress-nginx -o yaml, for further debugging.
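For example (using the standard job-name label, so the generated pod suffix doesn't need to be known):

# dump the full pod object for the patch job's pods
kubectl -n ingress-nginx get pods -l job-name=ingress-nginx-admission-patch -o yaml
# or read the container logs directly via the job
kubectl -n ingress-nginx logs job/ingress-nginx-admission-patch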

ytianxia6 (Author) commented:

Thank you, I got the error info:

apiVersion: v1
kind: Pod
metadata:
  annotations:
    cni.projectcalico.org/containerID: 35193fbf73f188be63dfe63034f9684f93892a44fce446d38e89fae2f3f93248
    cni.projectcalico.org/podIP: 10.111.156.124/32
    cni.projectcalico.org/podIPs: 10.111.156.124/32
  creationTimestamp: "2021-09-10T00:45:56Z"
  generateName: ingress-nginx-admission-patch--1-
  labels:
    app.kubernetes.io/component: admission-webhook
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/version: 1.0.0
    controller-uid: 73891ebd-f956-4def-94e8-8f52368ffd55
    helm.sh/chart: ingress-nginx-4.0.1
    job-name: ingress-nginx-admission-patch
  name: ingress-nginx-admission-patch--1-2dl9f
  namespace: ingress-nginx
  ownerReferences:
  - apiVersion: batch/v1
    blockOwnerDeletion: true
    controller: true
    kind: Job
    name: ingress-nginx-admission-patch
    uid: 73891ebd-f956-4def-94e8-8f52368ffd55
  resourceVersion: "4241120"
  uid: 4a2f1eed-7142-44dc-82ec-ef30227fe241
spec:
  containers:
  - args:
    - patch
    - --webhook-name=ingress-nginx-admission
    - --namespace=$(POD_NAMESPACE)
    - --patch-mutating=false
    - --secret-name=ingress-nginx-admission
    - --patch-failure-policy=Fail
    env:
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: metadata.namespace
    image: k8y8z7nh.mirror.aliyuncs.com/jettech/kube-webhook-certgen:v1.0.0
    imagePullPolicy: IfNotPresent
    name: patch
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-9vt2d
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  nodeName: k8s-node1
  nodeSelector:
    kubernetes.io/os: linux
  preemptionPolicy: PreemptLowerPriority
  priority: 0
  restartPolicy: OnFailure
  schedulerName: default-scheduler
  securityContext:
    runAsNonRoot: true
    runAsUser: 2000
  serviceAccount: ingress-nginx-admission
  serviceAccountName: ingress-nginx-admission
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - name: kube-api-access-9vt2d
    projected:
      defaultMode: 420
      sources:
      - serviceAccountToken:
          expirationSeconds: 3607
          path: token
      - configMap:
          items:
          - key: ca.crt
            path: ca.crt
          name: kube-root-ca.crt
      - downwardAPI:
          items:
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
            path: namespace
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2021-09-10T00:45:56Z"
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2021-09-10T00:45:56Z"
    message: 'containers with unready status: [patch]'
    reason: ContainersNotReady
    status: "False"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: "2021-09-10T00:45:56Z"
    message: 'containers with unready status: [patch]'
    reason: ContainersNotReady
    status: "False"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2021-09-10T00:45:56Z"
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: docker://b1422e7676eb71c2bd3b39bd7da484cc750eb0f3c315242565636dadf868970c
    image: k8y8z7nh.mirror.aliyuncs.com/jettech/kube-webhook-certgen:v1.0.0
    imageID: docker-pullable://k8y8z7nh.mirror.aliyuncs.com/jettech/kube-webhook-certgen@sha256:58fde0ddd7a0d1bf1483fed53e363144ae8741d8a2d6c129422e8b1b9aa0543c
    lastState:
      terminated:
        containerID: docker://f81e5eaffacb7a43093b70f78583cf76b1cdf990bddce61b99b3770f5bc235d0
        exitCode: 1
        finishedAt: "2021-09-10T00:45:58Z"
        reason: Error
        startedAt: "2021-09-10T00:45:58Z"
    name: patch
    ready: false
    restartCount: 2
    started: false
    state:
      terminated:
        containerID: docker://b1422e7676eb71c2bd3b39bd7da484cc750eb0f3c315242565636dadf868970c
        exitCode: 1
        finishedAt: "2021-09-10T00:46:14Z"
        reason: Error
        startedAt: "2021-09-10T00:46:14Z"
  hostIP: 192.168.79.22
  phase: Running
  podIP: 10.111.156.124
  podIPs:
  - ip: 10.111.156.124
  qosClass: BestEffort
  startTime: "2021-09-10T00:45:56Z"

The error reason shown is just that the container is not ready, but I still cannot find the real cause or how to fix it.
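The pod status above only records that the patch container exited with code 1; the actual failure message is in the container logs. Since the container keeps restarting, --previous shows the output of the last terminated run (pod name taken from the dump above):

# fetch the logs of the previously terminated patch container
kubectl -n ingress-nginx logs ingress-nginx-admission-patch--1-2dl9f --previous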

ytianxia6 (Author) commented:

Thank you @shuaizi! I found the issue: the image jettech/kube-webhook-certgen:v1.0.0 does not work. I used liangjw/kube-webhook-certgen:v1.0 instead, and it runs OK!
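For anyone hitting the same problem, a sketch of applying that fix through the chart's image values instead of editing values.yaml (keys as defined in the values file above; docker.io assumes the liangjw image is pulled from Docker Hub):

helm upgrade --install ingress-nginx . -n ingress-nginx \
  --set controller.admissionWebhooks.patch.image.registry=docker.io \
  --set controller.admissionWebhooks.patch.image.image=liangjw/kube-webhook-certgen \
  --set controller.admissionWebhooks.patch.image.tag=v1.0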
