oc cluster up --service-catalog=true timing out on web-console #18046

Closed
jmrodri opened this issue Jan 10, 2018 · 24 comments

Assignees: sjenning
Labels: component/kubernetes, kind/bug, priority/P2

Comments

jmrodri commented Jan 10, 2018

Running oc cluster up --service-catalog=true times out waiting for the web console server to become available.

Version

oc v3.9.0-alpha.1+a027e963-84
kubernetes v1.9.0-beta1
features: Basic-Auth GSSAPI Kerberos SPNEGO

Steps To Reproduce
  1. oc cluster up --service-catalog=true --loglevel=10
Current Result
  1. spews tons of logs while polling the web console, for example:
I0110 09:48:33.019855   21448 webconsole.go:91] polling for web console server availability              
I0110 09:48:33.019950   21448 round_trippers.go:417] curl -k -v -XGET  -H "Accept: application/json, */*" -H "User-Agent: oc/v1.9.0 (linux/amd64) kubernetes/3258431" https://127.0.0.1:8443/apis/extensions/v1beta1/namespaces/openshift-web-console/deployments/webconsole                                                
I0110 09:48:33.029464   21448 round_trippers.go:436] GET https://127.0.0.1:8443/apis/extensions/v1beta1/namespaces/openshift-web-console/deployments/webconsole 200 OK in 9 milliseconds                           
  2. eventually FAILS with an error:
FAIL                                                
   Error: failed to start the web console server: timed out waiting for the condition 
Expected Result
OpenShift server started.

The server is accessible via web console at:
    https://127.0.0.1:8443

You are logged in as:
    User:     developer
    Password: <any value>

To login as administrator:
    oc login -u system:admin
Additional Information

oc adm diagnostics output, journalctl output, and environment details are provided in the comments below.
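
For anyone reproducing this, a hedged sketch of commands that can show why the webconsole deployment never becomes available (oc get / oc describe are standard commands; the openshift-web-console namespace and webconsole deployment name come from the logs above):

$ oc get pods -n openshift-web-console
$ oc describe deployment webconsole -n openshift-web-console
$ oc get events -n openshift-web-console

The describe and events output should indicate whether the pod is failing its readiness probe, being evicted, or unable to pull its image.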

jmrodri commented Jan 10, 2018

Output of oc adm diagnostics:

[jesusr@speed3 ~]$ oc adm diagnostics
[Note] Determining if client configuration exists for client/cluster diagnostics
Info:  Successfully read a client config file at '/home/jesusr/.kube/config'

ERROR: [CED1008 from controller openshift/origin/pkg/oc/admin/diagnostics/cluster.go]
       Unknown error testing cluster-admin access for context '/172-17-0-1:8443/system':
       Unauthorized
       
[Note] Could not configure a client with cluster-admin permissions for the current server, so cluster diagnostics will be skipped

[Note] Running diagnostic: ConfigContexts[dh-postgresql-apb-prov-fxb4l/127-0-0-1:8443/system:admin]
       Description: Validate client config context is complete and has connectivity
       
ERROR: [DCli0006 from diagnostic ConfigContexts@openshift/origin/pkg/oc/admin/diagnostics/diagnostics/client/config_contexts.go:285]
       For client config context 'dh-postgresql-apb-prov-fxb4l/127-0-0-1:8443/system:admin':
       The server URL is 'https://127.0.0.1:8443'
       The user authentication is 'system:admin/127-0-0-1:8443'
       The current project is 'dh-postgresql-apb-prov-fxb4l'
       (*url.Error) Get https://127.0.0.1:8443/apis/project.openshift.io/v1/projects: x509: certificate signed by unknown authority
       
       This means that we cannot validate the certificate in use by the
       master API server, so we cannot securely communicate with it.
       Connections could be intercepted and your credentials stolen.
       
       Since the server certificate we see when connecting is not validated
       by public certificate authorities (CAs), you probably need to specify a
       certificate from a private CA to validate the connection.
       
       Your config may be specifying the wrong CA cert, or none, or there
       could actually be a man-in-the-middle attempting to intercept your
       connection.  If you are unconcerned about any of this, you can add the
       --insecure-skip-tls-verify flag to bypass secure (TLS) verification,
       but this is risky and should not be necessary.
       ** Connections could be intercepted and your credentials stolen. **
       
[Note] Running diagnostic: ConfigContexts[ansible-service-broker/172-17-0-1:8443/admin]
       Description: Validate client config context is complete and has connectivity
       
ERROR: [DCli0015 from diagnostic ConfigContexts@openshift/origin/pkg/oc/admin/diagnostics/diagnostics/client/config_contexts.go:285]
       For client config context 'ansible-service-broker/172-17-0-1:8443/admin':
       The server URL is 'https://172.17.0.1:8443'
       The user authentication is 'admin/172-17-0-1:8443'
       The current project is 'ansible-service-broker'
       (*errors.StatusError) Unauthorized
       Diagnostics does not have an explanation for what this means. Please report this error so one can be added.
       
[Note] Running diagnostic: ConfigContexts[/127-0-0-1:8443/system]
       Description: Validate client config context is complete and has connectivity
       
ERROR: [DCli0006 from diagnostic ConfigContexts@openshift/origin/pkg/oc/admin/diagnostics/diagnostics/client/config_contexts.go:285]
       For client config context '/127-0-0-1:8443/system':
       The server URL is 'https://127.0.0.1:8443'
       The user authentication is 'system/127-0-0-1:8443'
       The current project is 'default'
       (*url.Error) Get https://127.0.0.1:8443/apis/project.openshift.io/v1/projects: x509: certificate signed by unknown authority
       
       This means that we cannot validate the certificate in use by the
       master API server, so we cannot securely communicate with it.
       Connections could be intercepted and your credentials stolen.
       
       Since the server certificate we see when connecting is not validated
       by public certificate authorities (CAs), you probably need to specify a
       certificate from a private CA to validate the connection.
       
       Your config may be specifying the wrong CA cert, or none, or there
       could actually be a man-in-the-middle attempting to intercept your
       connection.  If you are unconcerned about any of this, you can add the
       --insecure-skip-tls-verify flag to bypass secure (TLS) verification,
       but this is risky and should not be necessary.
       ** Connections could be intercepted and your credentials stolen. **
       
[Note] Running diagnostic: ConfigContexts[myproject/172-17-0-1:8443/system:admin]
       Description: Validate client config context is complete and has connectivity
       
ERROR: [DCli0013 from diagnostic ConfigContexts@openshift/origin/pkg/oc/admin/diagnostics/diagnostics/client/config_contexts.go:285]
       For client config context 'myproject/172-17-0-1:8443/system:admin':
       The server URL is 'https://172.17.0.1:8443'
       The user authentication is 'system:admin/172-17-0-1:8443'
       The current project is 'myproject'
       (*errors.StatusError) projects.project.openshift.io is forbidden: User "system:anonymous" cannot list projects.project.openshift.io at the cluster scope: User "system:anonymous" cannot list all projects.project.openshift.io in the cluster
       
       This means that when we tried to make a request to the master API
       server, your kubeconfig did not present valid credentials to
       authenticate your client. Credentials generally consist of a client
       key/certificate or an access token. Your kubeconfig may not have
       presented any, or they may be invalid.
       
[Note] Running diagnostic: ConfigContexts[/127-0-0-1:8443/developer]
       Description: Validate client config context is complete and has connectivity
       
ERROR: [DCli0006 from diagnostic ConfigContexts@openshift/origin/pkg/oc/admin/diagnostics/diagnostics/client/config_contexts.go:285]
       For client config context '/127-0-0-1:8443/developer':
       The server URL is 'https://127.0.0.1:8443'
       The user authentication is 'developer/127-0-0-1:8443'
       The current project is 'default'
       (*url.Error) Get https://127.0.0.1:8443/apis/project.openshift.io/v1/projects: x509: certificate signed by unknown authority
       
       This means that we cannot validate the certificate in use by the
       master API server, so we cannot securely communicate with it.
       Connections could be intercepted and your credentials stolen.
       
       Since the server certificate we see when connecting is not validated
       by public certificate authorities (CAs), you probably need to specify a
       certificate from a private CA to validate the connection.
       
       Your config may be specifying the wrong CA cert, or none, or there
       could actually be a man-in-the-middle attempting to intercept your
       connection.  If you are unconcerned about any of this, you can add the
       --insecure-skip-tls-verify flag to bypass secure (TLS) verification,
       but this is risky and should not be necessary.
       ** Connections could be intercepted and your credentials stolen. **
       
[Note] Running diagnostic: ConfigContexts[ansible-service-broker/127-0-0-1:8443/admin]
       Description: Validate client config context is complete and has connectivity
       
ERROR: [DCli0006 from diagnostic ConfigContexts@openshift/origin/pkg/oc/admin/diagnostics/diagnostics/client/config_contexts.go:285]
       For client config context 'ansible-service-broker/127-0-0-1:8443/admin':
       The server URL is 'https://127.0.0.1:8443'
       The user authentication is 'admin/127-0-0-1:8443'
       The current project is 'ansible-service-broker'
       (*url.Error) Get https://127.0.0.1:8443/apis/project.openshift.io/v1/projects: x509: certificate signed by unknown authority
       
       This means that we cannot validate the certificate in use by the
       master API server, so we cannot securely communicate with it.
       Connections could be intercepted and your credentials stolen.
       
       Since the server certificate we see when connecting is not validated
       by public certificate authorities (CAs), you probably need to specify a
       certificate from a private CA to validate the connection.
       
       Your config may be specifying the wrong CA cert, or none, or there
       could actually be a man-in-the-middle attempting to intercept your
       connection.  If you are unconcerned about any of this, you can add the
       --insecure-skip-tls-verify flag to bypass secure (TLS) verification,
       but this is risky and should not be necessary.
       ** Connections could be intercepted and your credentials stolen. **
       
[Note] Running diagnostic: ConfigContexts[default/172-17-0-1:8443/system:admin]
       Description: Validate client config context is complete and has connectivity
       
ERROR: [DCli0013 from diagnostic ConfigContexts@openshift/origin/pkg/oc/admin/diagnostics/diagnostics/client/config_contexts.go:285]
       For client config context 'default/172-17-0-1:8443/system:admin':
       The server URL is 'https://172.17.0.1:8443'
       The user authentication is 'system:admin/127-0-0-1:8443'
       The current project is 'default'
       (*errors.StatusError) projects.project.openshift.io is forbidden: User "system:anonymous" cannot list projects.project.openshift.io at the cluster scope: User "system:anonymous" cannot list all projects.project.openshift.io in the cluster
       
       This means that when we tried to make a request to the master API
       server, your kubeconfig did not present valid credentials to
       authenticate your client. Credentials generally consist of a client
       key/certificate or an access token. Your kubeconfig may not have
       presented any, or they may be invalid.
       
[Note] Running diagnostic: ConfigContexts[default/172-17-0-1-nip-io:8443/system:admin]
       Description: Validate client config context is complete and has connectivity
       
ERROR: [DCli0008 from diagnostic ConfigContexts@openshift/origin/pkg/oc/admin/diagnostics/diagnostics/client/config_contexts.go:285]
       For client config context 'default/172-17-0-1-nip-io:8443/system:admin':
       The server URL is 'https://172.17.0.1.nip.io:8443'
       The user authentication is 'system:admin/127-0-0-1:8443'
       The current project is 'default'
       (*url.Error) Get https://172.17.0.1.nip.io:8443/apis/project.openshift.io/v1/projects: x509: certificate is valid for kubernetes, kubernetes.default, kubernetes.default.svc, kubernetes.default.svc.cluster.local, localhost, openshift, openshift.default, openshift.default.svc, openshift.default.svc.cluster.local, 10.13.57.144, 10.192.208.144, 127.0.0.1, 172.17.0.1, 172.18.0.1, 172.30.0.1, not 172.17.0.1.nip.io
       
       This means that the certificate in use by the master API server
       (master) does not match the hostname by which you are addressing it:
         172.17.0.1.nip.io
       so a secure connection is not allowed. In theory, this *could* mean that
       someone is intercepting your connection and presenting a certificate
       that is valid but for a different server, which is why secure validation
       fails in this case.
       
       However, the most likely explanation is that the server certificate
       needs to be updated to include the name you are using to reach it.
       
       If the master API server is generating its own certificates (which
       is the default), then specifying the public master address in the
       master-config.yaml or with the --public-master flag is usually the easiest
       way to do this. If you need something more complicated (for instance,
       multiple public addresses for the API, or your own CA), then you will need
       to custom-generate the server certificate with the right names yourself.
       
       If you are unconcerned about any of this, you can add the
       --insecure-skip-tls-verify flag to bypass secure (TLS) verification,
       but this is risky and should not be necessary.
       ** Connections could be intercepted and your credentials stolen. **
       
[Note] Running diagnostic: ConfigContexts[/172-17-0-1:8443/system]
       Description: Validate client config context is complete and has connectivity
       
ERROR: [DCli0015 from diagnostic ConfigContexts@openshift/origin/pkg/oc/admin/diagnostics/diagnostics/client/config_contexts.go:285]
       The current client config context is '/172-17-0-1:8443/system':
       The server URL is 'https://172.17.0.1:8443'
       The user authentication is 'system/172-17-0-1:8443'
       The current project is 'default'
       (*errors.StatusError) Unauthorized
       Diagnostics does not have an explanation for what this means. Please report this error so one can be added.
       
[Note] Running diagnostic: ConfigContexts[/172-17-0-1:8443/developer]
       Description: Validate client config context is complete and has connectivity
       
ERROR: [DCli0015 from diagnostic ConfigContexts@openshift/origin/pkg/oc/admin/diagnostics/diagnostics/client/config_contexts.go:285]
       For client config context '/172-17-0-1:8443/developer':
       The server URL is 'https://172.17.0.1:8443'
       The user authentication is 'developer/172-17-0-1:8443'
       The current project is 'default'
       (*errors.StatusError) Unauthorized
       Diagnostics does not have an explanation for what this means. Please report this error so one can be added.
       
[Note] Running diagnostic: DiagnosticPod
       Description: Create a pod to run diagnostics from the application standpoint
       
ERROR: [DCli2001 from diagnostic DiagnosticPod@openshift/origin/pkg/oc/admin/diagnostics/diagnostics/client/run_diagnostics_pod.go:81]
       Creating diagnostic pod with image openshift/origin-deployer:v3.9.0-alpha.1 failed. Error: (*errors.errorString) an empty namespace may not be set during creation
       
[Note] Running diagnostic: NetworkCheck
       Description: Create a pod on all schedulable nodes and run network diagnostics from the application standpoint
       
ERROR: [DNet2001 from diagnostic NetworkCheck@openshift/origin/pkg/oc/admin/diagnostics/diagnostics/network/run_pod.go:84]
       Checking network plugin failed. Error: Unauthorized
       
[Note] Summary of diagnostics execution (version v3.9.0-alpha.1+a027e963-84):
[Note] Errors seen: 12

jmrodri commented Jan 10, 2018

Output of journalctl -e

Jan 10 09:41:08 speed3 dockerd-current[21270]: W0110 14:41:08.646423   23589 eviction_manager.go:332] eviction manager: attempting to reclaim imagefs                                                              
Jan 10 09:41:08 speed3 dockerd-current[21270]: I0110 14:41:08.646459   23589 helpers.go:1070] eviction manager: attempting to delete unused containers                                                             
Jan 10 09:41:08 speed3 dockerd-current[21270]: I0110 14:41:08.651090   23589 helpers.go:1080] eviction manager: attempting to delete unused images                                                                 
Jan 10 09:41:08 speed3 dockerd-current[21270]: I0110 14:41:08.654500   23589 image_gc_manager.go:350] [imageGCManager]: Removing image "sha256:69233f28c0b0095fe94109572bc4be8dabbcf2bf012f84ad8fb383da7f13091e" to
Jan 10 09:41:08 speed3 audit[21270]: VIRT_CONTROL pid=21270 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:container_runtime_t:s0 msg='reason=api op=remove user=? exe=? vm=? vm-pid=? auid=4294967295
Jan 10 09:41:08 speed3 dockerd-current[21270]: time="2018-01-10T09:41:08.656762788-05:00" level=error msg="Handler for DELETE /v1.26/images/sha256:69233f28c0b0095fe94109572bc4be8dabbcf2bf012f84ad8fb383da7f13091e
Jan 10 09:41:08 speed3 dockerd-current[21270]: time="2018-01-10T09:41:08.657031303-05:00" level=error msg="Handler for DELETE /v1.26/images/sha256:69233f28c0b0095fe94109572bc4be8dabbcf2bf012f84ad8fb383da7f13091e
Jan 10 09:41:08 speed3 dockerd-current[21270]: E0110 14:41:08.657420   23589 remote_image.go:130] RemoveImage "sha256:69233f28c0b0095fe94109572bc4be8dabbcf2bf012f84ad8fb383da7f13091e" from image service failed: 
Jan 10 09:41:08 speed3 dockerd-current[21270]: E0110 14:41:08.657447   23589 kuberuntime_image.go:126] Remove image "sha256:69233f28c0b0095fe94109572bc4be8dabbcf2bf012f84ad8fb383da7f13091e" failed: rpc error: co
Jan 10 09:41:08 speed3 dockerd-current[21270]: W0110 14:41:08.657470   23589 eviction_manager.go:435] eviction manager: unexpected error when attempting to reduce imagefs pressure: wanted to free 922337203685477
Jan 10 09:41:08 speed3 dockerd-current[21270]: I0110 14:41:08.657488   23589 eviction_manager.go:346] eviction manager: must evict pod(s) to reclaim imagefs                                                       
Jan 10 09:41:08 speed3 dockerd-current[21270]: E0110 14:41:08.657499   23589 eviction_manager.go:357] eviction manager: eviction thresholds have been met, but no pods are active to evict                         
Jan 10 09:41:18 speed3 dockerd-current[21270]: W0110 14:41:18.702763   23589 eviction_manager.go:332] eviction manager: attempting to reclaim imagefs                                                              
Jan 10 09:41:18 speed3 dockerd-current[21270]: I0110 14:41:18.702807   23589 helpers.go:1070] eviction manager: attempting to delete unused containers                                                             
Jan 10 09:41:18 speed3 dockerd-current[21270]: I0110 14:41:18.707062   23589 helpers.go:1080] eviction manager: attempting to delete unused images                                                                 
Jan 10 09:41:18 speed3 dockerd-current[21270]: I0110 14:41:18.710589   23589 image_gc_manager.go:350] [imageGCManager]: Removing image "sha256:69233f28c0b0095fe94109572bc4be8dabbcf2bf012f84ad8fb383da7f13091e" to
Jan 10 09:41:18 speed3 audit[21270]: VIRT_CONTROL pid=21270 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:container_runtime_t:s0 msg='vm-pid=? user=? hostname=? reason=api op=remove vm=? auid=42949
Jan 10 09:41:18 speed3 dockerd-current[21270]: time="2018-01-10T09:41:18.713113182-05:00" level=error msg="Handler for DELETE /v1.26/images/sha256:69233f28c0b0095fe94109572bc4be8dabbcf2bf012f84ad8fb383da7f13091e
Jan 10 09:41:18 speed3 dockerd-current[21270]: time="2018-01-10T09:41:18.713317896-05:00" level=error msg="Handler for DELETE /v1.26/images/sha256:69233f28c0b0095fe94109572bc4be8dabbcf2bf012f84ad8fb383da7f13091e
Jan 10 09:41:18 speed3 dockerd-current[21270]: E0110 14:41:18.713544   23589 remote_image.go:130] RemoveImage "sha256:69233f28c0b0095fe94109572bc4be8dabbcf2bf012f84ad8fb383da7f13091e" from image service failed: 
Jan 10 09:41:18 speed3 dockerd-current[21270]: E0110 14:41:18.713575   23589 kuberuntime_image.go:126] Remove image "sha256:69233f28c0b0095fe94109572bc4be8dabbcf2bf012f84ad8fb383da7f13091e" failed: rpc error: co
Jan 10 09:41:18 speed3 dockerd-current[21270]: W0110 14:41:18.713606   23589 eviction_manager.go:435] eviction manager: unexpected error when attempting to reduce imagefs pressure: wanted to free 922337203685477
Jan 10 09:41:18 speed3 dockerd-current[21270]: I0110 14:41:18.713623   23589 eviction_manager.go:346] eviction manager: must evict pod(s) to reclaim imagefs                                                       
Jan 10 09:41:18 speed3 dockerd-current[21270]: E0110 14:41:18.713634   23589 eviction_manager.go:357] eviction manager: eviction thresholds have been met, but no pods are active to evict    
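
(Side note: the journal lines above are truncated at the terminal width. When collecting these logs, something like the following avoids the truncation; --no-pager and -u are standard journalctl options, and the docker unit name is an assumption based on this setup:)

$ journalctl -u docker --no-pager > docker-journal.log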

jmrodri commented Jan 10, 2018

  • Fedora 27
$ uname -a
Linux speed3 4.14.11-300.fc27.x86_64 #1 SMP Wed Jan 3 13:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
  • docker version
$ rpm -q docker
docker-1.13.1-42.git4402c09.fc27.x86_64
  • Disk free space:
[jesusr@speed3 linux{master}]$ df -h                
Filesystem              Type      Size  Used Avail Use% Mounted on                                       
devtmpfs                devtmpfs  9.6G     0  9.6G   0% /dev                                             
tmpfs                   tmpfs     9.6G  100M  9.5G   2% /dev/shm                                         
tmpfs                   tmpfs     9.6G  2.3M  9.6G   1% /run                                             
tmpfs                   tmpfs     9.6G     0  9.6G   0% /sys/fs/cgroup                                   
/dev/mapper/fedora-root ext4      460G  388G   49G  89% /                                                
tmpfs                   tmpfs     9.6G  258M  9.3G   3% /tmp                                             
/dev/sda1               ext4      477M  184M  264M  42% /boot                                            
tmpfs                   tmpfs     2.0G   20K  2.0G   1% /run/user/42                                     
tmpfs                   tmpfs     2.0G  124K  2.0G   1% /run/user/1000                                   
/dev/mmcblk0p1          vfat      951M  4.0K  951M   1% /run/media/jesusr/SDCARD  

jmrodri commented Jan 10, 2018

The loglevel=10 setting yields:

I0110 09:19:09.200961    4755 webconsole.go:91] polling for web console server availability
I0110 09:19:09.201350    4755 round_trippers.go:417] curl -k -v -XGET  -H "Accept: application/json, */*" -H "User-Agent: oc/v1.9.0 (linux/amd64) kubernetes/3258431" https://127.0.0.1:8443/apis/extensions/v1beta1/namespaces/openshift-web-console/deployments/webconsole
I0110 09:19:09.207537    4755 round_trippers.go:436] GET https://127.0.0.1:8443/apis/extensions/v1beta1/namespaces/openshift-web-console/deployments/webconsole 200 OK in 6 milliseconds
I0110 09:19:09.207612    4755 round_trippers.go:442] Response Headers:
I0110 09:19:09.207651    4755 round_trippers.go:445]     Content-Length: 2518
I0110 09:19:09.207696    4755 round_trippers.go:445]     Date: Wed, 10 Jan 2018 14:19:09 GMT
I0110 09:19:09.207733    4755 round_trippers.go:445]     Cache-Control: no-store
I0110 09:19:09.207776    4755 round_trippers.go:445]     Content-Type: application/json
I0110 09:19:09.207963    4755 request.go:873] Response Body: {"kind":"Deployment","apiVersion":"extensions/v1beta1","metadata":{"name":"webconsole","namespace":"openshift-web-console","selfLink":"/apis/extensions/v1beta1/namespaces/openshift-web-console/deployments/webconsole","uid":"1430f0ba-f611-11e7-b9e6-c85b76145add","resourceVersion":"905","generation":1,"creationTimestamp":"2018-01-10T14:18:07Z","labels":{"app":"openshift-web-console","webconsole":"true"},"annotations":{"deployment.kubernetes.io/revision":"1"}},"spec":{"replicas":1,"selector":{"matchLabels":{"webconsole":"true"}},"template":{"metadata":{"name":"webconsole","creationTimestamp":null,"labels":{"webconsole":"true"}},"spec":{"volumes":[{"name":"serving-cert","secret":{"secretName":"webconsole-serving-cert","defaultMode":400}},{"name":"webconsole-config","configMap":{"name":"webconsole-config","defaultMode":440}}],"containers":[{"name":"webconsole","image":"openshift/origin-web-console:v3.9.0-alpha.1","command":["/usr/bin/origin-web-console","--audit-log-path=-","-v=0","--config=/var/webconsole-config/webconsole-config.yaml"],"ports":[{"containerPort":8443,"protocol":"TCP"}],"resources":{},"volumeMounts":[{"name":"serving-cert","mountPath":"/var/serving-cert"},{"name":"webconsole-config","mountPath":"/var/webconsole-config"}],"livenessProbe":{"httpGet":{"path":"/","port":8443,"scheme":"HTTPS"},"timeoutSeconds":1,"periodSeconds":10,"successThreshold":1,"failureThreshold":3},"readinessProbe":{"httpGet":{"path":"/healthz","port":8443,"scheme":"HTTPS"},"timeoutSeconds":1,"periodSeconds":10,"successThreshold":1,"failureThreshold":3},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent"}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"webconsole","serviceAccount":"webconsole","securityContext":{},"schedulerName":"default-scheduler"}},"strategy":{"type":"Recreate"},"revisionHistoryLimit":2,"progressDeadlineSeconds":600},"status":{"observedGeneration":1,"replicas":1,"updatedReplicas":1,"unavailableReplicas":1,"conditions":[{"type":"Available","status":"False","lastUpdateTime":"2018-01-10T14:18:09Z","lastTransitionTime":"2018-01-10T14:18:09Z","reason":"MinimumReplicasUnavailable","message":"Deployment does not have minimum availability."},{"type":"Progressing","status":"True","lastUpdateTime":"2018-01-10T14:18:09Z","lastTransitionTime":"2018-01-10T14:18:09Z","reason":"ReplicaSetUpdated","message":"ReplicaSet \"webconsole-6965b68585\" is progressing."}]}}
I0110 09:19:10.201028    4755 webconsole.go:91] polling for web console server availability
I0110 09:19:10.201488    4755 round_trippers.go:417] curl -k -v -XGET  -H "Accept: application/json, */*" -H "User-Agent: oc/v1.9.0 (linux/amd64) kubernetes/3258431" https://127.0.0.1:8443/apis/extensions/v1beta1/namespaces/openshift-web-console/deployments/webconsole
I0110 09:19:10.207924    4755 round_trippers.go:436] GET https://127.0.0.1:8443/apis/extensions/v1beta1/namespaces/openshift-web-console/deployments/webconsole 200 OK in 6 milliseconds
I0110 09:19:10.207995    4755 round_trippers.go:442] Response Headers:
I0110 09:19:10.208034    4755 round_trippers.go:445]     Cache-Control: no-store
I0110 09:19:10.208069    4755 round_trippers.go:445]     Content-Type: application/json
I0110 09:19:10.208104    4755 round_trippers.go:445]     Content-Length: 2518
I0110 09:19:10.208142    4755 round_trippers.go:445]     Date: Wed, 10 Jan 2018 14:19:10 GMT
I0110 09:19:10.208366    4755 request.go:873] Response Body: {"kind":"Deployment","apiVersion":"extensions/v1beta1","metadata":{"name":"webconsole","namespace":"openshift-web-console","selfLink":"/apis/extensions/v1beta1/namespaces/openshift-web-console/deployments/webconsole","uid":"1430f0ba-f611-11e7-b9e6-c85b76145add","resourceVersion":"905","generation":1,"creationTimestamp":"2018-01-10T14:18:07Z","labels":{"app":"openshift-web-console","webconsole":"true"},"annotations":{"deployment.kubernetes.io/revision":"1"}},"spec":{"replicas":1,"selector":{"matchLabels":{"webconsole":"true"}},"template":{"metadata":{"name":"webconsole","creationTimestamp":null,"labels":{"webconsole":"true"}},"spec":{"volumes":[{"name":"serving-cert","secret":{"secretName":"webconsole-serving-cert","defaultMode":400}},{"name":"webconsole-config","configMap":{"name":"webconsole-config","defaultMode":440}}],"containers":[{"name":"webconsole","image":"openshift/origin-web-console:v3.9.0-alpha.1","command":["/usr/bin/origin-web-console","--audit-log-path=-","-v=0","--config=/var/webconsole-config/webconsole-config.yaml"],"ports":[{"containerPort":8443,"protocol":"TCP"}],"resources":{},"volumeMounts":[{"name":"serving-cert","mountPath":"/var/serving-cert"},{"name":"webconsole-config","mountPath":"/var/webconsole-config"}],"livenessProbe":{"httpGet":{"path":"/","port":8443,"scheme":"HTTPS"},"timeoutSeconds":1,"periodSeconds":10,"successThreshold":1,"failureThreshold":3},"readinessProbe":{"httpGet":{"path":"/healthz","port":8443,"scheme":"HTTPS"},"timeoutSeconds":1,"periodSeconds":10,"successThreshold":1,"failureThreshold":3},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent"}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"webconsole","serviceAccount":"webconsole","securityContext":{},"schedulerName":"default-scheduler"}},"strategy":{"type":"Recreate"},"revisionHistoryLimit":2,"progressDeadlineSeconds":600},"status":{"observedGeneration":1,"replicas":1,"updatedReplicas":1,"unavailableReplicas":1,"conditions":[{"type":"Available","status":"False","lastUpdateTime":"2018-01-10T14:18:09Z","lastTransitionTime":"2018-01-10T14:18:09Z","reason":"MinimumReplicasUnavailable","message":"Deployment does not have minimum availability."},{"type":"Progressing","status":"True","lastUpdateTime":"2018-01-10T14:18:09Z","lastTransitionTime":"2018-01-10T14:18:09Z","reason":"ReplicaSetUpdated","message":"ReplicaSet \"webconsole-6965b68585\" is progressing."}]}}
I0110 09:19:11.200843    4755 webconsole.go:91] polling for web console server availability
I0110 09:19:11.200975    4755 round_trippers.go:417] curl -k -v -XGET  -H "Accept: application/json, */*" -H "User-Agent: oc/v1.9.0 (linux/amd64) kubernetes/3258431" https://127.0.0.1:8443/apis/extensions/v1beta1/namespaces/openshift-web-console/deployments/webconsole
I0110 09:19:11.202748    4755 round_trippers.go:436] GET https://127.0.0.1:8443/apis/extensions/v1beta1/namespaces/openshift-web-console/deployments/webconsole 200 OK in 1 milliseconds
I0110 09:19:11.202772    4755 round_trippers.go:442] Response Headers:
I0110 09:19:11.202779    4755 round_trippers.go:445]     Content-Length: 2518
I0110 09:19:11.202784    4755 round_trippers.go:445]     Date: Wed, 10 Jan 2018 14:19:11 GMT
I0110 09:19:11.202788    4755 round_trippers.go:445]     Cache-Control: no-store
I0110 09:19:11.202792    4755 round_trippers.go:445]     Content-Type: application/json
I0110 09:19:11.202824    4755 request.go:873] Response Body: {"kind":"Deployment","apiVersion":"extensions/v1beta1","metadata":{"name":"webconsole","namespace":"openshift-web-console","selfLink":"/apis/extensions/v1beta1/namespaces/openshift-web-console/deployments/webconsole","uid":"1430f0ba-f611-11e7-b9e6-c85b76145add","resourceVersion":"905","generation":1,"creationTimestamp":"2018-01-10T14:18:07Z","labels":{"app":"openshift-web-console","webconsole":"true"},"annotations":{"deployment.kubernetes.io/revision":"1"}},"spec":{"replicas":1,"selector":{"matchLabels":{"webconsole":"true"}},"template":{"metadata":{"name":"webconsole","creationTimestamp":null,"labels":{"webconsole":"true"}},"spec":{"volumes":[{"name":"serving-cert","secret":{"secretName":"webconsole-serving-cert","defaultMode":400}},{"name":"webconsole-config","configMap":{"name":"webconsole-config","defaultMode":440}}],"containers":[{"name":"webconsole","image":"openshift/origin-web-console:v3.9.0-alpha.1","command":["/usr/bin/origin-web-console","--audit-log-path=-","-v=0","--config=/var/webconsole-config/webconsole-config.yaml"],"ports":[{"containerPort":8443,"protocol":"TCP"}],"resources":{},"volumeMounts":[{"name":"serving-cert","mountPath":"/var/serving-cert"},{"name":"webconsole-config","mountPath":"/var/webconsole-config"}],"livenessProbe":{"httpGet":{"path":"/","port":8443,"scheme":"HTTPS"},"timeoutSeconds":1,"periodSeconds":10,"successThreshold":1,"failureThreshold":3},"readinessProbe":{"httpGet":{"path":"/healthz","port":8443,"scheme":"HTTPS"},"timeoutSeconds":1,"periodSeconds":10,"successThreshold":1,"failureThreshold":3},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent"}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"webconsole","serviceAccount":"webconsole","securityContext":{},"schedulerName":"default-scheduler"}},"strategy":{"type":"Recreate"},"revisionHistoryLimit":2,"progressDeadlineSeconds":600},"status":{"observedGeneration":1,"replicas":1,"updatedReplicas":1,"unavailableReplicas":1,"conditions":[{"type":"Available","status":"False","lastUpdateTime":"2018-01-10T14:18:09Z","lastTransitionTime":"2018-01-10T14:18:09Z","reason":"MinimumReplicasUnavailable","message":"Deployment does not have minimum availability."},{"type":"Progressing","status":"True","lastUpdateTime":"2018-01-10T14:18:09Z","lastTransitionTime":"2018-01-10T14:18:09Z","reason":"ReplicaSetUpdated","message":"ReplicaSet \"webconsole-6965b68585\" is progressing."}]}}

jmrodri commented Jan 10, 2018

[jesusr@speed3 ansible-service-broker{async-bind}]$ docker images                                        
REPOSITORY                               TAG                 IMAGE ID            CREATED             SIZE
docker.io/openshift/origin-web-console   v3.9.0-alpha.1      4053da2957e0        10 days ago         596 MB
docker.io/openshift/origin               v3.9.0-alpha.1      69233f28c0b0        10 days ago         1.28 GB


# run oc cluster up --service-catalog=true --loglevel=10

# the origin-web-console image seems to go away

[jesusr@speed3 ansible-service-broker{async-bind}]$ docker images                                        
REPOSITORY                   TAG                 IMAGE ID            CREATED             SIZE
docker.io/openshift/origin   v3.9.0-alpha.1      69233f28c0b0        10 days ago         1.28 GB


[jesusr@speed3 catasb{master}]$ docker ps -a
CONTAINER ID        IMAGE                             COMMAND                  CREATED             STATUS              PORTS               NAMES
93860a8f9b2b        openshift/origin:v3.9.0-alpha.1   "/usr/bin/openshif..."   2 minutes ago       Up 2 minutes                            origin

# log output shows this now

I0110 09:28:39.928736   11692 round_trippers.go:436] GET https://127.0.0.1:8443/apis/extensions/v1beta1/namespaces/openshift-web-console/deployments/webconsole 200 OK in 5 milliseconds
I0110 09:28:39.928799   11692 round_trippers.go:442] Response Headers:
I0110 09:28:39.928843   11692 round_trippers.go:445]     Cache-Control: no-store
I0110 09:28:39.928880   11692 round_trippers.go:445]     Content-Type: application/json
I0110 09:28:39.928924   11692 round_trippers.go:445]     Content-Length: 2518
I0110 09:28:39.928953   11692 round_trippers.go:445]     Date: Wed, 10 Jan 2018 14:28:39 GMT
I0110 09:28:39.929121   11692 request.go:873] Response Body: {"kind":"Deployment","apiVersion":"extensions/v1beta1","metadata":{"name":"webconsole","namespace":"openshift-web-console","selfLink":"/apis/extensions/v1beta1/namespaces/openshift-web-console/deployments/webconsole","uid":"3cdbe8a0-f612-11e7-beca-c85b76145add","resourceVersion":"884","generation":1,"creationTimestamp":"2018-01-10T14:26:24Z","labels":{"app":"openshift-web-console","webconsole":"true"},"annotations":{"deployment.kubernetes.io/revision":"1"}},"spec":{"replicas":1,"selector":{"matchLabels":{"webconsole":"true"}},"template":{"metadata":{"name":"webconsole","creationTimestamp":null,"labels":{"webconsole":"true"}},"spec":{"volumes":[{"name":"serving-cert","secret":{"secretName":"webconsole-serving-cert","defaultMode":400}},{"name":"webconsole-config","configMap":{"name":"webconsole-config","defaultMode":440}}],"containers":[{"name":"webconsole","image":"openshift/origin-web-console:v3.9.0-alpha.1","command":["/usr/bin/origin-web-console","--audit-log-path=-","-v=0","--config=/var/webconsole-config/webconsole-config.yaml"],"ports":[{"containerPort":8443,"protocol":"TCP"}],"resources":{},"volumeMounts":[{"name":"serving-cert","mountPath":"/var/serving-cert"},{"name":"webconsole-config","mountPath":"/var/webconsole-config"}],"livenessProbe":{"httpGet":{"path":"/","port":8443,"scheme":"HTTPS"},"timeoutSeconds":1,"periodSeconds":10,"successThreshold":1,"failureThreshold":3},"readinessProbe":{"httpGet":{"path":"/healthz","port":8443,"scheme":"HTTPS"},"timeoutSeconds":1,"periodSeconds":10,"successThreshold":1,"failureThreshold":3},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent"}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"webconsole","serviceAccount":"webconsole","securityContext":{},"schedulerName":"default-scheduler"}},"strategy":{"type":"Recreate"},"revisionHistoryLimit":2,"progressDeadlineSeconds":600},"status":{"observedGeneration":1,"replicas":1,"updatedReplicas":1,"unavailableReplicas":1,"conditions":[{"type":"Available","status":"False","lastUpdateTime":"2018-01-10T14:26:26Z","lastTransitionTime":"2018-01-10T14:26:26Z","reason":"MinimumReplicasUnavailable","message":"Deployment does not have minimum availability."},{"type":"Progressing","status":"True","lastUpdateTime":"2018-01-10T14:26:26Z","lastTransitionTime":"2018-01-10T14:26:26Z","reason":"ReplicaSetUpdated","message":"ReplicaSet \"webconsole-6965b68585\" is progressing."}]}}

jmrodri commented Jan 10, 2018

Tried to log in as system:admin to diagnose, but that didn't work, presumably because the cluster hasn't started:

[jesusr@speed3 catasb{master}]$ oc login -u system:admin
Authentication required for https://172.17.0.1:8443 (openshift)
Username: system:admin
Password: 
error: username system:admin is invalid for basic auth - verify you have provided the correct host and port and that the server is currently running.
error: username system:admin is invalid for basic auth

jmrodri commented Jan 10, 2018

I also tried the following:

  • restarted docker
  • removed all docker images, started oc cluster up
  • removed all docker images, docker pulled the docker.io/openshift/origin-web-console:v3.9.0-alpha.1 then started oc cluster up

All of them resulted in the same failure.

jmrodri commented Jan 10, 2018

md5sum ~/bin/oc
45b52a58fe1d96dd4a09950217879d65  /home/jesusr/bin/oc

mfojtik commented Jan 11, 2018

@sjenning #18046 (comment) this looks weird. Why would the eviction manager delete the web console image?

@jwforres something you experienced already with cluster up?

jwforres (Member) commented:

@spadgett since I think he was part of the discussion around this yesterday

sjenning commented Jan 11, 2018

@mfojtik something does seem weird there. The amount it is trying to free, 922337203685477… (truncated in the log), is really 2^63-1.

The only change to the eviction code since v3.9.0-alpha.0 was the 1.9.0-beta1 kube rebase #17576.
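
For reference, a quick arithmetic check (just a sketch; it assumes bc is installed, which is not part of this report):

$ echo '2^63 - 1' | bc
9223372036854775807

which matches the truncated "wanted to free 922337203685477…" prefix in the journal above, i.e. the eviction manager was asked to free the maximum possible amount (math.MaxInt64 bytes).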

sjenning (Contributor) commented:

Upstream issue mentioning eviction trying to free all the memory in the world
kubernetes/kubernetes#51018

sjenning self-assigned this Jan 11, 2018

sjenning (Contributor) commented:

Turns out that immediately reclaiming all possible images when disk pressure is hit is the expected behaviour:

https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/images/image_gc_manager.go#L302-L304

The pod is getting evicted due to disk pressure. Can you provide the output of df -h /var/lib/docker?

pweil- added the kind/bug, priority/P2, and component/kubernetes labels Jan 15, 2018
jmrodri commented Jan 16, 2018

@sjenning interesting:

$ df -h /var/lib/docker
Filesystem              Type  Size  Used Avail Use% Mounted on
/dev/mapper/fedora-root ext4  460G  393G   44G  90% /

it can be seen in this comment as well: #18046 (comment)

jmrodri commented Jan 16, 2018

[root@speed3 docker]# du -sh .
21G	.
[root@speed3 docker]# pwd
/var/lib/docker

sjenning (Contributor) commented:

@jmrodri ah ok. So you are probably hitting imageGC and eviction.

https://github.com/openshift/origin/blob/master/vendor/k8s.io/kubernetes/pkg/kubelet/apis/kubeletconfig/v1alpha1/defaults.go#L198-L205

You can either:

  1. free up disk space so that you get below 85% disk usage
  2. adjust the --eviction-hard thresholds (nodefs.available and imagefs.available) to absolute values like 2Gi; because your root disk is so large, the default percentages start reclaiming while more than 40GB of disk space is still free (see the sketch after this list)

https://kubernetes.io/docs/tasks/administer-cluster/out-of-resource/#hard-eviction-thresholds

Unfortunately, I don't think oc cluster up lets you specify custom node options like --eviction-hard, so your only option at the moment might be to free up some disk space.
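
For reference, a minimal sketch of what absolute thresholds would look like. On a regular (non-cluster-up) node they would be passed to the kubelet, for example via kubeletArguments in node-config.yaml; the exact wiring is an assumption and, as noted above, oc cluster up does not expose it:

--eviction-hard=memory.available<100Mi,nodefs.available<2Gi,imagefs.available<2Gi

This replaces the default percentage-based thresholds (imagefs.available<15%, i.e. the 85% usage mark mentioned above) with fixed 2Gi values.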

jmrodri commented Jan 17, 2018

@sjenning deleted a bunch of stuff, got the drive down to 61% and the cluster came up just fine.
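
(For anyone hitting the same thing: one way to reclaim Docker disk space is sketched below; the exact commands are an assumption, since the comment above doesn't say what was deleted.)

$ docker system prune -a    # remove stopped containers, unused networks and all unused images
$ df -h /var/lib/docker     # re-check usage against the ~85% threshold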

sjenning (Contributor) commented:

@jmrodri glad to hear it. I'm going to close this then. If you want some way to pass node options through oc cluster up, open a feature issue for that. Or, even better, implement it and open a PR :)

wgbeckmann commented Jun 10, 2018

I have the same problem, but with 208GB (99%) free, so do you have another idea?
It is an absolutely new and clean Fedora Workstation installation.

marusak commented Jun 12, 2018

I also have the same problem.

df -h /var/lib/docker
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/fedora-root   50G   20G   27G  43% /

oc cluster --loglevel=10 up ends up with a whole bunch of these messages:

I0612 10:55:43.752285   18480 round_trippers.go:436] GET https://127.0.0.1:8443/apis/extensions/v1beta1/namespaces/openshift-web-console/deployments/webconsole 200 OK in 44 milliseconds
I0612 10:55:43.752312   18480 round_trippers.go:442] Response Headers:
I0612 10:55:43.752318   18480 round_trippers.go:445]     Content-Length: 2936
I0612 10:55:43.752323   18480 round_trippers.go:445]     Date: Tue, 12 Jun 2018 08:55:43 GMT
I0612 10:55:43.752329   18480 round_trippers.go:445]     Cache-Control: no-store
I0612 10:55:43.752333   18480 round_trippers.go:445]     Content-Type: application/json
I0612 10:55:43.752378   18480 request.go:874] Response Body: {"kind":"Deployment","apiVersion":"extensions/v1beta1","metadata":{"name":"webconsole","namespace":"openshift-web-console","selfLink":"/apis/extensions/v1beta1/namespaces/openshift-web-console/deployments/webconsole","uid":"5eff31c4-6e1e-11e8-a117-54e1ad49a388","resourceVersion":"708","generation":1,"creationTimestamp":"2018-06-12T08:55:35Z","labels":{"app":"openshift-web-console","webconsole":"true"},"annotations":{"deployment.kubernetes.io/revision":"1"}},"spec":{"replicas":1,"selector":{"matchLabels":{"app":"openshift-web-console","webconsole":"true"}},"template":{"metadata":{"name":"webconsole","creationTimestamp":null,"labels":{"app":"openshift-web-console","webconsole":"true"}},"spec":{"volumes":[{"name":"serving-cert","secret":{"secretName":"webconsole-serving-cert","defaultMode":400}},{"name":"webconsole-config","configMap":{"name":"webconsole-config","defaultMode":440}}],"containers":[{"name":"webconsole","image":"openshift/origin-web-console:v3.9.0","command":["/usr/bin/origin-web-console","--audit-log-path=-","-v=0","--config=/var/webconsole-config/webconsole-config.yaml"],"ports":[{"containerPort":8443,"protocol":"TCP"}],"resources":{"requests":{"cpu":"100m","memory":"100Mi"}},"volumeMounts":[{"name":"serving-cert","mountPath":"/var/serving-cert"},{"name":"webconsole-config","mountPath":"/var/webconsole-config"}],"livenessProbe":{"exec":{"command":["/bin/sh","-i","-c","if [[ ! -f /tmp/webconsole-config.hash ]]; then \\\n  md5sum /var/webconsole-config/webconsole-config.yaml \u003e /tmp/webconsole-config.hash; \\\nelif [[ $(md5sum /var/webconsole-config/webconsole-config.yaml) != $(cat /tmp/webconsole-config.hash) ]]; then \\\n  exit 1; \\\nfi \u0026\u0026 curl -k -f https://0.0.0.0:8443/console/"]},"timeoutSeconds":1,"periodSeconds":10,"successThreshold":1,"failureThreshold":3},"readinessProbe":{"httpGet":{"path":"/healthz","port":8443,"scheme":"HTTPS"},"timeoutSeconds":1,"periodSeconds":10,"successThreshold":1,"failureThreshold":3},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent"}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"webconsole","serviceAccount":"webconsole","securityContext":{},"schedulerName":"default-scheduler"}},"strategy":{"type":"Recreate"},"revisionHistoryLimit":2,"progressDeadlineSeconds":600},"status":{"observedGeneration":1,"replicas":1,"updatedReplicas":1,"unavailableReplicas":1,"conditions":[{"type":"Available","status":"False","lastUpdateTime":"2018-06-12T08:55:37Z","lastTransitionTime":"2018-06-12T08:55:37Z","reason":"MinimumReplicasUnavailable","message":"Deployment does not have minimum availability."},{"type":"Progressing","status":"True","lastUpdateTime":"2018-06-12T08:55:38Z","lastTransitionTime":"2018-06-12T08:55:37Z","reason":"ReplicaSetUpdated","message":"ReplicaSet \"webconsole-7dfbffd44d\" is progressing."}]}}

and after around 10 minutes it ends with Error: failed to start the web console server: timed out waiting for the condition

Using Fedora 28 with the newest updates (origin-clients-3.9.0-2.fc28)

marusak commented Jun 12, 2018

It seems to be related to the version of docker.
Docker version 2:1.13.1-51.git4032bd5.fc28 (with docker-common and docker-rhel-push-plugin of the same version) does not work, but version docker-1.13.1-51.git4032bd5.fc28.x86_64 works like a charm.
@wgbeckmann you can try downgrading docker; maybe it will help you as well :) (the credit goes to @mfojtik)
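
A hedged sketch of how one might do the downgrade on Fedora (dnf downgrade and --showduplicates are standard dnf options; which build to target is an assumption, see the follow-up comments about the exact working version):

$ dnf list --showduplicates docker    # list the versions available in the enabled repos
$ sudo dnf downgrade docker           # step back to the previous available version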

wgbeckmann commented:

You are right. It works after a downgrade.
Thanks!

alauzon commented Aug 21, 2019

@marusak, I have the same problem too. I am currently running docker-1.13.1-102.git7f2769b.el7.centos.x86_64. In your last comment you specified a version that does not work and another one that does, but you put the same version for both. So which version was working, exactly?

marusak commented Aug 22, 2019

In your last comment you specified a version that does not work and another one that does, but you put the same version for both. So which version was working, exactly?

Ah, that is a typo. I am sorry, but I honestly don't remember anymore. I guess it was something around 1.13.1-51, maybe something like -50 or -49ish.
