libvirt: Unable to access web console #1007
Duplicate of #411. |
@crawford: Closing this issue. |
#411 was closed, since AWS works. Reopening for libvirt. |
Docs in flight with #1371 |
Hi, is this working? #1371 Best regards, |
90b0d45 only documents a workaround, unfortunately. /reopen |
@zeenix: Reopened this issue. |
Has anyone had luck with the workaround posted in 90b0d45 recently? My libvirt cluster does not bring up the console operator with or without the documented workaround. |
I tried setting the OAuth hostname statically, without wildcards, in my dnsmasq config, and I'm still getting OAuth console errors.
dnsmasq config
Sanity check that the hostname is resolving to the proper node IP
Output of the crashed openshift-console pod logs
Am I missing something? |
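For anyone comparing notes, here is a minimal, illustrative sketch of the kind of static entry and sanity check being described above. The hostname, IP, and file path are examples, not taken from this cluster:
```
# /etc/NetworkManager/dnsmasq.d/openshift.conf (example static entry, no wildcard):
#   address=/oauth-openshift.apps.test1.tt.testing/192.168.126.51
sudo systemctl reload NetworkManager
# The name should now resolve to the worker node's IP:
dig +short oauth-openshift.apps.test1.tt.testing
```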
I just did and except for the usual timeout issue, the cluster came up all good afaict. |
…ole issue. Currently a cluster created by libvirt is not able to resolve the auth route, and because of that the console doesn't come up. This troubleshooting doc entry directs users to make some modifications before running the cluster so that the auth route can be resolved by the cluster. Fix openshift#1007
/priority important-longterm |
@zeenix: GitHub didn't allow me to assign the following users: cfergeau. Note that only openshift members and repo collaborators can be assigned, and that issues/PRs can only have 10 assignees at the same time. |
Hi. I did the same but the error still persists. |
Here's a workaround: #1648 (comment) |
Last week, while trying to do some basic verification, I ran into an issue where the workaround listed in the installer troubleshooting doc wasn't working. We figured out it was because I had spun up a cluster with three workers, but the ingress controller has 2 set in its replica set, so neither of those pods landed on the .51 worker -- and we saw the same symptoms as if no workaround had been applied. It doesn't look like there's a way to do wildcards and have multiple IPs for a host entry; dnsmasq seems to take the last entry in a file as the IP instead of doing any kind of round-robin. Any suggestions? Or do we just need to edit the manifest for the ingress operator to create 3 replicas? |
@clnperez I'm running into the same issue. Did you manage to find a solution? |
@marshallford no, nothing other than spinning up that 3rd replica for the ingress. |
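In case it is useful to others hitting this: one way to do what is described above, without editing manifests by hand, is to scale the default ingress controller so a router pod lands on every worker. This is a hedged sketch; the replica count is illustrative:
```
oc patch ingresscontroller/default -n openshift-ingress-operator \
  --type=merge -p '{"spec":{"replicas": 3}}'
# Confirm the router pods are spread across the workers:
oc -n openshift-ingress get pods -o wide
```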
Since libvirt 5.6.0, there is an option to pass dnsmasq options through the libvirt network [1]. This is useful because:
- it solves [2] through the installer itself, without the workaround of setting the dnsmasq options manually, and it also enables easy multi-cluster deploys on a system by just specifying the domain mapping in the install-config
- it can also be used to point to a load balancer in cases like the multi-arch CI/CD systems, where multiple clusters run on each system behind a load balancer
Note that there is also an option to just inject an XSLT, but that cannot be done after the network is created, and this method is much cleaner since the terraform provider itself supports the option.
[1] https://libvirt.org/formatnetwork.html#elementsNamespaces
[2] openshift#1007
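For readers unfamiliar with the libvirt feature referenced here, the sketch below shows roughly what extra dnsmasq options look like in a libvirt network definition, based on the libvirt documentation linked above. It is an illustration, not the installer's generated XML; the network name `tt0` and the address value are examples.
```
# Dump the existing network, add the dnsmasq namespace and options by hand,
# then redefine and restart it (requires libvirt >= 5.6.0).
virsh net-dumpxml tt0 > tt0.xml

# In tt0.xml, the root element declares the dnsmasq namespace and carries the
# extra option, roughly:
#   <network xmlns:dnsmasq='http://libvirt.org/schemas/network/dnsmasq/1.0'>
#     ...
#     <dnsmasq:options>
#       <dnsmasq:option value='address=/.apps.tt.testing/192.168.126.51'/>
#     </dnsmasq:options>
#   </network>

virsh net-define tt0.xml
virsh net-destroy tt0 && virsh net-start tt0
```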
…config Since libvirt 5.6.0, there is an option to pass in dnsmasq options through the libvirt network [1]. Leveraging this would be useful for the multi-arch automation, as we have a hacky workaround today to inject host records [2], which waits for the libvirt network to be created and then updates it with the records (which is needed in libvirt because of [3]). With this dnsmasq option, which can be specified in the libvirt network at creation time, we can point to the .1 address and have a load balancer forward the traffic, which would be much cleaner. With this change, the option can be specified through the install-config YAML in the network section as pairs of option names and values. An example:
```
platform:
  libvirt:
    network:
      dnsmasqoptions:
        address: "/.apps.tt.testing/192.168.126.51"
      if: tt0
```
The terraform provider supports rendering these options through a datasource and injecting them into the network XML. Since this config is optional, not specifying it will continue to work as before without issues.
[1] https://libvirt.org/formatnetwork.html#elementsNamespaces
[2] https://github.com/openshift/release/blob/master/ci-operator/templates/openshift/installer/cluster-launch-installer-remote-libvirt-e2e.yaml#L498
[3] openshift#1007
…config Since libvirt 5.6.0, there is an option to pass in dnsmasq options through the libvirt network [1]. This addresses the following problems:
- it eliminates the need for hacking routes in the cluster (the workaround mentioned in [3]) so that libvirt's dnsmasq does not manage the domain (and so requests from inside the cluster go up the chain to the host itself).
- it eliminates the hacky workaround used in the multi-arch CI automation to inject `*.apps` entries in the libvirt network that point to a single worker node [2]. Instead of waiting for the libvirt networks to come up and updating the entries, we can set this before the installation itself through the install config.
- with the above-mentioned workaround, having multiple worker nodes becomes problematic when running upgrade tests: routing to just one worker node would fail the upgrade when that worker node is down. With this change, we can now point to the .1 address and have a load balancer forward traffic to any worker node.
With this change, the option can be specified through the install-config YAML in the network section as pairs of option names and values. An example:
```
platform:
  libvirt:
    network:
      dnsmasqoptions:
        address: "/.apps.tt.testing/192.168.126.51"
      if: tt0
```
The terraform provider supports rendering these options through a datasource and injecting them into the network XML. Since this config is optional, not specifying it will continue to work as before without issues.
[1] https://libvirt.org/formatnetwork.html#elementsNamespaces
[2] https://github.com/openshift/release/blob/master/ci-operator/templates/openshift/installer/cluster-launch-installer-remote-libvirt-e2e.yaml#L532-L554
[3] openshift#1007
…config Since libvirt 5.6.0, there is an option to pass in dnsmasq options through the libvirt network [1]. This addresses the following problems:
- it eliminates the need for hacking routes in the cluster (the workaround mentioned in [3]) so that libvirt's dnsmasq does not manage the domain (and so requests from inside the cluster go up the chain to the host itself).
- it eliminates the hacky workaround used in the multi-arch CI automation to inject `*.apps` entries in the libvirt network that point to a single worker node [2]. Instead of waiting for the libvirt networks to come up and updating the entries, we can set this before the installation itself through the install config.
- with the above-mentioned workaround, having multiple worker nodes becomes problematic when running upgrade tests: routing to just one worker node would fail the upgrade when that worker node is down. With this change, we can now point to the .1 address and have a load balancer forward traffic to any worker node.
With this change, the option can be specified through the install-config YAML in the network section as pairs of option names and values. An example:
```
platform:
  libvirt:
    network:
      dnsmasqOptions:
        address: "/.apps.tt.testing/192.168.126.51"
      if: tt0
```
The terraform provider supports rendering these options through a datasource and injecting them into the network XML. Since this config is optional, not specifying it will continue to work as before without issues.
[1] https://libvirt.org/formatnetwork.html#elementsNamespaces
[2] https://github.com/openshift/release/blob/master/ci-operator/templates/openshift/installer/cluster-launch-installer-remote-libvirt-e2e.yaml#L532-L554
[3] openshift#1007
…config Since libvirt 5.6.0, there is an option to pass in dnsmasq options through the libvirt network [1]. This addresses the following problems:
- it eliminates the need for hacking routes in the cluster (the workaround mentioned in [3]) so that libvirt's dnsmasq does not manage the domain (and so requests from inside the cluster go up the chain to the host itself).
- it eliminates the hacky workaround used in the multi-arch CI automation to inject `*.apps` entries in the libvirt network that point to a single worker node [2]. Instead of waiting for the libvirt networks to come up and updating the entries, we can set this before the installation itself through the install config.
- with the above-mentioned workaround, having multiple worker nodes becomes problematic when running upgrade tests: routing to just one worker node would fail the upgrade when that worker node is down. With this change, we can now point to the .1 address and have a load balancer forward traffic to any worker node.
With this change, the option can be specified through the install-config YAML in the network section as pairs of option names and values. An example:
```
platform:
  libvirt:
    network:
      dnsmasqOptions:
      - name: "address"
        value: "/.apps.tt.testing/192.168.126.51"
      if: tt0
```
The terraform provider supports rendering these options through a datasource and injecting them into the network XML. Since this config is optional, not specifying it will continue to work as before without issues.
[1] https://libvirt.org/formatnetwork.html#elementsNamespaces
[2] https://github.com/openshift/release/blob/master/ci-operator/templates/openshift/installer/cluster-launch-installer-remote-libvirt-e2e.yaml#L532-L554
[3] openshift#1007
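If it helps anyone trying the above out: a quick, hedged way to check that the extra option actually reached dnsmasq is to look at the configuration libvirt generates for the cluster network. The grep pattern is illustrative:
```
# libvirt writes a dnsmasq config per network under /var/lib/libvirt/dnsmasq/;
# the injected option should show up there once the network is (re)created.
sudo grep -R 'address=/.apps' /var/lib/libvirt/dnsmasq/
```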
Issues go stale after 90d of inactivity. Mark the issue as fresh by commenting /remove-lifecycle stale. If this issue is safe to close now, please do so with /close. /lifecycle stale |
Stale issues rot after 30d of inactivity. Mark the issue as fresh by commenting /remove-lifecycle rotten. If this issue is safe to close now, please do so with /close. /lifecycle rotten |
Rotten issues close after 30d of inactivity. Reopen the issue by commenting /reopen. /close |
@openshift-bot: Closing this issue. |
Version
(compiled from master)
Platform (aws|libvirt|openstack):
libvirt
What happened?
I'm trying to install OpenShift 4 using this installer, and it seems that everything went OK. I've done all the steps described in here. The installation was fine and I was able to log in using `oc` with the credentials from the installation output, but I'm not able to access the web console. Looking at the `openshift-console` project, everything seems OK:
OUTPUT
The pods are running, and the service and route are up, but accessing https://console-openshift-console.apps.test1.tt.testing in a browser says it couldn't resolve the IP address.
As part of the setup I've configured dnsmasq as it was described in the libvirt guide.
For example, `ping test1-api.tt.testing` works as expected, but `ping console-openshift-console.apps.test1.tt.testing` throws:
What you expected to happen?
Web console to be accessible.
How to reproduce it (as minimally and precisely as possible)?
Follow https://github.com/openshift/installer/blob/master/docs/dev/libvirt-howto.md (my host machine is Fedora 29)
INSTALLATION OUTPUT