console 403: no RBAC policy matched for system:anonymous #411
What is the JSON output? Things have been exciting since #330 landed, so there are many things that could be going wrong ;). Also, we have a nice issue template ;). Even if you push your issues with …
```json
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "forbidden: User \"system:anonymous\" cannot get path \"/console\": no RBAC policy matched",
  "reason": "Forbidden",
  "details": {},
  "code": 403
}
```

Yeah, it wasn't working last week either. Not sure if the output was the same, but I never got a console.
We think this is openshift/origin#20983.
Ran into the same problem. Platform: libvirt

```json
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "forbidden: User \"system:anonymous\" cannot get path \"/console\": no RBAC policy matched",
  "reason": "Forbidden",
  "details": {},
  "code": 403
}
```
Was the web console replaced by something?
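The 403 payloads above can be told apart from ordinary permission errors mechanically. A minimal local sketch (the JSON literal is copied from the comments above; the interpretation comment is my own): the tell-tale is `system:anonymous` in the message, which means the request reached the API server with no credentials attached at all.

```shell
# The Status payload reported above, as a shell string literal.
response='{"kind":"Status","status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/console\": no RBAC policy matched","reason":"Forbidden","code":403}'

# If the rejected user is system:anonymous, authentication never happened;
# the 403 is the API server refusing an unauthenticated request, not a
# console-specific permission problem.
if printf '%s' "$response" | grep -q 'system:anonymous'; then
  echo "unauthenticated request: credentials were never sent"
fi
```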
I was getting the same JSON issue and just ran the command below. Now I am able to access it.
I am getting the same error today.

```
# kubectl get --all-namespaces pods
NAMESPACE NAME READY STATUS RESTARTS AGE
default registry-6fcb8b7789-4x72w 1/1 Running 0 20m
kube-system kube-apiserver-krqjc 1/1 Running 0 41m
kube-system kube-controller-manager-b6985978d-zxhm2 1/1 Running 0 41m
kube-system kube-core-operator-7f4d6b8dcf-hdfgw 1/1 Running 0 32m
kube-system kube-dns-787c975867-kz8np 3/3 Running 0 41m
kube-system kube-flannel-h2kqm 2/2 Running 5 29m
kube-system kube-flannel-lt5dd 2/2 Running 0 36m
kube-system kube-flannel-t4l49 2/2 Running 0 36m
kube-system kube-proxy-5xz66 1/1 Running 0 29m
kube-system kube-proxy-kvgxr 1/1 Running 0 41m
kube-system kube-proxy-nbqvd 1/1 Running 0 41m
kube-system kube-scheduler-78d86f9754-j4pr8 1/1 Running 0 41m
kube-system metrics-server-5767bfc576-67znk 2/2 Running 0 30m
kube-system pod-checkpointer-7v2xn 1/1 Running 0 41m
kube-system pod-checkpointer-7v2xn-test1-master-0 1/1 Running 0 39m
kube-system tectonic-network-operator-76qt7 1/1 Running 0 41m
openshift-apiserver apiserver-kgcgc 1/1 Running 0 31m
openshift-cluster-api clusterapi-apiserver-6b855f7bc5-s2flm 2/2 Running 0 33m
openshift-cluster-api clusterapi-controllers-85f6bfd9d5-vrw2s 2/2 Running 0 31m
openshift-cluster-api machine-api-operator-5d85454676-m9z9d 1/1 Running 0 39m
openshift-cluster-version cluster-version-operator-fqkl6 1/1 Running 0 41m
openshift-controller-manager controller-manager-w9kdb 1/1 Running 0 30m
openshift-core-operators openshift-cluster-openshift-apiserver-operator-5fbd49d8f7-vlthq 1/1 Running 0 39m
openshift-core-operators openshift-cluster-openshift-controller-manager-operator-7cw26h6 1/1 Running 0 39m
openshift-core-operators openshift-service-cert-signer-operator-6d6c6f55db-jspwh 1/1 Running 0 39m
openshift-image-registry cluster-image-registry-operator-58c7c9bfd6-r7l92 1/1 Running 0 29m
openshift-ingress tectonic-ingress-controller-operator-fcb9c6f4b-tlhzq 0/1 CrashLoopBackOff 9 31m
openshift-machine-config-operator machine-config-controller-6948b45dd9-n8cxd 1/1 Running 0 32m
openshift-machine-config-operator machine-config-daemon-drsmd 1/1 Running 4 29m
openshift-machine-config-operator machine-config-daemon-fsxft 1/1 Running 0 31m
openshift-machine-config-operator machine-config-operator-545fcb447d-h9pfj 1/1 Running 0 39m
openshift-machine-config-operator machine-config-server-5dwbh 1/1 Running 0 31m
openshift-monitoring cluster-monitoring-operator-c64f5b475-tlplc 1/1 Running 0 29m
openshift-monitoring prometheus-operator-5bf8644c75-xmrcm 1/1 Running 0 20m
openshift-operator-lifecycle-manager catalog-operator-5d5d8c7689-wf2ql 1/1 Running 0 39m
openshift-operator-lifecycle-manager olm-operator-76b7f57649-8zj5v 1/1 Running 0 39m
openshift-operator-lifecycle-manager package-server-f994b8699-f7hsc 0/1 CrashLoopBackOff 8 39m
openshift-service-cert-signer apiservice-cabundle-injector-d4c746869-z66ks 1/1 Running 0 32m
openshift-service-cert-signer configmap-cabundle-injector-77bd46b-bqnd8 1/1 Running 0 32m
openshift-service-cert-signer service-serving-cert-signer-55fb7cc589-l6rzs 1/1 Running 0 32m
openshift-web-console webconsole-86f4f55644-cl9r5 1/1 Running 0 20m
tectonic-system kube-addon-operator-784b4b6c7-fdkqj 1/1 Running 0 32m
```

Web console output:

```json
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "forbidden: User \"system:anonymous\" cannot get path \"/console/\": no RBAC policy matched",
  "reason": "Forbidden",
  "details": {},
  "code": 403
}
```

@imranrazakhan where did you run that oc command to log in?
@praveenkumar I just ran this command in the shell of the VM where I installed 3.11:

```
[root@c1-ocp ~]# oc login -u system:admin
```
This is the old console that is being removed. The new console is behind a route. You're seeing this because the master console proxy is removed (since it's no longer needed). I believe @deads2k was going to add a redirect to the new console from the old URL, but I'm not sure if that's in place.

It's a different cause.
For new 4.0 installs, it is …

@spadgett I'm not entirely sure what you mean by …

edit: this is calling it from the KVM host, not the KVM guest
@stlaz Look at the route in the openshift-console namespace. You can see the hostname there. By default, console uses a generated hostname.
Thanks. Here's what I did:
Curling this just gives me
Yeah, we depend on the default route working. I don't think it does out of the box yet. You could add the console hostname to /etc/hosts or update the Console CR to another hostname. (@benjaminapetersen, does the operator currently reconcile the console hostname?)
Routing workaround for AWS from @ironcladlou: https://gist.github.com/ironcladlou/784e4bd5cdd7e270ae0bea444809cbfd If anyone goes through something similar for libvirt, please post your notes here.
Does anyone know how to enable the console for libvirt?

See this comment: https://gist.github.com/ironcladlou/784e4bd5cdd7e270ae0bea444809cbfd#gistcomment-2764321 At the moment, libvirt seems to be the trickiest option. There are workarounds for AWS and GCE.
There is now a 'special user' who goes by the name
(you can now also view the Prometheus/Grafana console)
This problem is fixed, so you shouldn't need to delete the pod anymore. Console should just work now on AWS. (We just need to log the URL, #782) |
@sallyom So #411 (comment) is only for the AWS side; is there still no way to get the console from libvirt?
@spadgett Confirmed, I don't need the delete-pod hack today (although even after logging in to the console, going to Prometheus and using Login with OpenShift gives me a login screen).
To access the console from libvirt, execute the following steps:

- Allow …
  (Note: if you omit this, you have to start kubectl using …)
- Get the routes and bind them to …
- Use the credentials described in #411 (comment), then open e.g. https://grafana-openshift-monitoring.apps.test1.tt.testing or https://console-openshift-console.apps.test1.tt.testing.
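Since libvirt clusters have no external DNS for the generated route hostnames, the binding step above amounts to adding hosts entries. A sketch that emits the lines to append to /etc/hosts; the ingress IP `192.168.126.51` is an assumption, and the two hostnames are the ones mentioned in this thread. Substitute the output of `oc get routes --all-namespaces` for your own cluster:

```shell
# Assumed ingress/worker IP of the libvirt cluster; adjust for your setup.
ROUTER_IP=192.168.126.51

# Emit one /etc/hosts line per route hostname.
hosts_lines=$(for h in \
  console-openshift-console.apps.test1.tt.testing \
  grafana-openshift-monitoring.apps.test1.tt.testing; do
    printf '%s %s\n' "$ROUTER_IP" "$h"
done)

# Append this output to /etc/hosts on the KVM host (not the guest).
echo "$hosts_lines"
```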
I just tried @s-urbaniak's suggestion and it works as described. Thanks for this quick workaround 👍
I have to mention that if the libvirt machine has no additional memory or no X to open a browser, then we have to use a proxy to make the browser and terminal work on another machine, with something like https://gist.github.com/corehello/6f50472d9b4f9a5624c49312c17d3e99

```shell
$ cat proxy.sh
export HTTPS_PROXY=http://10.66.140.70:8888/
export ALL_PROXY=socks://10.66.140.70:8888/
export ftp_proxy=http://10.66.140.70:8888/
export FTP_PROXY=http://10.66.140.70:8888/
export no_proxy=localhost,127.0.0.0/8,::1
export https_proxy=http://10.66.140.70:8888/
export HTTP_PROXY=http://10.66.140.70:8888/
export all_proxy=socks://10.66.140.70:8888/
export http_proxy=http://10.66.140.70:8888/
export NO_PROXY=localhost,127.0.0.0/8,::1
```
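One gotcha with a proxy script like this: different tools read different casings of these variables (curl, for one, ignores uppercase `HTTP_PROXY`), which is why the script sets both forms. A small sketch checking that the pairs agree after sourcing; the endpoint `10.66.140.70:8888` is taken from the script above:

```shell
# Set both casings to the same endpoint, as proxy.sh does.
export http_proxy=http://10.66.140.70:8888/ HTTP_PROXY=http://10.66.140.70:8888/
export no_proxy=localhost,127.0.0.0/8,::1 NO_PROXY=localhost,127.0.0.0/8,::1

# Mismatched pairs make some tools bypass the proxy while others use it.
if [ "$http_proxy" = "$HTTP_PROXY" ] && [ "$no_proxy" = "$NO_PROXY" ]; then
  echo "proxy variables consistent"
else
  echo "mismatched proxy variables" >&2
fi
```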
Since this is working out of the box now for AWS and the incorrect instructions were removed from the README, closing the issue. /close |
@spadgett: Closing this issue.
@s-urbaniak Since most of the folks are using your suggested way (#411 (comment)) to get the console, and recently …
Just a quick note, and I haven't had time to dig into the background of the use case, but it's unsafe to rely on resource names like …
Done! Thank you!
https://github.com/openshift/installer/blob/master/docs/dev/libvirt-howto.md#connect-to-the-cluster-console

```
https://${OPENSHIFT_INSTALL_CLUSTER_NAME}-api.${OPENSHIFT_INSTALL_BASE_DOMAIN}:6443/console/
```

does not pull up a console, just JSON output.
bf36c90