storage-provisioner addon: kube-system:storage-provisioner cannot list events in the namespace #3129
I observe this behavior on a "no driver" install as well.
Any update on this issue?
Sorry that this isn't working for you. I'm not familiar yet with PVC, so I'm a little unclear how to replicate. When I run:
With kvm2 I get:
With VirtualBox and macOS I get a little further:
What am I missing?
+1
In my case minikube worked well until I stopped it. Now it fails at startup: the VM is running but won't finish configuring. Does anyone know where in the VM the config files are stored, so I can manually edit them? @tstromberg I installed a DB (in my case RethinkDB) with tiller/helm; wait for it to install and provision everything. After a VM reboot I keep getting:
Still present…
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/remove-lifecycle rotten
Same here, macOS & VirtualBox.
Steps to Reproduce

Here is the simplest reproduction of this bug.

Step 1:

$ minikube start
⚠️ minikube 1.5.2 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.5.2
💡 To disable this notice, run: 'minikube config set WantUpdateNotification false'
😄 minikube v1.3.1 on Darwin 10.13.2
💡 Tip: Use 'minikube start -p <name>' to create a new cluster, or 'minikube delete' to delete this one.
🔄 Starting existing virtualbox VM for "minikube" ...
⌛ Waiting for the host to be provisioned ...
🐳 Preparing Kubernetes v1.15.2 on Docker 18.09.8 ...
🔄 Relaunching Kubernetes using kubeadm ...
⌛ Waiting for: apiserver proxy etcd scheduler controller dns
🏄 Done! kubectl is now configured to use "minikube"
$ kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
default vault-agent-example 2/2 Running 0 19d
kube-system coredns-5c98db65d4-9rrhb 1/1 Running 1 19d
kube-system coredns-5c98db65d4-d2rmk 1/1 Running 1 19d
kube-system etcd-minikube 1/1 Running 0 19d
kube-system kube-addon-manager-minikube 1/1 Running 0 19d
kube-system kube-apiserver-minikube 1/1 Running 0 19d
kube-system kube-controller-manager-minikube 1/1 Running 0 13s
kube-system kube-proxy-fsqv7 1/1 Running 0 19d
kube-system kube-scheduler-minikube 1/1 Running 0 19d
kube-system kubernetes-dashboard-7b8ddcb5d6-ll8w7 1/1 Running 0 19d
kube-system storage-provisioner 1/1 Running 0 19d
kube-system tiller-deploy-597567bdfd-pctlg 1/1 Running 0 19d

Step 2: create any PVC. Note that it does successfully provision and bind to a PV.

$ kubectl apply -f https://gist.githubusercontent.com/bodom0015/d920e22df8ff78ee05929d4c3ae736f8/raw/edccc530bf6fa748892d47130a1311fce5513f37/test.pvc.default.yaml
persistentvolumeclaim/test created
$ kubectl get pvc,pv
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/test Bound pvc-fa9c1a0d-df76-4931-9ce5-1cfe4f0375eb 1Mi RWX standard 4s
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/pvc-fa9c1a0d-df76-4931-9ce5-1cfe4f0375eb 1Mi RWX Delete Bound default/test standard 3s

Step 3: check the provisioner logs to see the error message.

$ kubectl logs -f storage-provisioner -n kube-system
E1118 16:45:27.950319 1 controller.go:682] Error watching for provisioning success, can't provision for claim "default/test": events is forbidden: User "system:serviceaccount:kube-system:storage-provisioner" cannot list resource "events" in API group "" in the namespace "default"

The Problem

This error, while innocuous, indicates that the built-in ServiceAccount named storage-provisioner is bound to a role that does not allow it to list events in the claim's namespace.

Possible Fix

If this permission is needed in more cases than not, then the correct way to fix this might be to create a PR back to upstream Kubernetes that adds the missing "list" verb on "events" to the built-in system:persistent-volume-provisioner ClusterRole.

A simple way to fix this in the short term would be to create a thin supplemental ClusterRole (or a full copy of system:persistent-volume-provisioner) that grants the missing permission, along with a ClusterRoleBinding for the storage-provisioner ServiceAccount:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: system:persistent-volume-provisioner-supl
rules:
- apiGroups:
- ""
resources:
- events
verbs:
- list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: storage-provisioner-supl
labels:
addonmanager.kubernetes.io/mode: EnsureExists
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:persistent-volume-provisioner-supl
subjects:
- kind: ServiceAccount
name: storage-provisioner
namespace: kube-system
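To illustrate why the supplemental role above is sufficient, here is a minimal sketch (Python, purely illustrative — this is not the real Kubernetes authorizer, and the built-in rule set is an approximation transcribed from this thread, not an authoritative copy) of how RBAC evaluates a request against a set of ClusterRole rules:

```python
# Hypothetical simplification of RBAC rule matching: a request is allowed
# if ANY rule in any bound role grants its apiGroup, resource, and verb.
# (Wildcard "*" matching and resourceNames are omitted for brevity.)

def allowed(rules, api_group, resource, verb):
    for rule in rules:
        if (api_group in rule["apiGroups"]
                and resource in rule["resources"]
                and verb in rule["verbs"]):
            return True
    return False

# Approximation of system:persistent-volume-provisioner: per this issue,
# it can create/watch events but NOT list them.
builtin_rules = [
    {"apiGroups": [""], "resources": ["persistentvolumes"],
     "verbs": ["get", "list", "watch", "create", "delete"]},
    {"apiGroups": [""], "resources": ["persistentvolumeclaims"],
     "verbs": ["get", "list", "watch", "update"]},
    {"apiGroups": [""], "resources": ["events"],
     "verbs": ["watch", "create", "update", "patch"]},
]

# The supplemental ClusterRole proposed above adds exactly the missing verb.
supplemental_rules = [
    {"apiGroups": [""], "resources": ["events"], "verbs": ["list"]},
]

print(allowed(builtin_rules, "", "events", "list"))                       # → False
print(allowed(builtin_rules + supplemental_rules, "", "events", "list"))  # → True
```

Because RBAC is purely additive, binding the thin supplemental role alongside the built-in one is enough to clear the "cannot list resource events" error without modifying the system role.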
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
@bodom0015 Thank you for updating this issue. I am curious whether this issue still exists in 1.7.3? We have made some changes to the addon system since then.
@medyagh I can confirm that this still happens in v1.8.1 with the exact same error message:

# Clear out old state
wifi-60-235:universal lambert8$ minikube delete
🙄 "minikube" profile does not exist, trying anyways.
💀 Removed all traces of the "minikube" cluster.
# Update to newest minikube
wifi-60-235:universal lambert8$ minikube version
minikube version: v1.8.1
commit: cbda04cf6bbe65e987ae52bb393c10099ab62014
wifi-60-235:universal lambert8$ minikube update-check
CurrentVersion: v1.8.1
LatestVersion: v1.8.1
# Start new Minikube cluster using v1.8.1
wifi-60-235:universal lambert8$ minikube start
😄 minikube v1.8.1 on Darwin 10.13.2
✨ Automatically selected the hyperkit driver
💾 Downloading driver docker-machine-driver-hyperkit:
> docker-machine-driver-hyperkit.sha256: 65 B / 65 B [---] 100.00% ? p/s 0s
> docker-machine-driver-hyperkit: 10.90 MiB / 10.90 MiB 100.00% 39.88 MiB
🔑 The 'hyperkit' driver requires elevated permissions. The following commands will be executed:
$ sudo chown root:wheel /Users/lambert8/.minikube/bin/docker-machine-driver-hyperkit
$ sudo chmod u+s /Users/lambert8/.minikube/bin/docker-machine-driver-hyperkit
💿 Downloading VM boot image ...
> minikube-v1.8.0.iso.sha256: 65 B / 65 B [--------------] 100.00% ? p/s 0s
> minikube-v1.8.0.iso: 173.56 MiB / 173.56 MiB [-] 100.00% 36.15 MiB p/s 5s
🔥 Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
💾 Downloading preloaded images tarball for k8s v1.17.3 ...
> preloaded-images-k8s-v1-v1.17.3-docker-overlay2.tar.lz4: 499.26 MiB / 499
🐳 Preparing Kubernetes v1.17.3 on Docker 19.03.6 ...
🚀 Launching Kubernetes ...
🌟 Enabling addons: default-storageclass, storage-provisioner
⌛ Waiting for cluster to come online ...
🏄 Done! kubectl is now configured to use "minikube"
⚠️ /usr/local/bin/kubectl is version 1.15.3, and is incompatible with Kubernetes 1.17.3. You will need to update /usr/local/bin/kubectl or use 'minikube kubectl' to connect with this cluster
# Verify cluster is ready
wifi-60-235:universal lambert8$ kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-6955765f44-kczhj 0/1 ContainerCreating 0 10s
kube-system coredns-6955765f44-v6x8n 0/1 Running 0 10s
kube-system etcd-m01 1/1 Running 0 14s
kube-system kube-apiserver-m01 1/1 Running 0 14s
kube-system kube-controller-manager-m01 1/1 Running 0 14s
kube-system kube-proxy-n7mhx 1/1 Running 0 10s
kube-system kube-scheduler-m01 1/1 Running 0 14s
kube-system storage-provisioner 1/1 Running 0 14s
# Create a Test PVC
wifi-60-235:universal lambert8$ kubectl apply -f https://gist.githubusercontent.com/bodom0015/d920e22df8ff78ee05929d4c3ae736f8/raw/edccc530bf6fa748892d47130a1311fce5513f37/test.pvc.default.yaml
persistentvolumeclaim/test created
# Check the storage-provisioner logs
wifi-60-235:universal lambert8$ kubectl logs -f storage-provisioner -n kube-system
E0309 17:58:24.988551 1 controller.go:682] Error watching for provisioning success, can't provision for claim "default/test": events is forbidden: User "system:serviceaccount:kube-system:storage-provisioner" cannot list resource "events" in API group "" in the namespace "default"
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Your minikube version is very old. Do you mind trying with a newer minikube version?
Rotten issues close after 30d of inactivity. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
@fejta-bot: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
This is still present in:
/remove-lifecycle rotten
@ctron Did you happen to find any hack to work around this issue? I am on minikube version
Nope. My "hack" was to go back to version
Did you try the new storage-provisioner (v3), to see if that helps with the issue?
This issue still happens on minikube
I'm having some issues with storage-provisioner on:
Still present
Is this a BUG REPORT or FEATURE REQUEST? (choose one):
BUG REPORT
Please provide the following details:
Environment: minikube v0.28.2 on macOS 10.13.2 + VirtualBox

Minikube version (use minikube version): v0.28.2
OS: /etc/os-release: No such file or directory (host is macOS)
VM driver (cat ~/.minikube/machines/minikube/config.json | grep DriverName): "DriverName": "virtualbox"
ISO version (cat ~/.minikube/machines/minikube/config.json | grep -i ISO, or minikube ssh cat /etc/VERSION): "Boot2DockerURL": "file:///Users/lambert8/.minikube/cache/iso/minikube-v0.28.1.iso"
What happened:
My minikube cluster (created yesterday) has the storage-provisioner addon enabled. At first, I was apparently in a bad state: kubectl describe pvc yielded the familiar "the provisioner hasn't worked yet" warning message, and the provisioner logs were complaining about some unknown connectivity issue. Upon deleting and recreating the minikube cluster (to clear the bad state), when repeating the test case I saw the following in the logs:
The provisioner did still create a PV and bind the PVC to it in such cases:

$ kubectl get pvc -n test
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
s4rdfk-cloudcmd Bound pvc-e794fa3e-b6ac-11e8-8044-080027add193 1Mi RWX standard 12m
spd9xt-cloudcmd Bound pvc-6baa04ab-b6ad-11e8-8044-080027add193 1Mi RWX standard 8m
src67q-cloudcmd Bound pvc-2243a82c-b6ae-11e8-8044-080027add193 1Mi RWX standard 3m
What you expected to happen:
The provisioner shouldn't throw an error when provisioning was successful.
How to reproduce it (as minimally and precisely as possible):
1. minikube start
2. wget https://gist.githubusercontent.com/bodom0015/d920e22df8ff78ee05929d4c3ae736f8/raw/edccc530bf6fa748892d47130a1311fce5513f37/test.pvc.default.yaml
3. kubectl create -f test.pvc.default.yaml
4. kubectl get pvc and verify that the PVC is Bound to a PV
5. Check the storage-provisioner logs

Output of minikube logs (if applicable):
minikube logs did not seem to yield any pertinent debugging information, but the storage-provisioner pod logs did yield the following error message:

Anything else we need to know:
As a temporary manual workaround, the following seemed to work:
# Edit to add the "list" verb to the "events" resource
$ kubectl edit clusterrole -n kube-system system:persistent-volume-provisioner
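Concretely, the edit amounts to making sure the events rule in that ClusterRole includes the list verb. A sketch of what the relevant rules entry could look like after the edit (the role's other rules are omitted here, and the exact pre-existing verbs may vary by Kubernetes version):

```yaml
rules:
- apiGroups:
  - ""
  resources:
  - events
  verbs:
  - watch
  - create
  - update
  - patch
  - list   # added by the manual workaround
```

Note that edits to the system: roles are reconciled by the cluster on restart, which is why this only worked as a temporary workaround.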