
Feature request: Allow customizing host path used for dynamically provisioned PersistentVolumes #5144

Open
Eelis opened this issue Aug 20, 2019 · 25 comments
Labels
  • addon/storage-provisioner: Issues relating to storage provisioner addon
  • help wanted: Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines.
  • kind/feature: Categorizes issue or PR as related to a new feature.
  • lifecycle/frozen: Indicates that an issue or PR should not be auto-closed due to staleness.
  • priority/backlog: Higher priority than priority/awaiting-more-evidence.

Comments

@Eelis

Eelis commented Aug 20, 2019

Dynamic persistent volume provisioning works great, but uses paths like

/tmp/hostpath-provisioner/pvc-433c40c9-7cf1-40a8-91ab-f9210dceec50

which are not easy to remember and type. For development and testing, it would be much more convenient if the path could be customized to something along the lines of:

/data/<claim>

This would make a PVC named mysqldb resolve to an automatically generated PV that stores the data in /data/mysqldb. Much easier to remember and type! :)
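For illustration, a claim along these lines (the name and sizes are just examples, and the path mapping is the hypothetical behavior being requested, not current minikube behavior) would then be backed by a host directory at /data/mysqldb:

```yaml
# Hypothetical example: with the requested feature, this claim would be
# provisioned at /data/mysqldb instead of /tmp/hostpath-provisioner/pvc-<uuid>.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysqldb
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```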

@Eelis Eelis changed the title Feature request: Allow customizing host path used for generated PersistentVolumes Feature request: Allow customizing host path used for dynamically provisioned PersistentVolumes Aug 20, 2019
@tstromberg tstromberg added help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. kind/feature Categorizes issue or PR as related to a new feature. priority/backlog Higher priority than priority/awaiting-more-evidence. labels Aug 20, 2019
@tstromberg
Contributor

This sounds like a good idea! PR's welcome!

Since /data has some things in it already, what would you think about another path, such as /pvc/<claim>

@Eelis
Author

Eelis commented Aug 20, 2019

Ah, /pvc/<claim> would be equally nice. :)

@afbjorklund
Collaborator

I thought the complaint was about the <claim> rather than the /tmp/hostpath-provisioner?
We already know that the default paths for the hostpath provisioner aren't perfect, as noted in #3318.

But I'm not sure about giving human names to machine-managed resources; that seems ripe for conflicts.
It seems better to have a UUID-style host directory, and then mostly worry about the mount point...

@Eelis
Author

Eelis commented Aug 20, 2019

I thought the complaint was about the <claim> rather than the /tmp/hostpath-provisioner?
We already know that the default paths for the hostpath provisioner aren't perfect, as noted in #3318.

This feature request (not complaint!) is about both, because I did not know about #3318. :)

But I'm not sure about giving human names to machine-managed resources; that seems ripe for conflicts.

For there to be a conflict, there would have to be two PVCs with the same name, which is already disallowed by Kubernetes, right?

@afbjorklund
Collaborator

For there to be a conflict, there would have to be two PVCs with the same name, which is already disallowed by kubernetes, right?

Possibly. The whole hostpath provisioner desperately needs a rewrite anyway, so maybe this is a good feature to have in the new one.

@11janci
Contributor

11janci commented Sep 9, 2019

I could look into this, along with #3318. I did a quick look into the code; do you think it is enough to just change/parametrize the path (a minikube-only change), or should we change how the PV name is created (a change in sig-storage-lib-external-provisioner, affecting all implementations)?

@tstromberg tstromberg added the addon/storage-provisioner Issues relating to storage provisioner addon label Sep 20, 2019
@tstromberg
Contributor

From my point of view, it's probably good enough to fix this for minikube.

Would switching to https://github.com/rancher/local-path-provisioner help?

@frank-dspeed

I will stick with mounting /tmp/hostpath-provisioner on that path and then running minikube start via a wrapper script until that is done.
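Such a wrapper might look roughly like this (a sketch, not an actual minikube feature: the /data source path and the DRY_RUN guard are illustrative; with DRY_RUN=1, the default here, the script only prints the commands):

```shell
#!/bin/sh
# Hypothetical wrapper: expose the provisioner's default path under a
# friendlier host directory before using the cluster.
run() { if [ "${DRY_RUN:-1}" = "1" ]; then echo "+ $*"; else "$@"; fi; }

run minikube start
# 'minikube mount' blocks, so in a real script it would run in the background:
run minikube mount /data:/tmp/hostpath-provisioner
```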

@medyagh
Member

medyagh commented Dec 16, 2019

This issue is free to be picked up;
as @11janci mentioned, you could cherry-pick from this PR:
#5400

@nanikjava
Contributor

/assign @nanikjava

@nanikjava
Contributor

nanikjava commented Dec 22, 2019

Addons such as storage-provisioner are deployed as a Docker image, which has to be generated. The following are the steps to use a newly generated Docker image:

  • eval $(minikube docker-env)
  • Stop the storage-provisioner addons: minikube addons disable storage-provisioner && minikube addons disable default-storageclass
  • Remove the current image (not necessary, but good to do): docker rmi -f gcr.io/k8s-minikube/storage-provisioner:v1.8.1
  • Build the storage-provisioner Docker image (it will be built inside minikube): make clean && make storage-provisioner-image && kubectl delete pod storage-provisioner --grace-period=0 --force --namespace kube-system
  • Start the service: minikube addons enable storage-provisioner && minikube addons enable default-storageclass
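The steps above could be collected into a script along these lines (a sketch with a dry-run guard so it is safe to read and run as-is; DRY_RUN=1, the default here, only prints each command):

```shell
#!/bin/sh
# Sketch of the rebuild cycle described above, assuming a minikube checkout.
run() { if [ "${DRY_RUN:-1}" = "1" ]; then echo "+ $*"; else "$@"; fi; }

# Run 'eval $(minikube docker-env)' first so image builds land inside minikube.
run minikube addons disable storage-provisioner
run minikube addons disable default-storageclass
run docker rmi -f gcr.io/k8s-minikube/storage-provisioner:v1.8.1
run make clean
run make storage-provisioner-image
run kubectl delete pod storage-provisioner --grace-period=0 --force --namespace kube-system
run minikube addons enable storage-provisioner
run minikube addons enable default-storageclass
```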

There are primarily 2 different issues that need to be fixed to make the storage-provisioner work:

  • The Docker image generated with the available /deploy/storage-provisioner/Dockerfile does not work. The generated image kept throwing standard_init_linux.go:211: exec user process caused "no such file or directory". Testing with a different distro image (such as ubuntu-16) works; need to find out the leanest distro that works.
  • The RBAC outlined in storage-provisioner.tmpl.yaml does not work, as it is lacking the 'endpoints' cluster role permission. This can be overcome by editing the role with KUBE_EDITOR="nano" kubectl edit clusterrole system:persistent-volume-provisioner and adding a new 'endpoints' resource with all the different verbs.

  • The following commands were used for troubleshooting:

    • kubectl logs storage-provisioner -n kube-system
    • kubectl get events -n kube-system
    • minikube logs
    • kubectl describe pv && kubectl describe pvc
    • kubectl describe clusterrole.rbac 2>&1 | tee x.txt
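The missing permission from the second issue above could also be granted declaratively with a fragment along these lines (a sketch; the exact verb list a provisioner needs may be narrower than shown):

```yaml
# Hypothetical ClusterRole fragment adding the 'endpoints' permission
# that the stock storage-provisioner.tmpl.yaml RBAC lacks.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:persistent-volume-provisioner
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
```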

@nanikjava
Contributor

The smallest workable generated images use bitnami/minideb:jessie and debian:slim (less than 90MB), while other distros (ubuntu, centos, etc.) come to more than 100MB. The Alpine distro does not work, as it throws the exec error.

@frank-dspeed

@nanikjava did you take into account what the other k8s images are using, since this would get shared?
When I use, for example, ubuntu:16.04 as the base for 8 images, it is stored only once.

@nanikjava
Contributor

@nanikjava did you take into account what the other k8s images are using, since this would get shared?
When I use, for example, ubuntu:16.04 as the base for 8 images, it is stored only once.

I did look into the other Dockerfiles inside minikube, and most of the images they use are ubuntu, which is much bigger in terms of size. I opted for the debian-base from the k8s repository, as the final image comes to less than 75MB.

@frank-dspeed

@nanikjava while I respect your effort to keep the image small, I think that is not the most important concern, because Docker images are layered. If you depend on the same image and image version as other images, it gets reused in /var/lib/docker and will exist only once! So when 1 image uses ubuntu (200MB), 50 other images can depend on it, and the total will be only 200MB plus the size of the individual images. So let's say the 50 images each include a file of 1KB: we get a total size for the 51 images of 200.05MB.
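The arithmetic can be checked with a quick shell sketch (the sizes are the illustrative numbers from the comment above, expressed in KB):

```shell
# Layered images: the ~200MB ubuntu base is stored once in /var/lib/docker,
# and each of the 50 derived images only adds its own ~1KB layer on top.
base_kb=$((200 * 1024))
per_image_kb=1
images=50
total_kb=$((base_kb + images * per_image_kb))
echo "${total_kb} KB total for ${images} images plus the shared base"
```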

@nanikjava
Contributor

PR #6156

@afbjorklund
Collaborator

As discussed in the review, the storage-provisioner was supposed to be built statically.
We must have lost that feature along the way, leading to the issues with the "scratch" image.

i.e. first we reused LOCALKUBE_LDFLAGS for the storage provisioner in 0fe440e,
then we made localkube link dynamically in dcb5c2c. Oops. (-extldflags '-static')

If we wanted to link it dynamically, we could consider using alpine rather than buster:

docker.io/golang:1.13.4-alpine

docker.io/golang:1.13.4-buster

see https://hub.docker.com/_/golang

But for now, I think we will keep the static binary. @tstromberg fixed the linking, in bf0dc79

$ LANG=C ldd out/storage-provisioner
	not a dynamic executable

That will work with the scratch image.

It also doesn't affect the paths at all ?
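A two-stage build along these lines would produce such a static binary for the scratch image (a sketch; the Go version and the package path ./cmd/storage-provisioner are illustrative, not minikube's actual Dockerfile):

```dockerfile
# Stage 1: build a statically linked binary.
FROM golang:1.13-buster AS build
WORKDIR /src
COPY . .
# CGO_ENABLED=0 avoids libc linkage, so ldd reports "not a dynamic executable".
RUN CGO_ENABLED=0 go build -o /storage-provisioner ./cmd/storage-provisioner

# Stage 2: ship only the binary.
FROM scratch
COPY --from=build /storage-provisioner /storage-provisioner
ENTRYPOINT ["/storage-provisioner"]
```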

@afbjorklund
Collaborator

In case you really wonder about size, here is a breakdown of the relative sizes within the 27M binary:

https://gist.github.com/afbjorklund/7fc8583e4ff03e8dcc958662821e2083

Most of it comes from the dependencies, mainly github.com/golang/glog and k8s.io (with friends)

282 dependencies (54 internal, 228 external, 0 testing).

@afbjorklund
Collaborator

afbjorklund commented Jan 14, 2020

Can we get rid of the debian stuff in this one (#6156), and merge it with #5400, please?

@nanikjava
Contributor

@afbjorklund I have modified the PR to use scratch and to build the storage-provisioner as a static binary. The PR includes the #5400 changes.

@nanikjava
Contributor

More documentation on working with the storage-provisioner:

  • The following storage.yaml file can be applied manually with kubectl apply -f storage.yaml:
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: storage-provisioner
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: Reconcile

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: storage-provisioner
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: storage-provisioner
    namespace: kube-system

---
apiVersion: v1
kind: Pod
metadata:
  name: storage-provisioner
  namespace: kube-system
  labels:
    integration-test: storage-provisioner
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  serviceAccountName: storage-provisioner
  hostNetwork: true
  containers:
  - name: storage-provisioner
    image: gcr.io/k8s-minikube/storage-provisioner:v1.8.1
    command: ["/storage-provisioner"]
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - mountPath: /tmp
      name: tmp
  volumes:
  - name: tmp
    hostPath:
      path: /tmp
      type: Directory

  • The following test PVC can be saved as test.pvc.default.yaml:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi

Deploy it with the following command: kubectl create -f test.pvc.default.yaml

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 15, 2020
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels May 15, 2020
@tstromberg tstromberg added lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. and removed lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. labels May 28, 2020
@Sarathgiggso

Sarathgiggso commented Oct 28, 2021

Kindly help me change the hostpath provisioner (k8s.io/minikube-hostpath) in minikube, because my deployments are in the root directory and due to that I am facing a storage issue.
I have to place my deployments in a local directory. Kindly help me mount it on /mnt.

@Sarathgiggso

We have created our own PV and PVC, but even so it is mounting on our root directory.
Kindly share your ideas regarding this issue.
