This repository has been archived by the owner on Feb 22, 2022. It is now read-only.

[incubator/nfs-pv-provider] ReadWriteMany NFS PV Provisioner #2559

Closed
wants to merge 1 commit into from

Conversation

kingdonb

from kubernetes/examples#108

@ahmetb had suggested that kubernetes/charts Incubator was a better home for this

There should not be any changes here that were not included in kubernetes/examples@92d1ddf which was what I had previously submitted as kubernetes/examples#108

@k8s-ci-robot k8s-ci-robot added cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. size/L Denotes a PR that changes 100-499 lines, ignoring generated files. labels Oct 22, 2017
@kingdonb
Author

The CircleCI build seems to indicate that the convention here is not to create a "chart" subdirectory like I had done; I will fix it.

The README also needs to be updated, as this is no longer the kubernetes/examples staging NFS example that begot it; if it is landing in charts, it should be fully committed to being a chart.

(Marking as WIP)

@kingdonb kingdonb changed the title ReadWriteMany NFS PV Provisioner WIP: ReadWriteMany NFS PV Provisioner Oct 22, 2017
@k8s-ci-robot k8s-ci-robot added the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Oct 22, 2017
@kingdonb kingdonb changed the title WIP: ReadWriteMany NFS PV Provisioner [incubator/nfs] WIP: ReadWriteMany NFS PV Provisioner Oct 22, 2017
@k8s-ci-robot k8s-ci-robot removed the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Oct 22, 2017
@kingdonb kingdonb changed the title [incubator/nfs] WIP: ReadWriteMany NFS PV Provisioner [incubator/nfs] ReadWriteMany NFS PV Provisioner Oct 22, 2017
@kingdonb
Author

kingdonb commented Oct 22, 2017

It would be great to add testing for this chart across multiple cloud providers, OpenShift support, an example integration with something else, and a flag to enable or disable the built-in example clients.

I would like to build a Minio example that scales the frontend as a Deployment and connects to these NFS PVs as ReadWriteMany for shared state. I think Minio can support this type of activity as a kind of "poor man's centrally block-backed object storage API" standing in for a Swift or S3 storage service.

Maybe another integration could use rsync to create multi-AZ distributed and shared storage on top of that. It's unlikely to win any awards in performance benchmarks compared to a well-provisioned Ceph cluster with enough resources, or actual AWS S3, but maybe it could compete by virtue of still being some measure of an HA tree filesystem with block-device backing.

@unguiculus
Member

Please read our best practices, especially those on naming and labels.
https://github.com/kubernetes/helm/blob/master/docs/chart_best_practices/labels.md

If you create a new chart with helm create you'll get an initial experience with current best practices applied.

Standard metadata should, for example, look like this:

name: {{ template "nfs.fullname" . }}
labels:
  app: {{ template "nfs.name" . }}
  chart: {{ .Chart.Name }}-{{ .Chart.Version }}
  heritage: {{ .Release.Service }}
  release: {{ .Release.Name }}
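
For reference, the nfs.name and nfs.fullname helpers that helm create scaffolds in templates/_helpers.tpl look roughly like this (a sketch; the exact output varies by Helm version):

{{/* Expand the name of the chart. */}}
{{- define "nfs.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}

{{/* Create a default fully qualified app name, truncated to the 63-character DNS limit. */}}
{{- define "nfs.fullname" -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}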

@kingdonb
Author

@unguiculus Thank you for the feedback; that's exactly the kind of thing I'm looking for.

This was pretty much a straight port of the staging NFS example into chart form, with as little modification as possible, and I have never written a chart from scratch. I will spend some time reading the chart best practices and take a stab at implementing them in this PR.

@kingdonb
Author

kingdonb commented Nov 17, 2017

Can you give any guidance on the non-standard clusterIP setting that I'm currently using?

With Terraform, I think this would be easy to solve (and maybe that's actually what it needs; Terraform itself has a K8s provider... hmm).

I have required the user to hardcode the clusterIP, but that is clunky. This feels like a problem I could solve with Helm alone very easily if I were more proficient in it, but I'm honestly not sure whether Helm can do this.

The background on the problem is here, and I gather it's already solved(ish) on GCE COS, but anywhere else it requires a special, non-standard host configuration in which the cluster services, including the PVC provisioner, use DNS provided by the cluster (so if you want to help, please validate against your own setup if you are unsure whether this is a non-standard cluster configuration):

Here's where it gets problematic: in a regular Helm chart that does not set up any provisioners, I could use the DNS name to refer from one service to another. When defining a provisioner, though, I think I need to use the clusterIP in pv.spec.nfs.server (templates/nfs-pv.yaml), because I can't count on cluster DNS names resolving, at least until the Service has been created first.
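
A minimal sketch of the template in question, with the value key (nfs.clusterIp) being an illustrative name rather than the chart's actual one:

# templates/nfs-pv.yaml (sketch)
apiVersion: v1
kind: PersistentVolume
metadata:
  name: {{ template "nfs.fullname" . }}
spec:
  capacity:
    storage: 1Mi
  accessModes:
    - ReadWriteMany
  nfs:
    # an IP is used here because the kubelet mounting this volume cannot be
    # assumed to resolve cluster DNS names such as <service>.<namespace>.svc
    server: {{ .Values.nfs.clusterIp }}
    path: "/"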

In Terraform, or probably CloudFormation, I could express this as a dependency in my templated values, and it would delay creating the PV until a Service with a clusterIP had been created, then pull the IP from it for the PV spec.

So this might be exactly the case where you want to use the Terraform kubernetes provider. Maybe I will give this a shot. (Can I include terraform files with my kubernetes/charts chart?)

# overwrites /mnt/index.html with the time and hostname of the pod.

apiVersion: v1
kind: ReplicationController
Member

ReplicationController? Really? You'll probably want to use a Deployment.

Author

I was trying to think of a justification for this, @unguiculus, because the ReplicationController is also still used in the kubernetes/examples version of this, and I'm assuming it should be updated to use a Deployment there too, but it still hasn't been... https://github.com/kubernetes/examples/tree/master/staging/volumes/nfs

Then I realized that the Deployment resource kind itself is still not in the v1 API, so presumably it isn't being used in any examples yet. That realization is actually what started me looking into the possibility of using Terraform and dependency ordering to get the Service's cluster IP into the PV spec (template).

The Terraform Kubernetes provider maintainers have stated explicitly that they will not support Deployment or any other beta or alpha APIs until they are marked v1, most likely because of the risk that would pose for people running production instances and collecting tfstate through production uses of Terraform.

So if I go the route of using Terraform to connect PVs to the NFS Service (because the code in K8s that connects PVs to PVCs still uses only host DNS instead of cluster DNS, and not all hosts outside of Google COS will be using cluster DNS on the host)... we will still need a ReplicationController backing the pods for those services for now.

labels:
  demo: nfs-pv-provisioning
annotations:
  volume.alpha.kubernetes.io/storage-class: any
Member

Use storageClassName instead.
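
A sketch of what that looks like in the PVC spec (the value key here is illustrative):

spec:
  accessModes:
    - ReadWriteOnce
  # replaces the deprecated volume.alpha.kubernetes.io/storage-class annotation
  storageClassName: "{{ .Values.storage.storageClass }}"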

Author

Noted. I still owe some other fixes to bring this chart in line with standards and best practices. Thanks again for your feedback; I'm sorry the holidays kept me busy!

@unguiculus
Member

Might be a special case here, but I've never set the clusterIP explicitly.

@kingdonb
Author

kingdonb commented Dec 1, 2017

@unguiculus This is definitely a special case; I'd love for someone to tell me a better way to do this without requiring Terraform or manual updating of values.

Unless the host can be guaranteed to resolve DNS through the k8s cluster's internal DNS, there's really no way to tell the PVC where it should find the NFS service to attach to it.

I'd love to find a better solution for that.

(I'll do another round of revisions on this PR this weekend! Thanks for feedback)

@kingdonb
Author

kingdonb commented Feb 3, 2018

Happened to stumble across [helm/helm/issues/2895]; I needed an NFS PVC again today.

What are people who need ReadWriteMany PVs doing these days?

Spending the afternoon converting this into a Terraform chart, or converting it to use a Deployment, or both.

(Now that Deployment is exiting beta, perhaps we can get Terraform and Deployment support in the same chart...)

There's still no solution for referencing cluster Services in PV specs (without adding templating through something like Terraform+helm provider) unless you can edit your cluster and ensure hosts use cluster DNS globally.

(I got a new AKS cluster, so today I'm seeing whether AKS supports this or not... worth checking with several cloud vendors about it, I guess, since reports said Google COS has already supported this for a while now! Too bad I'm not on GKE.)

@krancour
Contributor

krancour commented Feb 4, 2018

I don't understand this "chart." It doesn't look like any chart I've ever seen before, as it doesn't, for instance, reference pre-built images, but instead seems to have a Dockerfile and the sources used for building a Docker image. The directions in README.md do not even cover installation of anything via helm; rather, they include a number of manual steps executed using kubectl. This isn't a chart.

If these problems were cleaned up, I'd still question the value of adding what is essentially a tutorial as a chart. Automating installation via helm negates the value of the learning experience that could otherwise be obtained from following the existing directions for this example found at https://github.com/kubernetes/examples/tree/master/staging/volumes/nfs. Charts are usable packages. Tutorials are tutorials.

And even if there's some valid rationale for this being a chart that I am completely overlooking, imho, the name "nfs" is not adequately descriptive of what is contained herein. "nfs" sounds very general and reusable, while the contents of this "chart" are anything but. At the very least, it should be "nfs-example," although, even then, the other issues noted above still apply.

@kingdonb
Author

kingdonb commented Feb 4, 2018

@krancour Thank you for the feedback!

I'm here in "charts" because of the feedback I received when I tried to suggest improvements to the original version at kubernetes/examples

kubernetes/examples#108 (kubernetes/examples#108 (comment))

It's not a finished work, and it should probably still have "WIP" in the title

Do you have any ideas about the struggle described above in this issue?

I'm more interested in solving the issue and building a correct implementation than in where the correct home for it would be, or whether it follows the standards of a Kubernetes chart yet.

(I can correct the standards issues and make it properly documented according to the standards of a chart, but if the chart is still unusable on many compliant K8S distros because of the NFS PV Service DNS issue, it's still not going to be a very good chart. If the user must set ClusterIP statically like the original example in order to deploy it, it's also not going to be a very good chart.)

This example/incubator chart issue has a long history; if you think there's a SIG I should move it to where it's likely to get more attention, I absolutely will.

kubernetes/examples#44

Thanks!

@kingdonb
Author

kingdonb commented Feb 4, 2018

What I mean is: are there any features in Kubernetes that I've missed that would make it possible to write a realistic, usable NFS PV driver like this one, which binds a PV from an outside ReadWriteOnce provisioner and serves up NFS to arbitrary PVCs in a ReadWriteMany configuration, and that doesn't depend on any particular configuration other than what's required by the K8s conformance model (which does not specify, in spite of any reports that Google COS actually does it anyway, that a node should get all of its host DNS from the cluster DNS)?

#2559 (comment)
^
I'm speaking particularly about this problem: is there any way to solve it without some bizarre Terraform-helm provider (or, alternatively, without someone standing up a Service with a clusterIP first, then separately standing up a deployment or PVC-requesting pods in a ReadWriteMany config, hard-coding the clusterIP)?

Is it a characteristic of ALL "self-hosted" Kubernetes implementations that the kubelet's DNS queries are answered directly by the kube-dns service? (Is K8s on Google COS self-hosted, or does the host use cluster DNS for some other reason?)

I'm really leery of doing something in any chart that depends on a specific node implementation strategy that is not described by any of the standards, but I don't know the standards well enough to say with confidence whether there is or isn't one for what I'm talking about here...

@krancour
Contributor

krancour commented Feb 5, 2018

It's not a finished work, and it should probably still have "WIP" in the title

Noted. Maybe edit the title accordingly then?

Do you have any ideas about the struggle described above in this issue?

Respectfully, is it possible you're putting the cart before the horse? There's a lot of low hanging fruit to address here.

I'd be willing to help you solve your problem (which, I'll be honest, I'm having a hard time understanding at this point) if this chart starts to have some semblance of viability first.

If the user must set ClusterIP statically like the original example in order to deploy it, it's also not going to be a very good chart.

Not to belabor the point, but there are more serious things than that stopping this from being a "good chart." All I'm asking is to not get so hung up on one minor issue that you overlook the big picture.

My words:

"nfs" is not adequately descriptive of what is contained herein. "nfs" sounds very general and reusable, while the contents of this "chart" are anything but.

This is a point where I need to confess I misunderstood your intention. This PR, in its current form, doesn't look like an attempt to "simply" expose other volume types as file shares. The reason it doesn't appear that way is that it contains the sample web client that mounts the file share as a volume. (You can easily see why I'd be asking myself how many people there could possibly be out there wanting to install this.) At minimum, that sample web client has got to go because a. people who want "nfs" don't also need the sample client and b. the existence of that sample client is really muddying the waters here and clouding your intent -- which I do now understand.

My recommendations are to start here:

  1. Dump the sample web client in its entirety (if you need it to validate what this chart does, install it separately).
  2. Dump the Dockerfile and the shell scripts. Find a pre-built image that does this. If one doesn't exist, start a separate project for building this image. Helm charts use pre-built images. They cannot build images for you at the time of deployment.
  3. Organize this in the manner prescribed here: https://docs.helm.sh/developing_charts/#the-chart-file-structure
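
For reference, the layout prescribed there is roughly:

nfs-pv-provider/
  Chart.yaml          # chart name, version, description
  values.yaml         # default configuration values
  README.md
  charts/             # packaged chart dependencies, if any
  templates/          # manifest templates rendered by Helm
    _helpers.tpl      # named template helpers (name, fullname, ...)
    NOTES.txt         # post-install usage notes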

Then let's regroup. I'll help you.

fwiw...

Can I include terraform files with my kubernetes/charts chart?

No. Helm charts revolve entirely around generating manifests from templates and a set of user-provided values that fill in the blanks. They don't have any significant dependencies on other technologies.

@kingdonb kingdonb changed the title [incubator/nfs] ReadWriteMany NFS PV Provisioner [incubator/nfs-pv-provider] WIP: ReadWriteMany NFS PV Provisioner Feb 5, 2018
@kingdonb
Author

kingdonb commented Feb 5, 2018

Noted! I have updated the title and I will focus on the low-hanging fruit so we can get this into a usable form. I will think about creating a "sample-nfs-pv-provider-client" chart separate from this one so we can explore the issue about fixing ClusterIP in place, which also doesn't surface until you try and bind a PVC with a client.

@k8s-ci-robot k8s-ci-robot added approved Indicates a PR has been approved by an approver from all required OWNERS files. size/XS Denotes a PR that changes 0-9 lines, ignoring generated files. and removed size/L Denotes a PR that changes 100-499 lines, ignoring generated files. labels Feb 9, 2018
@k8s-ci-robot k8s-ci-robot added cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. and removed cncf-cla: no Indicates the PR's author has not signed the CNCF CLA. labels Feb 18, 2018
@kingdonb
Author

Yes I think it's ready for review again now

  storageClass: default

deployment:
  api_version: apps/v1beta2
Member

Camel case please, but why would you make this configurable? Why not just use apps/v1beta2?

Author

At this point I guess it does not really matter, as I've learned recently that there is no release of K8s out yet that will balk at v1beta2... so OK, I'll fix it as the apps/v1beta2 API.

Author

Addressed in 99c9481c


storage:
  name: nfs-pv-provisioning-demo
  demo_label: nfs-pv-provisioning
Member

Camel case please.

Author

Addressed in 99c9481c

role: nfs-server
spec:
containers:
- name: nfs-server
Member

Use consistent list indentation style. I'd suggest you always indent two spaces.

Contributor

I'd suggest you always indent two spaces.

afaik, it's generally preferred in YAML that - counts as indentation. Just my $0.02. Whichever way is chosen, I agree it should be applied consistently within a single YAML file.
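
Both forms are valid YAML; the two conventions under discussion look like this:

# style A: the dash itself provides the indentation
containers:
- name: nfs-server
  image: gcr.io/google-samples/nfs-server:1.1

# style B: two spaces before the dash
containers:
  - name: nfs-server
    image: gcr.io/google-samples/nfs-server:1.1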

Author

I was not even really conscious of this distinction and did not realize I had done it two different ways. Addressed according to @krancour's suggestion in cec82017

I will review the other files and be sure I was consistent about this everywhere. Thanks!

Author

I'm seeing that the standard in my favorite Kubernetes project's charts actually appears to be two spaces before the -, so I actually think I will go with the extra spaces instead...

Author

78a42910

spec:
  replicas: 1
  selector:
    matchLabels:
Member

app and release labels should go here.

Author

098b72b5

Author

I believe I understand app and release labels and their purpose now. Thank you for walking me through!

role: nfs-server
template:
metadata:
labels:
Member

app and release labels should go here.

Author

That actually makes a whole lot of sense! Thanks!
098b72b5

chart: {{ template "nfs-pv-provider.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
demo: "{{ .Values.storage.demo_label }}"
Member

What's the demo label for?

Author

A demo of how to add a label for use as a selector in the corresponding PVC...

The README is going to be updated to show (three?) ways to select this PV: by storageClass, by name, and by label selector. If this seems redundant, it may be; I still haven't used this heavily, and it's an early WIP implementation.

Because of the nature of this PV, you really probably want to select it by name (so you get the one that has your data on it). I'm not really well versed in StatefulSets or multi-AZ schemes, but I have a couple of itches that scream "just use an NFS server" that needed to be scratched.

Contributor

imo, it's better to use PVCs where you can. Any reason not to do so here?

It's a frequently applied paradigm throughout most of these charts that when persistence is enabled, you either do dynamic provisioning by identifying a storage class or (mutually exclusive) you identify an existing PVC.

It's not common to see PVs in these charts.

Contributor

Oh... sorry... I looked at the filename and assumed this was a PV and I didn't look any farther. This is a PVC. Please fix the filename.

Author

Fixed the filename in 8baff259

demo: "{{ .Values.storage.demo_label }}"
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: "{{ .Values.storage.storageClass }}"
Member

Use our best practice for storage class configuration: #1869
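
The pattern from that issue is roughly the following sketch (the value name shown is illustrative; "-" disables dynamic provisioning, and leaving the value unset falls back to the cluster's default storage class):

{{- if .Values.storage.storageClass }}
{{- if (eq "-" .Values.storage.storageClass) }}
  storageClassName: ""
{{- else }}
  storageClassName: "{{ .Values.storage.storageClass }}"
{{- end }}
{{- end }}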

@@ -0,0 +1,14 @@
image:
  repository: quay.io/kingdonb/nfs-data
  tag: master
Contributor

Is this image mutable? I'd feel better about this if the image used were not. i.e. Something tagged with semver that will never be overwritten would be better.

Author

We can use gcr.io/google-samples/nfs-server:1.1

I don't have any need to add changes to the sample image yet, but if I do diverge from gcr.io/google-samples/nfs-server, I'll make it a point to use semver.

Will update the chart and retest this.

Author

2865480f

examples in [kingdonb/nfs-data](//github.com/kingdonb/nfs-data) to share an NFS
path under the filesystem between pods.

Your pods can do this without building or including any extra NFS client!
Contributor

I realize the docs are WIP, but moving the docs forward would help a lot with the review process here. If the docs present a clear vision for how this is meant to be used, it's easier to reason about what we're looking at. i.e. Imparting a conceptual understanding makes the implementation clearer to all.

Author

Noted. I have an issue open to work on the README in that repo.

I actually wrote this great README and managed to overwrite it with the sample README before it made it into a commit. Had to rage quit after that.

@kingdonb
Author

kingdonb commented Feb 22, 2018

I think I've implemented all of the suggestions except for adding a proper README. TODO

I'm preserving the smaller commits for the duration of this PR at least; I linked them above in case it helps reviewers see that their changes have been implemented. I'm saving them in a branch called nfs-pv-verbose so I can squash all of my commits into 3d0ea756, which is better for merging when ready.

I will offer another fix-up commit with one more change when I get around to writing a better README, hopefully tonight or at least before you read this. Thanks everyone for the feedback!

@kingdonb
Author

It needs to be re-tested now, I think, but I believe I've implemented all of the suggested changes and have now included a better README that explains more of what this is for and why you might want it.

@kingdonb kingdonb changed the title [incubator/nfs-pv-provider] WIP: ReadWriteMany NFS PV Provisioner [incubator/nfs-pv-provider] ReadWriteMany NFS PV Provisioner Feb 22, 2018
@kingdonb
Author

Tested and LGTM

Happy to take more feedback

selector:
  matchLabels:
    app: {{ template "nfs-pv-provider.name" . }}
    chart: {{ template "nfs-pv-provider.chart" . }}
Member

Label selectors should have the release label, not the chart label. Apply everywhere.
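
A sketch of the corrected selector:

selector:
  matchLabels:
    app: {{ template "nfs-pv-provider.name" . }}
    release: {{ .Release.Name }}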

Author

@kingdonb kingdonb Feb 27, 2018

I think everywhere is just both places, here in the deployment... thanks!

2aac6782ce573236f3cf68b3fed59e3e97f17727

chart: {{ template "nfs-pv-provider.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
demo: "{{ .Values.server.persistentVolume.demo_label }}"
Member

What's the demo label for? And again, camel case only please for values.

Author

Removed it in 2aac6782ce573236f3cf68b3fed59e3e97f17727

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: "{{ .Values.server.persistentVolume.name }}"
Member

Name should use the fullname template.
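
Assuming the chart defines the usual fullname helper, that would look something like:

metadata:
  name: {{ template "nfs-pv-provider.fullname" . }}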

Author

💯 understood; this is sort of what I imagined you'd use the demo label for.

This is actually my first K8s contribution; I've never written an original chart before!

Writing your suggestions in now

@k8s-ci-robot
Contributor

Thanks for your pull request. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA).

📝 Please follow instructions at https://git.k8s.io/community/CLA.md#the-contributor-license-agreement to sign the CLA.

It may take a couple minutes for the CLA signature to be fully registered; after that, please reply here with a new comment and we'll verify. Thanks.


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

@k8s-ci-robot k8s-ci-robot added cncf-cla: no Indicates the PR's author has not signed the CNCF CLA. cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. and removed cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. cncf-cla: no Indicates the PR's author has not signed the CNCF CLA. labels Feb 27, 2018
started from kubernetes/examples#108

* nfs-data (client examples are now at https://github.com/kingdonb/nfs-data)

implementing suggestions from reviewers @krancour and @unguiculus

* uses gcr.io/google_containers/volume-nfs:0.8 from examples
* indent two spaces before hyphen array
* match label selectors and template metadata
* use storageClass best practices
* fixup (dns names can't be camel case)

switch back to semver nfs server image from google-samples

app / release selectors and labels
@kingdonb
Author

I believe I have answered all of the suggestions, thanks for feedback!

@kingdonb
Author

kingdonb commented Mar 2, 2018

So, I don't know if this is ready to merge into the incubator or not. I'm not sure I've put the boundary in the right place between the nfs-pv-provider chart and the external nfs-data "how to use the NFS PV provider" example at kingdonb/nfs-data.

That repo also needs some cleanup as it is not a proper Helm chart. (But that is not necessarily part of this merge...)

Installing this nfs-pv-provider chart should set up an NFS server pod and an NFS server Service. That seems to work, and I think all of the important best-practices advice has been implemented now. What is left to the user: you must define a PV with a path and the clusterIP of the Service, and you must make a PVC (with ReadWriteMany if you want) that matches the PV you create.
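
A sketch of those user-side objects (the names, size, and IP here are illustrative):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-nfs-share
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.0.0.42   # the clusterIP of the NFS Service created by this chart
    path: "/"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-nfs-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""      # skip dynamic provisioning
  volumeName: my-nfs-share  # bind to the PV above by name
  resources:
    requests:
      storage: 10Gi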

I can't predict whether the user will want to export the whole root / or several distinct exports, e.g. /volume1, /volume2, /volume3. I have a particular itch I'm trying to scratch and I have yet to scratch it. So unless anyone really wants to test this on their own, it's maybe best if I dogfood this chart for a couple of weeks and figure out what it doesn't do that would be nice to add, before we close and merge this PR into the incubator.

Also an open question is, does this type of thing belong in the incubator at all?

The motivation for writing this chart was to provide an easy way to do just what is described in https://github.com/kubernetes/examples/tree/master/staging/volumes/nfs – use NFS to provide ReadWriteMany PVs, with minimal guarantees and no SLA – but specifically in the form of a chart, packaged and ready to go (without the need to read a complex README and perform a lot of steps, and of course without replication controllers).

This is not really a software product like most charts; it's more like a basic example made into a package. The example is somewhat complex for beginners but great for learning and for handling the moving parts to familiarize yourself with the primitives, while the other (this chart) is something you can plug in and use in about 3 minutes if you just need a simple shared drive for pods on the cheap.

(Some of this should probably go into the README which I'm sure could still be made easier to follow before merge.)

@jdumars

jdumars commented Mar 2, 2018

/ok-to-test

@k8s-ci-robot k8s-ci-robot removed the needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. label Mar 2, 2018
@k8s-ci-robot
Contributor

@kingdonb: The following test failed, say /retest to rerun them all:

Test name Commit Details Rerun command
pull-charts-e2e bd21b85 link /test pull-charts-e2e

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

@kingdonb
Author

kingdonb commented Mar 2, 2018

@unguiculus It looks like your request for changes was placed above the line I changed, so it did not automatically resolve when I pushed the updated version... (or maybe they don't automatically resolve at all; I'm a newbie at this, so I'm not sure which)

@kingdonb
Author

kingdonb commented Mar 10, 2018

Just found #2779 and #3870; one of us should win! I see StatefulSets in #3870, and other advancements like dynamic exports; I think it is probably more advanced. Further review forthcoming!

@yebyen

yebyen commented Apr 23, 2018

Closing in favor of #3870 and/or #2779 (it looks like at least one of them will be merged soon)
