
kommander: Set Grafana home dashboard in Kommander Grafana #386

Merged

Conversation

@gracedo (Contributor) commented Jan 28, 2020

This creates a Job to set the Kommander-specific Grafana home dashboard to the Clusters summary dashboard. I was unable to get a post-install hook for this Job working; the Job wasn't getting triggered at all (if anybody has any ideas why, let me know!). The Job has retry logic, so it is given plenty of time (10m) to complete, since it depends on the Grafana service coming up. It was adapted from our post-install-hook Job in prometheus-operator, which sets the home dashboard for the kubeaddons Grafana instance. I also added pre-delete cleanup hook Jobs for all of the resources created in the pre-install hooks (the ConfigMap with the script that sets the home dashboard, and the ops-portal secret that the script depends on); these require ClusterRoles so that they have the correct permissions to handle those resource types. A sketch of the cleanup hook shape follows.
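
For reference, the pre-delete cleanup hooks follow the standard Helm hook pattern. A minimal sketch of one of them (resource names, image, and the ServiceAccount wiring are illustrative, not the exact chart contents):

apiVersion: batch/v1
kind: Job
metadata:
  name: grafana-home-dashboard-cleanup
  namespace: {{ .Release.Namespace }}
  annotations:
    "helm.sh/hook": pre-delete
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    spec:
      # ServiceAccount bound to a ClusterRole that may delete configmaps/secrets
      serviceAccountName: grafana-home-dashboard-cleanup
      restartPolicy: OnFailure
      containers:
      - name: cleanup
        image: bitnami/kubectl
        command:
        - /bin/sh
        - -c
        - kubectl delete configmap/grafana-home-dashboard secret/ops-portal-credentials --namespace={{ .Release.Namespace }} --ignore-not-found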

This work is dependent on #384 so that the karma/thanos cleanup hooks occur after these clusterroles are created. Once that PR is merged, I will bump the karma/thanos dependencies here.

Once this is merged, I will open a PR with base-kubeaddons to bump the Kommander chart.

Testing

I tested this by building #384 on my personal GH repo and pointing kommander to those versions of kommander-thanos and kommander-karma. Then I built/published that version of kommander on my personal repo and created an addons branch referencing my repo to install kommander. You can use that branch to test on a Konvoy cluster: gracedo/test_kommander_grafana_home_dash_hook

Here is the grafana-home-dashboard job run that sets the home dashboard:
[screenshot: grafana-home-dashboard Job run output showing the home dashboard being set]

$ kubectl -n kommander get cm grafana-home-dashboard -oyaml
apiVersion: v1
data:
  run.sh: |-
    #!/bin/bash
    set -o nounset
    set -o errexit
    set -o pipefail
    CURL="curl --verbose --fail --max-time 30 --retry 20 --retry-connrefused"
    DASHBOARD_ID=$($CURL -H "X-Forwarded-User: $X_FORWARDED_USER" http://kommander-kubeaddons-grafana.kommander/api/dashboards/uid/efa86fd1d0c121a26444b636a3f509a8 | jq '.dashboard.id')
    echo "setting home dashboard to ID" $DASHBOARD_ID
    $CURL -X PUT -H "Content-Type: application/json" -H "X-Forwarded-User: $X_FORWARDED_USER" -d '{"homeDashboardId":'"$DASHBOARD_ID"'}' http://kommander-kubeaddons-grafana.kommander/api/org/preferences
kind: ConfigMap
metadata:
  creationTimestamp: "2020-01-28T22:34:52Z"
  name: grafana-home-dashboard
  namespace: kommander
  resourceVersion: "71396"
  selfLink: /api/v1/namespaces/kommander/configmaps/grafana-home-dashboard
  uid: 0af52c3e-b4b7-456c-9eec-a8a82a55f9ba

$ kubectl -n kommander get secrets ops-portal-credentials
NAME                     TYPE     DATA   AGE
ops-portal-credentials   Opaque   2      14m

I also verified that going to <cluster_url>/ops/portal/kommander/monitoring/grafana redirects to the home dashboard Kubernetes / Compute Resources / Clusters.
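
As an extra check, Grafana's org preferences endpoint reports the configured home dashboard ID. A sketch, run from inside the cluster with the same header the job script uses:

$ curl -s -H "X-Forwarded-User: $X_FORWARDED_USER" \
    http://kommander-kubeaddons-grafana.kommander/api/org/preferences | jq .homeDashboardId

This should print the same ID the job logged when it set the preference.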

At deletion, all of these resources are cleaned up:

$ kubectl -n kommander get cm grafana-home-dashboard -oyaml
Error from server (NotFound): configmaps "grafana-home-dashboard" not found
$ kubectl -n kommander get secrets ops-portal-credentials
Error from server (NotFound): secrets "ops-portal-credentials" not found

@gracedo added the wip and blocked labels Jan 28, 2020
@gracedo requested a review from a team January 28, 2020 22:55
@gracedo self-assigned this Jan 28, 2020
@gracedo removed the wip label Jan 28, 2020
@gracedo force-pushed the gracedo/kommander_grafana_home_dash_DCOS-62971 branch from 1d0e67c to ac3a752 January 28, 2020 23:11
@samvantran (Contributor) previously approved these changes and left a comment Jan 29, 2020:

Spun up a cluster pointing the base-addons configVersion to your branch and it worked like a charm 👍

image: dwdraju/alpine-curl-jq
secretKeyRef: ops-portal-credentials
serviceURL: http://kommander-kubeaddons-grafana.kommander
homeDashboardUID: efa86fd1d0c121a26444b636a3f509a8
Contributor:

Is this value hardcoded into the grafana json itself? What is the likelihood this value will change? The original prom job queries the API for the dashboard by name, then sets the ID, whereas here you seem to already know it?

Contributor:

Answered my own question here, but I'm wondering: if we ever update the chart, does the UID change, or is it set in stone forever? If it changes, we should query by name, but if not, then this is fine.

@gracedo (Author):

The UID would only change if a dev went in there and changed that field, but the same is true of the name; they're both fields set in the same json. Personally it felt cleaner to me to grab the dashboard by the UID that we set on it (since we are in charge of creating that dashboard and we know it exists) rather than querying for a name/string (in dcos-monitoring, we grab the dashboard by UID). Comparing the API calls themselves, /api/dashboards/uid/{{ .Values.grafana.hooks.homeDashboardUID }} also just looks cleaner to me than /api/search/?query=Kubernetes+%2F+Compute+Resources+%2F+Cluster
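
For comparison, the name-based lookup discussed above would be a sketch along these lines (Grafana's /api/search returns an array of matching dashboards, each with an id and uid):

$ curl -s -H "X-Forwarded-User: $X_FORWARDED_USER" \
    "http://kommander-kubeaddons-grafana.kommander/api/search?query=Kubernetes%20%2F%20Compute%20Resources%20%2F%20Cluster" | jq '.[0].id'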

Contributor:

I'm ok with querying for the dashboard by its UID. I think the only downside is that you can't tell at a glance which dashboard we're setting. How about a comment here that mentions the name of the dashboard?

Contributor:

Comment is a good compromise. We may add more dashboards in the future so this helps a human to identify the dashboard without having to grep through all for the uid
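
Something along these lines in values.yaml would cover it (a sketch of the suggested comment, not the committed change):

grafana:
  hooks:
    image: dwdraju/alpine-curl-jq
    secretKeyRef: ops-portal-credentials
    serviceURL: http://kommander-kubeaddons-grafana.kommander
    # UID of the "Kubernetes / Compute Resources / Cluster" dashboard used as the Grafana home dashboard
    homeDashboardUID: efa86fd1d0c121a26444b636a3f509a8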

Contributor:

at least we plumbed it to the values file, good call. I am also hesitant about this but we can figure this out later.

@branden (Contributor) previously approved these changes and left a comment Jan 29, 2020:

LGTM 👍 I'll re-approve after #384 is merged and this PR is updated.

@samvantran (Contributor) left a comment:

had a few extra comments but overall LGTM

metadata:
name: {{ .Values.grafana.hooks.jobName | quote }}
namespace: {{ .Release.Namespace }}
labels:
Contributor:

Should we add a hook delete policy for hook-succeeded? No need to keep the job and the pods around if everything worked
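
For reference, the suggestion amounts to standard Helm hook annotations like the following; as the reply below notes, the hook itself couldn't be made to work here:

annotations:
  "helm.sh/hook": post-install
  "helm.sh/hook-delete-policy": hook-succeeded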

@gracedo (Author):

Unfortunately I was not able to get the hook working, so it's just a regular job

Contributor:

oh i see, i thought it was just the post-install hook but it was all hooks. Okay, all good.

Contributor:

Please put a jira ticket to review this. we should be cleaning up after jobs.

@gracedo added the ready label and removed the blocked label Jan 29, 2020
@gracedo force-pushed the gracedo/kommander_grafana_home_dash_DCOS-62971 branch from f9ac51e to 76efb04 January 29, 2020 18:15
@samvantran (Contributor) previously approved these changes and left a comment Jan 29, 2020:

👍


command:
- /bin/sh
- -c
- kubectl get secret ops-portal-credentials --namespace=kubeaddons --export -o yaml | kubectl apply --namespace={{ .Release.Namespace }} -f -
@gracedo (Author):

@jr0d @jongiddy I also just want to make sure it's okay to be copying the ops-portal-credentials secret from kubeaddons ns to kommander ns. The secret is needed to get the user to pass into the X-Forwarded-User header for Grafana requests

@jr0d (Contributor) commented Jan 30, 2020:

@gracedo it does allow anyone with secret view privileges in the kommander namespace access to the ops portal password, which they wouldn't have had before. While it's more difficult, perhaps consider only copying the username.

What would happen if access for the ops-portal user was disabled?

@gracedo (Author):

Made the change to only copy over the username 0a2615e
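
A sketch of what a username-only copy can look like (the actual change is in 0a2615e; this command shape is illustrative):

command:
- /bin/sh
- -c
- kubectl create secret generic ops-portal-credentials --namespace={{ .Release.Namespace }} --from-literal=username="$(kubectl get secret ops-portal-credentials --namespace=kubeaddons -o jsonpath='{.data.username}' | base64 -d)" --dry-run -o yaml | kubectl apply -f -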

@gracedo force-pushed the gracedo/kommander_grafana_home_dash_DCOS-62971 branch from 76efb04 to 0ef5434 January 29, 2020 18:24
@gracedo requested review from branden and samvantran January 29, 2020 20:53
@samvantran previously approved these changes Jan 29, 2020
@branden previously approved these changes Jan 30, 2020
@gracedo dismissed stale reviews from branden and samvantran via 436eab3 January 30, 2020 18:46
@samvantran previously approved these changes Jan 31, 2020
@samvantran (Contributor) commented:

Got a merge conflict but beyond that LGTM

@gracedo dismissed stale reviews from samvantran and jr0d via 4fd121b January 31, 2020 18:40
@gracedo force-pushed the gracedo/kommander_grafana_home_dash_DCOS-62971 branch from 0a2615e to 4fd121b January 31, 2020 18:40
@gracedo (Author) commented Jan 31, 2020

@samvantran @jr0d rebased and fixed conflicts, can you guys +1 again?

@gracedo requested review from jr0d and samvantran January 31, 2020 18:42
@alejandroEsc (Contributor) left a comment:

looks alright, assuming things were tested thoroughly including upgrades? If so then lgtm.

@gracedo (Author) commented Jan 31, 2020

Going to go ahead and merge this to unblock d2iq-archive/kubernetes-base-addons#100

@gracedo merged commit bc5c2ee into mesosphere:master Jan 31, 2020
mesosphere-teamcity pushed a commit that referenced this pull request Jan 31, 2020
…ash_DCOS-62971 kommander: Set Grafana home dashboard in Kommander Grafana
joejulian added a commit that referenced this pull request Mar 6, 2020
joejulian added a commit that referenced this pull request Mar 9, 2020
sebbrandt87 pushed a commit that referenced this pull request Mar 18, 2020
hectorj2f pushed a commit that referenced this pull request Mar 19, 2020
mesosphere-teamcity pushed a commit that referenced this pull request Mar 19, 2020
Labels: ready
Projects: none
5 participants