Add support for blue/green deployments #1230

Merged · 5 commits · Apr 28, 2021
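The gist of this PR: the webhook certs Secret is now namespaced by the Helm release fullname, and RBAC `Role`/`ClusterRole` creation can be toggled independently, so two operator releases can coexist in one cluster. A minimal sketch of the blue/green flow this enables (release names and the `spark-operator/spark-operator` chart reference are illustrative, assuming the chart repo has been added):

```bash
# "Blue" release owns the cluster-scoped RBAC.
helm install spark-blue spark-operator/spark-operator \
  --namespace spark-operator \
  --set webhook.enable=true

# "Green" release skips ClusterRole creation and gets its own
# <fullname>-webhook-certs Secret, so the two releases no longer
# collide on the previously hard-coded "spark-webhook-certs" name.
helm install spark-green spark-operator/spark-operator \
  --namespace spark-operator \
  --set webhook.enable=true \
  --set rbac.createClusterRole=false
```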
2 changes: 1 addition & 1 deletion charts/spark-operator-chart/Chart.yaml
@@ -1,7 +1,7 @@
 apiVersion: v2
 name: spark-operator
 description: A Helm chart for Spark on Kubernetes operator
-version: 1.0.9
+version: 1.0.10
 appVersion: v1beta2-1.2.0-3.0.0
 keywords:
 - spark
17 changes: 10 additions & 7 deletions charts/spark-operator-chart/README.md
@@ -60,24 +60,27 @@ The command removes all the Kubernetes components associated with the chart and
 | imagePullSecrets | list | `[]` | Image pull secrets |
 | ingressUrlFormat | string | `""` | Ingress URL format |
 | istio.enabled | bool | `false` | When using `istio`, spark jobs need to run without a sidecar to properly terminate |
+| labelSelectorFilter | string | `""` | A comma-separated list of key=value, or key labels to filter resources during watch and list based on the specified labels. |
 | leaderElection.lockName | string | `"spark-operator-lock"` | Leader election lock name. Ref: https://github.com/GoogleCloudPlatform/spark-on-k8s-operator/blob/master/docs/user-guide.md#enabling-leader-election-for-high-availability. |
 | leaderElection.lockNamespace | string | `""` | Optionally store the lock in another namespace. Defaults to operator's namespace |
 | logLevel | int | `2` | Set higher levels for more verbose logging |
 | metrics.enable | bool | `true` | Enable prometheus metric scraping |
 | metrics.endpoint | string | `"/metrics"` | Metrics serving endpoint |
 | metrics.port | int | `10254` | Metrics port |
-| metrics.portName | string | `metrics` | Metrics port name |
+| metrics.portName | string | `"metrics"` | Metrics port name |
 | metrics.prefix | string | `""` | Metric prefix, will be added to all exported metrics |
 | nameOverride | string | `""` | String to partially override `spark-operator.fullname` template (will maintain the release name) |
 | nodeSelector | object | `{}` | Node labels for pod assignment |
 | podAnnotations | object | `{}` | Additional annotations to add to the pod |
-| podMonitor.enable | bool| `false` | Submit a prometheus pod monitor for operator's pod. Note that prometheus metrics should be enabled as well.|
-| podMonitor.jobLabel | string | `spark-operator-podmonitor` | The label to use to retrieve the job name from |
-| podMonitor.podMetricsEndpoint.scheme | string | `http` | Prometheus metrics endpoint scheme |
-| podMonitor.podMetricsEndpoint.interval | string | `5s` | Interval at which metrics should be scraped |
+| podMonitor | object | `{"enable":false,"jobLabel":"spark-operator-podmonitor","labels":{},"podMetricsEndpoint":{"interval":"5s","scheme":"http"}}` | Prometheus pod monitor for operator's pod. |
+| podMonitor.enable | bool | `false` | If enabled, a pod monitor for operator's pod will be submitted. Note that prometheus metrics should be enabled as well. |
+| podMonitor.jobLabel | string | `"spark-operator-podmonitor"` | The label to use to retrieve the job name from |
+| podMonitor.labels | object | `{}` | Pod monitor labels |
+| podMonitor.podMetricsEndpoint | object | `{"interval":"5s","scheme":"http"}` | Prometheus metrics endpoint properties. `metrics.portName` will be used as a port |
 | podSecurityContext | object | `{}` | Pod security context |
-| rbac.create | bool | `true` | Create and use `rbac` resources |
+| rbac.create | bool | `false` | **DEPRECATED** use `createRole` and `createClusterRole` |
+| rbac.createClusterRole | bool | `true` | Create and use RBAC `ClusterRole` resources |
+| rbac.createRole | bool | `true` | Create and use RBAC `Role` resources |
 | replicaCount | int | `1` | Desired number of pods, leaderElection will be enabled if this is greater than 1 |
 | resourceQuotaEnforcement.enable | bool | `false` | Whether to enable the ResourceQuota enforcement for SparkApplication resources. Requires the webhook to be enabled by setting `webhook.enable` to true. Ref: https://github.com/GoogleCloudPlatform/spark-on-k8s-operator/blob/master/docs/user-guide.md#enabling-resource-quota-enforcement. |
 | resources | object | `{}` | Pod resource requests and limits |
@@ -89,10 +92,10 @@ The command removes all the Kubernetes components associated with the chart and
 | serviceAccounts.sparkoperator.name | string | `""` | Optional name for the operator service account |
 | sparkJobNamespace | string | `""` | Set this if running spark jobs in a different namespace than the operator |
 | tolerations | list | `[]` | List of node taints to tolerate |
 | webhook.cleanupAnnotations | object | `{"helm.sh/hook":"pre-delete, pre-upgrade","helm.sh/hook-delete-policy":"hook-succeeded"}` | The annotations applied to the cleanup job, required for helm lifecycle hooks |
 | webhook.enable | bool | `false` | Enable webhook server |
 | webhook.namespaceSelector | string | `""` | The webhook server will only operate on namespaces with this label, specified in the form key1=value1,key2=value2. Empty string (default) will operate on all namespaces |
 | webhook.port | int | `8080` | Webhook service port |
-| labelSelectorFilter | string | `""` | Set this if only operator watches spark jobs with certain labels are allowed |
 
 ## Maintainers
 
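For context on the regrouped `podMonitor` rows above, a hypothetical override that follows the documented keys (release name and chart path are assumptions):

```bash
# Enable metrics plus the pod monitor; the monitor scrapes the port
# named by metrics.portName, per the table above.
helm upgrade --install spark-operator ./charts/spark-operator-chart \
  --set metrics.enable=true \
  --set podMonitor.enable=true \
  --set podMonitor.podMetricsEndpoint.interval=10s
```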
2 changes: 1 addition & 1 deletion charts/spark-operator-chart/templates/deployment.yaml
@@ -90,7 +90,7 @@ spec:
       volumes:
         - name: webhook-certs
           secret:
-            secretName: spark-webhook-certs
+            secretName: {{ include "spark-operator.fullname" . }}-webhook-certs
       {{- end }}
       {{- with .Values.nodeSelector }}
       nodeSelector:
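With the standard Helm fullname helper, the templated `secretName` above resolves to `<release>-<chart>-webhook-certs`. A hedged way to check the rendering locally (the release name and expected output are assumptions):

```bash
# Render the chart for a release named "green" and inspect the secret name.
helm template green ./charts/spark-operator-chart --set webhook.enable=true \
  | grep 'secretName:'
# Expected to print something like:
#   secretName: green-spark-operator-webhook-certs
```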
2 changes: 1 addition & 1 deletion charts/spark-operator-chart/templates/rbac.yaml
@@ -1,4 +1,4 @@
-{{- if .Values.rbac.create }}
+{{- if or .Values.rbac.create .Values.rbac.createClusterRole }}
 apiVersion: rbac.authorization.k8s.io/v1
 kind: ClusterRole
 metadata:
2 changes: 1 addition & 1 deletion charts/spark-operator-chart/templates/spark-rbac.yaml
@@ -1,4 +1,4 @@
-{{- if .Values.rbac.create }}
+{{- if or .Values.rbac.create .Values.rbac.createRole }}
 apiVersion: rbac.authorization.k8s.io/v1
 kind: Role
 metadata:
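Because both templates now gate on `or`, values files that still set the deprecated `rbac.create=true` keep rendering the same resources. A quick sketch to confirm (chart path is an assumption):

```bash
# Even with the new flags disabled, the deprecated flag alone should
# still emit both kinds, since `or` returns the first truthy argument.
helm template test ./charts/spark-operator-chart \
  --set rbac.create=true \
  --set rbac.createRole=false \
  --set rbac.createClusterRole=false \
  | grep -E '^kind: (Role|ClusterRole)$'
```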
@@ -36,7 +36,7 @@ spec:
-H \"Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)\" \
-H \"Accept: application/json\" \
-H \"Content-Type: application/json\" \
https://kubernetes.default.svc/api/v1/namespaces/{{ .Release.Namespace }}/secrets/spark-webhook-certs \
https://kubernetes.default.svc/api/v1/namespaces/{{ .Release.Namespace }}/secrets/{{ include "spark-operator.fullname" . }}-webhook-certs \
&& \
curl -ik \
-X DELETE \
8 changes: 7 additions & 1 deletion charts/spark-operator-chart/templates/webhook-init-job.yaml
@@ -26,5 +26,11 @@ spec:
         imagePullPolicy: {{ .Values.image.pullPolicy }}
         securityContext:
           {{- toYaml .Values.securityContext | nindent 10 }}
-        command: ["/usr/bin/gencerts.sh", "-n", "{{ .Release.Namespace }}", "-s", "{{ include "spark-operator.fullname" . }}-webhook", "-p"]
+        command: [
+          "/usr/bin/gencerts.sh",
+          "-n", "{{ .Release.Namespace }}",
+          "-s", "{{ include "spark-operator.fullname" . }}-webhook",
+          "-r", "{{ include "spark-operator.fullname" . }}-webhook-certs",
+          "-p"
+        ]
 {{ end }}
8 changes: 6 additions & 2 deletions charts/spark-operator-chart/values.yaml
@@ -24,8 +24,12 @@ nameOverride: ""
 fullnameOverride: ""
 
 rbac:
-  # -- Create and use `rbac` resources
-  create: true
+  # -- **DEPRECATED** use `createRole` and `createClusterRole`
+  create: false
+  # -- Create and use RBAC `Role` resources
+  createRole: true
+  # -- Create and use RBAC `ClusterRole` resources
+  createClusterRole: true
 
 serviceAccounts:
   spark:
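The split flags let namespace-restricted installs drop cluster-scoped RBAC, for example (command is illustrative):

```bash
# Keep the namespaced Role but leave ClusterRole management out of band.
helm install spark-operator ./charts/spark-operator-chart \
  --set rbac.createClusterRole=false \
  --set rbac.createRole=true
```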
16 changes: 14 additions & 2 deletions hack/gencerts.sh
@@ -18,6 +18,7 @@

 set -e
 SCRIPT=`basename ${BASH_SOURCE[0]}`
+RESOURCE_NAME="spark-webhook-certs"
 
 function usage {
   cat<< EOF
@@ -27,6 +28,7 @@ function usage {
   -n | --namespace <namespace> The namespace where the Spark operator is installed.
   -s | --service <service> The name of the webhook service.
   -p | --in-pod Whether the script is running inside a pod or not.
+  -r | --resource-name The spark resource name that will hold the secret [default: $RESOURCE_NAME]
 EOF
 }

@@ -59,6 +61,16 @@ function parse_arguments {
       shift 1
       continue
       ;;
+    -r|--resource-name)
+      if [[ -n "$2" ]]; then
+        RESOURCE_NAME="$2"
+      else
+        echo "-r or --resource-name requires a value."
+        exit 1
+      fi
+      shift 2
+      continue
+      ;;
     -h|--help)
       usage
       exit 0
@@ -135,7 +147,7 @@ if [[ "$IN_POD" == "true" ]]; then
"kind": "Secret",
"apiVersion": "v1",
"metadata": {
"name": "spark-webhook-certs",
"name": "'"$RESOURCE_NAME"'",
"namespace": "'"$NAMESPACE"'"
},
"data": {
@@ -162,7 +174,7 @@ if [[ "$IN_POD" == "true" ]]; then
     ;;
   esac
 else
-  kubectl create secret --namespace=${NAMESPACE} generic spark-webhook-certs --from-file=${TMP_DIR}/ca-key.pem --from-file=${TMP_DIR}/ca-cert.pem --from-file=${TMP_DIR}/server-key.pem --from-file=${TMP_DIR}/server-cert.pem
+  kubectl create secret --namespace=${NAMESPACE} generic ${RESOURCE_NAME} --from-file=${TMP_DIR}/ca-key.pem --from-file=${TMP_DIR}/ca-cert.pem --from-file=${TMP_DIR}/server-key.pem --from-file=${TMP_DIR}/server-cert.pem
 fi
 
 # Clean up after we're done.
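An example invocation of the updated script outside a pod, using the new flag (names here are hypothetical; omit `-r` to fall back to the default `spark-webhook-certs`):

```bash
./hack/gencerts.sh \
  -n spark-operator \
  -s green-spark-operator-webhook \
  -r green-spark-operator-webhook-certs
```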