#403 pipeline-invocation-service java17 upgrade #416

Merged: 1 commit · Oct 17, 2024
16 changes: 16 additions & 0 deletions DRAFT_RELEASE_NOTES.md
@@ -71,8 +71,24 @@ To start your aiSSEMBLE upgrade, update your project's pom.xml to use the 1.10.0
</parent>
```

### Split Data Records for the Spark Pipeline
If your Spark pipeline uses the `aissemble-data-records-separate-module` profile for its data records, you must add the `<version>` tag to
the `jackson-mapper-asl` dependency in the root pom.xml file so the build can resolve the artifact.
```xml
<dependency>
<groupId>org.codehaus.jackson</groupId>
<artifactId>jackson-mapper-asl</artifactId>
+ <version>${version.jackson.mapper.asl}</version>
</dependency>
```


## Conditional Steps

### For projects that have customized the Spark Operator Service Account permissions
The service account for the pipeline invocation service is now separate from the Spark operator's and is configured solely for the service.
If you added any custom configuration to the `sparkoperator` service account on behalf of the pipeline invocation service, you will need to migrate those changes to the new `pipeline-invocation-service-sa`. Refer to the Pipeline Invocation Helm Chart [README](https://github.com/boozallen/aissemble/blob/dev/extensions/extensions-helm/extensions-helm-pipeline-invocation/aissemble-pipeline-invocation-app-chart/README.md) for details.
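
As a minimal sketch of that migration, a project-specific RoleBinding that previously granted extra permissions to the shared `sparkoperator` account would now target the dedicated service account instead. The role and binding names below are hypothetical; the subject name change is the point:
```yaml
# Hypothetical project-specific binding; adjust names and namespace to your project.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pipeline-invocation-extra-permissions   # hypothetical binding name
subjects:
  - kind: ServiceAccount
    name: pipeline-invocation-service-sa        # previously: sparkoperator
    namespace: default                          # adjust to your deployment namespace
roleRef:
  kind: Role
  name: my-custom-pipeline-role                 # hypothetical project-specific Role
  apiGroup: rbac.authorization.k8s.io
```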

## Final Steps - Required for All Projects
### Finalizing the Upgrade
1. Run `./mvnw org.technologybrewery.baton:baton-maven-plugin:baton-migrate` to apply the automatic migrations
1 change: 1 addition & 0 deletions build-parent/pom.xml
@@ -67,6 +67,7 @@
<version.awaitility>4.0.3</version.awaitility>
<version.plexus.util>3.5.1</version.plexus.util>
<version.jackson.mapper.asl>1.9.3</version.jackson.mapper.asl>
<version.exec.maven.plugin>3.4.1</version.exec.maven.plugin>

<!-- Java EE Dependencies -->
<version.jakarta.cdi>4.0.1</version.jakarta.cdi>
@@ -1,4 +1,4 @@
FROM registry.access.redhat.com/ubi9/openjdk-11-runtime:1.20 AS builder
FROM registry.access.redhat.com/ubi9/openjdk-17-runtime:1.20 AS builder
Contributor (author) comment:
I: Although it's the same image, use a multi-stage build to save more than 20 MB of space.

USER root
RUN microdnf install -y openssl gzip && \
curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash

This file was deleted.

@@ -45,25 +45,10 @@ aissemble-spark-operator-chart:
| volumes | Volumes for the pod | No | `spark-logging=/tmp/spark-logging`, `ivy-cache=/home/spark/.ivy2` |
| volumeMounts | Volume Mounts for the pod | No | `spark-logging=/tmp/spark-logging`, `ivy-cache=/home/spark/.ivy2` |
| fullnameOverride | String to override release name | No | spark-operator |
| rbac.createClusterRole | See `Migrated Properties` | No | false |
| serviceAccounts.spark.name | Name for the spark service account | No | spark |
| serviceAccounts.sparkoperator.name | Name for the spark service account | No | sparkoperator |
| podSecurityContext | Pod security context | No | runAsUser: 185<br/>runAsGroup: 1000<br/>fsGroup: 1000<br/>fsGroupChangePolicy: "OnRootMismatch" |

## Migrated Properties
The following properties have been migrated from the `spark-operator` subchart to the `aissemble-spark-operator-chart` chart.
Any required overrides should be cognisant of the alternate path. For example:

```yaml
aissemble-spark-operator-chart:
rbac:
createClusterRole: false
```

| Property | Description | Default |
|------------------------|-------------------------------------------------------------------------------|---------|
| rbac.createClusterRole | Create and use RBAC `ClusterRole` resources. Migrated to use modified rules. | true |

# Shared Ivy Cache

Spark uses [Ivy](https://ant.apache.org/ivy/) to resolve and download dependencies for Spark applications. By default,
@@ -29,12 +29,6 @@ spark-operator:
runAsGroup: 1000
fsGroup: 1000
fsGroupChangePolicy: "OnRootMismatch"

rbac:
# -- Create and use RBAC `ClusterRole` resources
# -- Set to false in order to enable overriding with our own RBAC template
createClusterRole: false

# volumes - Operator volumes
volumes:
- name: spark-logging
@@ -61,8 +55,4 @@ spark-operator:

sparkoperator:
# -- Optional name for the operator service account
name: "sparkoperator"

rbac:
# -- Set to True in order to enable overriding with our own RBAC template
createClusterRole: True
name: "sparkoperator"
@@ -10,16 +10,17 @@ helm install pipeline-invocation-service oci://ghcr.io/boozallen/aissemble-pipel
**Note**: *the version should match the aiSSEMBLE project version.*

# Properties
| Property | Description | Required Override | Default |
|------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------|-------------------|-----------------------------------------------------|
| ingress.apiVersion | k8s API version to use | No | networking.k8s.io/v1 |
| ingress.enabled | k8s Whether to enable ingress | No | false |
| ingress.kind | Type of kubernetes entity | No | Ingress |
| ingress.metadata.name | Name of the ingress | No | pipeline-invocation-service-web |
| ingress.metadata.annotations.kubernetes.io/ingress.class | Ingress class name | No | nginx |
| ingress.metadata.annotations.ingress.metadata.annotations.nginx.ingress.kubernetes.io/server-snippet | Custom configurations for the nginx ingress class | No | gunzip on; gzip on; gzip_proxied any; gzip_types *; |
| ingress.spec.rules.hosts | A list of hosts for ingress to support, each with their own path definition | No | |
| ingress.status | Load balancer IP if required | No | None |
| Property | Description | Required Override | Default |
|-------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------|-------------------|-----------------------------------------------------|
| ingress.apiVersion | k8s API version to use | No | networking.k8s.io/v1 |
| ingress.enabled | k8s Whether to enable ingress | No | false |
| ingress.kind | Type of kubernetes entity | No | Ingress |
| ingress.metadata.name | Name of the ingress | No | pipeline-invocation-service-web |
| ingress.metadata.annotations.kubernetes.io/ingress.class | Ingress class name | No | nginx |
| ingress.metadata.annotations.nginx.ingress.kubernetes.io/server-snippet | Custom configurations for the nginx ingress class | No | gunzip on; gzip on; gzip_proxied any; gzip_types *; |
| ingress.spec.rules.hosts | A list of hosts for ingress to support, each with their own path definition | No | |
| ingress.status | Load balancer IP if required | No | None |
| rbac.createClusterRole | Create and use RBAC `ClusterRole` resources. | No | true |
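
For example, if a project manages its own RBAC and wants to skip the chart-provided `ClusterRole`/`ClusterRoleBinding`, a values override along these lines could be used (a sketch; nest it under whatever alias your deployment uses for this chart):
```yaml
# Sketch: disable the chart-managed ClusterRole and ClusterRoleBinding.
rbac:
  createClusterRole: false
```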

# Quarkus Configuration

@@ -30,3 +31,5 @@ The following configuration of the service is provided. Additional configuratio
|---------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------|
| kafka.bootstrap.servers | Specifies the kafka bootstrap server when using kafka for messaging | Any valid URI |
| mp.messaging.incoming.pipeline-invocation.* | Specifies and configures the smallrye connector to use. Supported connectors are `smallrye-amqp`, `smallrye-kafka`, `smallrye-mqtt`, and `smallrye-rabbitmq` | See xref:messaging-details.adoc[the Messaging documentation] for more details |
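
As an illustration only, with the `smallrye-kafka` connector and a broker at `kafka:9092` (both assumptions), and written in YAML form (which requires the `quarkus-config-yaml` extension), the configuration could look like:
```yaml
kafka:
  bootstrap:
    servers: kafka:9092            # assumed broker address
mp:
  messaging:
    incoming:
      pipeline-invocation:
        connector: smallrye-kafka  # one of the supported connectors
        topic: pipeline-invocation # hypothetical topic name
```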


@@ -1,24 +1,22 @@
{{- /*
aiSSEMBLE Custom rbac.yaml

Required custom rbac.yaml file that grants the sparkoperator service account
Required custom rbac.yaml file that grants the pipeline-invocation-service service account
create, delete, and update access to the apigroup apiextensions.k8s.io.

This is necessary for the pipeline-invocation-service to create instances of the
SparkApplication CRD to submit pipelines to the Spark Operator for execution.
*/}}

{{- if or .Values.rbac.create .Values.rbac.createClusterRole }}
{{ if .Values.rbac.createClusterRole }}
{{- $serviceAccountName := (index .Values "aissemble-quarkus-chart" "deployment" "serviceAccountName") -}}
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: {{ include "spark-operator.fullname" (index .Subcharts "spark-operator") }}
name: {{ $serviceAccountName | default "pipeline-invocation-service" }}-clusterrole
annotations:
"helm.sh/hook": pre-install, pre-upgrade
"helm.sh/hook-delete-policy": hook-failed, before-hook-creation
"helm.sh/hook-weight": "-10"
labels:
{{- include "spark-operator.labels" (index .Subcharts "spark-operator") | nindent 4 }}
rules:
- apiGroups:
- ""
@@ -34,6 +32,7 @@ rules:
- configmaps
- secrets
verbs:
- list
- create
- get
- delete
@@ -75,6 +74,7 @@ rules:
resources:
- customresourcedefinitions
verbs:
- create
- get
- apiGroups:
- admissionregistration.k8s.io
@@ -97,7 +97,6 @@
- scheduledsparkapplications/finalizers
verbs:
- "*"
{{- if .Values.batchScheduler.enable }}
# required for the `volcano` batch scheduler
- apiGroups:
- scheduling.incubator.k8s.io
@@ -107,34 +106,29 @@
- podgroups
verbs:
- "*"
{{- end }}
{{ if .Values.webhook.enable }}
- apiGroups:
- batch
resources:
- jobs
verbs:
- delete
{{- end }}

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: {{ include "spark-operator.fullname" (index .Subcharts "spark-operator") }}
name: {{ $serviceAccountName | default "pipeline-invocation-service" }}-clusterrole-binding
annotations:
"helm.sh/hook": pre-install, pre-upgrade
"helm.sh/hook-delete-policy": hook-failed, before-hook-creation
"helm.sh/hook-weight": "-10"
labels:
{{- include "spark-operator.labels" (index .Subcharts "spark-operator") | nindent 4 }}
subjects:
- kind: ServiceAccount
name: {{ include "spark-operator.serviceAccountName" (index .Subcharts "spark-operator") }}
name: {{ $serviceAccountName | default "pipeline-invocation-service" }}
namespace: {{ .Release.Namespace }}
roleRef:
kind: ClusterRole
name: {{ include "spark-operator.fullname" (index .Subcharts "spark-operator") }}
name: {{ $serviceAccountName | default "pipeline-invocation-service" }}-clusterrole
apiGroup: rbac.authorization.k8s.io
{{- end }}
@@ -0,0 +1,5 @@
{{- $serviceAccountName := (index .Values "aissemble-quarkus-chart" "deployment" "serviceAccountName") -}}
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ $serviceAccountName | default "pipeline-invocation-service" }}
@@ -0,0 +1,74 @@
suite: Pipeline Invocation Service RBAC
templates:
- rbac.yaml
tests:
- it: Should contain ClusterRole document
documentIndex: 0
asserts:
- containsDocument:
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
- it: Should contain ClusterRoleBinding document
documentIndex: 1
asserts:
- containsDocument:
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
- it: Should be 2 documents in total
asserts:
- hasDocuments:
count: 2
- it: Do not contain any documents if options are disabled
set:
rbac:
createClusterRole: false
asserts:
- hasDocuments:
count: 0
- it: ClusterRole should include appropriate default values
documentIndex: 0
asserts:
- equal:
path: metadata.name
value: pipeline-invocation-service-sa-clusterrole
- it: ClusterRoleBinding should include appropriate default values
documentIndex: 1
release:
namespace: default
asserts:
- equal:
path: metadata.name
value: pipeline-invocation-service-sa-clusterrole-binding
- contains:
path: subjects
content:
kind: ServiceAccount
name: pipeline-invocation-service-sa
namespace: default
- equal:
path: roleRef.kind
value: ClusterRole
- equal:
path: roleRef.name
value: pipeline-invocation-service-sa-clusterrole
- equal:
path: roleRef.apiGroup
value: rbac.authorization.k8s.io
- it: Should set values appropriately for the cluster role binding
set:
aissemble-quarkus-chart:
deployment:
serviceAccountName: test
release:
namespace: default
documentIndex: 1
asserts:
- contains:
path: subjects
content:
kind: ServiceAccount
name: test
namespace: default
- equal:
path: metadata.name
value: test-clusterrole-binding
@@ -0,0 +1,19 @@
suite: Pipeline Invocation Service Account
templates:
- serviceaccount.yaml
tests:
- it: ServiceAccount should include appropriate default values
asserts:
- equal:
path: metadata.name
value: pipeline-invocation-service-sa

- it: Should set values appropriately for the service account
set:
aissemble-quarkus-chart:
deployment:
serviceAccountName: test
asserts:
- equal:
path: metadata.name
value: test
@@ -12,7 +12,7 @@ aissemble-quarkus-chart:
containerPort: 9000
protocol: TCP
restartPolicy: Always
serviceAccountName: sparkoperator
serviceAccountName: pipeline-invocation-service-sa
automountServiceAccountToken: true

supplementalVolumeMounts:
@@ -60,3 +60,5 @@ aissemble-quarkus-chart:
name: pipeline-invocation-service
port:
number: 8080
rbac:
createClusterRole: true
@@ -23,7 +23,7 @@
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>exec-maven-plugin</artifactId>
<version>3.1.0</version>
<version>${version.exec.maven.plugin}</version>
<executions>
<execution>
<id>run tests</id>
3 changes: 1 addition & 2 deletions extensions/extensions-helm/pom.xml
@@ -199,7 +199,7 @@
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>exec-maven-plugin</artifactId>
<version>3.1.0</version>
<version>${version.exec.maven.plugin}</version>
<executions>
<execution>
<id>run tests</id>
@@ -250,7 +250,6 @@
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>exec-maven-plugin</artifactId>
<version>3.1.0</version>
<executions>
<execution>
<id>run tests</id>