KFP charms use Jinja2 templates to store the manifests that are applied during their deployment. Those manifests are placed under `src/templates` in each charm's directory. The process for updating them is:
1. Install `kustomize` following the official documentation instructions.
2. Clone the Kubeflow manifests repo locally.
3. `cd` into the repo and check out the branch or tag of the target version.
4. Build the manifests with `kustomize` according to the instructions in https://github.com/kubeflow/manifests?tab=readme-ov-file#kubeflow-pipelines.
5. Check out the branch or tag of the version of the current manifests.
6. Build the manifests with `kustomize` (see step 4) and save the file.
7. Compare both files to spot the differences, e.g. using `diff`:
   `diff kfp-manifests-vX.yaml kfp-manifests-vY.yaml > kfp-vX-vY.diff`
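The steps above can be sketched as a shell session. The version tags `vX`/`vY` and the `kustomize` target directory are placeholders (the real build invocation is in the upstream instructions linked above); the network-dependent commands are shown as comments, and the comparison step is demonstrated on two stand-in files:

```shell
# Sketch of the workflow; <target-version>, <current-version> and
# <kfp-directory> are placeholders, not real values:
#
#   git clone https://github.com/kubeflow/manifests.git && cd manifests
#   git checkout <target-version>
#   kustomize build <kfp-directory> > kfp-manifests-vX.yaml
#   git checkout <current-version>
#   kustomize build <kfp-directory> > kfp-manifests-vY.yaml
#
# The comparison itself, demonstrated here on two stand-in files:
printf 'image: gcr.io/ml-pipeline/api-server:2.0.0\n' > kfp-manifests-vX.yaml
printf 'image: gcr.io/ml-pipeline/api-server:2.1.0\n' > kfp-manifests-vY.yaml
# diff exits non-zero when files differ, hence the || true.
diff kfp-manifests-vX.yaml kfp-manifests-vY.yaml > kfp-vX-vY.diff || true
cat kfp-vX-vY.diff   # shows the changed image line between the two versions
```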
Apart from its workload container image, kfp-api also uses two images: `launcher` and `driver`. Those are updated on every release, but this change is not visible when comparing manifests. To update them, grab their sources from the corresponding comments in the `config.yaml` file and switch to the target version of that file. Then, use the new images to update the config options' default values.
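For illustration, the relevant part of the kfp-api charm's `config.yaml` looks roughly like the sketch below. The option names come from this doc; the default values and registry are made-up placeholders, not the charm's actual content:

```yaml
# Hypothetical excerpt of kfp-api's config.yaml; defaults are placeholders.
# In the real file, comments next to each option point at the upstream
# source from which the target version's image can be read.
options:
  driver-image:
    type: string
    default: <registry>/kfp-driver:<target-version>
  launcher-image:
    type: string
    default: <registry>/kfp-launcher:<target-version>
```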
Once the comparison is done, add any changes to the relevant aggregated ClusterRoles to the `templates/auth_manifests.yaml.j2` file and remember to:
- Use the current model as the namespace.
- Use the application name in the name of any ClusterRoles, ClusterRoleBindings, or ServiceAccounts.
- Add the label `app: {{ app_name }}`.
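A minimal sketch of what an entry in `templates/auth_manifests.yaml.j2` could look like after applying that checklist. The resource names, the `{{ namespace }}` variable name, the aggregation label, and the rules are illustrative assumptions, not the charm's actual content:

```yaml
# Illustrative only: names, template variables and rules are assumptions.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: {{ app_name }}
  namespace: {{ namespace }}   # the current Juju model
  labels:
    app: {{ app_name }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: {{ app_name }}-aggregate-view
  labels:
    app: {{ app_name }}
    # example aggregation label; the real one comes from upstream manifests
    rbac.authorization.k8s.io/aggregate-to-view: "true"
rules:
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["get", "list", "watch"]
```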
Note that non-aggregated ClusterRoles are skipped, since charms are deployed with the `--trust` argument. CRDs that have updates are copied as-is into the corresponding charm's `crds.yaml.j2` file, and there can be changes in other resources as well, e.g. `Secret`s or `ConfigMap`s.
- To copy kfp-profile-controller CRDs, follow the instructions at the top of its `crd_manifests.yaml.j2` file.
- We do not have a `cache-server` component, so related manifests are skipped.
- We do not keep a `pipeline-runner` ServiceAccount (and related manifests): even though the api-server is configured to use it by default, the manifests update it to use a different one.
- For Argo-related manifests, we only keep the aggregate `ClusterRole`s.
- Apart from the changes shown in the `diff` above, the kfp-api charm also requires updating the `driver-image` and `launcher-image` values in the config file. Sources for those can be found in the charm's `config.yaml` file.
- Changes for the envoy charm may also be included in the aforementioned `diff`.
- We do not keep a `pipeline-install-config` ConfigMap as upstream does, since charms define those configurations either in their `config.yaml` or directly in their pebble layer. However, we should pay attention to changes in that ConfigMap's values, since they could be used in other places via the `valueFrom` field in an `env` definition.
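The upstream pattern to watch for looks like the following: a container environment variable sourced from the `pipeline-install-config` ConfigMap. The env var and key names below are illustrative, not taken from a specific upstream manifest; if upstream changes such a value, whatever charm config or pebble-layer entry replaces it must change too:

```yaml
# Illustrative valueFrom reference into pipeline-install-config.
env:
- name: OBJECTSTORECONFIG_BUCKETNAME
  valueFrom:
    configMapKeyRef:
      name: pipeline-install-config
      key: bucketName
```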