# Cannot configure securityContext of sync and cleanup jobs #6545
I just want to confirm: when creating the AppRepository in Kubeapps, did you specify the sync-job pod template? Asking because I can see in the code that we do intend to use that as the basis for the pod template used by the cron job that's created: kubeapps/cmd/apprepository-controller/server/controller.go, lines 627 to 656 at 23dac7b.

See https://kubeapps.dev/docs/latest/tutorials/managing-package-repositories/#modifying-the-synchronization-job for more info about customising the sync job pod template. Can you please paste your …? Thanks.
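For reference, the linked docs describe setting the pod template directly on the AppRepository resource. A minimal sketch of that shape (names and values here are illustrative only, not taken from this report; the reporter's full manifest follows in the next comment):

```yaml
apiVersion: kubeapps.com/v1alpha1
kind: AppRepository
metadata:
  name: my-repo            # hypothetical name, for illustration only
  namespace: kubeapps
spec:
  type: helm
  url: https://example.com/charts
  syncJobPodTemplate:
    spec:
      securityContext:     # intended to be used for the pods of the generated sync job
        runAsUser: 10001
        runAsNonRoot: true
```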
Sample values definition:

```yaml
apprepository:
  containerSecurityContext:
    runAsUser: 10001
    allowPrivilegeEscalation: false
    capabilities:
      drop:
        - ALL
    runAsNonRoot: true
    seccompProfile:
      type: RuntimeDefault
```

Templated AppRepository:

```yaml
apiVersion: kubeapps.com/v1alpha1
kind: AppRepository
metadata:
  labels:
    app.kubernetes.io/instance: apps
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: kubeapps
    helm.sh/chart: kubeapps-12.4.6
  name: test-apprepo
  namespace: test-apprepo
spec:
  auth:
    header:
      secretKeyRef:
        key: authorizationHeader
        name: test-apprepo
  ociRepositories:
    - <redacted>
  syncJobPodTemplate:
    spec:
      securityContext:
        runAsUser: 10001
  type: oci
  url: <redacted>
```

Only the …
Sorry, I misread what you wrote, but I'll leave my previous comment. Yes, we already modify the AppRepository …

Did you mean that the …?
Yes - the chart options try to provide simple settings that the user can use for specific tasks, and do not currently allow specifying a general … And you are absolutely correct: the specified …
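To make the distinction above concrete, here is a rough sketch of where the AppRepository's pod template ends up for the sync job. This is a hypothetical rendering, assuming the controller copies `spec.syncJobPodTemplate.spec.securityContext` into the pod spec of the CronJob it generates; names, schedule, and image are illustrative, not taken from the controller code:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: apprepo-test-apprepo-sync   # illustrative name
spec:
  schedule: "*/10 * * * *"          # illustrative schedule
  jobTemplate:
    spec:
      template:
        spec:
          securityContext:          # inherited from syncJobPodTemplate.spec.securityContext
            runAsUser: 10001
          containers:
            - name: sync
              image: asset-syncer   # placeholder image name
          restartPolicy: OnFailure
# The clean-up Job, by contrast, was created only after the AppRepository had been
# deleted, so there was no syncJobPodTemplate left for it to inherit from.
```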
Great, thanks for confirming!
Well... it turned out not to be that simple :P. The details of the affected AppRepo are no longer available in the cluster by the time the clean-up job runs (which is why it is a clean-up job in the first place). I have proposed an alternative approach in #6605. See what you all think!
#6646

### Description of the change

While working on #6605, I noticed I had a bunch of unrelated changes. Instead of overloading that PR, I have extracted them into this one. The main changes are:

- Apply the same fix in the Go Dockerfiles as we did for kubeapps-apis (avoid downloading linters if `lint=false`). It helps reduce the build time locally.
- Remove some duplicated keys in the YAMLs, as we already discussed.
- Add some missing apprepo-controller args in the tests, mainly just for completeness.
- Fix some tests in the apprepo-controller, mainly just swapping some misplaced `get, want`.
- Handle CronJob v1beta1 vs v1 in a missing place.
- Pass the `podSecurityContext` and `containerSecurityContext` in the pod template properly (illustrated in the sketch below).
- Update a missing copyright header.
- Fix wrong values.yaml metadata (mainly `ocicatalog/ociCatalog`).

### Benefits

Get the aforementioned issues solved.

### Possible drawbacks

N/A (not adding new logic here).

### Applicable issues

- (partially) related to #6545

### Additional information

N/A

---------

Signed-off-by: Antonio Gamez Diaz <agamez@vmware.com>
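Regarding the `podSecurityContext` / `containerSecurityContext` bullet above, the two values target different levels of the pod template. A minimal sketch, assuming the value names used elsewhere in this thread and standard Kubernetes placement (not the chart's actual rendered output; values are examples):

```yaml
apprepository:
  podSecurityContext:              # pod-level: maps to spec.securityContext
    fsGroup: 1001
  containerSecurityContext:        # container-level: maps to containers[].securityContext
    runAsUser: 10001
    allowPrivilegeEscalation: false
```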
### Description of the change

Even though the sync jobs could be given a security context (by means of each AppRepo CRD), this information was not available for the cleanup jobs. This is mainly because those jobs are spun up once a NotFound error is thrown when fetching an AppRepo. However, Kubernetes does have a native approach for dealing with these scenarios: finalizers. In #6605 we proposed a simplistic workaround based on adding more params to the controller... but as suggested in #6605 (comment), moving to finalizers is a better long-term solution.

### Benefits

Cleanup jobs are now handled within an existing AppRepo context... meaning we have the whole syncJobTemplate available to be used (i.e., securityPolicies and so on).

### Possible drawbacks

When dealing with finalizers in the past, I often found it really annoying when they got stuck and prevented the resource from being deleted. I wonder if we should add some info to the FAQ on how to manually remove the finalizers.

Additionally, and this might be something important: for the AppRepo controller to be able to `update` AppRepos in namespaces other than kubeapps (to add the finalizer), it now needs extra RBAC. Before, we were just granting `...-appprepos-read`, but now we would need to grant `...-write` as well... and I'm not sure we really want to do so. WDYT, @absoludity? Another idea is using an admission policy... but again, not sure if we want to deal with that... ~~(I haven't modified the RBAC yet in this PR)~~ Changes have been performed finally.

### Applicable issues

- fixes #6545

### Additional information

This PR is based on top of #6646, but the main change to review is 6e70910. The rest is mostly just moving code into separate files.

Also, I have been taking a look at `kubebuilder` to create a new controller, based on `sigs.k8s.io/controller-runtime` rather than on the workqueues we currently have. While it is pretty easy to start with ([see quickstart](https://book.kubebuilder.io/quick-start)), I think it adds too much complexity (using kustomize, adding RBAC proxies, Prometheus metrics, etc.). I also quickly tried the k8s codegen scripts, but ran into some issues with my setup... perhaps that's the best option, though. IMO, at some point we should start thinking about moving towards a new state-of-the-art k8s controller boilerplate.

---------

Signed-off-by: Antonio Gamez Diaz <agamez@vmware.com>
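For illustration only: a finalizer is just an entry under `metadata.finalizers` that blocks deletion until the controller removes it, which gives the controller a chance to spin up the clean-up job while the AppRepository (and its `syncJobPodTemplate`) still exists. A sketch with a hypothetical finalizer name (not necessarily the one this PR actually uses):

```yaml
apiVersion: kubeapps.com/v1alpha1
kind: AppRepository
metadata:
  name: test-apprepo
  namespace: test-apprepo
  finalizers:
    - kubeapps.com/apprepo-cleanup   # hypothetical finalizer name
spec:
  type: oci
  url: <redacted>
  syncJobPodTemplate:
    spec:
      securityContext:               # still available when the clean-up job is created
        runAsUser: 10001
```

If such a finalizer ever got stuck, it could be removed manually by patching `metadata.finalizers` back to an empty list (for example with `kubectl patch` or `kubectl edit`), which is presumably the kind of note the FAQ entry mentioned above would contain.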
Hi @RGPosadas, we ended up doing some refactoring of the AppRepository controller: instead of the controller being responsible for creating the clean-up jobs, a finalizer now handles that. That means all the information already available in the AppRepo CR (like the pod template) can now be inherited by the clean-up jobs. In short: those jobs will have the security context set!

Feel free to use our dev chart (not Bitnami's, but the one in this repo) and the latest … Thanks again for reporting the issue!
@antgamdia Thanks for the quick work! Hopefully I'll have time to test it out before the official release to report back 🚀 |
### Summary

`Sync` and `cleanup` AppRepo jobs are currently not configurable to pass in a container/pod SecurityContext.

### Background and rationale
We are in the middle of a PSP-to-PSA migration.

Currently, all core Kubeapps components allow us to configure the container/pod SecurityContext, except for the Sync and Cleanup apprepository jobs. Since we cannot configure the SecurityContext, the sync and cleanup jobs will be blocked from spawning pods if the namespace that Kubeapps is running in has the `restricted` PodSecurity standard.
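For context, the `restricted` Pod Security standard is typically enforced via namespace labels; pods whose spec does not satisfy the restricted profile (non-root user, dropped capabilities, seccomp profile, etc.) are then rejected at admission. A minimal example, with an assumed namespace name:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: kubeapps                                       # illustrative namespace name
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/enforce-version: latest
```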
### Description/Acceptance Criteria

Allow users to configure the container/pod SecurityContext of the `sync` and `cleanup` jobs.

### Extra Info
For apprepos that are defined as initialRepos, we are able to patch the `AppRepository` manifests, which allows us to define a container/pod securityContext for the sync jobs. However, we do not have the same ability for the cleanup jobs.
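As a sketch of that workaround (field values are examples, and the patch mechanism is whatever you already use to customise the initialRepos manifests), the sync side can be covered by adding a `syncJobPodTemplate` to the generated AppRepository; the gap this issue describes is that no equivalent existed for the cleanup jobs:

```yaml
# Merge-patch fragment for an initialRepos-generated AppRepository (illustrative values):
spec:
  syncJobPodTemplate:
    spec:
      securityContext:
        runAsUser: 10001
        runAsNonRoot: true
        seccompProfile:
          type: RuntimeDefault
```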