Use kubebuilder 1.16 instead of the code generator directly for CRDs #16
So, even though we have CRDs, our current usage is quite simple: we do not really need a CRD controller, but just get and update the CRDs through the K8s API. The CRDs are mainly for exposing Antrea runtime information to users and the UI for debugging purposes.
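The simple get-and-update pattern described above could be sketched with a controller-runtime typed client. This is only an illustration: the `crdv1.AgentInfo` type, its package path, and the scheme registration helper are assumed placeholders, not the actual Antrea definitions.

```go
package main

import (
	"context"
	"log"

	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/client/config"

	// Hypothetical package holding the generated CRD types.
	crdv1 "antrea.io/antrea/pkg/apis/crd/v1"
)

// updateAgentInfo fetches an AgentInfo object and writes it back,
// with no controller or reconcile loop involved.
func updateAgentInfo(ctx context.Context, c client.Client, name string) error {
	info := &crdv1.AgentInfo{}
	if err := c.Get(ctx, client.ObjectKey{Name: name}, info); err != nil {
		return err
	}
	// Example mutation before the update call.
	if info.Labels == nil {
		info.Labels = map[string]string{}
	}
	info.Labels["app"] = "antrea"
	return c.Update(ctx, info)
}

func main() {
	cfg, err := config.GetConfig() // kubeconfig or in-cluster config
	if err != nil {
		log.Fatal(err)
	}
	// crdv1.AddToScheme registration on the client scheme is assumed.
	c, err := client.New(cfg, client.Options{})
	if err != nil {
		log.Fatal(err)
	}
	if err := updateAgentInfo(context.Background(), c, "node-1"); err != nil {
		log.Fatal(err)
	}
}
```

With kubebuilder, these typed objects and their deepcopy/scheme plumbing would be scaffolded for us instead of maintained through the code generator directly.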
We may build CRDs in the future which might not be as simple as this one; probably better to switch now with the future in mind?
This is still very much a work-in-progress; I'm opening this PR to gather feedback on the approach. For someone deleting Antrea, the steps would be as follows:

* `kubectl delete -f <path to antrea.yml>`
* `kubectl apply -f <path to antrea-cleanup.yml>`
* check that the job has completed with `kubectl -n kube-system get jobs`
* `kubectl delete -f <path to antrea-cleanup.yml>`

The cleanup manifest creates a DaemonSet that performs the necessary deletion tasks on each Node. After the tasks have completed, the "status" is reported to the cleanup controller through a custom resource. Once the controller has received enough statuses (or after a timeout of 1 minute), the controller job completes and the user can delete the cleanup manifest.

Known remaining items:

* place the cleanup binaries (antrea-cleanup-agent and antrea-cleanup-controller) in a separate Docker image to avoid increasing the size of the main Antrea Docker image
* generate the manifest with kustomize?
* find a way to test this as part of CI?
* update documentation
* additional cleanup tasks: as of now we only take care of deleting the OVS bridge
* place the cleanup CRD in a non-default namespace
* use kubebuilder instead of the code generator directly (related to antrea-io#16); we probably want to punt this task to a future PR. See antrea-io#181
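The deletion steps above could be scripted roughly as follows. The manifest paths and the `antrea-cleanup` job name are placeholders for illustration, not confirmed names from this PR:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Placeholder paths; adjust to where the manifests live in your checkout.
ANTREA_YML=antrea.yml
CLEANUP_YML=antrea-cleanup.yml

# 1. Remove the main Antrea deployment.
kubectl delete -f "$ANTREA_YML"

# 2. Deploy the cleanup DaemonSet and controller job.
kubectl apply -f "$CLEANUP_YML"

# 3. Wait for the cleanup controller job to complete, bounded by the
#    controller's own ~1 minute timeout (job name is a placeholder).
kubectl -n kube-system wait --for=condition=complete \
    job/antrea-cleanup --timeout=120s

# 4. Remove the cleanup manifest itself.
kubectl delete -f "$CLEANUP_YML"
```

Using `kubectl wait` instead of polling `kubectl get jobs` makes the "check that the job has completed" step scriptable in CI.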
This issue is stale because it has been open 180 days with no activity. Remove the stale label or comment, or this will be closed in 180 days.