Introduce cascading deletion #1570
Comments
This is something I was confused about with OpenShift as well, because deleting a Deployment there doesn't delete the ReplicaSet either, whereas kubectl does. Deleting the ReplicaSet seems to clean up the Pods, though. I'm less clear on Services: kubectl won't delete Services if you delete a Deployment. Why do you think Services should be deleted as well? One of the problems with Kubernetes is that there is no notion of an "app" to delete. It's all just a bunch of loosely coupled components, so there's no good generic way to know what to delete.
@ianlewis I was relieved that deleting the ReplicaSet also deletes the Pods (I've tried, for fun, setting replicas to 100 before). I actually think the Services could remain so they can be reused/reattached or duplicated. I often find myself going through the whole delete-and-redeploy process by hand. I think you described it perfectly: there's no notion of an "app". But that's okay, because in a future version of kubernetes/dashboard:
After deployment, and after changing any of the options above, Kubernetes would just automatically re-deploy with those new changes. Currently, if I want to make any change through the UI, the best way is to remember what I want changed; delete the Deployment, ReplicaSet, and Services; and then re-deploy with the new settings. It's very tedious, and it loses a few UX points.
We are currently discussing ideas to make this easier to use, but defining what an "app" is is hard. Kubernetes itself doesn't really have the idea of an "app" because these things are complex and user-definable, objects can be reused across apps, etc. It's a lower-level framework, a set of components that other things can build on top of. As for the Deployment: the ReplicaSet should really get cleaned up by the Deployment controller in core Kubernetes. Right now kubectl is cheating and deleting it for you on the client side.
@maciaszczykm @floreks @ianlewis Does cascade delete work? I thought it did, meaning that this bug is fixed.
I'll verify.
@bryk Checked on the latest master: it doesn't remove Pods. After deleting a Deployment, only the ReplicaSet was deleted automatically; all Pods stayed in the cluster. Moreover, Dashboard crashes when trying to open a Pod's page, because the Pod's creator no longer exists. In the latest stable
I guess Dashboard has 2 bugs here.
Hmm... What K8s cluster version do you have? I thought that as of 1.5, cascade delete works for all objects.
That's sad. Could we perhaps implement this on our side?
If cascade is enabled and it doesn't delete Pods when deleting a Deployment, then that's a server-side bug AFAICT. We should file a server bug if we can reproduce it. FWIW this bug says 1.6 ¯\_(ツ)_/¯
1.6 is out, so the Dashboard can rely on the GC now.
What is the progress on cascade delete?
@Kargakis I just tried to cascade-delete a Deployment created on a 1.6 cluster. It deleted the ReplicaSets, but the Pods stayed there. Is this a bug or WAI?
I think we still need to add the flag that tells it to cascade the delete.
It's a bug. @caesarxuchao has a PR open: kubernetes/kubernetes#44058
Yes, it's a known issue. Setting DeleteOptions.PropagationPolicy="Foreground" will delete both the ReplicaSets and the Pods.
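For readers landing here later, a minimal sketch of that foreground delete with client-go. The namespace, Deployment name, and kubeconfig path are assumptions, and the `Delete` signature shown (with a `context.Context` argument) comes from client-go releases newer than the ones discussed in this thread:

```go
package main

import (
	"context"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a clientset from the local kubeconfig (path is an assumption).
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}

	// Foreground propagation: the Deployment is only removed after its
	// dependents (ReplicaSets and, transitively, their Pods) are gone.
	policy := metav1.DeletePropagationForeground
	err = clientset.AppsV1().Deployments("default").Delete(
		context.TODO(),
		"my-deployment", // hypothetical Deployment name
		metav1.DeleteOptions{PropagationPolicy: &policy},
	)
	if err != nil {
		log.Fatal(err)
	}
}
```

`DeletePropagationBackground` also cascades but returns immediately, while `DeletePropagationOrphan` leaves the dependents behind, which is effectively the behavior the commenters above observed.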
Based on the latest master branch of client-go, I wrote a simple program to test
@caesarxuchao Is the behavior against v1.5.7 as expected?
@Huang-Wei The latest code won't work with 1.5.x because the API objects changed between 1.5.x and 1.6.x. This object lives in a different package now, and an older apiserver won't know how to deserialize it; there will probably be some kind of version-mismatch or not-recognized error. Also on
Thanks @floreks for the clarification. I'm still on the latest client-go, and it's a pain to change the version to v2.0.0. To adapt to k8s 1.5, I tried the solution given at kubernetes/client-go#50 (comment):
It works, although it's a little dumb...
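The code from the linked comment isn't quoted in this thread. For context, `PropagationPolicy` only exists in 1.6+; the 1.5-era `DeleteOptions` field for cascading was `OrphanDependents`. A hypothetical sketch of that older style, not necessarily the exact workaround referenced (the clientset is built as in the earlier sketch, the two-argument `Delete` signature predates the context parameter, and the import path for `DeleteOptions` moved between client-go versions):

```go
// Hypothetical pre-1.6 style cascade: OrphanDependents=false asks the
// garbage collector to also delete the Deployment's dependents instead
// of orphaning them. clientset is constructed as in the earlier sketch.
orphan := false
err := clientset.ExtensionsV1beta1().Deployments("default").Delete(
	"my-deployment", // hypothetical Deployment name
	&metav1.DeleteOptions{OrphanDependents: &orphan},
)
if err != nil {
	log.Fatal(err)
}
```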
@caesarxuchao Do you know how to configure
Same here for Jobs. There's no way to delete the related Pods.
Thanks @floreks
I'm facing the same problem, like
Issue details
Deleting Deployment should delete Replica Sets and Services
Environment
Steps to reproduce
Observed result
Expected result
Comments
I'm assuming that, given how K8s's container orchestration is meant to work, when a new Deployment is created it spawns all the underlying resources (Replica Sets, Pods, and Services). So shouldn't removing the Deployment also remove all the resources it created?
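For context on why Services are the odd one out: cascading deletion follows ownerReferences, and a Deployment owns its ReplicaSets (which own their Pods), but Services are created independently and carry no ownerReference back to the Deployment. A small sketch that prints the ownership chain; the namespace is an assumption, and the API signatures are from a recent client-go:

```go
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}

	// Pods are owned by ReplicaSets, which are owned by the Deployment;
	// Services carry no ownerReference, so the GC never cascades to them.
	pods, err := clientset.CoreV1().Pods("default").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, p := range pods.Items {
		for _, ref := range p.OwnerReferences {
			fmt.Printf("Pod %s is owned by %s %s\n", p.Name, ref.Kind, ref.Name)
		}
	}
}
```

Running the analogous loop over Services prints nothing for Deployment-created apps, which is the mechanical reason there is no generic "delete the whole app" operation.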