
Introduce cascading deletion #1570

Closed
naisanza opened this issue Jan 15, 2017 · 25 comments

Comments

@naisanza

naisanza commented Jan 15, 2017

Issue details

Deleting Deployment should delete Replica Sets and Services

Environment
Dashboard version: gcr.io/google_containers/kubernetes-dashboard-amd64:v1.5.1
Kubernetes Client Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.1", GitCommit:"82450d03cb057bab0950214ef122b67c83fb11df", GitTreeState:"clean", BuildDate:"2016-12-14T00:57:05Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}
Kubernetes Server Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.1", GitCommit:"82450d03cb057bab0950214ef122b67c83fb11df", GitTreeState:"clean", BuildDate:"2016-12-14T00:52:01Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}
Operating system: Linux k8s-master01 4.8.0-22-generic #24-Ubuntu SMP Sat Oct 8 09:15:00 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
Go version: 1.7.4
Steps to reproduce
  1. Create new Deployment
  2. Delete Deployment
Observed result
  1. Deployment deleted
  2. Replica Sets, Pods, and Services remain
Expected result
  1. All related entities to the Deployment cleaned up and removed (Replica Sets, Pods, and Services)
Comments

I'm assuming that, given how Kubernetes container orchestration is meant to work, creating a new Deployment spawns all the underlying objects (Replica Sets, Pods, and Services). So shouldn't removing the Deployment also remove everything it created?

@naisanza naisanza changed the title Deleting Deployment removes Replica Sets, Pods, and Services Deleting Deployment should remove Replica Sets, Pods, and Services Jan 15, 2017
@ianlewis
Contributor

This confused me about OpenShift as well: when you delete a deployment there, it doesn't delete the ReplicaSet, whereas kubectl does. Deleting the ReplicaSet does seem to clean up the pods, though.

I'm less clear on Services. kubectl won't delete Services when you delete a Deployment. Why do you think Services should be deleted as well? One of the problems with Kubernetes is that there is no notion of an "app" to delete. It's all just a bunch of loosely coupled components, so there's no good generic way to know what to delete.

@naisanza
Author

naisanza commented Jan 20, 2017

@ianlewis I was relieved to find that deleting the ReplicaSet also deletes the pods. I've tried setting replicas to 100 before, just for fun.

I actually think the Services could remain so they can be reused, reattached, or duplicated. I often find myself running kubectl --namespace ns exec -i pod -- /bin/bash and then netstat -ant to figure out what ports that particular service is listening on, then deleting that deployment and recreating it with an External Service that can reach that listening port.

I think you described it perfectly: an "app". A Deployment should be that "app". A Deployment will deploy any Docker image with replicas set to 1. Often the deployment will fail when it isn't provided with the required environment variables (e.g. "kylemanna/openvpn").

But that's okay, because in a future version of kubernetes/dashboard:

  • You can change a replica slider after deployment
  • You can add environment variables after deployment
  • You can add port forwarding (External Service) after deployment

After deployment, and after changing any of the options above, kubernetes will just automatically re-deploy with those new changes

Currently, if I want to make any changes through the UI, the best approach is to remember what I want changed, delete the Deployment, ReplicaSet, and Services, and then re-deploy with the new settings. It's very tedious, and loses a few UX points.

@ianlewis
Contributor

We are currently discussing ideas to make this easier to use, but defining what an "app" is is hard. Kubernetes itself doesn't really have the idea of an "app" because these things are complex and user-definable, objects can be reused across apps, etc. It's a lower-level framework, a set of components that other things can build on top of.

As for the Deployment, the ReplicaSet should really get cleaned up by the Deployment controller in core Kubernetes. Right now it's cheating and deleting it for you from the kubectl client.

@bryk
Contributor

bryk commented Mar 8, 2017

@maciaszczykm @floreks @ianlewis Does cascade delete work? I thought it did, which would mean this bug is fixed.

@maciaszczykm
Member

I'll verify.

@maciaszczykm maciaszczykm self-assigned this Mar 8, 2017
@maciaszczykm
Member

maciaszczykm commented Mar 8, 2017

@bryk Checked on latest master; it doesn't remove pods. After deleting the deployment, only the replica set was deleted automatically. All pods stayed in the cluster. Moreover, Dashboard crashes when trying to open a pod's page, because the pod's creator no longer exists.

In the latest stable kubectl:

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.4", GitCommit:"7243c69eb523aa4377bce883e7c0dd76b84709a1", GitTreeState:"clean", BuildDate:"2017-03-07T23:53:09Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"7+", GitVersion:"v1.7.0-alpha.0.851+6d92abdc0a2d35", GitCommit:"6d92abdc0a2d352fcb0e884ad6bf14c6d702bc0a", GitTreeState:"clean", BuildDate:"2017-03-06T11:38:24Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}

$ kubectl get all
NAME             CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
svc/kubernetes   10.0.0.1     <none>        443/TCP   1d

$ kubectl create -f deployment.yaml
deployment "nginx-deployment" created

$ kubectl get all
NAME                                   READY     STATUS              RESTARTS   AGE
po/nginx-deployment-4234284026-0z8tq   0/1       ContainerCreating   0          2s
po/nginx-deployment-4234284026-5g4n7   1/1       Running             0          2s
po/nginx-deployment-4234284026-x727j   1/1       Running             0          2s

NAME             CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
svc/kubernetes   10.0.0.1     <none>        443/TCP   1d

NAME                      KIND
deploy/nginx-deployment   Deployment.v1beta1.apps

NAME                             DESIRED   CURRENT   READY     AGE
rs/nginx-deployment-4234284026   3         3         3         2s

$ kubectl delete deployment nginx-deployment --cascade=true
deployment "nginx-deployment" deleted

$ kubectl get all
NAME                                   READY     STATUS    RESTARTS   AGE
po/nginx-deployment-4234284026-0z8tq   1/1       Running   0          11s
po/nginx-deployment-4234284026-5g4n7   1/1       Running   0          11s
po/nginx-deployment-4234284026-x727j   1/1       Running   0          11s

NAME             CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
svc/kubernetes   10.0.0.1     <none>        443/TCP   1d

NAME                             DESIRED   CURRENT   READY     AGE
rs/nginx-deployment-4234284026   3         3         3         11s

I guess Dashboard has 2 bugs here.

@bryk
Contributor

bryk commented Mar 8, 2017

Hmm... What K8s cluster version do you have? I thought that as of 1.5 cascade delete works for all objects

@maciaszczykm
Member

@bryk

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.4", GitCommit:"7243c69eb523aa4377bce883e7c0dd76b84709a1", GitTreeState:"clean", BuildDate:"2017-03-07T23:53:09Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"7+", GitVersion:"v1.7.0-alpha.0.851+6d92abdc0a2d35", GitCommit:"6d92abdc0a2d352fcb0e884ad6bf14c6d702bc0a", GitTreeState:"clean", BuildDate:"2017-03-06T11:38:24Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}

@bryk
Contributor

bryk commented Mar 8, 2017

That's sad. Could we perhaps implement this on our side?

@maciaszczykm maciaszczykm modified the milestone: v1.6.0 Mar 8, 2017
@ianlewis
Contributor

ianlewis commented Mar 9, 2017

If cascade is enabled and it doesn't delete pods when deleting a deployment, then that's a server-side bug AFAICT. We should file a server bug if we can reproduce it.

FWIW this bug says 1.6 ¯\_(ツ)_/¯
kubernetes/kubernetes#40014

@ianlewis
Contributor

ianlewis commented Mar 9, 2017

AFAICT, when using the API, cascading delete requires an extra parameter. We should add a checkbox for cascading delete to the delete confirmation dialog, defaulting to true, and pass that option to the API when the checkbox is checked.


@0xmichalis

1.6 is out so the dashboard can rely on the GC now

@maciaszczykm maciaszczykm changed the title Deleting Deployment should remove Replica Sets, Pods, and Services Introduce cascading deletion Apr 7, 2017
@maciaszczykm maciaszczykm added this to the 2017 roadmap milestone Apr 7, 2017
@zouyee
Member

zouyee commented Apr 20, 2017

what is the progress about cascade delete?

@bryk
Contributor

bryk commented Apr 20, 2017

@Kargakis I just tried to cascade-delete a deployment created on a 1.6 cluster. It deleted the replica sets, but the pods stayed there.

Is this a bug or WAI?

@ianlewis
Contributor

I think we still need to add the flag to tell it to cascading delete.

@0xmichalis

It's a bug. @caesarxuchao has a PR open kubernetes/kubernetes#44058

@caesarxuchao
Member

Yes, it's a known issue. Setting DeleteOptions.PropagationPolicy="Foreground" will delete both the replicasets and the pods.

@maciaszczykm maciaszczykm removed their assignment Apr 28, 2017
@Huang-Wei
Member

Based on latest master branch of client-go, I wrote a simple program to test DeleteOptions.PropagationPolicy="Foreground". Here is the testing result:

  • against kubernetes v1.5.7, it doesn't work - deleting a deployment still leaves the dependent ReplicaSets and Pods
  • against kubernetes v1.6.3, it works - deleting a deployment will get dependent stuff cleaned

@caesarxuchao is the behavior against v1.5.7 as expected?

@floreks
Member

floreks commented Jun 15, 2017

@Huang-Wei The latest code won't work with 1.5.x because API objects changed between 1.5.x and 1.6.x. This object lives in a different package now, and an older apiserver won't know how to deserialize it; you'll probably get some kind of version-mismatch or unrecognized-type error. The client-go page also has a compatibility matrix. We try to keep it up to date, so some API objects may differ, but common stuff will work.

@Huang-Wei
Member

Thanks @floreks for the clarification. I'm still on latest client-go, and it's a pain to change the version to v2.0.0. To adapt to k8s 1.5, I tried the solution given at kubernetes/client-go#50 (comment):

  • update deployment to set its replicas to 0 - hence deleting a deployment will get dependent pods cleaned
  • then manually delete dependent replicasets

It works, though it's a bit clumsy...

@floreks
Member

floreks commented Jul 3, 2017

@caesarxuchao Do you know how to configure DeleteOptions to force pod deletion together with a Job? Currently, setting delete propagation to Foreground does not work for Jobs, and OrphanDependents is deprecated.

@joan38
Copy link

joan38 commented Aug 21, 2017

Same here for the Jobs. No way to delete the related pods

@floreks
Member

floreks commented Aug 22, 2017

@joan38 #2176

@joan38

joan38 commented Aug 22, 2017

Thanks @floreks

@Davidrjx

Davidrjx commented Sep 9, 2021

I'm facing the same problem: kubectl delete deploy xxx does not cascade to dependent resources such as ReplicaSets and Pods, but scaling down does. Very confusing.
