This repository has been archived by the owner on May 25, 2023. It is now read-only.

Add phase/conditions into PodGroup.Status #521

Closed
3 tasks done
k82cn opened this issue Dec 28, 2018 · 15 comments
Labels
help wanted · kind/feature · priority/important-soon · sig/scheduling
Milestone
v0.4
Comments

@k82cn (Contributor) commented Dec 28, 2018

Is this a BUG REPORT or FEATURE REQUEST?:

/kind feature

Description:

Currently, kube-batch only raises an Unschedulable event for its status; it would be better to include a phase and conditions in PodGroup.Status to give more detail.

Tasks

@k8s-ci-robot added the kind/feature label Dec 28, 2018
@k82cn added this to the v0.4 milestone Dec 28, 2018
@k82cn added the sig/scheduling, priority/important-soon, and help wanted labels Dec 28, 2018
@Zyqsempai (Contributor)

@k82cn I would like to work on this one, but could you please provide more details?
As far as I can see, we currently have this type of status for PodGroup:
https://github.com/kubernetes-sigs/kube-batch/blob/master/pkg/apis/scheduling/v1alpha1/types.go#L50
Do you want to extend the PodGroupStatus struct and add a few fields for additional info?

@k82cn (Contributor, Author) commented Dec 30, 2018

Do you want to extend the PodGroupStatus struct and add a few fields for additional info?

Yes, we need to add at least PodGroupPhase and Conditions, so the operator/controller knows where the group is in its lifecycle. We can refer to PodStatus :)

@Zyqsempai (Contributor)

Got it, I will take care of it!

@Zyqsempai (Contributor)

@k82cn Hey, I opened a PR, but I still have some questions. I am not sure how and where we will fill in those new fields. Also, I created them based on the same fields as Pod.Status, and I'm not sure every value is applicable to a PodGroup.

@k82cn (Contributor, Author) commented Dec 31, 2018

I am not sure how and where we will fill in those new fields

kube-batch will update the status after each scheduling cycle.

@k82cn (Contributor, Author) commented Jan 2, 2019

@Zyqsempai, I just went through your PR; I think I need to write a design doc for this, including the feature's interaction with cluster autoscaler, so you can create related PRs if you're interested. I'll create the design doc ASAP :)

@Zyqsempai (Contributor)

@k82cn Great, I'll be happy to work on it. But what about the original PR: should we merge it, or wait for a related PR based on your design doc?

@k82cn (Contributor, Author) commented Jan 3, 2019

but what about the original PR, should we merge it,

For your PR, I think that's a good start; I'll also add some comments according to the design doc. So I'd suggest updating your PR when the design doc is ready. I'll try to get the doc ready this week and ask you and @Jeffwan to review.

@Jeffwan (Contributor) commented Jan 3, 2019

@k82cn I think for the short term, I will only update podCondition for those pods that fail in the predicate action. In that case, cluster autoscaler will scale up the right number of new nodes. It would be great to see your doc and work on long-term support.

@k82cn (Contributor, Author) commented Jan 7, 2019

@Zyqsempai, would you help update your PRs according to https://github.com/kubernetes-sigs/kube-batch/blob/master/doc/design/podgroup-status.md :)

@k82cn (Contributor, Author) commented Jan 7, 2019

I think for the short term, I will only update podCondition for those pods that fail in the predicate action. In that case, cluster autoscaler will scale up the right number of new nodes.

I'm OK with that; we can cover the regular case first. For the gang-scheduling/coscheduling case, we'll discuss it later :)

@Zyqsempai (Contributor)

@k82cn Done;)

@k82cn (Contributor, Author) commented Jan 10, 2019

@Zyqsempai, are you going to work on the other items, e.g. updating the status accordingly? If not, someone from our team may help with it :)

@Zyqsempai (Contributor)

@k82cn Will be glad to; can you point me in the right direction? I can't find the proper place where the status should be updated.

@k82cn (Contributor, Author) commented Jan 25, 2019

/close as PodGroup status is done

@k82cn k82cn closed this as completed Jan 25, 2019

4 participants