
feat: scheduler (11/): add more scheduler logic #413

Merged
6 commits merged on Jul 4, 2023

Conversation

michaelawyu
Contributor

Description of your changes

This PR is part of a series of PRs that implement Fleet workload scheduling.

It adds more scheduling logic for PickAll-type CRPs (ClusterResourcePlacements).
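Conceptually, a PickAll placement selects every cluster that passes the scheduler's filters, with no fixed target count and no scoring-based ranking. A minimal, hypothetical Go sketch of that idea is below; the `cluster` type and the `healthy` check are illustrative stand-ins, not the actual Fleet scheduler framework types or plugins:

```go
package main

import "fmt"

// cluster is a hypothetical, simplified stand-in for a Fleet member cluster;
// the real scheduler works with richer API objects such as bindings.
type cluster struct {
	name    string
	healthy bool
}

// pickAllClusters sketches the PickAll placement type: every cluster that
// passes the filter step is selected, with no scoring or ranking. This is
// an illustrative approximation, not the actual Fleet implementation.
func pickAllClusters(clusters []cluster) []cluster {
	picked := make([]cluster, 0, len(clusters))
	for _, c := range clusters {
		if c.healthy { // stand-in for the framework's filter plugins
			picked = append(picked, c)
		}
	}
	return picked
}

func main() {
	clusters := []cluster{
		{name: "member-1", healthy: true},
		{name: "member-2", healthy: false},
		{name: "member-3", healthy: true},
	}
	for _, c := range pickAllClusters(clusters) {
		fmt.Println(c.name) // member-1, then member-3
	}
}
```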

I have:

  • Run `make reviewable` to ensure this PR is ready for review.

How has this code been tested?

  • Unit tests

Special notes for your reviewer

To control the size of the PR, certain unit tests are not checked in; they will be sent in a separate PR.

Files with review discussion:

  • pkg/scheduler/framework/score.go
  • pkg/scheduler/framework/framework.go
  • pkg/scheduler/framework/frameworkutils.go
  • pkg/scheduler/framework/score_test.go
) (toCreate, toUpdate, toDelete []*fleetv1beta1.ClusterResourceBinding, err error) {
// Pre-allocate with a reasonable capacity.
toCreate = make([]*fleetv1beta1.ClusterResourceBinding, 0, len(picked))
toUpdate = make([]*fleetv1beta1.ClusterResourceBinding, 0, 20)
Contributor

curious why 20?

Contributor Author

@michaelawyu michaelawyu Jul 4, 2023

Hi Ryan! It's just a number that is sufficiently larger than the maximum number of clusters in use by most respondents as reported by the CNCF survey (10, IIRC), but not so large that it over-allocates memory. The reason it's added here is that it feels a bit overkill to do a count just for the purpose of sizing the slices.
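The trade-off discussed above can be illustrated with a minimal Go sketch: `make([]T, 0, n)` produces an empty slice whose first `n` appends reuse one backing array, and the capacity is only a hint, not a limit; appending past it just triggers a normal growth re-allocation:

```go
package main

import "fmt"

func main() {
	// Pre-allocate with a small capacity hint, as in the PR: length starts
	// at 0, so append works normally, but the first 20 appends reuse the
	// same backing array instead of triggering growth re-allocations.
	toUpdate := make([]string, 0, 20)
	fmt.Println(len(toUpdate), cap(toUpdate)) // 0 20

	// The hint is not a limit: appending past the capacity simply grows
	// the backing array as usual.
	for i := 0; i < 25; i++ {
		toUpdate = append(toUpdate, fmt.Sprintf("binding-%d", i))
	}
	fmt.Println(len(toUpdate), cap(toUpdate) >= 25) // 25 true
}
```

If the hint is too small, `append` still works correctly, at the cost of an extra re-allocation; if it is too large, the unused capacity is wasted memory, which is why a modest constant was chosen here.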

zhiying-lin
zhiying-lin previously approved these changes Jul 4, 2023
Contributor

@zhiying-lin zhiying-lin left a comment

LGTM :)

@michaelawyu michaelawyu merged commit 89ad5bc into Azure:main Jul 4, 2023
10 checks passed

3 participants