
Master spot instances #54735

Merged: 44 commits merged into saltstack:master from the master_spot_instances branch on Oct 7, 2019

Conversation

@bryceml (Contributor) commented Sep 25, 2019

What does this PR do?

It uses spot instances when they are available at launch time.

Previous Behavior

We always used on-demand instances

New Behavior

It attempts to launch a spot instance first and falls back to an on-demand instance if the spot request cannot be fulfilled.

Commits signed with GPG?

Yes
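
For illustration, the spot-first / on-demand-fallback behavior described above can be sketched with boto3 roughly as follows. This is not the PR's actual code; the region, AMI, subnet, and the launch_instance helper are placeholders assumed for the example.

```python
# Minimal sketch of "try spot, fall back to on-demand", assuming boto3 is configured
# with credentials. All identifiers below are placeholders, not values from the PR.
import boto3
from botocore.exceptions import ClientError

ec2 = boto3.client("ec2", region_name="us-west-2")  # region is an assumption


def launch_instance(ami_id, instance_type, subnet_id):
    """Try a one-time spot instance first; fall back to on-demand if the request fails."""
    base_args = {
        "ImageId": ami_id,
        "InstanceType": instance_type,
        "SubnetId": subnet_id,
        "MinCount": 1,
        "MaxCount": 1,
    }
    try:
        # Ask for spot capacity through the regular RunInstances API.
        return ec2.run_instances(
            InstanceMarketOptions={
                "MarketType": "spot",
                "SpotOptions": {"SpotInstanceType": "one-time"},
            },
            **base_args,
        )
    except ClientError as exc:
        # No spot capacity (or the request was rejected): launch on-demand instead.
        print("Spot launch failed ({}); falling back to on-demand".format(exc))
        return ec2.run_instances(**base_args)
```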

@bryceml requested a review from a team as a code owner September 25, 2019 04:24
@ghost requested a review from dwoz September 25, 2019 04:24
@bryceml changed the base branch from develop to master September 25, 2019 04:24
@s0undt3ch (Collaborator) left a comment

We should probably do this in SRE Jenkins too? Especially now that we have a master branch off of 2019.2.1?

@bryceml (Contributor, Author) commented Sep 25, 2019

Yeah, we'll have to do it there too. We'll also need it here for PRs.

@bryceml (Contributor, Author) commented Sep 25, 2019

I realized we also need to reduce the timeout from 6 hours to something like 5 hours and 45 minutes so that we can still attempt to download artifacts if it times out.
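
To make the headroom idea concrete, here is a rough sketch (not the actual Jenkins pipeline change) of capping the test run below the 6-hour job limit so the remaining minutes can still be used for artifact collection; the test and collection commands are placeholders.

```python
# Rough sketch of reserving headroom for artifact collection; commands are placeholders.
import subprocess

JOB_LIMIT = 6 * 60 * 60                        # 6-hour hard limit on the CI job
ARTIFACT_HEADROOM = 15 * 60                    # keep 15 minutes for artifact download
TEST_TIMEOUT = JOB_LIMIT - ARTIFACT_HEADROOM   # 5 hours 45 minutes

try:
    # Placeholder for whatever command the job actually runs.
    subprocess.run(["python", "-m", "pytest", "tests/"], timeout=TEST_TIMEOUT)
except subprocess.TimeoutExpired:
    print("Test run hit the 5h45m cap; moving on to artifact collection")

# Placeholder artifact-collection step, executed inside the reserved headroom.
subprocess.run(["tar", "czf", "artifacts.tar.gz", "artifacts/"])
```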

@bryceml (Contributor, Author) commented Sep 25, 2019

re-run full all

@bryceml force-pushed the master_spot_instances branch 2 times, most recently from 23a14eb to fcae32f on September 27, 2019 06:13
@bryceml (Contributor, Author) commented Sep 27, 2019

@s0undt3ch
- centos7 tcp died
- fedora-29 and amazon-2 had a salt-minion or salt-master not start up for the multi-master test
- centos7 couldn't finish in time
- centos6 matches the current 2019.2.1 branch tests.

Can we extend the startup time for the multi-master test, leave centos 7 out of spot instances, and keep it on a c5.xlarge or something?

@s0undt3ch (Collaborator)

I'd say yes on all counts.
For multimaster, look at tests/multimaster/__init__.py
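
As a purely hypothetical illustration of the kind of adjustment being discussed (the names and numbers below are not taken from tests/multimaster/__init__.py), extending the startup time amounts to polling for the daemons with a larger timeout:

```python
# Hypothetical polling helper; the real logic lives in tests/multimaster/__init__.py.
import time

STARTUP_TIMEOUT = 120  # seconds; an assumed, more generous value for slower spot instances


def wait_for_daemon(is_running, timeout=STARTUP_TIMEOUT, interval=1):
    """Poll is_running() until it returns True or the timeout elapses."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        if is_running():
            return True
        time.sleep(interval)
    raise RuntimeError("daemon did not start within {} seconds".format(timeout))
```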

@Akm0d (Contributor) commented Oct 1, 2019

re-run debian9

@Akm0d (Contributor) commented Oct 6, 2019

re-run opensuse15

@Akm0d (Contributor) commented Oct 6, 2019

re-run centos7

@Akm0d (Contributor) commented Oct 6, 2019

re-run fedora29

@Akm0d (Contributor) commented Oct 6, 2019

re-run macosxmojave

@Akm0d (Contributor) commented Oct 6, 2019

re-run windows2019

@dwoz merged commit 01b9405 into saltstack:master Oct 7, 2019
@bryceml deleted the master_spot_instances branch October 7, 2019 17:38