Stop looping if all workers have died #238

Merged

Conversation

@timj (Contributor) commented Oct 4, 2017

If the workers are crashing and the restart limit has been met, we need to stop listening for events and trigger an internal error. This is related to #45.
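
For illustration, a rough sketch of the control flow this change aims for. The surrounding loop and the helper name process_one_event are simplified stand-ins rather than the actual pytest-xdist dsession code; _active_nodes and triggershutdown are the names used in the change quoted further down.

    # Simplified sketch of a scheduler event loop; not the real DSession code.
    def loop(self):
        while not self.session_finished:      # illustrative flag
            self.process_one_event()          # hypothetical helper: handle one worker event
            if not self._active_nodes:
                # Every worker has died and the restart limit was reached:
                # stop listening for events and surface an internal error.
                self.triggershutdown()
                raise RuntimeError("No active nodes")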

Thanks for submitting a PR, your contribution is really appreciated!

Here's a quick checklist that should be present in PRs:

  • Make sure to include reasonable tests for your change if necessary.

  • We use towncrier for changelog management, so please add a news file into the changelog folder following these guidelines:

    • Name it $issue_id.$type, for example 588.bugfix;

    • If you don't have an issue_id, change it to the PR id after creating it;

    • Ensure type is one of removal, feature, bugfix, vendor, doc or trivial;

    • Make sure to use full sentences with correct case and punctuation, for example:

      Fix issue with non-ascii contents in doctest text files.

@timj (Author) commented Oct 4, 2017

Do you have tests that exercise failure conditions? I'll be happy to add one if I can get some examples of how to test that pytest has exited with a bad status.

if not self._active_nodes:
    # If everything has died stop looping
    self.triggershutdown()
    raise RuntimeError("No active nodes")

Member

I propose something more along the lines of "unexpectedly no active workers available" so people can understand the error more easily.

@timj (Author)

Fixed.

@timj force-pushed the u/timj/exit-when-all-dead branch from 59fea51 to 4541fed on October 4, 2017 at 17:04
@timj (Author) commented Oct 4, 2017

I'm not entirely sure why one test failed this time. It passed before and I only changed the error message. Does this happen often?

@nicoddemus (Member)

I triggered the build again and it passed; it seems like it was a fluke.

@timj, about the test: can you reproduce the problem in an isolated test file that can be run normally with pytest? If so, we can add an integration test in acceptance_test that uses testdir to run your test file as a black box and check its output for the expected behavior.
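
For reference, a minimal sketch of what such an acceptance test could look like, built on pytest's testdir fixture (makepyfile, runpytest, and result.ret are the standard pytester API). The test name, the -n1 invocation, and the idea that a non-zero exit code is the observable behaviour are assumptions for illustration, not the actual test this PR ends up adding.

    # Hypothetical acceptance test; assumes the pytester plugin provides the
    # testdir fixture and that pytest-xdist is installed so -n1 is understood.
    def test_session_aborts_when_all_workers_crash(testdir):
        # The test module kills the worker process outright on import.
        testdir.makepyfile(
            """
            import os
            os._exit(1)
            """
        )
        result = testdir.runpytest("-n1")
        # Without the fix the run hangs; with it, pytest exits with an error.
        assert result.ret != 0
        # One could also match the error text once its final wording is settled,
        # e.g. result.stdout.fnmatch_lines(["*no active workers*"])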

@timj (Author) commented Oct 4, 2017

I have a test file that crashes. I'll see if I can add that.

@nicoddemus (Member) commented Oct 4, 2017

Hmm, I just managed to write a simple test which reproduces it:

import os

# Exit the worker process immediately, without any cleanup, simulating a crash.
os._exit(1)

This hangs for me on master and raises the appropriate error on your fork. 👍

If the workers are crashing and the restart limit has been met,
we need to stop listening for events and trigger an internal
error.
@timj force-pushed the u/timj/exit-when-all-dead branch from 4541fed to 9e59d07 on October 5, 2017 at 00:59
@timj (Author) commented Oct 5, 2017

I've added that test.

@nicoddemus (Member) left a comment

Awesome, thanks!

@RonnyPfannschmidt merged commit 7a5efcd into pytest-dev:master on Oct 5, 2017
@RonnyPfannschmidt (Member)

Good work, thanks 👍

@nicoddemus (Member)

Let's release 1.20.1 after #233 gets merged.
