Stop looping if all workers have died #238
Conversation
Do you have tests that exercise failure conditions? I'll be happy to add a test if I get some examples of how to test whether pytest has exited with a bad status.
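For reference, checking a pytest run's exit status from inside a test is usually done with pytest's bundled pytester plugin and its testdir fixture; the snippet below is only a minimal sketch of that approach (the test name and file contents are made up for illustration):

```python
# Minimal sketch: assumes the pytester plugin is enabled,
# e.g. via pytest_plugins = "pytester" in conftest.py.
def test_exit_status_is_nonzero(testdir):
    # A trivially failing test module, just to produce a bad exit status.
    testdir.makepyfile(
        """
        def test_fails():
            assert False
        """
    )
    result = testdir.runpytest()
    # RunResult.ret holds the exit status of the inner pytest run.
    assert result.ret != 0
```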
xdist/dsession.py (outdated)
    if not self._active_nodes:
        # If everything has died stop looping
        self.triggershutdown()
        raise RuntimeError("No active nodes")
I propose something more along the lines of "unexpectedly no active workers available", so people can understand the error more easily.
Fixed.
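Presumably the updated code now reads along these lines, using the wording suggested above (the exact message in the merged commit may differ slightly):

```python
if not self._active_nodes:
    # If everything has died stop looping
    self.triggershutdown()
    raise RuntimeError("unexpectedly no active workers available")
```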
Force-pushed from 59fea51 to 4541fed
I'm not entirely sure why one test failed this time. It passed before and I only changed the error message. Does this happen often?
I triggered the build again and it passed; it seems like it was a fluke. @timj, about the test: can you reproduce the problem in an isolated test file that can be run normally?
I have a test file that crashes. I'll see if I can add that.
Hmm, just managed a simple test which reproduces it:

    import os

    os._exit(1)

This hangs for me on master and raises the appropriate error on your fork. 👍
If the workers are crashing and the restart limit has been met, we need to stop listening for events and trigger an internal error.
Force-pushed from 4541fed to 9e59d07
I've added that test.
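For readers following along, a test for this behaviour would plausibly look something like the sketch below, built around the reproducer from the discussion above; the test name, the -n1 invocation, and the matched output line are assumptions, not necessarily what was committed:

```python
# Sketch only: assumes pytest-xdist is installed and the pytester plugin
# (the testdir fixture) is enabled.
def test_internal_error_when_all_workers_crash(testdir):
    # Worker process dies immediately, as in the reproducer above.
    testdir.makepyfile(
        """
        import os
        os._exit(1)
        """
    )
    result = testdir.runpytest("-n1")
    # Instead of hanging, the run should abort with a non-zero status
    # and report the new RuntimeError.
    assert result.ret != 0
    result.stdout.fnmatch_lines(["*RuntimeError*no active workers*"])
```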
Awesome, thanks!
Good work, thanks 👍
Let's release.
If the workers are crashing and the restart limit has been met, we need to stop listening for events and trigger an internal error. This is related to #45
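For context, the guard shown in the diff lives inside the scheduler's event loop: when every worker has died and no restarts are left, the session stops waiting on the event queue and raises. The class below is a simplified, self-contained stand-in to illustrate that shape; it is not the actual xdist DSession code, and any names beyond those visible in the diff are approximations:

```python
from queue import Empty, Queue


class SessionSketch:
    """Illustrative stand-in for the controller session's event loop."""

    def __init__(self):
        self._active_nodes = set()   # workers currently alive
        self.queue = Queue()         # events posted by workers
        self.shutdown_triggered = False

    def triggershutdown(self):
        self.shutdown_triggered = True

    def loop_once(self):
        """Wait for the next worker event, bailing out if no workers remain."""
        while True:
            if not self._active_nodes:
                # All workers have died: stop listening for events and
                # surface an internal error instead of looping forever.
                self.triggershutdown()
                raise RuntimeError("unexpectedly no active workers available")
            try:
                return self.queue.get(timeout=2.0)
            except Empty:
                continue
```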
Thanks for submitting a PR, your contribution is really appreciated!

Here's a quick checklist that should be present in PRs:

- Make sure to include reasonable tests for your change if necessary
- We use towncrier for changelog management, so please add a news file into the `changelog` folder following these guidelines:
  - Name it `$issue_id.$type`, for example `588.bugfix`; if you don't have an issue_id, change it to the PR id after creating it
  - Ensure type is one of `removal`, `feature`, `bugfix`, `vendor`, `doc` or `trivial`
  - Make sure to use full sentences with correct case and punctuation, for example: