
Is it possible to run in parallel only some of the tests ? #325

Closed
david-fliguer-ws opened this issue Aug 1, 2018 · 12 comments
@david-fliguer-ws

Hi, I am new to pytest-xdist. I was wondering whether there is any way to run pytest with the xdist plugin (the -n option) so that it affects only some of the tests. What if I group all of those tests in a specific class?

Just for the record, I also use the pytest-html plugin, and I would like the results of both the tests that I want to run in parallel and the ones that I don't in the same HTML report.

Thanks in advance !!!

@nicoddemus
Member

Hi @david-fliguer-ws,

One approach is to apply a custom mark to the tests you want to run in parallel (say @pytest.mark.parallel) and then run only those tests with xdist:

$ pytest -n auto -m parallel

Hope this helps.
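As a sketch of that approach (the `parallel` mark name is just an example, not a built-in; registering it under `markers` in `pytest.ini` avoids unknown-mark warnings), a test file might look like:

```python
import pytest

# pytest.ini (register the custom mark so pytest does not warn):
#
#   [pytest]
#   markers =
#       parallel: tests that are safe to run concurrently under xdist

@pytest.mark.parallel
def test_one():
    # selected by `pytest -n auto -m parallel`
    assert 1 + 1 == 2

@pytest.mark.parallel
def test_two():
    assert "xdist".upper() == "XDIST"

def test_three():
    # unmarked: excluded by `-m parallel`, picked up by `-m "not parallel"`
    assert sorted([3, 1, 2]) == [1, 2, 3]
```

`pytest -n auto -m parallel` then runs only the two marked tests across workers, and a separate `pytest -m "not parallel"` covers the rest serially.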

@david-fliguer-ws
Author

david-fliguer-ws commented Aug 2, 2018

Well, that is a good solution, but that way only the tests marked as parallel will run, which is not what I am looking for.

What I want is this: let's say we have 3 tests (test_one, test_two, test_three). I want test_one and test_two to run in parallel with each other and then to run test_three, or the other way around (first test_three, and then test_one and test_two in parallel).

I know that this is probably bad practice and that I should develop my tests so that all of them can run in parallel without any dependencies between them, but until I do that (I need to change some of my existing code) I am trying to find a workaround.

The solution you gave might be a workaround for me if I can perform two separate runs but see the results in one HTML report (I use the pytest-html plugin), something like:

pytest -n auto -m parallel && pytest -n auto -m not_parallel --html=report.html

Anyway any ideas are welcome

Thanks !!!

@dsully

dsully commented Oct 31, 2018

+1 - I could use this functionality as well.

@nicoddemus
Member

Oh sorry, I should have been clearer: my suggestion was to make two separate runs, as you realized later.

@jhawarkita

@david-fliguer-ws Did you find a solution to your question? I am also looking for the same approach, where I run a single test first and then run a few tests in parallel.

@drsquidop

drsquidop commented Oct 10, 2019

+1
@david-fliguer-ws, if I understand the question, we would like to be able to only run a subset of tests via pytest. Right now, all tests are run no matter what file_or_dir argument is passed on the command line. If one calls pytest --rsync . -d sub_dir/, we would expect only the tests in sub_dir to be executed.

@austinbutler

This seems to be a duplicate of #385.

@boatcoder

I think what you are asking for is this:
@pytest.mark.xdist_group(name="stripe_api")

https://pytest-xdist.readthedocs.io/en/latest/distribution.html?highlight=group#running-tests-across-multiple-cpus

--dist loadgroup: Tests are grouped by the xdist_group mark. Groups are distributed to available workers as whole units. This guarantees that all tests with same xdist_group name run in the same worker.

This will let some of your tests run without parallelism while the others run with parallelism.
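A minimal sketch of that pattern (the group name `serial_only` is illustrative): run with `pytest -n auto --dist loadgroup`, and the two grouped tests are pinned to a single worker, so they execute one after another while the unmarked test is distributed freely.

```python
import pytest

# Both tests share an xdist_group, so under `--dist loadgroup` they are
# sent to the same worker and therefore never overlap in time.
@pytest.mark.xdist_group(name="serial_only")
def test_first_serial():
    assert sum(range(4)) == 6

@pytest.mark.xdist_group(name="serial_only")
def test_second_serial():
    assert max([1, 3, 2]) == 3

def test_free_to_parallelize():
    # no group: the scheduler may place this on any worker, in parallel
    assert "loadgroup".startswith("load")
```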

@nedrebo

nedrebo commented Jul 27, 2023

This would also be helpful for me. I have thousands of pytest tests and usually want maximum parallelism when executing them, but a few tests are known to need a lot of system resources, for example GPU memory. If too many such tests run simultaneously, I'll get OOM errors.

I want to be able to mark such tests with a temporarily lower max parallelism than the global max. Let's say I know a test uses up to 8 GB of GPU memory; then I would need something like this:

@pytest.mark.xdist_max_parallelism(count=host_gpu_memory() / 8)

The original poster's case would be solved with the following:

@pytest.mark.xdist_max_parallelism(count=1)

@AymanElsayeed

I'm trying to achieve a similar goal. Can I schedule a worker to run last?

@RonnyPfannschmidt
Member

Closing this issue, as it has turned into a melting pot of scope extensions and extra ideas while nobody is working on an implementable scope for the key problem.

@RonnyPfannschmidt closed this as not planned on Dec 13, 2023
@boatcoder

#385 does provide clues on how you can do this. I have tests that validate our interactions with Stripe; those tests hit a rate limit that parallelism causes to fail. I have 3 Stripe "groups", so I can run 3 tests in parallel against Stripe while the rest of the workers churn through all the other tests. You can come up with something similar: make a group of memory hogs and let those run serially while all the other parallel runners deal with the other tests. You have the tools in pytest's options; you just need the creativity to use them to get what you want.
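The bucketing idea above can be sketched in a `conftest.py` hook (the `stripe` substring filter, group names, and bucket count are assumptions for illustration): each matching test is hashed into one of three `xdist_group` buckets, so under `--dist loadgroup` at most three of them run concurrently.

```python
# conftest.py (sketch): spread rate-limited tests over a fixed number
# of xdist_group buckets so only that many run at the same time.
import zlib

import pytest

STRIPE_GROUPS = 3  # at most 3 rate-limited tests in flight at once

def bucket(nodeid: str) -> str:
    # crc32 is stable across processes (unlike hash() on strings),
    # so every worker assigns the same test to the same group.
    return f"stripe_{zlib.crc32(nodeid.encode()) % STRIPE_GROUPS}"

def pytest_collection_modifyitems(config, items):
    for item in items:
        if "stripe" in item.nodeid:  # heuristic filter; adapt to your suite
            item.add_marker(pytest.mark.xdist_group(name=bucket(item.nodeid)))
```

Each group is pinned to one worker, so tests within a bucket run serially while the buckets themselves run in parallel with the rest of the suite.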
