🚀 Feature: Way to setup/teardown one resource per MOCHA_WORKER_ID #4953
Labels:
- status: wontfix (typically a feature which won't be added, or a "bug" which is actually intended behavior)
- type: feature (enhancement proposal)
Is your feature request related to a problem or a nice-to-have? Please describe.
I wanted to start using parallel mode. In serial mode I have a `before()` root hook that initializes the database and an `after()` root hook that does some teardown.

To use parallel mode I want to fan out into one database per `MOCHA_WORKER_ID`, so if I have two jobs, I just need to set up/tear down a database for `MOCHA_WORKER_ID=0` once and set up/tear down a database for `MOCHA_WORKER_ID=1` once. The same goes for isolating the Redis instances used by my tests: it would be nice if each worker could just start its own Redis container and stop it at the end.
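For reference, what I have in serial mode is roughly this (simplified sketch; `createDatabase`/`dropDatabase` stand in for my real helpers):

```js
// test/hooks.js, loaded with --file; these become root hooks in serial mode
const { createDatabase, dropDatabase } = require('./test-db'); // placeholder helpers

before(async function () {
  // one database for the whole serial run
  await createDatabase('test_db');
});

after(async function () {
  await dropDatabase('test_db');
});
```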
It feels like this should have been easy. Instead, it's been a complete hassle. Even with code changes, I can't figure out any clean way to do it.
Describe the solution you'd like
I'd like to be able to use `--file` (or something) to pass files with `before`/`after` root hooks to every worker, and have them run once per worker. This would have made migrating to parallel mode a total breeze.

The only options Mocha gives me right now are once per file (root hook plugins) or once only (global fixtures).
Describe alternatives you've considered
I can wrap `beforeAll` with `once()` to initialize a resource associated with the worker process (sketched below). However, I can't wrap `afterAll` with `once()`, because that will run after the first test file, not after all test files in the worker.

I even tried making each test file `import` the file with my `before`/`after` hooks, but they only seemed to get attached to tests in the first file in parallel mode. It's hard to imagine a reason why it has to behave that way.

Then I thought about making a global fixture that initializes all the databases, but I don't see any documented way to ask Mocha how many jobs it will run from inside a global fixture... argh!
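Concretely, the `once()` workaround looks something like this as a root hook plugin (assuming lodash's `once()`; `createDatabase` is a placeholder for my real helper):

```js
// test/worker-hooks.js, loaded with --require as a root hook plugin
const once = require('lodash.once'); // or any memoizing once() helper
const { createDatabase } = require('./test-db'); // placeholder helper

const dbName = `test_db_${process.env.MOCHA_WORKER_ID}`;

// beforeAll fires before every test file in this worker, but once() means the
// database only gets created the first time (the same promise is reused afterwards)
const setUpDatabase = once(() => createDatabase(dbName));

exports.mochaHooks = {
  beforeAll: async function () {
    await setUpDatabase();
  },
  // no usable afterAll here: wrapping it with once() would drop the database
  // after the first file finishes, not after the last file in this worker
};
```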