This is an addendum to SLURM_README.md with some short notes on running the conformance tests on the Phoenix cluster.
For Singularity, the Singularity cache directories must be configured as mentioned there; no extra configuration should be necessary.
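As a minimal sketch, assuming the cache settings in question are the standard Singularity/Apptainer environment variables (the variables and paths below are placeholders, not taken from SLURM_README.md):

```sh
# Assumption: point the Singularity cache and temp directories at storage
# that the compute nodes can read and write (paths are placeholders).
export SINGULARITY_CACHEDIR=/path/to/shared_dir/singularity/cache
export SINGULARITY_TMPDIR=/path/to/shared_dir/singularity/tmp
mkdir -p "$SINGULARITY_CACHEDIR" "$SINGULARITY_TMPDIR"
```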
Additional note on miniwdl_slurm:
Running the script from inside an srun-launched interactive terminal will not work, however. This is likely because miniwdl_slurm runs srun itself instead of sbatch, and nested sruns do not appear to work (probably because the nested srun asks for resources beyond the scope of the parent srun). On Phoenix, this can be avoided by running the scripts from mustard, emerald, crimson, or razzmatazz, as sketched below. (The Phoenix head node can also be used, but this is probably not recommended.)
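A rough sketch of the failure mode and the workaround (the run.py arguments here are illustrative, not the exact invocation):

```sh
# Problematic: the conformance script is started inside an interactive srun
# session, so miniwdl_slurm's own srun calls become nested sruns.
srun --pty bash
python run.py --id tut01          # srun-in-srun; may hang or be rejected

# Workaround: run the script directly from one of the utility nodes instead.
ssh mustard
python run.py --id tut01
```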
There are Slurm tips for Toil that may be helpful.
On certain clusters (e.g. Phoenix), Toil may try to create a jobstore at a location that is not accessible to all workers. `--jobstore-path` can be used to control the jobstore parent directory; it can be an absolute path or a path relative to `run.py`.
```sh
python run.py --runner toil-wdl-runner --id tut01 --toil-args="--batchSystem=slurm --batchLogsDir ./slurm_logs --clean=always" --jobstore-path=/path/to/shared_dir
```
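A relative path works the same way; a hypothetical sketch, assuming the checkout containing `run.py` itself lives on storage visible to all workers:

```sh
# Hypothetical: create a jobstore parent directory next to run.py and pass it
# as a relative path, which is resolved relative to run.py.
mkdir -p jobstores
python run.py --runner toil-wdl-runner --id tut01 \
    --toil-args="--batchSystem=slurm --batchLogsDir ./slurm_logs --clean=always" \
    --jobstore-path=jobstores
```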