This pipeline processes sequencing data from Massively Parallel Reporter Assays (MPRA) to create count tables for candidate sequences tested in the experiment.
MPRAsnakeflow is built on top of Snakemake.
- Max Schubach (@visze), Berlin Institute of Health at Charité -- Universitätsklinikum Berlin, Computational Genome Biology Group
You can find extensive documentation here
We have tutorials created as Jupyter notebooks to run MPRAsnakeflow locally or within Colab.
If you use this workflow in a paper, don't forget to give credit to the authors by citing the URL of this (original) repository and, if available, its DOI (see above). Here is a very short description of the usage; please look at the documentation for more comprehensive instructions.
Clone this repository to your local system, into a place that is accessible from your compute nodes. It does not necessarily have to be the same folder where you start your analysis, but it can be.
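For example, using the upstream URL from the synchronization section below (substitute your own fork or copy as appropriate):
# clone into a location readable by your compute nodes
git clone https://github.com/snakemake-workflows/MPRAsnakeflow.git
cd MPRAsnakeflow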
Create or adjust config/example_config.yaml
in the repository to configure the workflow execution to your needs. When running in a cluster environment you need a specific executor plugin (e.g., SLURM) and an adapted workflow profile (original: profiles/default/config.yaml
) to set the correct values (like SLURM partitions).
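A minimal sketch of this step (the example config file name is taken from this README; the target name config.yaml matches the --configfile argument used in the commands below, and $EDITOR stands for your editor of choice):
# copy the shipped example config and adapt it to your experiment
cp config/example_config.yaml config.yaml
$EDITOR config.yaml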
Install Snakemake (recommended version >= 8.x) using conda or mamba (recommended installation via miniforge):
mamba create -c bioconda -n snakemake snakemake
For installation details, see the instructions in the Snakemake documentation.
Activate the conda environment:
mamba activate snakemake
Test your configuration by performing a dry-run via
snakemake --software-deployment-method conda --configfile config.yaml -n
Execute the workflow locally via
snakemake --software-deployment-method conda --cores $N --configfile config.yaml --workflow-profile profiles/default
using $N
cores, or run it in a cluster environment (here SLURM) via the SLURM executor plugin:
snakemake --software-deployment-method conda --executor slurm --cores $N --configfile config.yaml --workflow-profile profiles/default
Please note that profiles/default/config.yaml has to be adapted to your needs (like partition names).
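As a hedged sketch, the SLURM-related keys might look like the following; the key names follow the Snakemake >= 8 profile schema, but the partition name and resource values are placeholders for your cluster, so review and merge them into the shipped profile by hand rather than appending blindly:
# sketch only: placeholders, not working values
cat <<'EOF' >> profiles/default/config.yaml
default-resources:
  slurm_partition: "medium"  # your cluster's partition name
  runtime: 120               # default job runtime in minutes
  mem_mb: 4000               # default memory per job in MB
EOF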
For Snakemake 7.x the following might work too, using SLURM sbatch (but this is deprecated in newer Snakemake versions):
snakemake --use-conda --configfile config.yaml --cluster "sbatch --nodes=1 --ntasks={cluster.threads} --mem={cluster.mem} -t {cluster.time} -p {cluster.queue} -o {cluster.output}" --jobs 100 --cluster-config config/sbatch.yaml
Please note that the log folder of the cluster environment has to be generated first, e.g:
mkdir -p logs
For other cluster environments, please check the Snakemake documentation, look for other executor plugins, and adapt accordingly.
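One hedged option is the generic cluster executor plugin (snakemake-executor-plugin-cluster-generic), which wraps schedulers without a dedicated plugin; the qsub arguments below are placeholders for an SGE-like system and must be adapted to yours:
# assumes: pip install snakemake-executor-plugin-cluster-generic
snakemake --software-deployment-method conda --executor cluster-generic \
  --cluster-generic-submit-cmd "qsub -cwd -pe smp {threads}" \
  --jobs 100 --configfile config.yaml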
If you want to fix not only the software stack but also the underlying OS, use
snakemake --sdm apptainer,conda --cores $N --configfile config.yaml --workflow-profile profiles/default
in combination with any of the modes above. This will use a pre-built Singularity container of MPRAsnakeflow with the conda environments installed inside.
It is also possible to run the workflow in a different folder so that the results are not stored in the MPRAsnakeflow folder. In this case you have to specify the Snakefile path, like
snakemake --sdm conda --configfile yourConfigFile.yaml --snakefile <path/to/MPRAsnakeflow>/MPRAsnakeflow/workflow/Snakefile --cores $N --workflow-profile <path/to/MPRAsnakeflow>/profiles/default
See the Snakemake documentation for further details.
This part still works, but it is outdated; use the QC report instead (see documentation). After successful execution, you can create a self-contained interactive HTML report with all results via:
snakemake --report report.html --configfile config.yaml
This report can, e.g., be forwarded to your collaborators.
Whenever you change something, don't forget to commit the changes back to your GitHub copy of the repository:
git commit -a
git push
Whenever you want to synchronize your workflow copy with new developments from upstream, do the following.
- Once, register the upstream repository in your local copy:
git remote add -f upstream git@github.com:snakemake-workflows/MPRAsnakeflow.git
or, if you have not set up SSH keys:
git remote add -f upstream https://github.com/snakemake-workflows/MPRAsnakeflow.git
- Update the upstream version:
git fetch upstream
- Create a diff with the current version:
git diff HEAD upstream/master workflow > upstream-changes.diff
- Investigate the changes:
vim upstream-changes.diff
- Apply the modified diff via:
git apply upstream-changes.diff
- Carefully check whether you need to update the config files:
git diff HEAD upstream/master config
If so, do it manually, and only where necessary, since you would otherwise likely overwrite your settings and samples.
In case you have also changed or added steps, please consider contributing them back to the original repository (a consolidated sketch follows this list):
- Fork the original repo to a personal or lab account.
- Clone the fork to your local system, to a different place than where you ran your analysis.
- Copy the modified files from your analysis to the clone of your fork, e.g.,
cp -r workflow path/to/fork
Make sure not to accidentally copy config file contents or sample sheets. Instead, manually update the example config files if necessary.
- Commit and push your changes to your fork.
- Create a pull request against the original repository.
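A consolidated sketch of these steps, where <your-account> and the analysis path are placeholders:
# fork the original repo on GitHub first, then:
git clone git@github.com:<your-account>/MPRAsnakeflow.git fork
cp -r path/to/your/analysis/workflow fork/
cd fork
git add workflow
git commit -m "Describe your changes"
git push
# finally, open a pull request against the original repository on GitHub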
Test cases are in the subfolder .test. They are automatically executed via continuous integration with GitHub Actions.
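A hedged local invocation might look like the following; the exact config file name inside .test is an assumption, so check that folder and the GitHub Actions workflow for the real entry point:
# dry-run of a test configuration (config path is an assumption)
snakemake --software-deployment-method conda --cores 2 --configfile .test/config.yaml -n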