Snakemake-based CUT&Tag / CUT&RUN pipeline to be run on our PBS-based HPC using Singularity containers. The Singularity image used to run this pipeline is created from this Docker container.
The following files are located inside the `configuration` folder. In this folder you will find the files with raw data paths (`units.tsv`), sample metadata (`samples.tsv`), cluster configuration (`cluster.yaml`) and pipeline parameters such as alignment and peak calling settings (`config.yaml`).

Paths to raw data are listed in the file `units.tsv`. The file has the following structure:
sample | lane | fq1 | fq2 |
---|---|---|---|
name_of_sample | name_of_lane_or_resequencing | path/to/forward.fastq | path/to/reverse.fastq |
- The first field corresponds to the sample name. It has to be the same as the sample name specified in the `samples.tsv` file (see below). It is recommended NOT to use underscores in sample names; dashes are preferred. Underscores sometimes trigger errors for reasons I still don't understand, so until that is fixed I strongly recommend using dashes instead.
- The second field corresponds to `lane`. Its purpose is to group fastq files belonging to the same sample (or to samples that have to be merged). For example, if one sample arrived in 2 different lanes from a PE experiment, there will be 4 fastqs in total (2 forward and 2 reverse). In this case, enter the same sample twice, putting the corresponding lanes (lane1 and lane2, for example) in the `lane` field. Any word can actually be written in this field; the point is simply to group fastqs from the same sample. All entries with the same name in the `sample` field but different `lane` values will be merged into the same fastq. Here is an example with 1 sample that arrived in 2 lanes:
sample | lane | fq1 | fq2 |
---|---|---|---|
foo | lane1 | path/to/forward_lane1.fastq | path/to/reverse_lane1.fastq |
foo | lane2 | path/to/forward_lane2.fastq | path/to/reverse_lane2.fastq |
Here I am using lane1 and lane2 for consistency and clarity, but the following would also work:
sample | lane | fq1 | fq2 |
---|---|---|---|
foo | potato | path/to/forward_lane1.fastq | path/to/reverse_lane1.fastq |
foo | tomato | path/to/forward_lane2.fastq | path/to/reverse_lane2.fastq |
- Finally, the last 2 fields, `fq1` and `fq2`, correspond to the paths to the fastq files. `fq1` is the FORWARD read and `fq2` the REVERSE. The order is very important because the files will be passed to the aligner in that order.
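To make the lane-merging behaviour concrete, here is a minimal Python sketch (not the pipeline's actual code, which lives in the Snakefile) of how rows in `units.tsv` that share a `sample` value but differ in `lane` end up grouped together before being merged into a single fastq:

```python
import csv
import os
import tempfile
from collections import defaultdict

def group_units(units_path):
    """Group fastq paths by sample: rows sharing a `sample` value but
    differing in `lane` end up in the same group, mirroring how the
    pipeline merges them into a single fastq per read direction."""
    grouped = defaultdict(lambda: {"fq1": [], "fq2": []})
    with open(units_path) as handle:
        for row in csv.DictReader(handle, delimiter="\t"):
            grouped[row["sample"]]["fq1"].append(row["fq1"])
            grouped[row["sample"]]["fq2"].append(row["fq2"])
    return dict(grouped)

# Recreate the two-lane "foo" example from the table above.
rows = [
    "sample\tlane\tfq1\tfq2",
    "foo\tlane1\tpath/to/forward_lane1.fastq\tpath/to/reverse_lane1.fastq",
    "foo\tlane2\tpath/to/forward_lane2.fastq\tpath/to/reverse_lane2.fastq",
]
with tempfile.NamedTemporaryFile("w", suffix=".tsv", delete=False) as tmp:
    tmp.write("\n".join(rows) + "\n")

grouped = group_units(tmp.name)
os.unlink(tmp.name)
# Both forward fastqs land in one group, regardless of the lane labels:
print(grouped["foo"]["fq1"])
# → ['path/to/forward_lane1.fastq', 'path/to/forward_lane2.fastq']
```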
All metadata and information regarding every sample is located in `samples.tsv`. The file has the following structure:
NAME | INPUT | SPIKE | AB | USER | GENOME | RUN | IS_INPUT |
---|---|---|---|---|---|---|---|
name_of_sample | input_to_use | TRUE/FALSE: does the sample contain spike-in | antibody | user | genome version (e.g. mm10) | sequencing run | TRUE/FALSE: is the sample an input |
- For every sample, the `NAME` field has to contain exactly the same name that was written in the `sample` column of `units.tsv`.
- The `INPUT` field contains the name of the input corresponding to the given sample. It has to match the input's name as written in the `sample` field of `units.tsv` and the `NAME` field of `samples.tsv`.
- `SPIKE`: TRUE or FALSE, depending on whether the sample contains spike-in.
- `AB`, `USER`, `RUN`: metadata corresponding to each sample. If there is nothing to fill in, I usually write an X.
- `GENOME`: version of the genome used for the alignment. It will also be used for peak annotation with ChIPseeker. Right now the accepted values are mm9, mm10, hg19 and hg38.
- `IS_INPUT`: TRUE or FALSE. If the sample is an input, set it to TRUE. Also set it to TRUE if the sample is an input sequenced only to calculate the sample/spike-in ratio and will not be used to call peaks.
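For illustration, a `samples.tsv` for the two-lane sample `foo` from the `units.tsv` example might look like this, assuming a matching input sample `foo-input` also exists in `units.tsv` (the antibody, user and run values are placeholders):

NAME | INPUT | SPIKE | AB | USER | GENOME | RUN | IS_INPUT |
---|---|---|---|---|---|---|---|
foo | foo-input | TRUE | H3K27me3 | X | mm10 | run1 | FALSE |
foo-input | X | TRUE | X | X | mm10 | run1 | TRUE |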
In the root folder of this repository (I point this out because another folder contains a file with the same name) there is the file `config.yaml`. This file contains the configuration of the software and parameters used in the pipeline. Modify them as you wish. Always check that you are using the genome files corresponding to the version that you want to use. Also check the effective genome size that is used by deepTools to calculate the GC bias.
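As an illustration only (the key names below are hypothetical, not the actual keys of this repository's `config.yaml`, so check that file for the real ones), the genome-related entries typically look something like:

```yaml
# Hypothetical fragment -- key names are illustrative.
genome:
  version: mm10
  fasta: /path/to/mm10/genome.fa
  # Effective genome size passed to deepTools (GC bias, coverage
  # normalisation); 2652783500 is the commonly used value for GRCm38/mm10.
  effective_genome_size: 2652783500
```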
`cluster.yaml` contains the per-rule cluster parameters (ncpus, RAM, walltime...). It can be modified as desired. In the future I want to remove this file in favour of the newer Snakemake profiles system (see below), but I still need to understand a bit better how it works and how to properly do the migration.
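For reference, per-rule cluster configurations of this kind commonly follow the pattern below, with a `__default__` entry plus overrides per rule (a sketch only; the rule names and exact keys in this repository's `cluster.yaml` may differ):

```yaml
# Hypothetical cluster.yaml fragment -- rule names and keys are illustrative.
__default__:
  ncpus: 1
  mem: 4gb
  walltime: "04:00:00"
align:
  ncpus: 8
  mem: 16gb
  walltime: "12:00:00"
```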
Snakemake profiles were introduced in Snakemake 4.1. They are meant to replace the classic cluster.json file and make executing Snakemake simpler. The parameters that used to be passed to Snakemake on the command line (e.g. `--cluster`, `--use-singularity`...) now live in a yaml file (`config.yaml`) inside the profile folder (in the case of this repository, `snakemake_profile`), and the profile is selected with `snakemake --profile snakemake_profile`. So if you were executing Snakemake as `snakemake --cluster qsub --use-singularity`, the new `config.yaml` would look like this:
```yaml
cluster: qsub
use-singularity: true
```
Once you have all the configuration files set as desired, it's time to execute the pipeline. To do so, run the `execute_pipeline.sh` script followed by the name of the rule that you want to execute. If no rule is given, it will automatically execute the rule `all` (which runs the standard pipeline). Examples:
```shell
./execute_pipeline.sh all
```

is equivalent to

```shell
./execute_pipeline.sh
```
If you also want to obtain broad peaks:

```shell
./execute_pipeline.sh all_broad
```
At the end of the `Snakefile` you will find all the possible target rules and their corresponding output files.
- Migrate 100% to Snakemake profiles and stop using the `cluster.yaml` configuration.