Spfy: a platform for predicting subtypes from E. coli whole-genome sequences and building graph data for population-wide comparative analyses.
Published as: Le,K.K., Whiteside,M.D., Hopkins,J.E., Gannon,V.P.J., Laing,C.R. Spfy: an integrated graph database for real-time prediction of bacterial phenotypes and downstream comparative analyses. Database (2018) Vol. 2018: article ID bay086; doi:10.1093/database/bay086
Live: https://lfz.corefacility.ca/superphy/spfy/
- Install Docker (and Docker Compose separately if you're on Linux, link). Mac/Windows users have Compose bundled with Docker Engine.
git clone --recursive https://github.com/superphy/spfy.git
cd spfy/
docker-compose up
- Visit http://localhost:8090 (an optional check to confirm the services are up is shown after this list)
- Eat cake 🍰
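If you'd like to confirm that the services came up before opening the site, one optional check is:

```sh
# Each service in the composition should show an "Up" state.
docker-compose ps
```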
Badges/figures: ECTyper; PanPredic; Docker Image for Conda; comparing different population groups; runtimes of subtyping modules.
- If you only wish to create RDF graphs (serialized as Turtle files):
- First, install Miniconda and create and activate the Conda environment from https://raw.githubusercontent.com/superphy/docker-flask-conda/master/app/environment.yml
- cd into the app folder (where RQ workers typically run from):
cd app/
- Run savvy.py like so, where the argument after `-i` is your genome (FASTA) file (a sketch for batch-processing a folder of genomes follows this list):

      python -m modules.savvy -i tests/ecoli/GCA_001894495.1_ASM189449v1_genomic.fna
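If you have a whole folder of genomes to convert, a minimal batch-processing sketch (assuming you are in app/ with the Conda environment active, and that your files use the .fna extension) is:

```sh
# Generate an RDF (Turtle) graph for every FASTA file in a directory.
# Replace /path/to/genomes with the directory that holds your .fna files.
for f in /path/to/genomes/*.fna; do
    python -m modules.savvy -i "$f"
done
```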
The ontology for Spfy is available at
https://raw.githubusercontent.com/superphy/backend/master/app/scripts/spfy_ontology.ttl
It was generated using
https://raw.githubusercontent.com/superphy/backend/master/app/scripts/generate_ontology.py
with shared functions from Spfy's backend code. If you wish to run it:

1. cd app/
2. python -m scripts.generate_ontology

This will put the ontology in app/.
You can generate a pretty diagram from the .ttl file using http://www.visualdataweb.de/webvowl/
Note: Spfy is currently set up to handle only .fna files.
You can bypass the front-end website and still enqueue subtyping jobs as follows:
- First, mount the host directory with all your genome files to /datastore in the containers. For example, if you keep your files at /home/bob/ecoli-genomes/, you'd edit the docker-compose.yml file and replace:

      volumes:
        - /datastore

  with:

      volumes:
        - /home/bob/ecoli-genomes:/datastore

  (A quick way to verify the mount from inside a container is shown after these steps.)
- Then take down your Docker composition (if it's up) and restart it:

      docker-compose down
      docker-compose up -d
- Drop into a shell in your webserver container (though the worker containers would work too) and run the sideloading script:

      docker exec -it backend_webserver_1 sh
      python -m scripts.sideload
      exit
Note that residual files may be created in your genome folder.
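If you want to double-check the mount before sideloading, one option (using the same container name as in the step above) is to list the files as seen from inside the container:

```sh
# The genome files mounted from the host should appear under /datastore.
docker exec backend_webserver_1 ls /datastore
```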
| Docker Image | Ports | Names | Description |
|---|---|---|---|
| backend-rq | 80/tcp, 443/tcp | backend_worker_1 | the main Redis Queue workers |
| backend-rq-blazegraph | 80/tcp, 443/tcp | backend_worker-blazegraph-ids_1 | this handles Spfy ID generation for the Blazegraph database |
| backend | 0.0.0.0:8000->80/tcp, 443/tcp | backend_web-nginx-uwsgi_1 | the Flask backend which handles enqueueing tasks |
| superphy/blazegraph:2.1.4-inferencing | 0.0.0.0:8080->8080/tcp | backend_blazegraph_1 | Blazegraph database |
| redis:3.2 | 6379/tcp | backend_redis_1 | Redis database |
| reactapp | 0.0.0.0:8090->5000/tcp | backend_reactapp_1 | front-end to Spfy |
The superphy/backend-rq:2.0.0 image is scalable: you can create as many instances as you need/have the processing power for. The image is responsible for listening to the multiples queue (12 workers), which handles most of the tasks, including RGI calls. It also listens to the singles queue (1 worker), which runs ECTyper. This split is used because RGI is the slowest part of the equation. Worker management is handled by supervisor.
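If you need more worker capacity, one way to add instances is Docker Compose's scale option. This is only a sketch: it assumes the RQ worker service in docker-compose.yml is named worker (the service behind the backend_worker_1 container); adjust the name to match your compose file.

```sh
# Run the composition with four RQ worker containers.
# Older Compose releases use "docker-compose scale worker=4" instead.
docker-compose up -d --scale worker=4
```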
The superphy/backend-rq-blazegraph:2.0.0 image is not scalable: it is responsible for querying the Blazegraph database for duplicate entries and for assigning spfyIDs in sequential order. Its functions are kept as minimal as possible to improve performance (as ID generation is the one bottleneck in otherwise parallel pipelines); comparisons are done by SHA-1 hashes of the submitted files, and non-duplicates have their IDs reserved by linking the generated spfyID to the file hash. Worker management is handled by supervisor.
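To make the duplicate check concrete, here is a minimal sketch of the hashing idea; it is not the actual Spfy code, and the in-memory dict standing in for Blazegraph, plus both function names, are purely illustrative:

```python
import hashlib

# Hypothetical stand-in for the Blazegraph lookup: file hash -> spfyID.
seen_hashes = {}
next_spfy_id = 1

def sha1_of_file(path):
    """Return the SHA-1 hex digest of a genome file's contents."""
    h = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def get_or_reserve_spfy_id(path):
    """Reuse the existing ID for a duplicate file; otherwise reserve a new one."""
    global next_spfy_id
    digest = sha1_of_file(path)
    if digest not in seen_hashes:
        seen_hashes[digest] = next_spfy_id  # link the new spfyID to the file hash
        next_spfy_id += 1
    return seen_hashes[digest]
```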
The superphy/backend:2.0.0 image, which runs the Flask endpoints, uses supervisor to manage its inner processes: nginx and uWSGI.
- We are currently running Blazegraph version 2.1.4. If you want to run Blazegraph separately, please use the same version; otherwise there may be problems with endpoint URLs / returns (notably with version 2.1.1). See #63. Alternatively, modify the endpoint accordingly under database['blazegraph_url'] in /app/config.py (a sketch of what that setting might look like follows).
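Purely as an illustration of where that setting lives (the key name comes from the note above, but the URL value is a placeholder; use whatever SPARQL endpoint your Blazegraph instance actually exposes):

```python
# /app/config.py (sketch): point Spfy at an external Blazegraph instance.
database = {
    # Placeholder URL: substitute your Blazegraph server's SPARQL endpoint.
    'blazegraph_url': 'http://my-blazegraph-host:8080/bigdata/sparql',
}
```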
Steps required to add new modules are documented in the Developer Guide.