Documentation | Paper | Web Interface
EpiLearn is a Python machine learning toolkit for epidemic data modeling and analysis. We provide numerous features, including:
- Implementation of Epidemic Models
- Simulation of Epidemic Spreading
- Visualization of Epidemic Data
- Unified Pipeline for Epidemic Tasks
For more machine learning models in epidemic modeling, feel free to check out our curated paper list, Awesome-Epidemic-Modeling-Papers.
To install the latest version, please run "pip install epilearn==0.0.15" --- 11/21/2024
EpiLearn is currently being updated. We will release a new version soon! --- 11/13/2024
If you have any suggestions, feel free to click the feedback button at the top and join our Slack channel!
If you experience any issues, please don't hesitate to open a GitHub issue; we will do our best to address it within three business days. You are also warmly invited to join our User Slack Channel for more efficient communication, or simply reach out to us via email.
git clone https://github.com/Emory-Melody/EpiLearn.git
cd EpiLearn
conda create -n epilearn python=3.9
conda activate epilearn
python setup.py install
Alternatively, install directly from PyPI:
pip install epilearn
EpiLearn also requires pytorch>=1.20, torch_geometric, and torch_scatter. For the CPU version, simply run pip install torch, pip install torch_geometric, and pip install torch_scatter. For the GPU version, please refer to the installation guides of PyTorch, PyG, and torch_scatter.
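Once the dependencies are installed, a quick sanity check along the following lines confirms that the environment resolved correctly (a minimal sketch; it only relies on the standard version attributes these packages expose):

import torch
import torch_geometric
import torch_scatter

# Print installed versions to confirm the environment is set up
print("torch:", torch.__version__)
print("torch_geometric:", torch_geometric.__version__)
print("torch_scatter:", torch_scatter.__version__)
print("CUDA available:", torch.cuda.is_available())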
We provide a quick tutorial of EpiLearn in Google Colab. A more complete tutorial can be found in our documentation, covering pipelines, simulations, and other utilities. For more examples, please refer to the example folder. For the overall framework of EpiLearn, please check our paper.
Below we also offer a quick start on how to use EpiLearn for forecast and detection tasks.
from epilearn.models.SpatialTemporal.STGCN import STGCN
from epilearn.data import UniversalDataset
from epilearn.utils import transforms
from epilearn.tasks.forecast import Forecast
# initialize settings
lookback = 12 # length of the input window
horizon = 3   # number of future steps to predict
# load toy dataset
dataset = UniversalDataset()
dataset.load_toy_dataset()
# Adding Transformations
transformation = transforms.Compose({
    "features": [transforms.normalize_feat()],
    "graph": [transforms.normalize_adj()]})
dataset.transforms = transformation
# Initialize Task
task = Forecast(prototype=STGCN,
                dataset=None,
                lookback=lookback,
                horizon=horizon,
                device='cpu')
# Training
result = task.train_model(dataset=dataset,
                          loss='mse',
                          epochs=50,
                          batch_size=5,
                          permute_dataset=True)
# Evaluation
evaluation = task.evaluate_model()
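The toy dataset above can be replaced with your own data. As a minimal sketch, assuming UniversalDataset accepts feature, label, and graph tensors via keyword arguments as its use above suggests (the exact keyword names should be checked against the documentation):

import torch
from epilearn.data import UniversalDataset

# Hypothetical tensors: 100 time steps, 47 nodes, 4 features per node
features = torch.rand(100, 47, 4)  # node features over time
labels = torch.rand(100, 47)       # target values to forecast
adj = torch.ones(47, 47)           # static adjacency matrix (fully connected here)

# Assumed keyword arguments; consult the UniversalDataset docs for the exact API
custom_dataset = UniversalDataset(x=features, y=labels, graph=adj)

The resulting dataset object can then be passed to train_model in place of the toy dataset.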
from epilearn.models.Spatial.GCN import GCN
from epilearn.data import UniversalDataset
from epilearn.utils import transforms
from epilearn.tasks.detection import Detection
# initialize settings
lookback = 1 # length of the input window
horizon = 2  # prediction size; also seen as the number of classes
# load toy dataset
dataset = UniversalDataset()
dataset.load_toy_dataset()
# Adding Transformations
transformation = transforms.Compose({
    "features": [],
    "graph": []})
dataset.transforms = transformation
# Initialize Task
task = Detection(prototype=GCN,
                 dataset=None,
                 lookback=lookback,
                 horizon=horizon,
                 device='cpu')
# Training
result = task.train_model(dataset=dataset,
                          loss='ce',
                          epochs=50,
                          batch_size=5)
# Evaluation
evaluation = task.evaluate_model()
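Because the task object takes the model class as a prototype, swapping architectures is a one-line change. For instance (assuming a GAT implementation ships alongside GCN under epilearn.models.Spatial, which should be verified against your installed version):

from epilearn.tasks.detection import Detection
# Assumed import path, mirroring the GCN import above; verify it exists in your version
from epilearn.models.Spatial.GAT import GAT

task = Detection(prototype=GAT,
                 dataset=None,
                 lookback=lookback,
                 horizon=horizon,
                 device='cpu')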
Our web application is deployed online using Streamlit. It can also be launched locally with:
python -m streamlit run interface/app.py
If you find this work useful, please cite: EpiLearn: A Python Library for Machine Learning in Epidemic Modeling
@article{liu2024epilearn,
  title={EpiLearn: A Python Library for Machine Learning in Epidemic Modeling},
  author={Liu, Zewen and Li, Yunxiao and Wei, Mingyang and Wan, Guancheng and Lau, Max SY and Jin, Wei},
  journal={arXiv e-prints},
  pages={arXiv--2406},
  year={2024}
}
Some algorithms are adapted from their papers' original implementations, and links to the originals can be found at the top of each file. We also appreciate the datasets from various sources, which are highlighted in the dataset file.
Thanks to their great work and contributions!