This project is closed. New versions and much more can be found in its successor: PiePline
Neural networks training pipeline based on PyTorch. Designed to standardize the training process and accelerate experiments.
- The core is about 2K lines of test-covered code that you don't need to write again
- Flexible and customizable training process
- Checkpoint management and training process resuming, independent of source and target devices (see the resume sketch after this list)
- Metrics processing and visualization via built-in monitors (TensorBoard, Matplotlib) or custom ones
- Training best practices (e.g. learning rate decay and hard negative mining)
- Metrics logging and comparison (DVC compatible)
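For example, resuming an interrupted experiment mostly comes down to re-creating the file manager and monitors in continue mode. Below is a minimal sketch that reuses the `train_config` built in the main example further down; the `trainer.resume(...)` call is an assumption about the Trainer API, so check it against the "Resume training process" example:

```python
# Sketch of resuming a previous run; train_config is built exactly as in the
# main example below. trainer.resume(...) is an assumed API call - verify it
# against the "Resume training process" example file.
import torch
from neural_pipeline import Trainer, FileStructManager
from neural_pipeline.builtin.monitors.tensorboard import TensorboardMonitor

fsm = FileStructManager(base_dir='data', is_continue=True)   # reopen the existing experiment dir
trainer = Trainer(train_config, fsm, torch.device('cuda:0')).set_epoch_num(50)
trainer.monitor_hub.add_monitor(TensorboardMonitor(fsm, is_continue=True))  # append to existing logs
trainer.resume(from_best_checkpoint=False)  # assumed: restore model/optimizer state from the last checkpoint
trainer.train()
```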
Examples:
- MNIST classification - notebook, file, Kaggle kernel
- Segmentation - notebook, file
- Resume training process - file
import torch

from neural_pipeline.builtin.monitors.tensorboard import TensorboardMonitor
from neural_pipeline.monitoring import LogMonitor
from neural_pipeline import DataProducer, TrainConfig, TrainStage,\
    ValidationStage, Trainer, FileStructManager

from something import MyNet, MyDataset  # your own model and dataset

fsm = FileStructManager(base_dir='data', is_continue=False)  # manages checkpoint and log files on disk
model = MyNet().cuda()

# wrap the datasets into DataProducers that build batches
train_dataset = DataProducer([MyDataset()], batch_size=4, num_workers=2)
validation_dataset = DataProducer([MyDataset()], batch_size=4, num_workers=2)

# describe the training process: stages, loss and optimizer
train_config = TrainConfig(model, [TrainStage(train_dataset),
                                   ValidationStage(validation_dataset)], torch.nn.NLLLoss(),
                           torch.optim.SGD(model.parameters(), lr=1e-4, momentum=0.5))

# create the trainer and attach monitors for visualization and metrics logging
trainer = Trainer(train_config, fsm, torch.device('cuda:0')).set_epoch_num(50)
trainer.monitor_hub.add_monitor(TensorboardMonitor(fsm, is_continue=False))\
                   .add_monitor(LogMonitor(fsm))
trainer.train()
This example trains MyNet on MyDataset with visualization in TensorBoard and with metrics logging for later comparison of experiments.
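The snippet imports MyNet and MyDataset from a placeholder module. A minimal sketch of what they could look like is shown below; the `{'data': ..., 'target': ...}` item format is an assumption about what the built-in train/validation stages expect, so verify it against the documentation:

```python
# Hypothetical minimal model and dataset for the example above.
# The dict item format returned by __getitem__ is an assumption - check the
# DataProducer / TrainStage docs for the exact format your version expects.
import torch
from torch import nn
from torch.utils.data import Dataset


class MyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(28 * 28, 10)

    def forward(self, x):
        # NLLLoss expects log-probabilities, so finish with log_softmax
        return torch.log_softmax(self.fc(x.view(x.size(0), -1)), dim=1)


class MyDataset(Dataset):
    def __init__(self, size: int = 256):
        self._data = torch.randn(size, 1, 28, 28)        # random images as stand-ins
        self._targets = torch.randint(0, 10, (size,))    # random class labels

    def __len__(self):
        return len(self._data)

    def __getitem__(self, idx):
        return {'data': self._data[idx], 'target': self._targets[idx]}
```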
Install from PyPI:
pip install neural-pipeline
The built-in monitors need their backends:
pip install tensorboardX matplotlib
Or install the latest version directly from the repository:
pip install -U git+https://github.com/toodef/neural-pipeline