This page provides basic tutorials about the usage of MMFlow. For installation instructions, please see install.md.
It is recommended to symlink the dataset root to `$MMFlow/data`. Please follow the corresponding guidelines for data preparation:
- FlyingChairs
- FlyingThings3d_subset
- FlyingThings3d
- Sintel
- KITTI2015
- KITTI2012
- FlyingChairsOcc
- ChairsSDHom
- HD1K
We provide testing scripts to evaluate a whole dataset (Sintel, KITTI2015, etc.), as well as some high-level APIs and scripts to easily estimate optical flow for images or a video.
We provide scripts to run demos. Here is an example to predict the optical flow between two adjacent frames.
- Image demo

```shell
python demo/image_demo.py ${IMAGE1} ${IMAGE2} ${CONFIG_FILE} ${CHECKPOINT_FILE} ${OUTPUT_DIR} \
    [--out_prefix] ${OUTPUT_PREFIX} [--device] ${DEVICE}
```
Optional arguments:

- `--out_prefix`: The prefix of the output results, including the flow file and the visualized flow map.
- `--device`: Device used for inference.
Example:

Assume that you have already downloaded the checkpoints to the directory `checkpoints/`, and the output will be saved in the directory `raft_demo`.

```shell
python demo/image_demo.py demo/frame_0001.png demo/frame_0002.png \
    configs/raft/raft_8x2_100k_mixed_368x768.py \
    checkpoints/raft_8x2_100k_mixed_368x768.pth raft_demo
```
- Video demo

```shell
python demo/video_demo.py ${VIDEO} ${CONFIG_FILE} ${CHECKPOINT_FILE} ${OUTPUT_FILE} \
    [--gt] ${GROUND_TRUTH} [--device] ${DEVICE}
```
Optional arguments:

- `--gt`: The ground-truth video file for the input video. If specified, the ground truth will be concatenated with the predicted result as a comparison.
- `--device`: Device used for inference.
Example:

Assume that you have already downloaded the checkpoints to the directory `checkpoints/`, and the output will be saved as `raft_demo.mp4`.

```shell
python demo/video_demo.py demo/demo.mp4 \
    configs/raft/raft_8x2_100k_mixed_368x768.py \
    checkpoints/raft_8x2_100k_mixed_368x768.pth \
    raft_demo.mp4 --gt demo/demo_gt.mp4
```
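The same pairwise inference can also be scripted when you need the raw flow fields rather than a rendered video. Below is a minimal sketch, assuming `mmcv.VideoReader` for decoding and the `mmflow.apis` helpers shown above (and that `inference_model` accepts in-memory frames as well as file paths).

```python
# Sketch: estimate flow for every pair of adjacent frames of a video.
# Assumes mmcv.VideoReader and the mmflow.apis helpers; adapt to your versions.
import mmcv
from mmflow.apis import inference_model, init_model

model = init_model('configs/raft/raft_8x2_100k_mixed_368x768.py',
                   'checkpoints/raft_8x2_100k_mixed_368x768.pth',
                   device='cuda:0')

video = mmcv.VideoReader('demo/demo.mp4')
frames = list(video)  # decoded frames as HxWx3 numpy arrays

flows = []
for frame1, frame2 in zip(frames[:-1], frames[1:]):
    # One flow map per adjacent frame pair (assumes ndarray inputs are accepted).
    flows.append(inference_model(model, frame1, frame2))

print(len(flows))
```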
You can use the following command to test a dataset; more information can be found in tutorials/1_inference.
```shell
# single-gpu testing
python tools/test.py ${CONFIG_FILE} ${CHECKPOINT_FILE} [optional arguments]
```
Optional arguments:

- `--out_dir`: Directory to save the output results. If not specified, the flow files will not be saved.
- `--fuse-conv-bn`: Whether to fuse conv and bn layers; this slightly increases the inference speed.
- `--show_dir`: Directory to save the visualized flow maps. If not specified, the flow maps will not be saved.
- `--eval`: Evaluation metrics, e.g., "EPE".
- `--cfg-options`: Override some settings in the used config; key-value pairs in `xxx=yyy` format will be merged into the config file, e.g., `--cfg-options model.encoder.in_channels=6` (see the sketch after this list).
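For reference, this is roughly what happens to those key-value pairs: they are merged into the loaded config before the model and dataset are built. A small sketch using `mmcv.Config`, which MMFlow builds on (the exact merge point inside the tools scripts may differ):

```python
# Sketch of the --cfg-options mechanism: dotted keys are merged into the config.
from mmcv import Config

cfg = Config.fromfile('configs/pwcnet/pwcnet_ft_4x1_300k_sintel_384x768.py')

# Equivalent to passing `--cfg-options model.encoder.in_channels=6` on the CLI.
cfg.merge_from_dict({'model.encoder.in_channels': 6})
print(cfg.model.encoder.in_channels)  # -> 6
```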
Examples:

Assume that you have already downloaded the checkpoints to the directory `checkpoints/`.

Test PWC-Net on the Sintel clean and final sub-datasets without saving the predicted flow files, and evaluate the EPE.

```shell
python tools/test.py configs/pwcnet/pwcnet_ft_4x1_300k_sintel_384x768.py \
    checkpoints/pwcnet_8x1_sfine_sintel_384x768.pth --eval EPE
```
You can use the training script below to launch a training task with a single GPU; more information can be found in tutorials/2_finetune.
```shell
python tools/train.py ${CONFIG_FILE} [optional arguments]
```
Optional arguments:

- `--work-dir`: Override the working directory specified in the config file.
- `--load-from`: The checkpoint file to load weights from.
- `--resume-from`: Resume from a previous checkpoint file.
- `--no-validate`: Whether not to evaluate the checkpoint during training.
- `--seed`: Seed for the random state in python, numpy and pytorch, used to generate random numbers.
- `--deterministic`: If specified, set deterministic options for the CUDNN backend (see the sketch after this list for what `--seed` and `--deterministic` control).
- `--cfg-options`: Override some settings in the used config; key-value pairs in `xxx=yyy` format will be merged into the config file, e.g., `--cfg-options model.encoder.in_channels=6`.
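For reference, the sketch below shows the usual PyTorch/NumPy seeding pattern that `--seed` and `--deterministic` correspond to; MMFlow's internal helper may differ in the details.

```python
# Rough equivalent of what --seed and --deterministic control; a sketch of the
# common seeding pattern, not MMFlow's exact internal implementation.
import random

import numpy as np
import torch


def set_random_seed(seed, deterministic=False):
    """Seed python, numpy and pytorch; optionally make cuDNN deterministic."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    if deterministic:
        # Deterministic cuDNN kernels trade some speed for reproducibility.
        torch.backends.cudnn.deterministic = True
        torch.backends.cudnn.benchmark = False


set_random_seed(0, deterministic=True)
```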
Difference between `resume-from` and `load-from`:

- `resume-from` loads both the model weights and the optimizer status, and the epoch/iter is also inherited from the specified checkpoint. It is usually used to resume a training process that was interrupted accidentally.
- `load-from` only loads the model weights, and the training epoch/iter starts from 0. It is usually used for finetuning.
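The distinction is easy to see in a checkpoint file itself. A small sketch, assuming the usual MMCV checkpoint layout with `meta`, `state_dict` and `optimizer` entries (released checkpoints may have the optimizer state stripped; the path below is illustrative):

```python
# Sketch: inspect what a training checkpoint stores. Assumes the common MMCV
# layout ('meta', 'state_dict', 'optimizer'); released weights may omit parts.
import torch

ckpt = torch.load('work_dir/pwcnet/latest.pth', map_location='cpu')
print(ckpt.keys())  # e.g. dict_keys(['meta', 'state_dict', 'optimizer'])

# --load-from restores only 'state_dict', so training starts from iter 0.
# --resume-from also restores 'optimizer' and the iter/epoch kept in 'meta',
# so training continues where it stopped.
```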
Here is an example to train PWC-Net.
```shell
python tools/train.py configs/pwcnet/pwcnet_ft_4x1_300k_sintel_384x768.py --work-dir work_dir/pwcnet
```
We provide some tutorials for users: