# Investigate the backbone (input shape and output shape). I will add more backbones for experiments (next step: EfficientNet B0-B7).
cd flame/core/model/backbone
python resnet.py --version <resnet18 -> resnet110> --pretrained <if using pretrained weights>
python densenet.py --version <densenet121 -> densenet201> --pretrained <if using pretrained weights>
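The shape inspection these scripts perform can be reproduced with a few lines of plain PyTorch. The sketch below is an assumption built on torchvision's `resnet18` rather than the repo's `resnet.py`: it feeds a dummy image through the stem and each residual stage and prints the output shapes.

```python
# Minimal shape-inspection sketch using torchvision's resnet18 (an assumption;
# the repo's resnet.py / densenet.py wrap this kind of check behind their CLI).
import torch
import torchvision

backbone = torchvision.models.resnet18()            # no pretrained weights
x = torch.randn(1, 3, 224, 224)                     # dummy input: (batch, C, H, W)

# Stem: 7x7 conv -> BN -> ReLU -> max pool
x = backbone.maxpool(backbone.relu(backbone.bn1(backbone.conv1(x))))

# Residual stages C2..C5; print the feature-map shape each one produces.
for name in ("layer1", "layer2", "layer3", "layer4"):
    x = getattr(backbone, name)(x)
    print(f"{name}: {tuple(x.shape)}")               # layer4 -> (1, 512, 7, 7)
```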
# Investigate the FPN (input shape and output shape)
cd flame/core/model/
python fpn.py
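For reference, the same kind of shape check can be done with torchvision's `FeaturePyramidNetwork` (an assumption; `fpn.py` contains the repo's own FPN). The FPN maps backbone feature maps with different channel counts to pyramid levels that all share the same number of channels.

```python
# Hedged FPN shape-check sketch with torchvision's FeaturePyramidNetwork,
# not the repo's fpn.py: C3..C5 in, P3..P5 (all 256 channels) out.
from collections import OrderedDict

import torch
from torchvision.ops import FeaturePyramidNetwork

fpn = FeaturePyramidNetwork(in_channels_list=[128, 256, 512], out_channels=256)

# Dummy C3..C5 feature maps with the channel counts a resnet18 would give.
features = OrderedDict([
    ("c3", torch.randn(1, 128, 28, 28)),
    ("c4", torch.randn(1, 256, 14, 14)),
    ("c5", torch.randn(1, 512, 7, 7)),
])

outputs = fpn(features)
for name, feat in outputs.items():
    print(f"{name}: {tuple(feat.shape)}")   # same spatial sizes, 256 channels each
```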
# Investigate the head (input shape and output shape)
cd flame/core/model/head
python efficient_head.py
python head.py
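As a rough illustration of what a detection head does with one pyramid level, here is a minimal RetinaNet-style classification head (my own sketch under assumed hyperparameters, not the code in `head.py` or `efficient_head.py`): four shared 3x3 convs followed by a conv that predicts `num_anchors * num_classes` channels, reshaped into per-anchor class scores.

```python
# Hedged sketch of a RetinaNet-style classification head; names and
# hyperparameters (4 convs, 9 anchors, 20 classes) are assumptions.
import torch
import torch.nn as nn

class ClassificationHead(nn.Module):
    def __init__(self, in_channels=256, num_anchors=9, num_classes=20):
        super().__init__()
        self.num_anchors, self.num_classes = num_anchors, num_classes
        layers = []
        for _ in range(4):                                  # shared conv tower
            layers += [nn.Conv2d(in_channels, in_channels, 3, padding=1), nn.ReLU()]
        self.convs = nn.Sequential(*layers)
        self.cls = nn.Conv2d(in_channels, num_anchors * num_classes, 3, padding=1)

    def forward(self, x):
        out = self.cls(self.convs(x))                       # (B, A*K, H, W)
        b, _, h, w = out.shape
        # Reorder to one row of K class scores per anchor position.
        return out.permute(0, 2, 3, 1).reshape(b, h * w * self.num_anchors, self.num_classes)

head = ClassificationHead()
p3 = torch.randn(1, 256, 28, 28)                            # one pyramid level
print(tuple(head(p3).shape))                                # (1, 28*28*9, 20)
```

A regression head looks the same except its final conv predicts `num_anchors * 4` box deltas.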
# Investigate the anchor generator (input shape and output shape)
cd flame/core/model/
python anchor_generator.py
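A minimal sketch of what an anchor generator produces for one pyramid level (an assumption about `anchor_generator.py`, whose interface may differ): for every feature-map cell it emits one box per scale/ratio combination, in image coordinates.

```python
# Hedged anchor-generation sketch for a single pyramid level; the stride,
# base size, scales and ratios below are the usual RetinaNet defaults.
import itertools

import numpy as np

def generate_anchors(feature_size, stride, base_size,
                     ratios=(0.5, 1.0, 2.0),
                     scales=(1.0, 2 ** (1 / 3), 2 ** (2 / 3))):
    fh, fw = feature_size
    anchors = []
    for y, x in itertools.product(range(fh), range(fw)):
        cx, cy = (x + 0.5) * stride, (y + 0.5) * stride     # cell center in pixels
        for ratio, scale in itertools.product(ratios, scales):
            area = (base_size * scale) ** 2
            w = np.sqrt(area / ratio)                       # w * h = area, h / w = ratio
            h = w * ratio
            anchors.append([cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2])
    return np.asarray(anchors, dtype=np.float32)            # (fh * fw * 9, 4) boxes

anchors = generate_anchors(feature_size=(28, 28), stride=8, base_size=32)
print(anchors.shape)                                        # (7056, 4)
```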
- Focal Loss: computes the loss for the classification head.
- Smooth L1: computes the loss for the regression head (a sketch of both losses follows below).
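The sketch referenced above, assuming sigmoid per-class targets and the standard formulations of both losses; the repo's implementations may differ in normalization and reduction.

```python
# Hedged sketch of the two losses (standard formulations, not the repo's code).
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    """Classification loss. logits/targets: (N, num_classes), targets in {0, 1}."""
    p = torch.sigmoid(logits)
    ce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p_t = p * targets + (1 - p) * (1 - targets)             # prob of the true label
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    return (alpha_t * (1 - p_t) ** gamma * ce).sum()        # down-weights easy examples

def smooth_l1(pred, target, beta=1.0 / 9):
    """Regression loss on (N, 4) box deltas: quadratic near zero, linear beyond beta."""
    diff = (pred - target).abs()
    return torch.where(diff < beta, 0.5 * diff ** 2 / beta, diff - 0.5 * beta).sum()
```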
My technical note on mAP is here; it borrows heavily from an awesome repo.
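As a toy illustration of the IoU matching that mAP is built on (my own sketch, not the note's code): a prediction counts as a true positive when its IoU with an unmatched ground-truth box of the same class exceeds the threshold, 0.5 for PASCAL VOC.

```python
# IoU between two axis-aligned boxes given as (x1, y1, x2, y2).
def iou(box_a, box_b):
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))   # 25 / 175 ≈ 0.14 -> not a match at 0.5
```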
- Training: train with Focal Loss on train_set and evaluate with Focal Loss on both train_set (again) and valid_set.
CUDA_VISIBLE_DEVICES=<gpu indices> python -m flame configs/PASCAL/pascal_training.yaml
- Evaluation: evaluate with the mAP metric and visualize all predictions on test_set.
CUDA_VISIBLE_DEVICES=<gpu indices> python -m flame configs/PASCAL/pascal_testing.yaml
<In progress ...>