
Commit

update
MINGtoMING committed Jul 29, 2024
1 parent b6db227 commit 423aee8
Showing 12 changed files with 101 additions and 67 deletions.
10 changes: 5 additions & 5 deletions configs/rtdetrv2/README.md
@@ -22,10 +22,10 @@ FPS. Moreover, the use of the mixed-precision training strategy speeds up training by 15%

| Model | Epoch | Backbone | Input shape | $AP^{val}$ | $AP^{val}_{50}$ | Params(M) | FLOPs(G) | T4 TensorRT FP16(FPS) | Pretrained Model | config |
|:--------------------:|:-----:|:---------:|:-----------:|:----------:|:---------------:|:---------:|:--------:|:---------------------:|:-------------------------------------------------------------------------------------------:|:--------------------------------------------:|
| *RT-DETRv2-R18-dsp | 120 | ResNet-18 | 640 | 47.4 | 64.8 | 20 | 60 | 217 | [download](https://bj.bcebos.com/v1/paddledet/models/rtdetrv2_r18vd_dsp_3x_coco.pdparams) | [config](./rtdetrv2_r18vd_dsp_3x_coco.yml) |
| *RT-DETRv2-R34-dsp | 120 | ResNet-34 | 640 | 49.2 | 67.2 | 31 | 92 | 161 | [download](https://bj.bcebos.com/v1/paddledet/models/rtdetrv2_r34vd_dsp_1x_coco.pdparams) | [config](./rtdetrv2_r34vd_dsp_1x_coco.yml) |
| *RT-DETRv2-R50-m-dsp | 84 | ResNet-50 | 640 | 51.3 | 69.7 | 36 | 100 | 145 | [download](https://bj.bcebos.com/v1/paddledet/models/rtdetrv2_r50vd_m_dsp_3x_coco.pdparams) | [config](./rtdetrv2_r50vd_m_dsp_3x_coco.yml) |
| *RT-DETRv2-R50-dsp | 72 | ResNet-50 | 640 | 52.8 | 71.3 | 42 | 136 | 108 | [download](https://bj.bcebos.com/v1/paddledet/models/rtdetrv2_r50vd_dsp_1x_coco.pdparams) | [config](./rtdetrv2_r50vd_dsp_1x_coco.yml) |
| *RT-DETRv2-R18-dsp | 36 | ResNet-18 | 640 | 47.4 | 64.8 | 20 | 60 | 217 | [download](https://bj.bcebos.com/v1/paddledet/models/rtdetrv2_r18vd_dsp_3x_coco.pdparams) | [config](./rtdetrv2_r18vd_dsp_3x_coco.yml) |
| *RT-DETRv2-R34-dsp | 12 | ResNet-34 | 640 | 49.2 | 67.2 | 31 | 92 | 161 | [download](https://bj.bcebos.com/v1/paddledet/models/rtdetrv2_r34vd_dsp_1x_coco.pdparams) | [config](./rtdetrv2_r34vd_dsp_1x_coco.yml) |
| *RT-DETRv2-R50-m-dsp | 36 | ResNet-50 | 640 | 51.3 | 69.7 | 36 | 100 | 145 | [download](https://bj.bcebos.com/v1/paddledet/models/rtdetrv2_r50vd_m_dsp_3x_coco.pdparams) | [config](./rtdetrv2_r50vd_m_dsp_3x_coco.yml) |
| *RT-DETRv2-R50-dsp | 12 | ResNet-50 | 640 | 52.8 | 71.3 | 42 | 136 | 108 | [download](https://bj.bcebos.com/v1/paddledet/models/rtdetrv2_r50vd_dsp_1x_coco.pdparams) | [config](./rtdetrv2_r50vd_dsp_1x_coco.yml) |

**Notes:**

@@ -129,7 +129,7 @@ paddle2onnx --model_dir=./output_inference/rtdetrv2_r50vd_6x_coco/ \
<details>
<summary>3. Convert to TensorRT (optional)</summary>

- Make sure the TensorRT version is >= 8.5.1
- For the base models, make sure the TensorRT version is >= 8.5.1; the discrete-sampling (dsp) models also support TensorRT version == 8.4 and even some earlier versions
- For TRT inference, you can refer to parts of the [RT-DETR](https://github.com/lyuwenyu/RT-DETR) code or other online resources

2 changes: 1 addition & 1 deletion configs/rtdetrv2/_base_/rtdetrv2_r50vd.yml
@@ -8,6 +8,7 @@ ema_filter_no_grad: True
hidden_dim: 256
use_focal_loss: True
eval_size: [640, 640]
reset_norm_param_attr: True


DETR:
@@ -22,7 +23,6 @@ ResNet:
depth: 50
variant: d
norm_type: bn
norm_decay: 0.
freeze_at: 0
return_idx: [1, 2, 3]
num_stages: 4
16 changes: 9 additions & 7 deletions configs/rtdetrv2/_base_/rtdetrv2_reader.yml
@@ -2,8 +2,8 @@ worker_num: 4
TrainReader:
sample_transforms:
- Decode: {}
- RandomDistort: {prob: 0.5}
- RandomExpand: {fill_value: [0., 0, 0.]}
- RandomDistort: {prob: 0.8}
- RandomExpand: {fill_value: [123.675, 116.28, 103.53]}
- RandomCrop: {prob: 0.8}
- RandomFlip: {}
- Resize: {target_size: [640, 640], keep_ratio: False, interp: 1}
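The new `RandomExpand` fill value above is just the standard ImageNet per-channel mean rescaled from `[0, 1]` to the `[0, 255]` pixel range, so expanded borders match the mean that normalization later subtracts. A quick pure-Python check (the `[0.485, 0.456, 0.406]` constants are the widely used ImageNet means, assumed here rather than taken from this commit):

```python
# Standard ImageNet per-channel RGB means in the [0, 1] range.
IMAGENET_MEAN_01 = [0.485, 0.456, 0.406]

# Rescaled to the [0, 255] range used by RandomExpand's fill_value.
fill_value = [round(m * 255, 3) for m in IMAGENET_MEAN_01]
print(fill_value)  # [123.675, 116.28, 103.53]
```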
@@ -18,11 +18,13 @@ TrainReader:
drop_last: true
collate_batch: false
use_shared_memory: true
transform_schedulers: # [start_epoch, stop_epoch)
- RandomDistort: {start_epoch: 0, stop_epoch: 71}
- RandomExpand: {start_epoch: 0, stop_epoch: 71}
- RandomCrop: {start_epoch: 0, stop_epoch: 71}
- BatchRandomResize: {start_epoch: 0, stop_epoch: 71}
transform_schedulers:
# [start_epoch, stop_epoch) | [0, stop_epoch) | [start_epoch, inf)
- RandomDistort: {stop_epoch: 71}
- RandomExpand: {stop_epoch: 71}
- RandomCrop: {stop_epoch: 71}
- Resize: {start_epoch: 71}
- BatchRandomResize: {stop_epoch: 71}
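The half-open windows in the comment above (`[start_epoch, stop_epoch)`, `[0, stop_epoch)`, `[start_epoch, inf)`) can be sketched as a tiny predicate; `is_active` is an illustrative helper for the semantics, not a PaddleDetection API:

```python
import math

def is_active(scheduler: dict, curr_epoch: int) -> bool:
    """Return True if a scheduled transform is active at curr_epoch.

    A missing start_epoch defaults to 0 and a missing stop_epoch to
    infinity, yielding the three window forms listed in the config.
    """
    start = scheduler.get("start_epoch", 0)
    stop = scheduler.get("stop_epoch", math.inf)
    return start <= curr_epoch < stop

# Mirroring the config: strong augmentation until epoch 71, then plain Resize.
print(is_active({"stop_epoch": 71}, 70))   # True  -> RandomDistort still on
print(is_active({"stop_epoch": 71}, 71))   # False -> switched off
print(is_active({"start_epoch": 71}, 71))  # True  -> Resize takes over
```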


EvalReader:
7 changes: 3 additions & 4 deletions configs/rtdetrv2/rtdetrv2_r18vd_120e_coco.yml
@@ -14,7 +14,6 @@ pretrain_weights: https://paddledet.bj.bcebos.com/models/pretrained/ResNet18_vd_

ResNet:
depth: 18
norm_decay: 0.
freeze_at: -1
freeze_norm: false

@@ -44,6 +43,6 @@ TrainReader:
- BboxXYXY2XYWH: {}
- Permute: {}
transform_schedulers:
- RandomDistort: {start_epoch: 0, stop_epoch: 117}
- RandomExpand: {start_epoch: 0, stop_epoch: 117}
- RandomCrop: {start_epoch: 0, stop_epoch: 117}
- RandomDistort: {stop_epoch: 117}
- RandomExpand: {stop_epoch: 117}
- RandomCrop: {stop_epoch: 117}
7 changes: 3 additions & 4 deletions configs/rtdetrv2/rtdetrv2_r18vd_dsp_3x_coco.yml
@@ -14,7 +14,6 @@ pretrain_weights: https://bj.bcebos.com/v1/paddledet/models/rtdetrv2_r18vd_120e_

ResNet:
depth: 18
norm_decay: 0.
freeze_at: -1
freeze_norm: false

@@ -45,6 +44,6 @@ TrainReader:
- BboxXYXY2XYWH: {}
- Permute: {}
transform_schedulers:
- RandomDistort: {start_epoch: 0, stop_epoch: 33}
- RandomExpand: {start_epoch: 0, stop_epoch: 33}
- RandomCrop: {start_epoch: 0, stop_epoch: 33}
- RandomDistort: {stop_epoch: 33}
- RandomExpand: {stop_epoch: 33}
- RandomCrop: {stop_epoch: 33}
10 changes: 5 additions & 5 deletions configs/rtdetrv2/rtdetrv2_r34vd_120e_coco.yml
@@ -14,7 +14,6 @@ pretrain_weights: https://bj.bcebos.com/v1/paddledet/models/pretrained/ResNet34_

ResNet:
depth: 34
norm_decay: 0.
freeze_at: -1
freeze_norm: false

@@ -44,7 +43,8 @@ OptimizerBuilder:

TrainReader:
transform_schedulers:
- RandomDistort: {start_epoch: 0, stop_epoch: 117}
- RandomExpand: {start_epoch: 0, stop_epoch: 117}
- RandomCrop: {start_epoch: 0, stop_epoch: 117}
- BatchRandomResize: {start_epoch: 0, stop_epoch: 117}
- RandomDistort: {stop_epoch: 117}
- RandomExpand: {stop_epoch: 117}
- RandomCrop: {stop_epoch: 117}
- Resize: {start_epoch: 117}
- BatchRandomResize: {stop_epoch: 117}
10 changes: 5 additions & 5 deletions configs/rtdetrv2/rtdetrv2_r34vd_dsp_1x_coco.yml
@@ -14,7 +14,6 @@ pretrain_weights: https://bj.bcebos.com/v1/paddledet/models/rtdetrv2_r34vd_120e_

ResNet:
depth: 34
norm_decay: 0.
freeze_at: -1
freeze_norm: false

@@ -45,7 +44,8 @@ OptimizerBuilder:

TrainReader:
transform_schedulers:
- RandomDistort: {start_epoch: 0, stop_epoch: 10}
- RandomExpand: {start_epoch: 0, stop_epoch: 10}
- RandomCrop: {start_epoch: 0, stop_epoch: 10}
- BatchRandomResize: {start_epoch: 0, stop_epoch: 10}
- RandomDistort: {stop_epoch: 10}
- RandomExpand: {stop_epoch: 10}
- RandomCrop: {stop_epoch: 10}
- Resize: {start_epoch: 10}
- BatchRandomResize: {stop_epoch: 10}
9 changes: 5 additions & 4 deletions configs/rtdetrv2/rtdetrv2_r50vd_dsp_1x_coco.yml
@@ -31,7 +31,8 @@ OptimizerBuilder:

TrainReader:
transform_schedulers:
- RandomDistort: {start_epoch: 0, stop_epoch: 10}
- RandomExpand: {start_epoch: 0, stop_epoch: 10}
- RandomCrop: {start_epoch: 0, stop_epoch: 10}
- BatchRandomResize: {start_epoch: 0, stop_epoch: 10}
- RandomDistort: {stop_epoch: 10}
- RandomExpand: {stop_epoch: 10}
- RandomCrop: {stop_epoch: 10}
- Resize: {start_epoch: 10}
- BatchRandomResize: {stop_epoch: 10}
9 changes: 5 additions & 4 deletions configs/rtdetrv2/rtdetrv2_r50vd_m_7x_coco.yml
@@ -33,7 +33,8 @@ OptimizerBuilder:

TrainReader:
transform_schedulers:
- RandomDistort: {start_epoch: 0, stop_epoch: 81}
- RandomExpand: {start_epoch: 0, stop_epoch: 81}
- RandomCrop: {start_epoch: 0, stop_epoch: 81}
- BatchRandomResize: {start_epoch: 0, stop_epoch: 81}
- RandomDistort: {stop_epoch: 81}
- RandomExpand: {stop_epoch: 81}
- RandomCrop: {stop_epoch: 81}
- Resize: {start_epoch: 81}
- BatchRandomResize: {stop_epoch: 81}
9 changes: 5 additions & 4 deletions configs/rtdetrv2/rtdetrv2_r50vd_m_dsp_3x_coco.yml
@@ -36,7 +36,8 @@ OptimizerBuilder:

TrainReader:
transform_schedulers:
- RandomDistort: {start_epoch: 0, stop_epoch: 33}
- RandomExpand: {start_epoch: 0, stop_epoch: 33}
- RandomCrop: {start_epoch: 0, stop_epoch: 33}
- BatchRandomResize: {start_epoch: 0, stop_epoch: 33}
- RandomDistort: {stop_epoch: 33}
- RandomExpand: {stop_epoch: 33}
- RandomCrop: {stop_epoch: 33}
- Resize: {start_epoch: 33}
- BatchRandomResize: {stop_epoch: 33}
43 changes: 20 additions & 23 deletions ppdet/data/reader.py
@@ -44,7 +44,6 @@ class Compose(object):
def __init__(self, transforms, num_classes=80):
self.transforms = transforms
self.transforms_cls = []
self.skip_transforms_cls = []
for t in self.transforms:
for k, v in t.items():
op_cls = getattr(transform, k)
@@ -54,27 +53,29 @@ def __init__(self, transforms, num_classes=80):

self.transforms_cls.append(f)

def _update_skip_transforms_cls(self, data):
def _update_transforms_cls(self, data):
if 'transform_schedulers' in data:
self.skip_transforms_cls.clear()
transform_schedulers = data['transform_schedulers']
curr_epoch = data['curr_epoch']
for trans_op in self.transforms_cls:
trans_op_name = trans_op.__class__.__name__
for t in transform_schedulers:
def is_valid(op):
op_name = op.__class__.__name__
for t in data['transform_schedulers']:
for k, v in t.items():
if trans_op_name == k:
if op_name == k:
# [start_epoch, stop_epoch)
if curr_epoch < v['start_epoch'] or curr_epoch >= v['stop_epoch']:
self.skip_transforms_cls.append(trans_op)
start_epoch = v.get('start_epoch', 0)
if start_epoch > data['curr_epoch']:
return False
stop_epoch = v.get('stop_epoch', float('inf'))
if stop_epoch <= data['curr_epoch']:
return False
return True

return filter(is_valid, self.transforms_cls)
else:
return self.transforms_cls

def __call__(self, data):
self._update_skip_transforms_cls(data)

for f in self.transforms_cls:
if f in self.skip_transforms_cls:
continue

transforms_cls = self._update_transforms_cls(data)
for f in transforms_cls:
try:
data = f(data)
except Exception as e:
Expand All @@ -93,12 +94,8 @@ def __init__(self, transforms, num_classes=80, collate_batch=True):
self.collate_batch = collate_batch

def __call__(self, data):
self._update_skip_transforms_cls(data[0])

for f in self.transforms_cls:
if f in self.skip_transforms_cls:
continue

transforms_cls = self._update_transforms_cls(data[0])
for f in transforms_cls:
try:
data = f(data)
except Exception as e:
36 changes: 35 additions & 1 deletion ppdet/engine/trainer.py
@@ -124,7 +124,12 @@ def __init__(self, cfg, mode='train'):
m._epsilon = 1e-3 # for amp(fp16)
m._momentum = 0.97 # 0.03 in pytorch

#normalize params for deploy
# reset norm param attr for setting them in optimizer
if cfg['reset_norm_param_attr']:
self.model = self.reset_norm_param_attr(
self.model, weight_attr=None, bias_attr=None)

# normalize params for deploy
if 'slim' in cfg and cfg['slim_type'] == 'OFA':
self.model.model.load_meanstd(cfg['TestReader'][
'sample_transforms'])
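The `# 0.03 in pytorch` comment above reflects opposite momentum conventions: Paddle's BatchNorm weights the *running* statistic by `momentum`, while PyTorch weights the *new batch* statistic, so Paddle's 0.97 corresponds to PyTorch's 0.03. A one-step check of the two update rules (pure Python, illustrative values):

```python
running, batch_stat = 1.0, 2.0

# Paddle convention: momentum multiplies the running statistic.
paddle_momentum = 0.97
paddle_update = paddle_momentum * running + (1 - paddle_momentum) * batch_stat

# PyTorch convention: momentum multiplies the batch statistic.
torch_momentum = 0.03
torch_update = (1 - torch_momentum) * running + torch_momentum * batch_stat

print(abs(paddle_update - torch_update) < 1e-12)  # True
```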
@@ -1450,3 +1455,32 @@ def setup_metrics_for_loader():
imshow_lanes(img, lanes, out_file=out_file)

return results

def reset_norm_param_attr(self, layer, **kwargs):
if isinstance(layer, (nn.BatchNorm2D, nn.LayerNorm, nn.GroupNorm)):
src_state_dict = layer.state_dict()
if isinstance(layer, nn.BatchNorm2D):
layer = nn.BatchNorm2D(
num_features=layer._num_features,
momentum=layer._momentum,
epsilon=layer._epsilon,
**kwargs)
elif isinstance(layer, nn.LayerNorm):
layer = nn.LayerNorm(
normalized_shape=layer._normalized_shape,
epsilon=layer._epsilon,
**kwargs)
else:
layer = nn.GroupNorm(
num_groups=layer._num_groups,
num_channels=layer._num_channels,
epsilon=layer._epsilon,
**kwargs)
layer.set_state_dict(src_state_dict)
else:
for name, sublayer in layer.named_children():
new_sublayer = self.reset_norm_param_attr(sublayer, **kwargs)
if new_sublayer is not sublayer:
setattr(layer, name, new_sublayer)

return layer
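The recursion in `reset_norm_param_attr` follows a general pattern: rebuild matching leaf layers with new attributes, copy their state across, and reassign them on the parent. A framework-free sketch of that pattern (the `Module`, `Norm`, and `replace_norm` names are stand-ins for illustration, not Paddle APIs):

```python
class Module:
    """Minimal stand-in for a framework module tree."""
    def __init__(self, **children):
        self._children = dict(children)
        for name, child in children.items():
            setattr(self, name, child)

    def named_children(self):
        return self._children.items()

class Norm(Module):
    """Stand-in normalization layer carrying state to preserve."""
    def __init__(self, num_features, state=None):
        super().__init__()
        self.num_features = num_features
        self.state = state if state is not None else {"mean": 0.0}

def replace_norm(layer, **kwargs):
    """Rebuild Norm leaves with new attrs, keep state, recurse elsewhere."""
    if isinstance(layer, Norm):
        new = Norm(layer.num_features, **kwargs)
        new.state = dict(layer.state)  # analogous to set_state_dict
        return new
    for name, child in list(layer.named_children()):
        new_child = replace_norm(child, **kwargs)
        if new_child is not child:
            layer._children[name] = new_child
            setattr(layer, name, new_child)
    return layer

model = Module(backbone=Module(bn=Norm(64, state={"mean": 1.5})))
model = replace_norm(model)
print(model.backbone.bn.state)  # {'mean': 1.5}
```

The key detail mirrored from the diff is that the freshly constructed layer inherits the old layer's learned state, so only the parameter attributes change, not the weights.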
