
[Docs] Add docs and README for SPVCNN #2372

Merged: 13 commits, Apr 4, 2023
README.md: 58 changes (33 additions, 25 deletions)
@@ -128,6 +128,7 @@ Results and models are available in the [model zoo](docs/en/model_zoo.md).
<li><a href="configs/dgcnn">DGCNN (TOG'2019)</a></li>
<li>DLA (CVPR'2018)</li>
<li>MinkResNet (CVPR'2019)</li>
<li><a href="configs/minkunet">MinkUNet (CVPR'2019)</a></li>
</ul>
</td>
<td>
@@ -212,6 +213,11 @@ Results and models are available in the [model zoo](docs/en/model_zoo.md).
</ul>
</td>
<td>
<li><b>Outdoor</b></li>
<ul>
<li><a href="configs/minkunet">MinkUNet (CVPR'2019)</a></li>
<li><a href="configs/spvcnn">SPVCNN (ECCV'2020)</a></li>
</ul>
<li><b>Indoor</b></li>
<ul>
<li><a href="configs/pointnet2">PointNet++ (NeurIPS'2017)</a></li>
@@ -226,31 +232,33 @@ Results and models are available in the [model zoo](docs/en/model_zoo.md).
</tbody>
</table>

| | ResNet | PointNet++ | SECOND | DGCNN | RegNetX | DLA | MinkResNet |
| :-----------: | :----: | :--------: | :----: | :---: | :-----: | :-: | :--------: |
| SECOND | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ |
| PointPillars | ✗ | ✗ | ✓ | ✗ | ✓ | ✗ | ✗ |
| FreeAnchor | ✗ | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ |
| VoteNet | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ |
| H3DNet | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ |
| 3DSSD | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ |
| Part-A2 | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ |
| MVXNet | ✓ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ |
| CenterPoint | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ |
| SSN | ✗ | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ |
| ImVoteNet | ✓ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ |
| FCOS3D | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
| PointNet++ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ |
| Group-Free-3D | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ |
| ImVoxelNet | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
| PAConv | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ |
| DGCNN | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ |
| SMOKE | ✗ | ✗ | ✗ | ✗ | ✗ | ✓ | ✗ |
| PGD | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
| MonoFlex | ✗ | ✗ | ✗ | ✗ | ✗ | ✓ | ✗ |
| SA-SSD | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ |
| FCAF3D | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✓ |
| PV-RCNN | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ |
| | ResNet | PointNet++ | SECOND | DGCNN | RegNetX | DLA | MinkResNet | MinkUNet |
| :-----------: | :----: | :--------: | :----: | :---: | :-----: | :-: | :--------: | :------: |
| SECOND | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ |
| PointPillars | ✗ | ✗ | ✓ | ✗ | ✓ | ✗ | ✗ | ✗ |
| FreeAnchor | ✗ | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ |
| VoteNet | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
| H3DNet | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
| 3DSSD | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
| Part-A2 | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ |
| MVXNet | ✓ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ |
| CenterPoint | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ |
| SSN | ✗ | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ |
| ImVoteNet | ✓ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
| FCOS3D | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
| PointNet++ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
| Group-Free-3D | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
| ImVoxelNet | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
| PAConv | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
| DGCNN | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ |
| SMOKE | ✗ | ✗ | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ |
| PGD | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
| MonoFlex | ✗ | ✗ | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ |
| SA-SSD | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ |
| FCAF3D | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✓ | ✗ |
| PV-RCNN | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ |
| MinkUNet | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✓ |
| SPVCNN | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✓ |

**Note:** All of the **300+ models and methods from 40+ papers** for 2D detection supported by [MMDetection](https://github.com/open-mmlab/mmdetection/blob/3.x/docs/en/model_zoo.md) can be trained or used in this codebase.

README_zh-CN.md: 58 changes (33 additions, 25 deletions)
@@ -110,6 +110,7 @@ MMDetection3D is an open-source object detection toolbox based on PyTorch, the next generation
<li><a href="configs/dgcnn">DGCNN (TOG'2019)</a></li>
<li>DLA (CVPR'2018)</li>
<li>MinkResNet (CVPR'2019)</li>
<li><a href="configs/minkunet">MinkUNet (CVPR'2019)</a></li>
</ul>
</td>
<td>
@@ -193,6 +194,11 @@ MMDetection3D is an open-source object detection toolbox based on PyTorch, the next generation
</ul>
</td>
<td>
<li><b>Outdoor</b></li>
<ul>
<li><a href="configs/minkunet">MinkUNet (CVPR'2019)</a></li>
<li><a href="configs/spvcnn">SPVCNN (ECCV'2020)</a></li>
</ul>
<li><b>Indoor</b></li>
<ul>
<li><a href="configs/pointnet2">PointNet++ (NeurIPS'2017)</a></li>
@@ -207,31 +213,33 @@ MMDetection3D is an open-source object detection toolbox based on PyTorch, the next generation
</tbody>
</table>

| | ResNet | PointNet++ | SECOND | DGCNN | RegNetX | DLA | MinkResNet |
| :-----------: | :----: | :--------: | :----: | :---: | :-----: | :-: | :--------: |
| SECOND | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ |
| PointPillars | ✗ | ✗ | ✓ | ✗ | ✓ | ✗ | ✗ |
| FreeAnchor | ✗ | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ |
| VoteNet | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ |
| H3DNet | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ |
| 3DSSD | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ |
| Part-A2 | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ |
| MVXNet | ✓ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ |
| CenterPoint | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ |
| SSN | ✗ | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ |
| ImVoteNet | ✓ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ |
| FCOS3D | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
| PointNet++ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ |
| Group-Free-3D | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ |
| ImVoxelNet | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
| PAConv | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ |
| DGCNN | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ |
| SMOKE | ✗ | ✗ | ✗ | ✗ | ✗ | ✓ | ✗ |
| PGD | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
| MonoFlex | ✗ | ✗ | ✗ | ✗ | ✗ | ✓ | ✗ |
| SA-SSD | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ |
| FCAF3D | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✓ |
| PV-RCNN | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ |
| | ResNet | PointNet++ | SECOND | DGCNN | RegNetX | DLA | MinkResNet | MinkUNet |
| :-----------: | :----: | :--------: | :----: | :---: | :-----: | :-: | :--------: | :------: |
| SECOND | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ |
| PointPillars | ✗ | ✗ | ✓ | ✗ | ✓ | ✗ | ✗ | ✗ |
| FreeAnchor | ✗ | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ |
| VoteNet | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
| H3DNet | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
| 3DSSD | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
| Part-A2 | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ |
| MVXNet | ✓ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ |
| CenterPoint | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ |
| SSN | ✗ | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ |
| ImVoteNet | ✓ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
| FCOS3D | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
| PointNet++ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
| Group-Free-3D | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
| ImVoxelNet | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
| PAConv | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
| DGCNN | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ |
| SMOKE | ✗ | ✗ | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ |
| PGD | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
| MonoFlex | ✗ | ✗ | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ |
| SA-SSD | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ |
| FCAF3D | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✓ | ✗ |
| PV-RCNN | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ |
| MinkUNet | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✓ |
| SPVCNN | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✓ |

**Note:** All of the **300+ models and methods from 40+ papers** for 2D detection supported by [MMDetection](https://github.com/open-mmlab/mmdetection/blob/3.x/docs/zh_cn/model_zoo.md) can be trained or used in this codebase.

configs/spvcnn/README.md: 44 changes (44 additions, 0 deletions)
@@ -0,0 +1,44 @@
# Searching Efficient 3D Architectures with Sparse Point-Voxel Convolution

> [Searching Efficient 3D Architectures with Sparse Point-Voxel Convolution](https://arxiv.org/abs/2007.16100)

<!-- [ALGORITHM] -->

## Abstract

Self-driving cars need to understand 3D scenes efficiently and accurately in order to drive safely. Given the limited hardware resources, existing 3D perception models are not able to recognize small instances (e.g., pedestrians, cyclists) very well due to the low-resolution voxelization and aggressive downsampling. To this end, we propose Sparse Point-Voxel Convolution (SPVConv), a lightweight 3D module that equips the vanilla Sparse Convolution with the high-resolution point-based branch. With negligible overhead, this point-based branch is able to preserve the fine details even from large outdoor scenes. To explore the spectrum of efficient 3D models, we first define a flexible architecture design space based on SPVConv, and we then present 3D Neural Architecture Search (3D-NAS) to search the optimal network architecture over this diverse design space efficiently and effectively. Experimental results validate that the resulting SPVNAS model is fast and accurate: it outperforms the state-of-the-art MinkowskiNet by 3.3%, ranking 1st on the competitive SemanticKITTI leaderboard. It also achieves 8x computation reduction and 3x measured speedup over MinkowskiNet with higher accuracy. Finally, we transfer our method to 3D object detection, and it achieves consistent improvements over the one-stage detection baseline on KITTI.

<div align=center>
<img src="https://user-images.githubusercontent.com/72679458/226509154-80c27d8e-c138-426a-b92e-72846997b5b3.png" width="800"/>
</div>
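
The abstract above describes SPVConv as a vanilla sparse convolution equipped with a high-resolution point-based branch. The snippet below is a minimal, plain-PyTorch sketch of that point-voxel idea: it replaces the sparse 3D convolutions with a per-voxel MLP and uses nearest-voxel devoxelization instead of trilinear interpolation, so it is illustrative only; `ToyPointVoxelBlock` and the other names are hypothetical and not part of the MMDetection3D or TorchSparse APIs.

```python
import torch
import torch.nn as nn


class ToyPointVoxelBlock(nn.Module):
    """Toy stand-in for an SPVConv-style block: a coarse voxel branch
    fused with a high-resolution per-point branch."""

    def __init__(self, in_channels: int, out_channels: int, voxel_size: float = 0.2):
        super().__init__()
        self.voxel_size = voxel_size
        # Voxel branch (stands in for the sparse 3D convolutions).
        self.voxel_mlp = nn.Sequential(nn.Linear(in_channels, out_channels), nn.ReLU())
        # Point branch: a per-point MLP that keeps full resolution.
        self.point_mlp = nn.Sequential(nn.Linear(in_channels, out_channels), nn.ReLU())

    def forward(self, points: torch.Tensor, feats: torch.Tensor) -> torch.Tensor:
        # points: (N, 3) coordinates, feats: (N, C) per-point features.
        coords = torch.floor(points / self.voxel_size).long()
        # Group points that fall into the same voxel.
        _, inverse = torch.unique(coords, dim=0, return_inverse=True)
        num_voxels = int(inverse.max()) + 1
        # Voxelize: average the features of all points inside each voxel.
        voxel_feats = feats.new_zeros(num_voxels, feats.size(1))
        counts = feats.new_zeros(num_voxels, 1)
        voxel_feats.index_add_(0, inverse, feats)
        counts.index_add_(0, inverse, feats.new_ones(feats.size(0), 1))
        voxel_feats = self.voxel_mlp(voxel_feats / counts.clamp(min=1))
        # Devoxelize: copy each voxel feature back to its points
        # (the paper uses trilinear interpolation here).
        voxel_to_point = voxel_feats[inverse]
        # The point branch preserves fine details lost by voxelization;
        # the two branches are fused by simple addition.
        return voxel_to_point + self.point_mlp(feats)


points = torch.rand(1024, 3) * 10.0  # toy LiDAR-like point cloud
feats = torch.rand(1024, 4)          # e.g. x, y, z, intensity
block = ToyPointVoxelBlock(in_channels=4, out_channels=16)
print(block(points, feats).shape)    # torch.Size([1024, 16])
```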

## Introduction

We implement SPVCNN with the TorchSparse backend and provide results and checkpoints on the SemanticKITTI dataset.

## Results and models

### SemanticKITTI

| Method | Lr schd | Mem (GB) | mIoU | Download |
| :--------: | :-----: | :------: | :--: | :-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: |
| SPVCNN-W16 | 15e | 3.9 | 61.9 | [model](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/spvcnn/spvcnn_w16_8xb2-15e_semantickitti_20230321_011645-a2734d85.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/spvcnn/spvcnn_w16_8xb2-15e_semantickitti_20230321_011645.log) |
| SPVCNN-W20 | 15e | 4.2 | 62.7 | [model](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/spvcnn/spvcnn_w20_8xb2-15e_semantickitti_20230321_011649-519e7eff.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/spvcnn/spvcnn_w20_8xb2-15e_semantickitti_20230321_011649.log) |
| SPVCNN-W32 | 15e | 5.4 | 64.3 | [model](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/spvcnn/spvcnn_w32_8xb2-15e_semantickitti_20230308_113324-f7c0c5b4.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v1.0.0_models/spvcnn/spvcnn_w32_8xb2-15e_semantickitti_20230308_113324.log) |

**Note:** We follow the implementation in the original SPVNAS [repo](https://github.com/mit-han-lab/spvnas); W16, W20 and W32 denote different numbers of channels.

**Note:** Because of the TorchSparse backend, model performance depends noticeably on the random seed; if no seed is specified, results may vary by roughly ±1.5 mIoU.
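
For a quick start with the checkpoints listed above, the following sketch shows how one released model could be loaded for inference. It assumes the standard MMDetection3D inference helper `mmdet3d.apis.init_model` and that it is run from the repository root; adjust the config and checkpoint to the variant you need.

```python
from mmdet3d.apis import init_model

config = 'configs/spvcnn/spvcnn_w32_8xb2-15e_semantickitti.py'
checkpoint = ('https://download.openmmlab.com/mmdetection3d/v1.0.0_models/spvcnn/'
              'spvcnn_w32_8xb2-15e_semantickitti_20230308_113324-f7c0c5b4.pth')

# Build the SPVCNN-W32 segmentor and load the released weights.
model = init_model(config, checkpoint, device='cuda:0')
print(type(model).__name__)
```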

## Citation

```latex
@inproceedings{tang2020searching,
title={Searching efficient 3d architectures with sparse point-voxel convolution},
author={Tang, Haotian and Liu, Zhijian and Zhao, Shengyu and Lin, Yujun and Lin, Ji and Wang, Hanrui and Han, Song},
booktitle={Computer Vision--ECCV 2020: 16th European Conference, Glasgow, UK, August 23--28, 2020, Proceedings, Part XXVIII},
pages={685--702},
year={2020},
organization={Springer}
}
```
configs/spvcnn/metafile.yml: 57 changes (57 additions, 0 deletions)
@@ -0,0 +1,57 @@
Collections:
- Name: SPVCNN
Metadata:
Training Techniques:
- AdamW
Architecture:
- SPVCNN
Paper:
URL: https://arxiv.org/abs/2007.16100
Title: 'Searching Efficient 3D Architectures with Sparse Point-Voxel Convolution'
README: configs/spvcnn/README.md
Code:
URL: https://github.com/open-mmlab/mmdetection3d/blob/1.1/mmdet3d/models/backbones/spvcnn_backone.py#L22
Version: v1.1.0rc4

Models:
- Name: spvcnn_w16_8xb2-15e_semantickitti
In Collection: SPVCNN
Config: configs/spvcnn/spvcnn_w16_8xb2-15e_semantickitti.py
Metadata:
Training Data: SemanticKITTI
Training Memory (GB): 3.9
Training Resources: 8x A100 GPUs
Results:
- Task: 3D Semantic Segmentation
Dataset: SemanticKITTI
Metrics:
mIOU: 61.7
Weights: https://download.openmmlab.com/mmdetection3d/v1.0.0_models/spvcnn/spvcnn_w16_8xb2-15e_semantickitti_20230321_011645-a2734d85.pth

- Name: spvcnn_w20_8xb2-15e_semantickitti
In Collection: SPVCNN
Config: configs/spvcnn/spvcnn_w20_8xb2-15e_semantickitti.py
Metadata:
Training Data: SemanticKITTI
Training Memory (GB): 4.2
Training Resources: 8x A100 GPUs
Results:
- Task: 3D Semantic Segmentation
Dataset: SemanticKITTI
Metrics:
mIOU: 62.9
Weights: https://download.openmmlab.com/mmdetection3d/v1.0.0_models/spvcnn/spvcnn_w20_8xb2-15e_semantickitti_20230321_011649-519e7eff.pth

- Name: spvcnn_w32_8xb2-15e_semantickitti
In Collection: SPVCNN
Config: configs/spvcnn/spvcnn_w32_8xb2-15e_semantickitti.py
Metadata:
Training Data: SemanticKITTI
Training Memory (GB): 5.4
Training Resources: 8x A100 GPUs
Results:
- Task: 3D Semantic Segmentation
Dataset: SemanticKITTI
Metrics:
mIOU: 64.3
Weights: https://download.openmmlab.com/mmdetection3d/v1.0.0_models/spvcnn/spvcnn_w32_8xb2-15e_semantickitti_20230308_113324-f7c0c5b4.pth
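
As a small sanity check of the metafile above, the script below enumerates the SPVCNN entries it registers. This is only a sketch: it assumes PyYAML is installed and that the script is run from the repository root.

```python
import yaml

# Parse the SPVCNN metafile and list the checkpoints it registers.
with open('configs/spvcnn/metafile.yml') as f:
    meta = yaml.safe_load(f)

for model in meta['Models']:
    metrics = model['Results'][0]['Metrics']
    print(f"{model['Name']}: mIoU {metrics['mIOU']} -> {model['Weights']}")
```
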
model-index.yml: 1 change (1 addition, 0 deletions)
@@ -25,3 +25,4 @@ Import:
- configs/votenet/metafile.yml
- configs/pv_rcnn/metafile.yml
- configs/fcaf3d/metafile.yml
- configs/spvcnn/metafile.yml