neuralsim

3D surface reconstruction and simulation based on 3D neural rendering.

This repository primarily addresses two topics:

  • Efficient and detailed reconstruction of implicit surfaces across different scenarios.
  • Multi-object implicit surface reconstruction, manipulation, and multi-modal sensor simulation.
    • With particular focus on autonomous driving datasets.


Implicit surface is all you need!

Single-object / multi-object / indoor / outdoor / large-scale surface reconstruction and multi-modal sensor simulation

πŸš€ Object surface reconstruction in minutes !
Input: posed images without mask
Get started: neus_in_10_minutes
Credits: Jianfei Guo
teaser_training_bmvs_gundam
πŸš€ Outdoor surface reconstruction in minutes !
Input: posed images without mask
Get started: neus_in_10_minutes
Credits: Jianfei Guo
teaser_training_bmvs_village_house
πŸš€ Indoor surface reconstruction in minutes !
Input: posed images, monocular cues
Get started: neus_in_10_minutes#indoor
Credits: Jianfei Guo
πŸš— Categorical surface reconstruction in the wild !
Input: multi-instance multi-view categorical images
[To be released 2023.09]
Credits: Qiusheng Huang, Jianfei Guo, Xinyang Li
πŸ›£οΈ Street-view surface reconstruction in 2 hours !
Input: posed images, monocular cues (and optional LiDAR)
Get started: streetsurf
Credits: Jianfei Guo, Nianchen Deng
teaser_seg100613_0.5.mp4
(Refresh if video won't play)
πŸ›£οΈ Street-view multi-modal sensor simulation !
Using reconstructed asset-bank
Get started: streetsurf#lidarsim
Credits: Jianfei Guo, Xinyu Cai, Nianchen Deng
teaser_seg405841_lidar_0.5.mp4
(Refresh if video won't play)
πŸ›£οΈ Street-view multi-object surfaces reconstruction in hours !
Input: posed images, LiDAR, 3D tracklets
Get started:
Credits: Jianfei Guo, Nianchen Deng
teaser_seg767010_mix_4x3_0.5.mp4
(Refresh if video won't play)
πŸ›£οΈ Street-view multi-dynamic-object surfaces reconstruction in hours !
πŸš€ Support dynamic pedestrians, cyclists, etc.
Credits: Jianfei Guo
teaser_demo_965_render.mp4
(Refresh if video won't play)
πŸ›£οΈ Street-view scenario editing !
Using reconstructed asset-bank
Credits: Jianfei Guo, Nianchen Deng
teaser_seg767010_manipulate.mp4
(Refresh if video won't play)
πŸ›£οΈ Street-view light editing ... (WIP)

Highlighted implementations

Methods πŸš€ Get started Official / Un-official Notes, major difference from paper, etc.
StreetSurf readme Official - LiDAR loss improved
NeuralSim readme Official - support foreground categories: vehicles, pedestrians, cyclists, etc.
- support arbitrary unannotated dynamic objects
- support decomposition of camera lens dirt, lens flares, etc.
NeuS in minutes readme Un-official - support object-centric datasets as well as indoor datasets
- fast and stable convergence without needing mask
- support using NGP / LoTD or MLPs as fg&bg representations
- large pixel batch size (4096) & pixel error maps
NGP with LiDAR readme Un-official - using Urban-NeRF's LiDAR loss

Updates

  • 2024-02-12 [v0.6.0]
    • Major overhaul of the temporal logic & support for a timestamp-interpolation mode
    • Support using EmerNeRF for non-annotated objects.
  • 2023-09-23 [v0.5.2]
    • Support generative/shared permutohedral lattice for batched/dynamic/batched-dynamic objects. 🔥 Significantly reduces the memory and time needed to train a multi-object scene: one street with dozens of vehicles and dozens of pedestrians can be trained in 3 hours on a single RTX 3090!
  • 2023-08-22 [v0.5.1] Finished open-sourcing StreetSurf
  • 2023-01-10 πŸš€ Release first public post for the neuralsim system (Chinese ver.).
  • 2022-11-17 [v0.4.2] Completely refactored to packed-info-based volume buffers, for both single-object training (later named StreetSurf) and multi-object training
  • 2022-08-08 [v0.3.0] Completely refactored to scene-graph management with frustum culling; major overhaul of runtime performance
  • 2022-07-04 [v0.2.0]
    • Major overhaul of street-view training (data loading, LoTD representation, sky/mask handling, sparsity & regularization)
    • Major overhaul of multi-object rendering (batched query & BufferComposeRenderer)
  • 2022-02-08 [v0.1.0] Support single-object training (NeuS/NeRF) and multi-object training (fg=DIT-NeuS/Template-NeRF, bg=NeRF/NeuS/ACORN-NeuS)
  • 2021-10-18 first commit

Highlights

πŸ› οΈ Multi-object volume rendering

Code: app/renderers/buffer_compose_renderer.py

> Scene graph structure

Code: app/resources/scenes.py app/resources/nodes.py

To streamline the organization of assets and transformations, we adopt the concept of generic scene graphs used in modern graphics engines like Magnum.

Any entity that possesses a pose or position is considered a node. Certain nodes are equipped with special functionalities, such as camera operations or drawable models (i.e. renderable assets in AssetBank).
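
To make this concrete, here is a minimal, hypothetical sketch of such a scene graph. The names are illustrative only; the actual implementation in app/resources/nodes.py is more featureful:

```python
# Minimal sketch of the scene-graph idea (hypothetical API; see
# app/resources/nodes.py for the real implementation).
from dataclasses import dataclass, field
from typing import List, Optional
import numpy as np

@dataclass
class SceneNode:
    name: str
    transform: np.ndarray = field(default_factory=lambda: np.eye(4))  # local pose
    parent: Optional["SceneNode"] = None
    children: List["SceneNode"] = field(default_factory=list)
    model: Optional[object] = None  # a drawable asset from the AssetBank, if any

    def add(self, child: "SceneNode") -> "SceneNode":
        child.parent = self
        self.children.append(child)
        return child

    def world_transform(self) -> np.ndarray:
        # Compose local poses up the tree to get the world pose.
        if self.parent is None:
            return self.transform
        return self.parent.world_transform() @ self.transform

# Usage: a street scene with an ego vehicle carrying a camera.
scene = SceneNode("scene_root")
ego = scene.add(SceneNode("ego_car"))
cam = ego.add(SceneNode("front_camera"))
print(cam.world_transform().shape)  # (4, 4)
```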

[Figure: scene-graph structure]

[Figures: real-data scene graph (vis_scene_graph); real-data frustum culling (vis_frustum_culling)]

> Efficient and universal

We provide a universal implementation of multi-object volume rendering that supports any method built on volume rendering, as long as the model can be queried with rays and outputs opacity_alpha, depth samples t, and optional fields such as rgb, nablas, features, etc.

This renderer is efficient mainly due to:

  • Frustum culling
  • Occupancy-grid-based single / batched ray marching and pack merging implemented with pack_ops
  • (optional) Batched / indexed inference of LoTD

The figure below depicts the idea of the whole rendering process.

We ray-march every model first, then sort the samples from the different models along each ray to jointly volume-render multiple objects.

[Figure: multi-object volume rendering pipeline]
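
The per-ray merge can be pictured with the following hedged sketch (illustrative only; the actual renderer in app/renderers/buffer_compose_renderer.py works on packed buffers via pack_ops rather than per-ray tensors):

```python
# Illustrative sketch of jointly volume rendering multiple models on one ray.
import torch

def render_ray(per_model_samples):
    """Each element: dict with 't', 'alpha', 'rgb' tensors from one model."""
    # 1. Concatenate samples from all models and sort them by depth t.
    t     = torch.cat([s["t"]     for s in per_model_samples])
    alpha = torch.cat([s["alpha"] for s in per_model_samples])
    rgb   = torch.cat([s["rgb"]   for s in per_model_samples])
    order = torch.argsort(t)
    t, alpha, rgb = t[order], alpha[order], rgb[order]
    # 2. Standard front-to-back alpha compositing over the merged samples.
    transmittance = torch.cumprod(
        torch.cat([alpha.new_ones(1), 1.0 - alpha[:-1]]), dim=0)
    weights = transmittance * alpha
    color = (weights[:, None] * rgb).sum(dim=0)
    depth = (weights * t).sum(dim=0)
    return color, depth

# Two models (e.g. a car and the street background) contributing to one ray.
car    = {"t": torch.tensor([2.0, 2.1]), "alpha": torch.tensor([0.6, 0.3]),
          "rgb": torch.rand(2, 3)}
street = {"t": torch.tensor([8.0]), "alpha": torch.tensor([0.9]),
          "rgb": torch.rand(1, 3)}
color, depth = render_ray([car, street])
```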

> Support dynamic (non-rigid) categories and allow un-annotated dynamics

We also support efficient neural surface reconstruction for pedestrians, cyclists and other dynamic / non-rigid categories.

  • Representation:
    • For static background, we use StreetSurf / Block-StreetSurf (WIP).
    • For categorical foreground objects (vehicles, pedestrians, cyclists):
      • For all categorical foreground objects, we use a shared NeuS-based representation with permutohedral-lattice-based hash encodings, which are much faster and friendlier to higher-dimensional inputs. See [generative_permuto_neus.py].
      • (Rigid) Vehicles: We use shared NeuS models with position (3D) + instance latent (1~4D) as input
      • (Non-rigid / dynamic) Pedestrians and Cyclists: We use position (3D) + temporal embedding (1D) + instance latent (1~4D) as input.
    • For un-annotated objects (a dog walking by, waving flags, plastic bags, ...), we use EmerNeRF (only its dynamic part). See [emernerf.py].
    • For camera effects (lens flares, dirt or rain drop on lens), we use a separate layer of learned offset image for per-cam per-frame images.
  • Raymarching:
    • In addition to the multi-stage occupancy-grid-based ray-marching strategy of the background object (StreetSurf), we also accumulate multi-instance (and multi-frame) occupancy grids for foreground objects to accelerate their ray marching.
    • See [here]: we have implemented multi-instance (batched) / multi-time (dynamic) / batched & dynamic occupancy-grid marching.
  • Framework
    • See [scene.py]: we have implemented two ways of freezing a scene graph at a specific time (or multiple batched times). You can either use the frame-indexing mode, in which the scene graph is frozen at one or multiple frame indices, or the timestamp-interpolation mode, in which it is frozen at one or multiple timestamps. Both modes support feeding per-ray timestamps to the network; a minimal sketch follows the figures below.
[Figures: multi-instance occupancy grids accumulated in training (occ_grid_batched); multi-timestamp occupancy grids accumulated in training (occ_grid_dynamic); multi-instance & multi-frame occupancy grids accumulated in training (occ_grid_batched_dynamic), with the x-axis spanning different pedestrian instances and the y-axis spanning different timestamps for one pedestrian]
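
As referenced above, here is a rough, hypothetical sketch of the two freezing modes; the real logic lives in app/resources/scenes.py and handles full node hierarchies rather than a single pose track:

```python
# Hypothetical sketch: freezing a scene at frame indices vs. at timestamps.
# Illustrative only; assumes one node with per-keyframe poses.
import torch

class FrozenPoseTrack:
    def __init__(self, keyframe_ts: torch.Tensor, poses: torch.Tensor):
        self.keyframe_ts = keyframe_ts  # (n_frames,) sorted keyframe timestamps
        self.poses = poses              # (n_frames, 4, 4) per-keyframe node poses

    def at_frame(self, i: int) -> torch.Tensor:
        # Frame-indexing mode: snap to the pose recorded at keyframe i.
        return self.poses[i]

    def at_time(self, ts: torch.Tensor) -> torch.Tensor:
        # Timestamp-interpolation mode: blend the two neighboring keyframes.
        # (Per-ray timestamps batch naturally through this path.)
        i = torch.searchsorted(self.keyframe_ts, ts).clamp(1, len(self.keyframe_ts) - 1)
        t0, t1 = self.keyframe_ts[i - 1], self.keyframe_ts[i]
        w = ((ts - t0) / (t1 - t0)).reshape(-1, 1, 1)
        # NOTE: naive lerp of 4x4 matrices for brevity; a real implementation
        # would interpolate rotations properly (e.g. slerp on quaternions).
        return (1 - w) * self.poses[i - 1] + w * self.poses[i]

track = FrozenPoseTrack(torch.linspace(0, 1, 10), torch.eye(4).repeat(10, 1, 1))
pose_per_ray = track.at_time(torch.tensor([0.12, 0.57]))  # (2, 4, 4)
```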

Robust reconstruction in the wild

> Pose estimation for ego motion and other objects

Accomplished by the Attribute implementation in [nr3d_lib/attributes]

> Camera effect disentanglement

[Figures: lens flare (lens_flare_9385013); lens dirt / raindrop (lens_dirt_1009661)]
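
Conceptually (a hypothetical sketch, not the shipped code), the disentanglement adds a learned per-camera, per-frame offset image on top of the scene rendering, as described in the representation list above:

```python
# Hypothetical sketch: camera effects as a learned per-cam, per-frame offset image.
import torch
import torch.nn as nn

class LensEffectLayer(nn.Module):
    def __init__(self, n_cams: int, n_frames: int, h: int, w: int):
        super().__init__()
        # One learnable RGB offset image per (camera, frame) pair.
        self.offset = nn.Parameter(torch.zeros(n_cams, n_frames, h, w, 3))

    def forward(self, rendered: torch.Tensor, cam: int, frame: int) -> torch.Tensor:
        # The scene model explains geometry/appearance; the offset image absorbs
        # lens flares, dirt, raindrops, etc., so they don't corrupt the surface.
        return (rendered + self.offset[cam, frame]).clamp(0.0, 1.0)
```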

🏦 Editable assetbank

Code: code_multi/tools/manipulate.py (WIP)

Given that different objects are represented by unique networks (for categorical or shared models, they have unique latents or embeddings), it's possible to explicitly add, remove or modify the reconstructed assets in a scene.
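
As a toy, hypothetical sketch of what such an edit can look like (illustrative names only; the real toolkit is code_multi/tools/manipulate.py):

```python
# Hypothetical sketch of asset-bank edits on a reconstructed scene.
from types import SimpleNamespace
import numpy as np

def swap_style(asset_bank, src_id: str, dst_id: str):
    # Shared categorical models tell instances apart by latent codes, so
    # copying one vehicle's latent onto another swaps its appearance/shape.
    asset_bank[dst_id].instance_latent = asset_bank[src_id].instance_latent

def teleport(node, dxyz):
    # Scene-graph nodes carry poses, so moving an object is a transform edit.
    node.transform[:3, 3] += np.asarray(dxyz)

# Toy usage with stand-in objects.
bank = {"car_0": SimpleNamespace(instance_latent=np.zeros(4)),
        "car_1": SimpleNamespace(instance_latent=np.ones(4))}
swap_style(bank, "car_0", "car_1")
node = SimpleNamespace(transform=np.eye(4))
teleport(node, (2.0, 0.0, 0.0))
```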

We offer a toolkit for performing such scene manipulations. Some of the intriguing edits are showcased below.

πŸ’ƒ Let them dance ! πŸ”€ Multi-verse 🎨 Change their style !
teaser_seg767010_manipulate.mp4
(Refresh if video won't play)
teaser_seg767010_multiverse_1.mp4
(Refresh if video won't play)
teaser_seg767010_style_4x3_2.mp4
(Refresh if video won't play)
Credits to Qiusheng Huang and Xinyang Li.

Please note, this toolkit is currently in its early development stages and only basic edits have been released. Stay tuned for updates, and contributions are always welcome :)

πŸ“· Multi-modal sensor simulation

> LiDARs

Code: app/resources/observers/lidars.py

Get started:

Thanks to the work of Xinyu Cai's team, we now support simulation of various real-world LiDAR models.

The volume rendering process is guided by our reconstructed implicit surface geometry, which guarantees accurate depths. More details are in section 5.1 of our StreetSurf paper.
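
In essence (a hedged sketch under simplified assumptions, not the repo's actual code), a LiDAR return is the expected depth volume-rendered along the beam from the learned SDF:

```python
# Hypothetical sketch: render LiDAR depth along a beam from an SDF-based model.
import torch

def neus_alpha_from_sdf(sdf: torch.Tensor, inv_s: float = 64.0) -> torch.Tensor:
    # NeuS-style opacity from consecutive SDF samples along the ray.
    cdf = torch.sigmoid(sdf * inv_s)
    return ((cdf[:-1] - cdf[1:]) / cdf[:-1].clamp_min(1e-6)).clamp(0.0, 1.0)

def lidar_depth(sdf_fn, origin, direction, t: torch.Tensor) -> torch.Tensor:
    # Sample the SDF along the beam and volume-render the expected depth.
    pts = origin + t[:, None] * direction          # (n, 3) points along the beam
    sdf = sdf_fn(pts)                              # (n,) signed distances
    alpha = neus_alpha_from_sdf(sdf)               # (n-1,) per-interval opacity
    trans = torch.cumprod(torch.cat([alpha.new_ones(1), 1 - alpha[:-1]]), dim=0)
    weights = trans * alpha
    return (weights * t[:-1]).sum() / weights.sum().clamp_min(1e-6)

# Usage with a toy SDF: a plane at z = 5 hit by a beam along +z.
sdf_fn = lambda p: 5.0 - p[:, 2]
t = torch.linspace(0.1, 10.0, 256)
depth = lidar_depth(sdf_fn, torch.zeros(3), torch.tensor([0.0, 0.0, 1.0]), t)
print(float(depth))  # ~5.0
```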

> Cameras

Code: app/resources/observers/cameras.py

We now support pinhole cameras, standard OpenCV camera models with distortion, and an experimental fisheye camera model.
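
For illustration, a hypothetical sketch of pinhole ray generation, independent of the actual cameras.py API (an OpenCV-style model would additionally apply its distortion polynomial to the normalized coordinates):

```python
# Hypothetical sketch: pinhole ray generation from intrinsics K and pose c2w.
import torch

def pinhole_rays(K: torch.Tensor, c2w: torch.Tensor, h: int, w: int):
    v, u = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    # Back-project pixel centers through the intrinsics to camera-space dirs.
    dirs = torch.stack([(u + 0.5 - K[0, 2]) / K[0, 0],
                        (v + 0.5 - K[1, 2]) / K[1, 1],
                        torch.ones_like(u, dtype=torch.float32)], dim=-1)
    # Rotate to world space; the ray origin is the camera center.
    rays_d = dirs @ c2w[:3, :3].T
    rays_o = c2w[:3, 3].expand_as(rays_d)
    return rays_o, rays_d

K = torch.tensor([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
c2w = torch.eye(4)
rays_o, rays_d = pinhole_rays(K, c2w, 480, 640)  # (480, 640, 3) each
```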

Usage

Installation

First, clone with submodules:

git clone https://github.com/PJLab-ADG/neuralsim --recurse-submodules -j8

Then, cd into nr3d_lib and refer to nr3d_lib/README.md for the following steps.

code_single: Single scene

  • Object-centric scenarios (indoor / outdoor, with / without mask)
  • Street-view or autonomous driving scenarios

Please refer to code_single/README.md

code_multi: Multi-object scene

  • Different categories of foreground objects & background objects, joint rendering and decomposed reconstruction
  • Generic unsupervised dynamic / static decomposition

Please refer to code_multi/README.md

Roadmap & TODOs

  • Unofficial implementation of UniSim
  • Release our methods on multi-object reconstruction for autonomous driving
  • Release our methods on large-scale representation and NeuS
  • Factorization of ambient light and object textures
  • Dataloaders for more autonomous driving datasets (KITTI, NuScenes, Waymo v2.0, ZOD, PandaSet)

Pull requests and collaborations are warmly welcomed πŸ€—! Please follow our code style if you want to make any contribution.

Feel free to open an issue or contact Jianfei Guo (ffventus@gmail.com) or Nianchen Deng (dengnianchen@pjlab.org.cn) if you have any questions or proposals.

Acknowledgements & citations

  • nr3d_lib Containing most of our basic modules and operators
  • LiDARSimLib LiDAR models
  • StreetSurf Our recent paper studying street-view implicit surface reconstruction
@article{guo2023streetsurf,
  title = {StreetSurf: Extending Multi-view Implicit Surface Reconstruction to Street Views},
  author = {Guo, Jianfei and Deng, Nianchen and Li, Xinyang and Bai, Yeqi and Shi, Botian and Wang, Chiyu and Ding, Chenjing and Wang, Dongliang and Li, Yikang},
  journal = {arXiv preprint arXiv:2306.04988},
  year = {2023}
}
  • [WIP] Our paper on multi-object reconstruction & re-simulation
  • NeuS Most of our methods are derived from NeuS
@inproceedings{wang2021neus,
  title = {NeuS: Learning Neural Implicit Surfaces by Volume Rendering for Multi-view Reconstruction},
  author = {Wang, Peng and Liu, Lingjie and Liu, Yuan and Theobalt, Christian and Komura, Taku and Wang, Wenping},
  booktitle = {Proc. Advances in Neural Information Processing Systems (NeurIPS)},
  volume = {34},
  pages = {27171--27183},
  year = {2021}
}
