nr3d_lib

Modules, operators and utilities for 3D neural rendering in single-object, multi-object, categorical and large-scale scenes.

Pull requests and collaborations are warmly welcomed 🤗! Please follow our code style if you want to make any contribution.

Feel free to open an issue or contact Jianfei Guo at ffventus@gmail.com if you have any questions or proposals.

Installation

Requirements

  • python >= 3.8
  • PyTorch >= 1.10 and != 1.12
    • also works for PyTorch >= 2.0
  • CUDA (dev toolkit) >= 10.0
    • must match the major CUDA version that your PyTorch was built with (see the check below)
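
For example, you can check which CUDA version your installed PyTorch build targets directly from Python:

# Check the CUDA version your PyTorch build targets; the locally installed
# CUDA dev toolkit (nvcc) should share the same major version so that the
# CUDA extensions in this repo compile against a compatible runtime.
import torch
print(torch.__version__)          # e.g. 1.11.0
print(torch.version.cuda)         # e.g. 11.3  -> match this major version with `nvcc --version`
print(torch.cuda.is_available())  # True if the GPU runtime is usable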

An example of our platform (python=3.8, pytorch=1.11, cuda=11.3 / 11.7):

conda create -n nr3d python=3.8
conda activate nr3d
conda install pytorch==1.11.0 torchvision==0.12.0 torchaudio==0.11.0 cudatoolkit=11.3 -c pytorch
  • pytorch_scatter
conda install pytorch-scatter -c pyg
  • other pip packages
pip install opencv-python-headless kornia imagesize omegaconf addict \
  imageio imageio-ffmpeg scikit-image scikit-learn pyyaml pynvml psutil \
  seaborn==0.12.0 trimesh plyfile ninja icecream tqdm tensorboard \
  torchmetrics

One-liner install

cd into the nr3d_lib directory, then run (note the trailing dot .):

pip install -v .

📌 NOTE: For PyTorch >= 2.2, the C++17 standard is required; in this case, run:

USE_CPP17=1 pip install -v .

Optional functionalities
  • Visualization

    • pip install open3d vedo==2023.4.6 mayavi
  • tiny-cuda-nn backends

    • pip install git+https://github.com/NVlabs/tiny-cuda-nn/#subdirectory=bindings/torch or
    • pip install git+https://github.com/PJLab-ADG/NeuS2_TCNN/#subdirectory=bindings/torch which supports double backward of fused_mlp (still buggy)
  • GUI support (Experimental)

    • # opengl
      pip install pyopengl
      
      # imgui
      pip install imgui
      
      # glumpy
      pip install git+https://github.com/glumpy/glumpy.git@46a7635c08d3a200478397edbe0371a6c59cd9d7#egg=glumpy
      
      # pycuda
      git clone https://github.com/inducer/pycuda
      cd pycuda
      ./configure.py --cuda-root=/usr/local/cuda --cuda-enable-gl
      python setup.py install

Main components

  • LoTD: Levels of Tensorial Decomposition
  • pack_ops: Pack-wise operations for packed tensors
  • occ_grids: Occupancy-grid-accelerated ray marching
  • attributes: Unified API framework for scene node attributes
  • fields: Implicit representations

📌 [LoTD]: Levels of Tensorial Decomposition

  • Code: models/grids/lotd
  • Supported scenes:
    • Single scene
    • Batched / categorical scene
    • Large-scale scene
  • Main features
    • Different levels can use different LoTD types
    • Different levels can use different feature widths (n_feats)
    • All types support cuboid (non-cubic) resolutions
    • All types support the forward pass, first-order gradients and second-order gradients
    • All types support batched encoding: inference with batched inputs or batch_inds
    • All types support large-scale scene representation
  • Supported LoTD types and the calculations each supports: forward, gradients (dLd[]) and second-order gradients (d(dLdx)d[]). 🚀 All implemented as a PyTorch-CUDA extension.

| Type | Description | dimension | forward | dL/dparam | dL/dx | d(dLdx)/d(param) | d(dLdx)/d(dLdy) | d(dLdx)/dx |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Dense |  | 2-4 | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| Hash | hash grids in NGP | 2-4 | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| VectorMatrix or VM | Vector-Matrix in TensoRF | 3 | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| VecZMatXoY | modified from TensoRF, using only the xoy matrix and a z vector | 3 | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| CP | CP in TensoRF | 2-4 | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| NPlaneSum | "TriPlane" in EG3D | 3-4 | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| NPlaneMul |  | 3-4 | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
  • A demo config (YAML) with all-cubic resolutions:
lod_res:     [32,    64,    128, 256, 512, 1024, 2048, 4096]
lod_n_feats: [4,     4,     8,   4,   2,   16,    8,    4]
lod_types:   [Dense, Dense, VM,  VM,  VM,  CP,   CP,   CP]
  • A demo config (YAML) with all-cuboid resolutions (usually auto-computed in practice); a conceptual lookup sketch follows the configs:
lod_res:  [[144, 56, 18], [199, 77, 25], [275, 107, 34], [380, 148, 47], [525, 204, 65], [726, 282, 91], [1004, 390, 126], [1387, 539, 174]]
lod_n_feats: [4, 4, 4, 4, 2, 2, 2, 2]
lod_types: [Dense, Dense, Hash, Hash, Hash, Hash, Hash, Hash]
log2_hashmap_size: 19
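
To make the "levels" idea concrete, here is a minimal, pure-PyTorch sketch of a multi-level feature lookup using only Dense (tri-linearly interpolated) grids. It is a conceptual illustration under simplified assumptions, not the library's API: the actual LoTD encodings are fused PyTorch-CUDA kernels supporting all of the types, gradients and batched / large-scale modes listed above.

# Conceptual sketch only (Dense levels, plain grid_sample); NOT the nr3d_lib API.
import torch
import torch.nn.functional as F

lod_res     = [32, 64, 128]   # per-level (cubic) resolutions
lod_n_feats = [4, 4, 2]       # per-level feature widths

# One dense feature volume per level: [1, C, D, H, W]
grids = [torch.randn(1, c, r, r, r) for r, c in zip(lod_res, lod_n_feats)]

def multi_level_dense_query(x: torch.Tensor) -> torch.Tensor:
    """x: [N, 3] coordinates in [-1, 1]; returns [N, sum(lod_n_feats)] features."""
    coords = x.view(1, 1, 1, -1, 3)   # grid_sample expects [1, D_out, H_out, W_out, 3]
    feats = []
    for g in grids:
        # Tri-linear interpolation at every query point: [1, C, 1, 1, N]
        f = F.grid_sample(g, coords, mode='bilinear', align_corners=True)
        feats.append(f.view(g.shape[1], -1).t())  # -> [N, C]
    return torch.cat(feats, dim=-1)   # concatenate features across levels

x = torch.rand(4096, 3) * 2 - 1       # random query points
h = multi_level_dense_query(x)        # [4096, 10]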

📌 [pack_ops]: Pack-wise operations for packed tensors

Check out docs/pack_ops.md for more!

Code: render/pack_ops

(Figure: pack_ops overview)
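
In short, a "packed" tensor stores a variable number of per-ray samples flattened into one tensor, together with indices marking which ray (pack) each sample belongs to; pack-wise operators then reduce or scan within each pack. The sketch below is a pure-PyTorch illustration of the layout and of a pack-wise sum (the helper names are ours, for illustration only); the real pack_ops are fused CUDA kernels, see docs/pack_ops.md.

# Illustration of the packed layout and a pack-wise reduction; not the pack_ops API.
import torch

n_rays = 4
n_per_ray = torch.tensor([3, 0, 5, 2])           # samples per ray; packs may be empty
feats = torch.randn(int(n_per_ray.sum()), 16)    # packed per-sample features: [10, 16]

# Which pack (ray) each packed sample belongs to: [0, 0, 0, 2, 2, 2, 2, 2, 3, 3]
pack_inds = torch.repeat_interleave(torch.arange(n_rays), n_per_ray)

# Pack-wise sum: one output row per ray, regardless of how many samples it has
per_ray_sum = torch.zeros(n_rays, feats.shape[1]).index_add_(0, pack_inds, feats)  # [4, 16]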

📌 [occ_grids]: Occupancy-grid-accelerated ray marching

Code: render/raymarch/occgrid_raymarch.py

This part is primarily borrowed and modified from nerfacc.

  • Support single scene
  • Support batched / categorical scene
  • Support large-scale scene
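
The idea, sketched below in plain PyTorch purely for illustration (the actual occgrid_raymarch kernels generate packed marching samples directly on the GPU): a coarse binary occupancy grid is maintained during training, and ray-marching samples that fall into cells marked empty are discarded before any network query.

# Conceptual sketch of occupancy-grid culling; not the occgrid_raymarch API.
import torch

res = 64
occ_grid = torch.zeros(res, res, res, dtype=torch.bool)
occ_grid[24:40, 24:40, 24:40] = True              # pretend only a central block is occupied

def filter_empty(x: torch.Tensor) -> torch.Tensor:
    """x: [N, 3] sample points in [-1, 1]; keep only those in occupied cells."""
    idx = ((x * 0.5 + 0.5) * res).long().clamp_(0, res - 1)   # voxel index of each sample
    keep = occ_grid[idx[:, 0], idx[:, 1], idx[:, 2]]
    return x[keep]

samples = torch.rand(100_000, 3) * 2 - 1          # coarse uniform samples along rays
kept = filter_empty(samples)                      # only ~1.6% survive and reach the network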

Highlight implementations

📌 [attributes]: Unified API framework for scene node attributes

Code: models/attributes

We extend torch.Tensor to represent common types of data involved in 3D neural rendering, e.g. transforms (SO3, SE3) and camera models (pinhole, OpenCV, fisheye), so that users need not worry about tensor shapes, representation variants or gradients, and only interact with common APIs regardless of the underlying implementation.

(Figures: attr_transform, attr_camera)

These data types may have multiple variants, but all are used the same way. For example, SE(3) can be represented by an [R|t] pair, a 4x4 matrix, or exponential coordinates, not to mention the different representations of the underlying SO(3) (quaternions, axis-angle, Euler angles, ...) when using [R|t]. The APIs are nonetheless identical, e.g. transform(), rotate(), mat_3x4(), mat_4x4(), inv(), a default transform, etc. In addition, the data may carry batch prefixes, e.g. shapes [4,4], [B,4,4] or [N,B,4,4]. Once a type is implemented within this framework, you only need to care about its APIs and can forget the underlying calculations and tensor-shape rearrangements.

You can check out models/attributes/transform.py for better understanding. Another example is models/attributes/camera_param.py.

Most of the basic torch.Tensor operations are implemented for Attr and AttrNested, e.g. slicing (arbitrary slices with : and ...), indexing, .to(), .clone(), .stack(), .concat(). Gradient flow and registration as nn.Parameter / buffer are also preserved / supported when needed.
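
A hypothetical, heavily simplified sketch of the idea (the class and method names here are invented for illustration; see models/attributes/transform.py for the real implementations): the caller only ever sees mat_4x4() / transform(), no matter how the rotation is stored internally or how many batch dimensions prefix the data.

# Hypothetical mini example of a "unified API over multiple representations";
# not the actual Attr classes of nr3d_lib.
import torch

class RotationQuaternion:
    """SO(3) stored as (w, x, y, z) quaternions of shape [..., 4]."""
    def __init__(self, quat: torch.Tensor):
        self.quat = quat / quat.norm(dim=-1, keepdim=True)

    def mat_3x3(self) -> torch.Tensor:
        w, x, y, z = self.quat.unbind(-1)
        rows = torch.stack([
            1 - 2 * (y * y + z * z), 2 * (x * y - w * z),     2 * (x * z + w * y),
            2 * (x * y + w * z),     1 - 2 * (x * x + z * z), 2 * (y * z - w * x),
            2 * (x * z - w * y),     2 * (y * z + w * x),     1 - 2 * (x * x + y * y),
        ], dim=-1)
        return rows.reshape(*rows.shape[:-1], 3, 3)

class TransformRT:
    """SE(3) as (rotation, translation); the batch prefix [...] can be [], [B], [N, B], ..."""
    def __init__(self, rot: RotationQuaternion, trans: torch.Tensor):
        self.rot, self.trans = rot, trans

    def mat_4x4(self) -> torch.Tensor:
        R = self.rot.mat_3x3()                                # [..., 3, 3]
        m = torch.zeros(*R.shape[:-2], 4, 4, dtype=R.dtype)
        m[..., :3, :3] = R
        m[..., :3, 3] = self.trans
        m[..., 3, 3] = 1.0
        return m

    def transform(self, x: torch.Tensor) -> torch.Tensor:    # x: [..., N, 3]
        return x @ self.rot.mat_3x3().transpose(-1, -2) + self.trans.unsqueeze(-2)

# The same two calls work for any batch prefix, here [B] = [7]:
pose = TransformRT(RotationQuaternion(torch.randn(7, 4)), torch.randn(7, 3))
pts = torch.randn(7, 100, 3)
print(pose.mat_4x4().shape, pose.transform(pts).shape)       # [7, 4, 4] [7, 100, 3]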

📌 [fields]: Implicit representations

fields: single scene

Code: models/fields

fields_conditional: conditional / categorical / generative fields

Code: models/fields_conditional

fields_forest: large-scale multi-continuous-block fields

Code: models/fields_forest

Other highlights

  • plot: 2D & 3D plotting tools for developers
  • models/importance.py: error-map update & 2D importance sampling (inverse 2D CDF sampling); modified from NGP and re-implemented in PyTorch. A conceptual sketch follows this list.
    • An example tensorboard errormap entry in the middle of training (from top to bottom: GT image, predicted image, accumulated errormap):
      (Figure: errormap)
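
For reference, inverse-CDF sampling in 2D can be sketched in a few lines of plain PyTorch: rows are drawn from the row-marginal CDF of the error map, then columns from the conditional CDF within each sampled row. This is only an illustration of the technique; see models/importance.py for the maintained implementation.

# Minimal sketch of 2D importance sampling via inverse CDFs; illustration only.
import torch

def sample_pixels_from_errormap(errmap: torch.Tensor, n: int) -> torch.Tensor:
    """errmap: [H, W] non-negative errors; returns [n, 2] integer (row, col) samples."""
    H, W = errmap.shape
    pdf = errmap.clamp_min(1e-8)
    # 1) draw rows from the marginal CDF over rows
    row_cdf = pdf.sum(dim=1).cumsum(0)
    rows = torch.searchsorted(row_cdf, torch.rand(n) * row_cdf[-1]).clamp_(max=H - 1)
    # 2) draw columns from the conditional CDF within each sampled row
    col_cdf = pdf.cumsum(dim=1)
    u = torch.rand(n) * col_cdf[rows, -1]
    cols = torch.searchsorted(col_cdf[rows], u.unsqueeze(-1)).squeeze(-1).clamp_(max=W - 1)
    return torch.stack([rows, cols], dim=-1)

errmap = torch.rand(120, 160) ** 4                    # a fake, peaky error map
pixels = sample_pixels_from_errormap(errmap, 4096)    # [4096, 2] (row, col) indices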

TODO

  • Release batched ray marching
  • Release LoTD-Growers and Style-LoTD-NeuS
  • Release large-scale representation, large-scale ray marching and large-scale NeuS
  • Implement dmtet
  • Implement permuto-SDF
  • Basic examples & tutorials
    • How to use single / batched / large-scale LoTD
    • Example on batched ray marching & batched LoTD inference
    • Example on efficient multi-stage hierarchical sampling based on occupancy grids

Acknowledgements

Citation

If you find this library useful, please cite our paper, which introduces pack_ops, cuboid hash grids and efficient NeuS rendering.

@article{guo2023streetsurf,
  title = {StreetSurf: Extending Multi-view Implicit Surface Reconstruction to Street Views},
  author = {Guo, Jianfei and Deng, Nianchen and Li, Xinyang and Bai, Yeqi and Shi, Botian and Wang, Chiyu and Ding, Chenjing and Wang, Dongliang and Li, Yikang},
  journal = {arXiv preprint arXiv:2306.04988},
  year = {2023}
}
