
Neural Radiance Fields Volumetric Rendering

An implementation of NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis (Mildenhall et al., 2020).

The authors of the paper propose a minimal and elegant way to learn a 3D scene from a sparse set of input images. Instead of a voxel grid, a neural network learns to model the volumetric scene directly, which lets it render novel views (images) of the 3D scene that the model was never shown at training time.

Source: Keras NeRF example (see Acknowledgements)

[nerf-volumetric demo render]
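
At its core the method trains a multilayer perceptron that maps an encoded sample point (position and viewing direction) to a color and a volume density, which are then composited along camera rays. Below is a minimal sketch of such a network, loosely following the simplified model in the Keras NeRF example; the exact architecture in nerf.py may differ, and num_pos_features is an assumed encoding size:

from tensorflow import keras

def build_nerf_mlp(num_pos_features=64, num_layers=8, units=256):
    # Maps a positionally encoded sample point to 4 values:
    # RGB color plus volume density sigma.
    inputs = keras.Input(shape=(num_pos_features,))
    x = inputs
    for i in range(num_layers):
        x = keras.layers.Dense(units, activation="relu")(x)
        if i == 4:
            # Skip connection back to the encoded input, as in the paper.
            x = keras.layers.concatenate([x, inputs])
    outputs = keras.layers.Dense(4)(x)
    return keras.Model(inputs, outputs)

model = build_nerf_mlp()
model.summary()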

Requirements

A CUDA-capable GPU is required for generating data. Install the Python dependencies with:

pip install -r requirements.txt
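
Since data generation relies on the GPU (see the Data section on COLMAP below), a quick hedged sanity check that an NVIDIA driver is visible, assuming nvidia-smi is on your PATH:

import shutil
import subprocess

# Confirm a CUDA-capable GPU driver is present before generating data.
if shutil.which("nvidia-smi") is None:
    raise SystemExit("nvidia-smi not found: a CUDA-capable GPU is required.")
print(subprocess.run(["nvidia-smi", "-L"], capture_output=True, text=True).stdout)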

Usage

Run NeRF

Inside nerf.py, adjust the settings as desired:

if __name__ == "__main__":
    Inference(
        epochs=200,        # number of training epochs
        _file='./fern',    # path to the scene folder or .npz file
        data_type='llff'   # 'llff' for COLMAP/LLFF data, 'npz' for pregenerated data
    )

Data

Generating Data

NeRF requires camera poses for the input images. To generate them, run python imgs2poses.py <your_folder>. The script uses COLMAP to run structure from motion, producing 6-DoF camera poses and near/far depth bounds for the scene. For installing COLMAP, see colmap.github.io/install.html. Make sure the target folder contains an images/ subfolder with all of your images.

Once COLMAP finishes, the script outputs poses_bounds.npy and a sparse/ folder containing the data NeRF needs.
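
For reference, a minimal sketch of how poses_bounds.npy is typically laid out under the LLFF convention that imgs2poses.py follows (one row per image: a flattened 3x5 camera matrix whose last column holds image height, width, and focal length, followed by the near/far depth bounds):

import numpy as np

# Output of imgs2poses.py (LLFF convention): shape (num_images, 17).
data = np.load("poses_bounds.npy")

poses = data[:, :15].reshape(-1, 3, 5)   # 3x4 pose plus a [height, width, focal] column
bounds = data[:, 15:]                    # near/far depth bounds per image

height, width, focal = poses[0, :, 4]
print("poses:", poses.shape, "bounds:", bounds.shape, "focal:", focal)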

Pregenerated Data

If you do not wish to use LLFF, pass data_type='npz' and _file='my_data.npz', where the .npz file contains the arrays images, poses, and focal.

See the pregenerated data for examples (synthetic data is not supported).
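
As a rough sketch of how such a file could be assembled (the key names images, poses, and focal come from the description above; the shapes shown follow the common tiny-NeRF layout and are assumptions to verify against nerf.py):

import numpy as np

# Hypothetical placeholder arrays; substitute your real data.
num_images, height, width = 100, 128, 128
images = np.zeros((num_images, height, width, 3), dtype=np.float32)  # RGB in [0, 1]
poses = np.zeros((num_images, 4, 4), dtype=np.float32)               # camera-to-world matrices
focal = np.float32(138.0)                                            # focal length in pixels

# Save under the key names named above: images, poses, focal.
np.savez("my_data.npz", images=images, poses=poses, focal=focal)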

Acknowledgements

https://keras.io/examples/vision/nerf/
https://arxiv.org/abs/2003.08934
https://github.com/bmild/nerf
https://github.com/3b1b/manim
https://www.mathworks.com/help/vision/ug/camera-calibration.html
https://github.com/colmap/colmap
https://github.com/fyusion/llff
