NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis
The authors of the paper propose a minimal and elegant way to represent a 3D scene from a small set of input images. Instead of discretizing the scene into voxels, a fully connected network learns a continuous volumetric scene function, which can then be rendered to synthesize novel views (images) of the scene that the model was never shown at training time.
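At the heart of the method is classical volume rendering: for each camera ray, the network is queried at sampled 3D points and predicts a color and a density, and those predictions are composited into a single pixel color. The snippet below is a minimal NumPy sketch of that compositing step, included only to illustrate the idea; the function name and array shapes are assumptions, and it is not this repository's implementation.

```python
import numpy as np

def composite_ray(rgb, sigma, t_vals):
    """Composite per-sample colors along one ray (NeRF-style volume rendering).

    rgb:    (N, 3) colors predicted at N sample points along the ray
    sigma:  (N,)   volume densities predicted at those points
    t_vals: (N,)   distances of the sample points from the camera
    """
    # spacing between consecutive samples; the last interval is effectively infinite
    deltas = np.append(t_vals[1:] - t_vals[:-1], 1e10)
    # opacity contributed by each sample: alpha_i = 1 - exp(-sigma_i * delta_i)
    alpha = 1.0 - np.exp(-sigma * deltas)
    # transmittance: probability the ray reaches sample i without being absorbed earlier
    transmittance = np.cumprod(np.append(1.0, 1.0 - alpha[:-1] + 1e-10))
    weights = alpha * transmittance
    # expected color of the ray
    return (weights[:, None] * rgb).sum(axis=0)

# toy usage: 8 random samples along a ray
t = np.linspace(2.0, 6.0, 8)
color = composite_ray(np.random.rand(8, 3), np.random.rand(8), t)
```

Training then amounts to rendering rays this way and minimizing the difference between the rendered colors and the observed pixel colors.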
A CUDA-capable GPU is required for generating data.

Install the dependencies:

```
pip install -r requirements.txt
```
Inside nerf.py, adjust the settings as desired:

```python
if __name__ == "__main__":
    # example: train on the LLFF 'fern' scene for 200 epochs
    Inference(
        epochs=200,
        _file='./fern',
        data_type='llff'
    )
```
NeRF requires camera poses for the images. To generate poses, run:

```
python imgs2poses.py <your_folder>
```
The script uses COLMAP to run structure from motion and recover 6-DoF camera poses and near/far depth bounds for the scene. For COLMAP installation instructions, see colmap.github.io/install.html.
Inside <your_folder>, make sure there is an images/ folder containing all of your images.
After COLMAP finishes, it will output poses_bounds.npy and a sparse/ folder containing the data NeRF needs.
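For reference, poses_bounds.npy follows the LLFF convention: an N x 17 array in which each row is a flattened 3x5 matrix (a 3x4 camera-to-world pose with an extra column holding image height, width and focal length) followed by the near and far depth bounds. A quick way to inspect it (the './fern' path is just the example scene from above):

```python
import numpy as np

data = np.load('./fern/poses_bounds.npy')   # shape (N, 17), one row per image

poses = data[:, :15].reshape(-1, 3, 5)      # 3x4 pose plus a [height, width, focal] column
bounds = data[:, 15:]                       # near/far depth bounds per image

height, width, focal = poses[0, :, 4]
print(len(poses), 'images, hwf =', (height, width, focal), 'near/far =', bounds[0])
```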
If you do not wish to use LLFF data, you can pass data_type='npz', _file='my_data.npz' and use a .npz file containing images, poses and focal (a packing sketch follows below).
Check Pregenerated Data (synthetic data is not supported)
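If you build your own .npz file, the sketch below shows one way to pack it, assuming the arrays are simply named images, poses and focal as described above; the shapes here are dummy values for illustration only.

```python
import numpy as np

# dummy data purely for illustration: N images, 4x4 camera-to-world poses, scalar focal length
N, H, W = 20, 100, 100
images = np.zeros((N, H, W, 3), dtype=np.float32)
poses = np.tile(np.eye(4, dtype=np.float32), (N, 1, 1))
focal = np.float32(100.0)

np.savez('my_data.npz', images=images, poses=poses, focal=focal)

# sanity check: load it back the same way the training code presumably does
data = np.load('my_data.npz')
print(data['images'].shape, data['poses'].shape, float(data['focal']))
```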
References:
https://keras.io/examples/vision/nerf/
https://arxiv.org/abs/2003.08934
https://github.com/bmild/nerf
https://github.com/3b1b/manim
https://www.mathworks.com/help/vision/ug/camera-calibration.html
https://github.com/colmap/colmap
https://github.com/fyusion/llff