
[ECCV22] BungeeNeRF: Progressive Neural Radiance Field for Extreme Multi-scale Scene Rendering (Jittor)


BungeeNeRF

This repository contains the code release (Jittor) for BungeeNeRF: Progressive Neural Radiance Field for Extreme Multi-scale Scene Rendering, aka CityNeRF.

[Figure: BungeeNeRF training scheme]

Abstract

Neural Radiance Field (NeRF) has achieved outstanding performance in modeling 3D objects and controlled scenes, usually under a single scale. In this work, we focus on multi-scale cases where large changes in imagery are observed at drastically different scales. This scenario widely exists in real-world 3D environments, such as city scenes, with views ranging from satellite level that captures the overview of a city, to ground level imagery showing complex details of a building; it can also be commonly identified in landscape and delicate Minecraft 3D models. The wide span of viewing positions within these scenes yields multi-scale renderings with very different levels of detail, which poses great challenges to neural radiance fields and biases them towards compromised results. To address these issues, we introduce BungeeNeRF, a progressive neural radiance field that achieves level-of-detail rendering across drastically varied scales. Starting from fitting distant views with a shallow base block, as training progresses, new blocks are appended to accommodate the emerging details in the increasingly closer views. The strategy progressively activates high-frequency channels in NeRF's positional encoding inputs and successively unfolds more complex details as the training proceeds. We demonstrate the superiority of BungeeNeRF in modeling diverse multi-scale scenes with drastically varying views on multiple data sources (city models, synthetic, and drone-captured data) and its support for high-quality rendering at different levels of detail.

Installation

We recommend using Anaconda to set up the environment. Run the following commands:

git clone https://github.com/city-super/BungeeNeRF.git; cd BungeeNeRF
conda create --name bungee_jittor python=3.7; conda activate bungee_jittor
conda install pip; pip install --upgrade pip
pip install -r requirements.txt
mkdir data

Installation instructions for Jittor can be found here.

Data

Two pre-processed datasets can be downloaded from Google Drive. Unzip multiscale_google_56Leonard.zip and multiscale_google_Transamerica.zip into the data directory. These two folders contain rendered images and processed camera poses. We also offer two .esp files that can be loaded into Google Earth Studio, so you can adjust the camera trajectory and render the most up-to-date views yourself. The appearance of cities is always being updated in GES :). We recommend reading 3D Camera Export and FAQs for camera configuration and permitted usage.

[Figure: Google Earth Studio export panel]

Exported 3D tracking data (.json) format:

    {
      "name": xxxx,
      "width": xxxx,
      "height": xxxx,
      "numFrames": xxxx,
      "durationSeconds": 56.3,
      "cameraFrames": [
        {
          "position": { "x": xxx, "y": xxx, "z": xxx },
          "rotation": { "x": xxx, "y": xxx, "z": xxx },
          "coordinate": { "latitude": xx, "longitude": xx, "altitude": xxx },
          "fovVertical": xx
        },
        ...
      ],
      "trackPoints": []
    }
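As a starting point, the exported file can be parsed with a few lines of Python. The field names follow the format shown above; the helper itself (name and return layout) is our sketch, not part of the BungeeNeRF code:

```python
import json

def load_camera_frames(path):
    """Parse a Google Earth Studio 3D tracking export (.json).

    Returns (width, height, frames), where each frame keeps the raw
    position, rotation, and vertical field of view. This helper is a
    sketch for illustration, not taken from the repo.
    """
    with open(path) as f:
        meta = json.load(f)
    frames = [
        {
            "position": (cf["position"]["x"], cf["position"]["y"], cf["position"]["z"]),
            "rotation": (cf["rotation"]["x"], cf["rotation"]["y"], cf["rotation"]["z"]),
            "fov_vertical": cf["fovVertical"],
        }
        for cf in meta["cameraFrames"]
    ]
    return meta["width"], meta["height"], frames
```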

Some notes on processing camera poses exported from Google Earth Studio:

  • Use local coordinates when exporting the .json file (i.e. the ECEF coordinate system) by setting a track point at the center of the scene. The "position" and "rotation" entries give the camera poses.
  • To get the rotation matrix, use the "rotation" entry in the exported .json file and apply x' = -x, y' = 180 - y, z' = 180 + z to get the Euler angles.
  • To ease computing the ray-sphere intersection across different cities, we further transform the coordinates into the ENU coordinate system. Check out this function.
  • Scale down the whole scene so it lies within one period (e.g. [-pi, pi]) and can be effectively represented by the positional encoding.
  • H, W, and fov can be read directly from the exported .json file and used to compute the focal length.
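The rotation fix-up, scene scaling, and focal computation in the notes above can be sketched in NumPy. The Euler composition order (Rz @ Ry @ Rx) and all function names here are our assumptions for illustration; match them against the repo's actual pose-processing code before relying on them:

```python
import numpy as np

def ges_to_euler(rx, ry, rz):
    # Apply the sign/offset corrections from the notes above (degrees):
    # x' = -x, y' = 180 - y, z' = 180 + z.
    return -rx, 180.0 - ry, 180.0 + rz

def euler_to_rotmat(ex, ey, ez):
    # Build a rotation matrix from Euler angles in degrees.
    # The composition order Rz @ Ry @ Rx is an assumption; check it
    # against the convention used in the repo.
    ex, ey, ez = np.radians([ex, ey, ez])
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(ex), -np.sin(ex)],
                   [0, np.sin(ex),  np.cos(ex)]])
    Ry = np.array([[ np.cos(ey), 0, np.sin(ey)],
                   [0, 1, 0],
                   [-np.sin(ey), 0, np.cos(ey)]])
    Rz = np.array([[np.cos(ez), -np.sin(ez), 0],
                   [np.sin(ez),  np.cos(ez), 0],
                   [0, 0, 1]])
    return Rz @ Ry @ Rx

def scale_scene(positions):
    # Uniformly scale camera positions so every coordinate lies in
    # [-pi, pi], one period of the positional encoding.
    s = np.pi / np.abs(positions).max()
    return positions * s

def focal_from_fov(height, fov_vertical_deg):
    # Standard pinhole relation: focal = H / (2 * tan(fov_v / 2)).
    return height / (2.0 * np.tan(np.radians(fov_vertical_deg) / 2.0))
```

For example, a 1080-pixel-tall image with a 90-degree vertical field of view gives a focal length of 540 pixels.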

Feel free to contact the authors if you have any questions about the data.

Running

To run experiments, use:

python run_bungee.py --config configs/EXP_CONFIG_FILE

The training starts from the furthest scale, with cur_stage=0. After sufficient training you can switch to the next stage by specifying --cur_stage 1, which includes one finer scale in the training set; resume from the previous stage's checkpoint, specified with --ft_path:

python run_bungee.py --config configs/EXP_CONFIG_FILE --cur_stage 1 --ft_path PREV_CKPT_PATH

Rendering

To render views, use:

python run_bungee.py --config configs/EXP_CONFIG_FILE --render_test

Citation

@inproceedings{xiangli2022bungeenerf,
    title={BungeeNeRF: Progressive Neural Radiance Field for Extreme Multi-scale Scene Rendering},
    author={Xiangli, Yuanbo and Xu, Linning and Pan, Xingang and Zhao, Nanxuan and Rao, Anyi and Theobalt, Christian and Dai, Bo and Lin, Dahua},
    booktitle = {The European Conference on Computer Vision (ECCV)}, 
    year={2022}
}
