This repository is the official implementation for the paper “REFRAME: Reflective Surface Real-Time Rendering for Mobile Devices”.

REFRAME: Reflective Surface Real-Time Rendering for Mobile Devices (ECCV 2024)

Paper: https://arxiv.org/abs/2403.16481

Chaojie Ji, Yufeng Li, Yiyi Liao

Keywords: Reflective surface · Real-time rendering · Mobile device

Abstract: This work tackles the challenging task of achieving real-time novel view synthesis for reflective surfaces across various scenes. Existing real-time rendering methods, especially those based on meshes, often perform poorly in modeling surfaces with rich view-dependent appearance. Our key idea is to leverage meshes for rendering acceleration while incorporating a novel approach to parameterize view-dependent information. We decompose the color into diffuse and specular components, and model the specular color in the reflected direction based on a neural environment map. Our experiments demonstrate that our method achieves reconstruction quality for highly reflective surfaces comparable to state-of-the-art offline methods, while efficiently enabling real-time rendering on edge devices such as smartphones.
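
For intuition, here is a minimal sketch of this decomposition (hypothetical PyTorch code with made-up names, not the repository's actual implementation):

# hypothetical sketch of the diffuse/specular decomposition
import torch
import torch.nn.functional as F

def reflect(view_dir, normal):
    # mirror the view direction about the surface normal: r = 2(n·v)n - v
    return 2.0 * (normal * view_dir).sum(-1, keepdim=True) * normal - view_dir

def shade(diffuse_color, view_dir, normal, neural_env_map):
    # the specular color is looked up in a neural environment map along the
    # reflected direction; the final color is the sum of both components
    r = F.normalize(reflect(view_dir, normal), dim=-1)
    return diffuse_color + neural_env_map(r)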

Our project page is available at https://xdimlab.github.io/REFRAME/.

📖 Table Of Contents

• 🏠 Installation
• 🖼️ Initialization (Dataset and Initial Mesh)
• 💻 Usage
• 📈 Results
• 📋 Citation
• ✨ Acknowledgement
• 📧 Contact

🏠 Installation

A suitable conda environment named REFRAME can be created and activated with:

# clone this repository
git clone https://github.com/MARVELOUSJI/REFRAME

# create a new conda environment and activate it
conda create -n REFRAME python=3.8
conda activate REFRAME

# install PyTorch
conda install pytorch==1.12.1 torchvision==0.13.1 torchaudio==0.12.1 cudatoolkit=11.3 -c pytorch

# install nvdiffrast
git clone https://github.com/NVlabs/nvdiffrast.git
cd nvdiffrast
python -m pip install .

# install tiny-cuda-nn
cd ../
sudo apt-get install build-essential git

# export the CUDA paths (adjust to your own CUDA version)
export PATH="/usr/local/cuda-11.3/bin:$PATH"
export LD_LIBRARY_PATH="/usr/local/cuda-11.3/lib64:$LD_LIBRARY_PATH"

git clone --recursive https://github.com/nvlabs/tiny-cuda-nn
cd tiny-cuda-nn
cmake . -B build
cmake --build build --config RelWithDebInfo -j
cd bindings/torch
python setup.py install

# install the remaining packages
cd ../../../REFRAME
pip install -r requirements.txt

For more details on tiny-cuda-nn and nvdiffrast, see their respective repositories.
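
After installation, a quick sanity check (assuming the REFRAME environment is active) confirms that the key dependencies import and can see the GPU:

# sanity check: the key dependencies should import and CUDA should be visible
import torch
import nvdiffrast.torch as dr
import tinycudann as tcnn

print("torch", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("imported:", dr.__name__, "and", tcnn.__name__)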

🖼️ Initialization (Dataset and Initial Mesh)

  1. NeRF Synthetic Dataset
  • You can download the NeRF Synthetic dataset from their project page.
  • We use NeRF2Mesh to obtain our initial mesh for the NeRF Synthetic dataset.
  • Or you can download our initial mesh used in the paper from here.
  • For the hotdog and ship scenes, set --scale to 0.7. For other scenes, the default scale (0.8) works fine.
  2. Shiny Blender Dataset
  • You can download the Shiny Blender dataset from their project page.
  • We use Ref-NeuS to obtain our initial mesh for the Shiny Blender dataset.
  • Or you can download our initial mesh used in the paper from here.
  • Note that for a Ref-NeuS initial mesh, you need an additional file 'points_of_interest.ply' in the dataset path (see the Ref-NeuS repo for details) and must set --refneus 1. This .ply file is also included in the Shiny Blender initial-mesh zip above, but it is named after the scene; rename it to 'points_of_interest.ply' and place it in the dataset path, as follows:
+-- ShinyBlender
|   +-- helmet
|   |   +-- test
|   |   +-- train
|   |   +-- points_of_interest.ply
|   |   +-- transforms_test.json
|   |   +-- transforms_train.json
|   +-- toaster
  3. Real Captured Dataset
  • You can download the Real Captured dataset from their project page.
  • We use NeRF2Mesh and Neuralangelo to obtain our initial mesh for the Real Captured dataset.
  • Since the initial meshes of the Real Captured dataset used in our paper are of poor quality and have a large memory overhead, we do not provide them.
  4. Self Captured Dataset
  • Users can capture their own scenes. We support both Blender and COLMAP formats.
  • We recommend following NeRF2Mesh to process your data for training; a minimal sketch of the Blender-format transforms file is shown below.
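
For reference, here is a minimal sketch of the Blender-format transforms_train.json layout in the standard NeRF synthetic convention (the values below are placeholders):

# builds an illustrative transforms_train.json in the standard NeRF
# synthetic (Blender) convention; all values are placeholders
import json

transforms = {
    "camera_angle_x": 0.6911,           # horizontal field of view in radians
    "frames": [
        {
            "file_path": "./train/r_0",  # image path without extension
            "transform_matrix": [        # 4x4 camera-to-world matrix
                [1.0, 0.0, 0.0, 0.0],
                [0.0, 1.0, 0.0, 0.0],
                [0.0, 0.0, 1.0, 3.0],
                [0.0, 0.0, 0.0, 1.0],
            ],
        }
    ],
}
print(json.dumps(transforms, indent=2))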

💻 Usage

# Training with default settings
python main.py --datadir your_dataset_path --initial_mesh your_initial_mesh --run_name experiment_name

# Training without the environment learner (directly optimizing a feature map). This speeds up training but reduces quality.
python main.py --datadir your_dataset_path --initial_mesh your_initial_mesh --run_name experiment_name --wenvlearner 0

# Testing and UV mapping (default settings)
python main.py --datadir your_dataset_path --initial_mesh your_trained_mesh --run_name experiment_name --shader_path trained_shader --test 1 --uvmap 1

# Testing and UV mapping (when trained without the environment learner)
python main.py --datadir your_dataset_path --initial_mesh your_trained_mesh --run_name experiment_name --wenvlearner 0 --shader_path trained_shader --envmap_path trained_envmap --test 1 --uvmap 1
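
For intuition, the difference between the two modes can be pictured as follows (a hypothetical sketch with made-up shapes and names, not the repository's actual classes): with the environment learner, an MLP predicts environment features from the reflected direction; with --wenvlearner 0, a feature map is optimized directly instead.

# hypothetical sketch of the two environment parameterizations
import torch
import torch.nn.functional as F

# with the environment learner: a small MLP maps reflected directions
# (3D unit vectors) to environment features
env_learner = torch.nn.Sequential(
    torch.nn.Linear(3, 64),
    torch.nn.ReLU(),
    torch.nn.Linear(64, 8),
)

# without it (--wenvlearner 0): the feature map itself is the learnable
# parameter, e.g. an equirectangular grid queried at the reflected direction
env_map = torch.nn.Parameter(torch.zeros(1, 8, 256, 512))

dirs = F.normalize(torch.randn(1024, 3), dim=-1)   # reflected directions
features = env_learner(dirs)                       # (1024, 8) via the MLP
# with the feature map, the same directions would be converted to
# equirectangular (u, v) coordinates and read out via grid_sample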

📈 Results

Quantitative Comparison. Baseline comparisons of rendering quality on three different datasets. Red marks the best result, orange the second best, and yellow the third best.

Qualitative Comparison. Our method achieves the best rendering quality in most scenes and models reflective appearance better than the baselines.

📋 Citation

If our work is useful for your research, please consider citing:

@article{ji2024reframe,
  title={REFRAME: Reflective Surface Real-Time Rendering for Mobile Devices},
  author={Ji, Chaojie and Li, Yufeng and Liao, Yiyi},
  journal={arXiv preprint arXiv:2403.16481},
  year={2024}
}

✨ Acknowledgement

This project builds on nvdiffrast, tiny-cuda-nn, NeRF2Mesh, Ref-NeuS, and Neuralangelo. We thank the authors of these works for open-sourcing their code.

📧 Contact

If you have any questions, please feel free to reach out at jichaojie@zju.edu.cn.
