
MaterialGAN: Reflectance Capture using a Generative SVBRDF Model

Yu Guo, Cameron Smith, Miloš Hašan, Kalyan Sunkavalli and Shuang Zhao.

In ACM Transactions on Graphics (SIGGRAPH Asia 2020).

[Paper] [Code] [Supplemental Materials] [Poster] [Fast Forward at SIGGRAPH Asia 2020 (Video) (Slides)] [Presentation at SIGGRAPH Asia 2020 (Video) (Slides)] [Dataset (38)] [Dataset_Zhou (76)]

Quick start

1. Python dependencies (prefer installing with pip)

torch, torchvision, opencv-python, matplotlib, tqdm, pupil-apriltags (for data capture), mitsuba (for envmap rendering)

Tested on:

  1. macOS, Python 3.11, PyTorch 2.5.1, CPU
  2. Windows 10/11, Python 3.11, PyTorch 2.5.1, CUDA 12.4

Note: pupil-apriltags fails to install on Python 3.12. If you do not need our data-capture method, you can skip this dependency and use Python >= 3.12.
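For reference, a typical install command would look like the following (a sketch; pin versions as needed for your platform):

pip install torch torchvision opencv-python matplotlib tqdm pupil-apriltags mitsuba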

2. Pretrained MaterialGAN model

The model weights will be automatically downloaded to the folder ckp when you run the scripts.

3. Quick try

python run.py

We provide a captured image set (data/yellow_box-17.0-0.1/raw/*.jpg) and the corresponding JSON files. The generated results will be in the folder data/yellow_box-17.0-0.1/optim_latent/1024/, including the generated SVBRDF maps (nom.png, dif.png, spe.png, rgh.png), re-rendered target images (0*.png), and a relighting video under an environment map (vid.gif).

4. Usage

To optimize SVBRDF maps, we need several images captured under different lighting, plus a corresponding JSON file that holds all of the capture information. If you use our dataset, all JSON files are provided. If you want to capture new data, see the instructions below; the JSON file will be generated automatically.

See run.py for more details.
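After the run, you can sanity-check the outputs by loading the maps with OpenCV (a minimal sketch; the paths and file names are those listed in the quick-try step above):

import cv2
from pathlib import Path

out_dir = Path("data/yellow_box-17.0-0.1/optim_latent/1024")
for name in ["dif.png", "nom.png", "spe.png", "rgh.png"]:
    tex = cv2.imread(str(out_dir / name))  # loaded as 8-bit BGR
    assert tex is not None, f"missing {name}"
    print(name, tex.shape)  # expect (1024, 1024, 3)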

Real captured dataset

1. Capture your own data with a smartphone


Steps:

  1. Print fig/tag36h11_print.png on stiff paper at a suitable size and cut out the center area.
  2. Measure the size (in cm) with a ruler; see the red arrow in the figure below.
  3. Place it on the material you want to capture and keep the paper as flat as possible.
  4. Turn on the camera flashlight and capture images from different views.
  5. Create a data folder for the captured images; see data/yellow_box-17.0-0.1/raw for an example.
  6. Run the script in run.py:
    gen_targets_from_capture(Path("data/yellow_box-17.0-0.1"), size=17.0, depth=0.1)
  7. The generated target images are written to data/yellow_box-17.0-0.1/target, and the corresponding JSON files are generated as well.

The size here is the number measured in step 2; depth is the distance (in cm) between the marker plane and the material plane. For example, if you attach the markers to thick cardboard, you should use a larger depth.
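For instance, a capture with markers on thick cardboard might be processed like this (the folder name and depth value are hypothetical):

gen_targets_from_capture(Path("data/my_material"), size=17.0, depth=0.3)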

Tips:

  1. All markers should be visible and in focus, and the letter A should face up.
  2. It is better to capture at night or in a dark room, with other lights turned off.
  3. It is better if the flash highlight is visible in the cropped area.
  4. Set the camera to manual mode, and keep the white balance and focal length fixed across captures.
  5. The .heic image format is not supported yet; convert it to .png/.jpg first.
  6. Preferred capture order (by highlight position): top-left -> top -> top-right -> left -> center -> right -> bottom-left -> bottom -> bottom-right. See the images in data/yellow_box/raw as references.

2. The [Dataset (38)] used in this paper

The dataset includes the corresponding JSON files. We provide our results as a reference, and you can also generate them with our code from run.py:

optim_ganlatent(material_dir / "optim_latent_256.json", 256, 0.02, [1000, 10, 10], "auto")
optim_perpixel(material_dir / "optim_pixel_256_to_512.json", 512, 0.01, 20, tex_init="textures")
optim_perpixel(material_dir / "optim_pixel_512_to_1024.json", 1024, 0.01, 20, tex_init="textures")
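To reproduce the whole dataset, these three stages can be looped over each material folder (a sketch; the dataset root is an assumption, and optim_ganlatent/optim_perpixel are imported the same way run.py imports them):

from pathlib import Path

for material_dir in sorted(Path("data").iterdir()):  # hypothetical dataset root
    if not (material_dir / "optim_latent_256.json").exists():
        continue  # skip folders without capture JSONs
    optim_ganlatent(material_dir / "optim_latent_256.json", 256, 0.02, [1000, 10, 10], "auto")
    optim_perpixel(material_dir / "optim_pixel_256_to_512.json", 512, 0.01, 20, tex_init="textures")
    optim_perpixel(material_dir / "optim_pixel_512_to_1024.json", 1024, 0.01, 20, tex_init="textures")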

For most cases, we use the auto mode as the initialization. The results are shown below (input photos and output texture maps, 1024x1024).

[Result images: input photos and the corresponding 1024x1024 output texture maps]

For some specular materials, the highlights get baked into the roughness maps. Using a lower roughness as the initialization (ckp = ["ckp/latent_const_W+_256.pt", "ckp/latent_const_N_256.pt"]) will give better results.
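Concretely, we read this as the checkpoint list replacing the "auto" argument of the first-stage call (our assumption, based on the calls shown above):

optim_ganlatent(material_dir / "optim_latent_256.json", 256, 0.02, [1000, 10, 10], ["ckp/latent_const_W+_256.pt", "ckp/latent_const_N_256.pt"])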


We also provide code in run.py to generate novel-view renderings under an environment map:

render_envmap(material_dir / "optim_latent/1024", 256)


3. The [Dataset_Zhou (76)] from Xilong Zhou with our JSON files

The results below are optimized by MaterialGAN:

[Result images: MaterialGAN reconstructions for the Dataset_Zhou materials]

Citation

If you find this work useful for your research, please cite:

@article{Guo:2020:MaterialGAN,
  title={MaterialGAN: Reflectance Capture using a Generative SVBRDF Model},
  author={Guo, Yu and Smith, Cameron and Ha\v{s}an, Milo\v{s} and Sunkavalli, Kalyan and Zhao, Shuang},
  journal={ACM Trans. Graph.},
  volume={39},
  number={6},
  year={2020},
  pages={254:1--254:13}
}

Known issues

  • If the generated albedo map is too bright (overexposed), the light intensity (fixed during optimization) is too low. Open the JSON file and change light_pow accordingly, for example to 10000. The default value of 1500 is an estimate of a smartphone flashlight.
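A quick way to patch the field from Python (a sketch; the JSON path is an example, and we assume light_pow is a top-level key):

import json
from pathlib import Path

json_path = Path("data/yellow_box-17.0-0.1/optim_latent_256.json")  # example material JSON
cfg = json.loads(json_path.read_text())
cfg["light_pow"] = 10000  # raise from the default 1500 if the albedo is overexposed
json_path.write_text(json.dumps(cfg, indent=2))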

Contacts

Bug reports and comments are welcome (Yu Guo: tflsguoyu@gmail.com).
