Neural Non-Rigid Tracking (NeurIPS 2020)

This repository contains the code for the NeurIPS 2020 paper Neural Non-Rigid Tracking, where we introduce a novel, end-to-end learnable, differentiable non-rigid tracker that enables state-of-the-art non-rigid reconstruction.

By enabling gradient back-propagation through a weighted non-linear least squares solver, we are able to learn correspondences and confidences in an end-to-end manner such that they are optimal for the task of non-rigid tracking.

Under this formulation, correspondence confidences can be learned via self-supervision, informing a learned robust optimization, where outliers and wrong correspondences are automatically down-weighted to enable effective tracking.
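As a toy illustration of this key idea, here is a minimal sketch (ours, not the paper's code) of backpropagating through a weighted least-squares solve; the actual tracker uses a weighted non-linear (Gauss-Newton) solver, but a linear solve keeps the mechanism visible in a few lines. All shapes and names below are assumptions for illustration:

# Toy sketch: gradients flow through a weighted least-squares solution
# back to the per-correspondence weights. The paper's solver is a weighted
# *non-linear* (Gauss-Newton) solver; a linear solve is used here only
# to keep the example short.
import torch

torch.manual_seed(0)
A = torch.randn(50, 6)                  # stand-in design matrix (one row per correspondence)
b = torch.randn(50)                     # stand-in residual targets
w = torch.rand(50, requires_grad=True)  # learnable per-correspondence confidence weights

# Weighted normal equations: x* = (A^T W A)^{-1} A^T W b, with W = diag(w).
AtW = A.t() * w                         # scale each correspondence by its weight
x = torch.linalg.solve(AtW @ A, AtW @ b)

# A loss on the solution produces gradients on the weights, which is how
# outlier correspondences can be down-weighted via self-supervision.
loss = x.square().sum()
loss.backward()
print(w.grad.shape)  # torch.Size([50])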

Installation

You can either set up your environment locally or use a Docker container that comes with all dependencies.

Setup Locally

Set up the conda environment

After cloning this repo, cd into it and create a conda environment with (hopefully) all required packages:

conda env create --file resources/env.yml

Install some C++ extensions (csrc)

Then, activate the environment and install the C++ extensions:

conda activate nnrt
cd csrc
python setup.py install
cd ..

Use Docker

Specify Repository Path To Mount

After cloning the repo, cd into it and edit start_nnrt.sh, setting the variable LOCAL_SRC_DIR to the absolute path of your repository.

For example, in start_nnrt.sh:

# Variables to edit - NNRT repo absolute path
LOCAL_SRC_DIR=/home/user_name/Repositories/NeuralTracking

Run Docker

Run the following command:

sh start_nnrt.sh

The repository folder will be mounted at /workspace/local_station/.

I just want to try it on two frames!

If you just want to get a feel for the whole approach at inference time, you can run

python example_viz.py

to run inference on a pair of source and target frames that you can find in example_data. For this, you'll be using a model checkpoint that we also provide in experiments.

Within the Open3D viewer, you can view the following by pressing these keys:

  • S: view the source RGB-D frame
  • O: given the source RGB-D frame, toggle between the complete RGB-D frame and the foreground object we're tracking
  • T: view the target RGB-D frame
  • B: view both the target RGB-D frame and the source foreground object
  • C: toggle source-target correspondences
  • W: toggle weighted source-target correspondences (the redder a correspondence is drawn, the lower its weight)
  • A: (after having pressed B) align source to target
  • ,: rotate the camera once around the scene
  • ;: move the camera around while visualizing the correspondences from different angles
  • Z: reset source object after having aligned with A
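For illustration, here is a rough sketch of how such per-key views can be wired up with Open3D's VisualizerWithKeyCallback. This is our own minimal example, not the repo's example_viz.py, and the file names are placeholders:

# Minimal sketch of per-key view switching with Open3D (not example_viz.py).
import open3d as o3d

source = o3d.io.read_point_cloud("source.ply")  # placeholder inputs
target = o3d.io.read_point_cloud("target.ply")

vis = o3d.visualization.VisualizerWithKeyCallback()
vis.create_window()
vis.add_geometry(source)

def show_only(geometry):
    # Return a callback that swaps the displayed geometry.
    def callback(v):
        v.clear_geometries()
        v.add_geometry(geometry, reset_bounding_box=False)
        return True  # request a redraw
    return callback

vis.register_key_callback(ord("S"), show_only(source))  # source frame
vis.register_key_callback(ord("T"), show_only(target))  # target frame
vis.run()
vis.destroy_window()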

Data

The raw image data and flow alignments can be obtained at the DeepDeform repository.

The additionally generated graph data can be downloaded using this link.

Both archives should be extracted into the same directory.

If you want to generate the data on your own, e.g., for a new sequence, you can specify a frame pair and run:

python create_graph_data.py
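To give a feel for what such graph data typically contains, here is an illustrative sketch assuming a standard embedded-deformation-style graph; the sampling scheme, edge count, and data layout are assumptions, not the actual format produced by create_graph_data.py:

# Illustrative deformation-graph construction (assumptions, not the repo's format).
import numpy as np
from scipy.spatial import cKDTree

def build_graph(points, num_nodes=256, k=4):
    # Sample graph nodes from the point cloud (real pipelines often use
    # farthest-point or geodesic sampling instead of random choice).
    idx = np.random.choice(len(points), size=num_nodes, replace=False)
    nodes = points[idx]
    # Connect each node to its k nearest neighboring nodes.
    tree = cKDTree(nodes)
    _, nn = tree.query(nodes, k=k + 1)  # first neighbor is the node itself
    edges = nn[:, 1:]
    return nodes, edges

points = np.random.rand(10000, 3).astype(np.float32)  # stand-in point cloud
nodes, edges = build_graph(points)  # (256, 3) nodes, (256, 4) edge indices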

Train

You can run

./run_train.sh

to train a model. Set the path to the dataset in options.py. You can initialize with a pretrained model by setting the use_pretrained_model flag.

To reproduce our complete approach, training proceeds in four stages. To this end, for each stage you will have to set the following variables in options.py:

  1. mode = "0_flow" and model_name = "chairs_things".
  2. mode = "1_solver" and model_name = "<best checkpoint from step 1>".
  3. mode = "2_mask" and model_name = "<best checkpoint from step 2>".
  4. mode = "3_refine" and model_name = "<best checkpoint from step 3>".

Each stage should be run for around 30k iterations, which corresponds to about 10 epochs.
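As a concrete sketch, stage 1 could be configured in options.py as below; mode, model_name, use_pretrained_model, and workspace are the variables mentioned in this README, while the values and everything else about the file are assumptions:

# options.py -- stage 1 of the four-stage schedule above (sketch only).
workspace = "/path/to/experiments"  # where checkpoints live (see Evaluate)
use_pretrained_model = True         # initialize from a pretrained checkpoint
mode = "0_flow"                     # stage 1: train the flow component
model_name = "chairs_things"        # checkpoint to initialize from

For stages 2-4, change mode accordingly and point model_name at the best checkpoint from the previous stage.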

Evaluate

You can run

./run_generate.sh

to run inference on a specified split (train, val, or test). Within ./run_generate.sh you can specify your model's directory name and checkpoint name (note that the path to your experiments needs to be defined in options.py by setting workspace).

This script will predict both graph node deformation and dense deformation of the foreground object's points.

Next, you can run

./run_evaluate.sh

to compute the Graph Error 3D (graph node deformation) and EPE 3D (dense deformation).

Or you can also run

./run_generate_and_evaluate.sh

to do both sequentially.
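For reference, both metrics are average 3D endpoint distances between predicted and ground-truth deformations: Graph Error 3D evaluates them at the graph nodes, EPE 3D on the dense foreground points. A minimal sketch, assuming these standard definitions:

# Sketch of the EPE 3D metric under the standard endpoint-error definition.
import numpy as np

def epe_3d(pred, gt):
    # Mean Euclidean (endpoint) error between predicted and ground-truth
    # 3D deformations of shape (N, 3); applied to graph-node deformations,
    # the same formula yields Graph Error 3D.
    return float(np.linalg.norm(pred - gt, axis=1).mean())

pred = np.zeros((100, 3), dtype=np.float32)     # stand-in predictions
gt = np.full((100, 3), 0.01, dtype=np.float32)  # stand-in ground truth
print(epe_3d(pred, gt))  # 0.01 * sqrt(3) ≈ 0.0173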

Known Issues

Undefined symbol when building the C++ extension (csrc)

Citation

If you find our work useful in your research, please consider citing:

@inproceedings{bozic2020neuraltracking,
title={Neural Non-Rigid Tracking},
author={Aljaz Bozic and Pablo Palafox and Michael Zollh{\"o}fer and Angela Dai and Justus Thies and Matthias Nie{\ss}ner},
booktitle={NeurIPS},
year={2020}
}

Related work

Some other related work on non-rigid tracking by our group:

License

The code from this repository is released under the MIT license, except where otherwise stated (i.e., pwcnet.py, Eigen).
