PyTorch implementation for training continuous convolutional neural fields to represent dense correspondence across objects, and using these descriptor fields to mimic demonstrations of a pick-and-place task on a robotic system.
This is the reference implementation for our paper:
Local Neural Descriptor Fields: Locally Conditioned Object Representations for Manipulation
Ethan Chun, Yilun Du, Anthony Simeonov, Tomás Lozano-Pérez, Leslie Pack Kaelbling
```
git clone --recursive git@github.com:elchun/lndf_robot.git
cd lndf_robot
```
If using conda, we recommend Python 3.6, although newer Python versions are likely to work as well. Then install the package:
```
pip install -e .
```
If you have installed similar packages before, adding the `--no-cache-dir` flag may make `pip install` faster:
```
pip install -e . --no-cache-dir
```
Set up additional tools (Franka Panda inverse kinematics -- unnecessary if not using the simulated robot for evaluation):
```
cd pybullet-planning/pybullet_tools/ikfast/franka_panda
python setup.py
cd ../../../../
```
Install `torch-scatter` with pip or see additional instructions here. Replace `${CUDA}` with the appropriate version (cpu, cu102, cu113, cu116). Generally, the torch and CUDA versions of `torch-scatter` should match those of your PyTorch installation (i.e. the output of `torch.__version__`):
```
pip install torch-scatter -f https://data.pyg.org/whl/torch-1.10.1+${CUDA}.html
```
For example, the authors installed with:
```
pip install torch-scatter -f https://data.pyg.org/whl/torch-1.10.1+cu102.html
```
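To pick the right value for `${CUDA}`, you can check your local PyTorch build first; this is just a convenience check, not part of the repository's instructions:
```python
# Quick check of the installed torch and CUDA versions, to pick the
# matching torch-scatter wheel URL (e.g. torch-1.10.1+cu102).
import torch

print(torch.__version__)   # e.g. '1.10.1+cu102'
print(torch.version.cuda)  # e.g. '10.2'; None on CPU-only builds
```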
Source the environment setup script (this script must be sourced in each new terminal where code from this repository is run):
```
source lndf_env.sh
```
Download the pretrained weights, demonstrations, and object data:
```
./scripts/download_weights.sh
./scripts/download_demos.sh
./scripts/download_obj_data.sh
```
```
cd src/ndf_robot/eval
```
Then open and run `quickstart_demo.ipynb`. If you are running on a remote server, see here for reference on how to run the jupyter notebook.
```
./scripts/download_demo_weights.sh
./scripts/download_demos.sh
./scripts/download_obj_data.sh
cd src/ndf_robot/eval
CUDA_VISIBLE_DEVICES=0 python3 evaluate_general.py --config_fname {your_config}
```
The configuration files are located in the `eval_configs` directory. Include the `.yml` extension when specifying the config file. For example, evaluating a grasping experiment may use the following command:
```
CUDA_VISIBLE_DEVICES=0 python3 evaluate_general.py --config_fname lndf_grasp.yml
```
You may create additional experiments by viewing the `tutorial.yml` config file in `src/ndf_robot/eval/eval_configs` and creating an appropriate experiment in `src/ndf_robot/eval/experiments`.
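For orientation, the sketch below shows roughly how such a config might be read. The `--config_fname` argument comes from the command above, but the loading code itself is a hypothetical illustration, not the repository's actual implementation; the real schema is defined by `tutorial.yml`:
```python
# Hypothetical sketch of loading an eval config; the actual parsing
# happens inside evaluate_general.py.
import argparse
import yaml  # PyYAML

parser = argparse.ArgumentParser()
parser.add_argument('--config_fname', type=str, required=True)
args = parser.parse_args()

# Configs live in src/ndf_robot/eval/eval_configs; include the .yml extension.
with open(f'eval_configs/{args.config_fname}') as f:
    config = yaml.safe_load(f)  # plain dict of experiment settings

print(config)
```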
Navigate to the `training` directory inside `ndf_robot`. Run or modify the shell script `train.sh` to train the convolutional occupancy network. Two main scripts control how the model is trained: `train_conv_occupancy_net.py` and `losses.py`.
The shell script calls `train_conv_occupancy_net.py`, which allows the user to specify the model and loss function to use. These loss functions are defined in `losses.py`. A user may use the given loss functions or test additional loss functions.
To evaluate a new set of weights, the `s_vis_correspondance.py` script is convenient. By setting the `model_path` and `obj_model` variables, you can compare the latent representations of an unrotated and a rotated version of the same object. This allows you to check how strongly SE(3) equivariance is enforced.
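For intuition, here is a rough sketch of what such a check does. `load_model`, `model.encode`, `model_path`, and `obj_model` are hypothetical placeholders for the repository's actual API; treat `s_vis_correspondance.py` as the authoritative version:
```python
# Hedged sketch of an SE(3) equivariance check: encode an object point
# cloud and a rotated copy, then compare latent codes of corresponding
# query points. load_model / model.encode are hypothetical stand-ins.
import numpy as np
import torch
from scipy.spatial.transform import Rotation

model = np.load  # placeholder; use the repo's actual model loader
model = load_model(model_path)        # hypothetical loader
pcd = np.load(obj_model)              # (N, 3) object point cloud

R = Rotation.random().as_matrix()     # random SO(3) rotation
pcd_rot = pcd @ R.T                   # rotated copy of the object

query = pcd[:100]                     # query points on the original object
query_rot = query @ R.T               # the same points, rotated with the object

to_t = lambda a: torch.from_numpy(a).float()
z = model.encode(to_t(pcd), to_t(query))          # hypothetical encode API
z_rot = model.encode(to_t(pcd_rot), to_t(query_rot))

# With perfect SE(3) equivariance the latent codes of corresponding
# points match, so the cosine similarity should be ~1 everywhere.
print(torch.cosine_similarity(z, z_rot, dim=-1).mean())
```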
Four main loss functions are defined in `losses.py`. The first two occupancy_net losses are used for training a standard occupancy network or a convolutional occupancy network. The latter two contrastive loss functions implement the distance-based loss function which we describe in our publication. We find that the `contrastive_cross_entropy` loss performs the best, while the `contrastive_l2` loss works but produces lower success rates.
Both contrastive loss functions select a sample point $x_s$ on the object and encourage the latent codes of nearby target points to be similar to the latent code of $x_s$.
Contrastive Cross Entropy
In `contrastive_cross_entropy` loss, we enforce this similarity by calculating the Euclidean distance between the sample and target points (assuming both are on the same mug). We then produce a probability distribution where the probability assigned to a target point $x_t$ is

$$p_t = \frac{\exp(-d_t / \tau)}{\sum_{t'} \exp(-d_{t'} / \tau)}$$

where $d_t = \lVert x_s - x_t \rVert_2$ is the Euclidean distance between the sample point $x_s$ and the target point $x_t$, and $\tau$ is a temperature parameter. A cross-entropy loss then trains the distribution of latent-code similarities to match this distance-based distribution.
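The following is a minimal PyTorch sketch of this loss as described here. It is a reconstruction of the idea, not the code in `losses.py`; the temperature `tau` and the use of the same temperature on both distributions are assumptions:
```python
# Minimal sketch of a distance-based cross-entropy contrastive loss.
# Reconstruction of the idea, not the exact losses.py implementation.
import torch
import torch.nn.functional as F

def contrastive_cross_entropy(z_s, z_t, x_s, x_t, tau=0.1):
    """z_s: (D,) latent of sample point; z_t: (T, D) latents of targets;
    x_s: (3,) sample position; x_t: (T, 3) target positions."""
    d = torch.norm(x_t - x_s, dim=-1)            # Euclidean distances to targets
    target = F.softmax(-d / tau, dim=0)          # closer points get higher probability
    sim = F.cosine_similarity(z_t, z_s.unsqueeze(0), dim=-1)
    log_pred = F.log_softmax(sim / tau, dim=0)   # predicted distribution from latents
    return -(target * log_pred).sum()            # cross entropy between the two

# Toy usage with random latents and positions
z_s, z_t = torch.randn(16), torch.randn(8, 16)
x_s, x_t = torch.rand(3), torch.rand(8, 3)
print(contrastive_cross_entropy(z_s, z_t, x_s, x_t))
```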
Contrastive L2
In `contrastive_l2` loss, we also calculate the Euclidean distance between the sample and target points. However, for each target point $x_t$ we compute a target similarity that decays with distance, of the form

$$s_t = \exp(-d_t / \sigma)$$

where $d_t$ is the Euclidean distance to the sample point and $\sigma$ is a length-scale hyperparameter. We then drive the cosine similarity of the latent codes of the sample and target points toward this target similarity with an L2 loss.
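A corresponding sketch of the L2 variant, again a reconstruction under the assumed exponential falloff with length scale `sigma` rather than the exact form in `losses.py`:
```python
# Sketch of the L2 variant: regress the cosine similarity of latent
# codes toward a distance-derived target similarity. The exponential
# falloff with scale `sigma` is an assumption.
import torch
import torch.nn.functional as F

def contrastive_l2(z_s, z_t, x_s, x_t, sigma=0.05):
    d = torch.norm(x_t - x_s, dim=-1)      # distances from sample to targets
    target_sim = torch.exp(-d / sigma)     # 1 at d=0, decays with distance
    sim = F.cosine_similarity(z_t, z_s.unsqueeze(0), dim=-1)
    return F.mse_loss(sim, target_sim)     # L2 penalty on the mismatch

z_s, z_t = torch.randn(16), torch.randn(8, 16)
x_s, x_t = torch.rand(3), torch.rand(8, 3)
print(contrastive_l2(z_s, z_t, x_s, x_t))
```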
Navigate to the `demonstrations` directory inside `ndf_robot`. Use the `label_demos.py` script to log new demonstrations. The current evaluator uses a slightly different file format than the output of `label_demos.py`, so you must run the `convert_demos.py` script to convert the demonstrations into the new file format.
- Add or modify models: Use the `descriptions` directory to modify or add new models.
- Modify optimizer: See both the deep and geometric optimizer classes in the `opt` directory.
Download all the object data assets:
```
./scripts/download_obj_data.sh
```
Run data generation:
```
cd src/ndf_robot/data_gen
python shapenet_pcd_gen.py \
    --total_samples 100 \
    --object_class mug \
    --save_dir test_mug \
    --rand_scale \
    --num_workers 2
```
More information on dataset generation can be found here.
Make sure you have downloaded all the object data assets (see the Data Generation section). Run the teleoperation pipeline:
```
cd src/ndf_robot/demonstrations
python label_demos.py --exp test_bottle --object_class bottle --with_shelf
```
If you find our paper or this code useful in your work, please cite our paper:
```
@article{chundu2023lndf,
  title   = {Local Neural Descriptor Fields: Locally Conditioned Object Representations for Manipulation},
  author  = {Chun, Ethan and Du, Yilun and Simeonov, Anthony and Lozano-Perez, Tomas and Kaelbling, Leslie},
  journal = {arXiv preprint arXiv:2302.03573},
  year    = {2023}
}
```
Parts of this code were built upon the implementations found in the convolutional occupancy networks repo, the neural descriptor field repo, the occupancy networks repo and the vector neurons repo. Check out their projects as well!