VISMA stands for Visual Inertial Semantic MApping and contains both RGB videos and inertial measurements for developing object-level (semantic) mapping systems.
We gave a live demo of our system at CVPR 2016, followed by a CVPR 2017 paper in which objects are modeled as 3D bounding boxes with semantic labels attached.
In the follow-up ECCV 2018 paper, the system was further improved to model fine-grained object shapes as polygon meshes.
If you find VISMA or this repo useful and use them in your work, please cite the following papers:
@inproceedings{feiS18,
  title = {Visual-Inertial Object Detection and Mapping},
  author = {Fei, X. and Soatto, S.},
  booktitle = {Proceedings of the European Conference on Computer Vision},
  year = {2018}
}
@inproceedings{dongFS17,
  author = {Dong, J. and Fei, X. and Soatto, S.},
  title = {Visual Inertial Semantic Scene Representation for 3D Object Detection},
  booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
  year = {2017}
}
Data is available on Dropbox here.
Dependencies
- OpenCV: image I/O and processing. The easiest way is to install OpenCV via your favorite package manager.
- Eigen: linear algebra and matrix manipulation. Install via package manager or build from source.
- Protobuf: utilities for protocol buffers. Install via package manager.
- abseil-cpp: utilities from Google. No need to build this manually, since the repo is add_subdirectory-ed into the main build script.
- jsoncpp: I/O for JSON files. No need to build this manually, since the repo is add_subdirectory-ed into the main build script.
Once all the requirements are met, make a build directory, enter it, and run cmake .. followed by make.
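For instance, from the repository root:
mkdir build
cd build
cmake ..
make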
To build the evaluation code, you need the following extra dependencies:
- Open3D: point cloud manipulation and visualization.
- libigl: mesh I/O and geometry processing. This is a header-only library; clone the repo into the thirdparty directory as libigl.
We provide a version of Open3D in the thirdparty directory. First, go to thirdparty/Open3D and follow the instructions at http://www.open3d.org/docs/getting_started.html#ubuntu to build, i.e.:
util/scripts/install-deps-ubuntu.sh
mkdir build
cd build
cmake .. -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=../
make -j
Then, set build_evaluation to True in CMakeLists.txt and build.
Raw data (RGB video and inertial measurements with time stamps) are stored in rosbags. You can run your favorite visual-inertial or visual SLAM to get camera poses.
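If you want to pull the raw images and inertial measurements out of the bags yourself, a minimal Python sketch using the ROS rosbag API could look like the following. The topic names are placeholders (not taken from the dataset); run rosbag info on a sequence to see the actual topics.
import rosbag
IMAGE_TOPIC = '/camera/image_raw'  # placeholder topic name
IMU_TOPIC = '/imu'                 # placeholder topic name
with rosbag.Bag('sequence.bag') as bag:
    for topic, msg, t in bag.read_messages(topics=[IMAGE_TOPIC, IMU_TOPIC]):
        if topic == IMAGE_TOPIC:
            # msg is a sensor_msgs/Image; convert with cv_bridge if needed
            print('image at', t.to_sec())
        else:
            # msg is a sensor_msgs/Imu with angular_velocity and linear_acceleration
            print('imu at', t.to_sec())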
In addition to the raw data, we also provide the following preprocessed data:
- RGB images extracted from the rosbags (XXX.png)
- the associated camera pose at each time instant from our SLAM system (dataset)
- edge maps (XXX.edge)
- object bounding boxes (XXX.bbox)
Except for the RGB images, all the other data are encoded according to protocols/vlslam.proto to avoid custom I/O. Though the data loading example is written in C++, it should not be hard to parse the data in other languages, say, Python.
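As a rough sketch, once you have generated the Python bindings vlslam_pb2.py (see the Python example section below), reading one of these files could look like the following. The message name BoundingBoxList and the file name are placeholders; check protocols/vlslam.proto for the actual message behind each file type.
import vlslam_pb2  # generated by protoc from protocols/vlslam.proto
bboxes = vlslam_pb2.BoundingBoxList()  # hypothetical message name -- look it up in vlslam.proto
with open('XXX.bbox', 'rb') as f:
    bboxes.ParseFromString(f.read())  # assumes binary protobuf serialization
print(bboxes)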
We ran ElasticFusion on RGB-D data collected by a Kinect to obtain a (pseudo) ground-truth reconstruction against which our semantic reconstruction is compared.
The folder RGBD contains all the data needed to evaluate the semantic reconstruction of each sequence. For instance, clutter1 contains the data to evaluate results on the clutter sequence: clutter/test.klg.ply is the point cloud reconstruction of the scene from ElasticFusion, and clutter/fragments contains the following items:
- objects.json contains a complete list of objects in the scene. Each object is named XXX_N.ply, where XXX is the object name in the CAD database and N is the instance index (there might be multiple identical objects in the scene).
- For each object listed in objects.json, we provide its point cloud segmented out of the RGB-D reconstruction. This is used to find the alignment between the semantic reconstruction and the RGB-D reconstruction via ICP, as described in our paper.
- alignment.json contains the ground-truth object poses in the scene. The poses are found by the orientation-constrained ICP described in our paper.
- augmented_scene.ply contains the RGB-D point cloud with points sampled from the CAD models aligned to the scene. This is a point cloud file, since the RGB-D reconstruction is a point cloud, though the CAD models are provided as meshes.
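To get a feel for these files, a minimal Python sketch that loads the ground-truth scene and one segmented object with Open3D could look like this. The reader lives at open3d.io.read_point_cloud in recent Open3D releases (open3d.read_point_cloud in older ones), the object file name is only an example, and the exact schema of objects.json is not spelled out here.
import json
import open3d as o3d
scene = o3d.io.read_point_cloud('clutter/test.klg.ply')  # ElasticFusion reconstruction
with open('clutter/fragments/objects.json') as f:
    objects = json.load(f)  # list of XXX_N.ply object files; schema may differ
obj = o3d.io.read_point_cloud('clutter/fragments/chair_1.ply')  # example name; use entries from objects.json
o3d.visualization.draw_geometries([scene, obj])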
For example usage of the data loader, see example/example_load.cpp and run example_load DATASET_DIRECTORY in the example sub-directory. The input RGB images, pre-computed edge maps and object proposals, and camera poses from SLAM will be loaded. To load the sparse reconstruction, 2D tracklets of feature points, and other information from SLAM, see the protocol buffer file protocols/vlslam.proto and modify the dataset loader accordingly.
VISMA is designed for, but not limited to, developing visual-inertial semantic SLAM. In case one wants to use it to train deep neural networks, e.g., for unsupervised depth prediction from monocular videos, we provide an example Python script to load and process the data.
First, go to protocols and run
protoc vlslam.proto --python_out=../scripts
which generates vlslam_pb2.py in the scripts folder.
Then go to the project root directory and run
python scripts/example_load.py --dataroot YOUR_DIRECTORY_OF_PREPROCESSED_DATA
For more command line options, see the script.
- Complete Python script for loading bounding boxes, sparse features, etc.
- Finalize example code for evaluation.