manimo is a modular interface for robotic manipulation that:
- supports a suite of sensors (cameras, force sensors, touch sensors, audio sensors) and actuators (robotic arms such as the Franka Panda, plus end-effectors).
- allows users to compose different sensor modalities and actuators into new, MuJoCo-compatible manipulation environments.
- allows users to collect demonstrations from the created environments through (a) teleoperation (VR, SpaceMouse) or (b) manual control.
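The composition idea above can be sketched as follows. This is an illustrative sketch only; the class and method names are hypothetical, not manimo's actual API:

```python
# Illustrative sketch of composing sensors and actuators into one
# environment (hypothetical names; see the manimo source for the real API).
from typing import Any, Dict, List


class Sensor:
    """Base class: each sensor contributes one keyed observation."""
    name = "sensor"

    def read(self) -> Any:
        raise NotImplementedError


class Camera(Sensor):
    name = "camera"

    def read(self) -> List[List[int]]:
        return [[0] * 4 for _ in range(4)]  # placeholder image


class ForceSensor(Sensor):
    name = "force"

    def read(self) -> List[float]:
        return [0.0, 0.0, 0.0]  # placeholder force reading


class Arm:
    """Actuator: accepts a joint-space action."""

    def apply(self, action: List[float]) -> None:
        self._last_action = action


class ManipEnv:
    """Composes arbitrary sensors and actuators into one environment."""

    def __init__(self, sensors: List[Sensor], actuators: List[Arm]):
        self.sensors = sensors
        self.actuators = actuators

    def step(self, action: List[float]) -> Dict[str, Any]:
        for actuator in self.actuators:
            actuator.apply(action)
        # Observations are keyed by sensor name, so adding a modality
        # is just appending another Sensor to the list.
        return {s.name: s.read() for s in self.sensors}


env = ManipEnv(sensors=[Camera(), ForceSensor()], actuators=[Arm()])
obs = env.step([0.0] * 7)
print(sorted(obs))  # -> ['camera', 'force']
```

Because observations are keyed by sensor name, swapping in a new modality (e.g. audio) only requires adding another `Sensor` to the list rather than writing a new environment.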
- Install `mamba`
- Clone the repo from https://github.com/AGI-Labs/manimo
- Set `MANIMO_PATH` as an environment variable in the `.bashrc` file: `export MANIMO_PATH={FOLDER_PATH_TO_MANIMO}/manimo/manimo`
- Run the setup script on the client computer. Note that the `mamba` setup does not work; always use `miniconda`: `source setup_manimo_env_client.sh`
- Run the setup script on the server computer. Note that the `mamba` setup does not work; always use `miniconda`: `source setup_manimo_env_server.sh`
To verify that the installation works, start the polymetis server on the NUC, then run the following script from the `scripts` folder: `python get_current_position.py`
- Install the `oculus_reader` VR client by following the instructions here.
- Enable developer mode on the Oculus Quest by following the instructions at https://developer.oculus.com/documentation/native/android/mobile-device-setup/.
- Install Android ADB tools to communicate with the headset: `sudo apt install android-tools-adb`
- (Optional) Set up Wi-Fi access to the device using the instructions provided at https://developer.oculus.com/documentation/native/android/ts-adb/.
- Download the ZED SDK that matches the CUDA driver version on your system from https://www.stereolabs.com/developers/release.
- supports on-board calibration of different sensors.
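manimo's calibration routines are not reproduced here. As a sketch of what sensor calibration typically involves, the Kabsch algorithm below recovers the rigid transform between a sensor frame and the robot frame from matched 3-D points; the function name and setup are illustrative, not manimo's code:

```python
# Kabsch algorithm: recover rotation R and translation t mapping points
# observed in a sensor frame onto the same points in the robot frame.
# Illustrative sketch, not manimo's calibration implementation.
import numpy as np


def kabsch(sensor_pts: np.ndarray, robot_pts: np.ndarray):
    """Return (R, t) minimizing ||R @ p_sensor + t - p_robot|| over matched points."""
    cs = sensor_pts.mean(axis=0)
    cr = robot_pts.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (sensor_pts - cs).T @ (robot_pts - cr)
    U, _, Vt = np.linalg.svd(H)
    # Guard against reflections so R is a proper rotation (det = +1).
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cr - R @ cs
    return R, t


# Synthetic check: rotate/translate known points, then recover the transform.
rng = np.random.default_rng(0)
pts = rng.standard_normal((10, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.5, -0.2, 1.0])
R_est, t_est = kabsch(pts, pts @ R_true.T + t_true)
assert np.allclose(R_est, R_true) and np.allclose(t_est, t_true)
```

With noise-free correspondences the estimate is exact; with real sensor data the same least-squares fit gives the best rigid alignment of the matched points.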
manimo's design is heavily inspired by `franka_demo`.
- An Unbiased Look at Datasets for Visuo-Motor Pre-Training: Sudeep Dasari, Mohan Kumar Srirama, Unnat Jain, Abhinav Gupta
- PlayFusion: Skill Acquisition via Diffusion from Language-Annotated Play: Lili Chen, Shikhar Bahl, Deepak Pathak
- Hearing Touch: Audio-Visual Pretraining for Contact-Rich Manipulation: Jared Mejia, Victoria Dean, Tess Hellebrekers, Abhinav Gupta
- HRP: Human Affordances for Robotic Pre-Training: Mohan Kumar Srirama, Sudeep Dasari*, Shikhar Bahl*, Abhinav Gupta*