
Releases: ARISE-Initiative/robosuite

v1.5 - Diverse robot embodiments and composite controllers

29 Oct 19:15
1a8701b

Robosuite v1.5 Release Notes

Highlights

The 1.5 release of Robosuite introduces significant advancements that extend the flexibility and realism of robotic simulation. Key highlights include support for diverse robot embodiments (e.g., humanoids), custom robot compositions, composite controllers (such as whole-body controllers), expanded teleoperation devices, and photorealistic rendering capabilities.

New Features

  • Diverse Robot Embodiments: Support for complex robots, including humanoids, allowing exploration of advanced manipulation and mobility tasks. Please see robosuite_models for extra robosuite-compatible robot models.
  • Custom Robot Composition: Users can now build custom robots from modular components, offering extensive configuration options.
  • Composite Controllers: New controller abstraction that includes whole-body controllers and the ability to control robots composed of multiple body parts, arms, and grippers (see the sketch after this list).
  • Additional Teleoperation Devices: Expanded compatibility with teleoperation tools like drag-and-drop in the MuJoCo viewer and Apple Vision Pro.
  • Photorealistic Rendering: Integration of NVIDIA Isaac Sim for enhanced, real-time photorealistic visuals, bringing simulations closer to real-world fidelity.
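
A minimal sketch of constructing an environment with one of the new composite controllers, assuming the v1.5 load_composite_controller_config helper and the "BASIC" controller type (verify both names against your installed version):

```python
import robosuite as suite
from robosuite.controllers import load_composite_controller_config

# "BASIC" composes per-part controllers (arms, grippers, ...) for the robot
controller_config = load_composite_controller_config(controller="BASIC")

env = suite.make(
    env_name="Lift",
    robots="Panda",
    controller_configs=controller_config,
)
obs = env.reset()
```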

Improvements

  • Updated Documentation: New tutorials and expanded documentation on utilizing advanced controllers, teleoperation, and rendering options.
  • Simulation speed improvement: by default, we set the lite_physics flag to True to skip redundant calls to env.sim.step() (see the sketch below).
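
A quick sketch of opting out of this behavior, assuming lite_physics is accepted as a keyword argument to suite.make (flag name taken from this note; check it against your version):

```python
import robosuite as suite

# lite_physics defaults to True in v1.5; pass False to restore the old stepping
env = suite.make("Lift", robots="Panda", lite_physics=False)
```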

Migration

  • Composite controller refactoring: please see the example usage in the updated documentation.

Contributor Spotlight

We would like to introduce the newest member of our robosuite core team, who has contributed significantly to this release!

Kevin Lin @kevin-thankyou-lin

v1.4 - New Mujoco Backend

30 Nov 07:48
fbee584

robosuite 1.4.0 Release Notes

  • Highlights
  • New Features
  • Improvements
  • Critical Bug Fixes
  • Other Bug Fixes

Highlights

This release of robosuite refactors our backend to leverage DeepMind's new mujoco bindings. Below, we discuss the key details of this refactoring:

Installation

Now, installation has become much simpler, with mujoco being directly installed on Linux or macOS via pip install mujoco. Importing is now done via import mujoco instead of import mujoco_py.

Rendering

The new DeepMind mujoco bindings do not ship with an onscreen renderer. As a result, we've implemented an OpenCV renderer, which provides most of the core functionality of the original mujoco renderer but has a few limitations (most significantly, no glfw keyboard callbacks and no ability to move the free camera).
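
A minimal sketch of onscreen rendering through the standard env API (the OpenCV renderer is selected internally; the env name and robot here are just examples):

```python
import numpy as np
import robosuite as suite

env = suite.make("Lift", robots="Panda", has_renderer=True)
env.reset()
for _ in range(100):
    low, high = env.action_spec
    env.step(np.random.uniform(low, high))  # random actions, demo only
    env.render()  # draws the current frame via the OpenCV-based renderer
env.close()
```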

Improvements

The following briefly describes other changes that improve on the pre-existing structure. This is not an exhaustive list, but a highlighted list of changes.

  • Standardize end-effector frame inference (#25). Now, all end-effector frames are correctly inferred from raw robot XMLs and take into account arbitrary relative orientations between robot arm link frames and gripper link frames.

  • Improved robot textures (#27). With added support from DeepMind's mujoco bindings for obj texture files, all robots are now natively rendered with more accurate texture maps.

  • Revamped macros (#30). Macros now reference a single macro file that can be arbitrarily specified by the user.

  • Improved method for specifying GPU ID (#29). The new logic is as follows (see the sketch after this list):

    1. If render_device_gpu_id=-1 and neither MUJOCO_EGL_DEVICE_ID nor CUDA_VISIBLE_DEVICES is set, we either choose the first available device (usually 0) if macros.MUJOCO_GPU_RENDERING is True, or otherwise use the CPU;
    2. If CUDA_VISIBLE_DEVICES or MUJOCO_EGL_DEVICE_ID is set, it takes precedence over the programmatically defined GPU device id;
    3. If CUDA_VISIBLE_DEVICES and MUJOCO_EGL_DEVICE_ID are both set, we use MUJOCO_EGL_DEVICE_ID and ensure it is listed in CUDA_VISIBLE_DEVICES.
  • robosuite docs updated

  • Add new papers
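
Returning to the GPU-selection logic above, a hedged sketch of the environment-variable path (variable names from this note; the values shown are illustrative):

```python
import os

# Pin rendering to EGL device 1; per the rules above this overrides any
# programmatically supplied render_device_gpu_id, and the chosen id must
# also appear in CUDA_VISIBLE_DEVICES.
os.environ["CUDA_VISIBLE_DEVICES"] = "0,1"
os.environ["MUJOCO_EGL_DEVICE_ID"] = "1"

import robosuite as suite  # import after setting the variables

env = suite.make("Lift", robots="Panda", has_offscreen_renderer=True)
```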

Critical Bug Fixes

  • Fix Sawyer IK instability bug (#25)

Other Bug Fixes

  • Fix iGibson renderer bug (#21)

Contributor Spotlight

We would like to introduce the newest members of our robosuite core team, all of whom have contributed significantly to this release!
@awesome-aj0123
@snasiriany
@zhuyifengzju

v1.3 - Renderers and Robot Perception Functionalities

19 Oct 17:57
c7d0b51

robosuite 1.3.0 Release Notes

  • Highlights
  • New Features
  • Improvements
  • Critical Bug Fixes
  • Other Bug Fixes

Highlights

This release of robosuite brings powerful rendering functionalities, including new renderers and multiple vision modalities, in addition to some general-purpose camera utilities. Below, we discuss the key details of these new features:

Renderers

In addition to the native Mujoco renderer, we present two new renderers, NVISII and iGibson, and introduce a standardized rendering interface API to enable easy swapping of renderers.

NVISII is a high-fidelity ray-tracing renderer originally developed by NVIDIA, and adapted for plug-and-play usage in robosuite. It is primarily used for training perception models and visualizing results in high quality. It can run at up to ~0.5 fps using a GTX 1080Ti GPU. Note that NVISII must be installed (pip install nvisii) in order to use this renderer.

iGibson is a much faster renderer that additionally supports physics-based rendering (PBR) and direct rendering to pytorch tensors. While not as high-fidelity as NVISII, it is incredibly fast and can run at up to ~1500 fps using a GTX 1080Ti GPU. Note that iGibson must be installed (pip install igibson) in order to use this renderer.

With the addition of these new renderers, we also introduce a standardized renderer interface for easy usage and customization of the various renderers. During each environment step, the renderer updates its internal state by calling update() and renders by calling render(...). The resulting visual observations can be polled by calling get_pixel_obs() or by calling other methods specific to individual renderers. We provide a demo script for testing each new renderer, and our docs also provide additional information on specific renderer details and installation procedures.
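
A hedged sketch of the interface described above, assuming renderers are selected via a renderer argument to suite.make and that the active renderer is reachable through the env (both are assumptions; see the demo script and docs for the exact wiring):

```python
import numpy as np
import robosuite as suite

env = suite.make("Lift", robots="Panda", renderer="igibson")  # or "nvisii" / "mujoco"
env.reset()
for _ in range(10):
    low, high = env.action_spec
    env.step(np.random.uniform(low, high))  # renderer.update() runs internally
    env.render()                            # renderer.render(...) draws the frame

frame = env.viewer.get_pixel_obs()  # poll the latest visual observation
```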

Vision Modalities

In addition to new renderers, we also provide broad support for multiple vision modalities across all (Mujoco, NVISII, iGibson) renderers:

  • RGB: Standard 3-channel color frames with values in range [0, 255]. This is set during environment construction with the use_camera_obs argument.
  • Depth: 1-channel frame with normalized values in range [0, 1]. This is set during environment construction with the camera_depths argument.
  • Segmentation: 1-channel frames with pixel values corresponding to integer IDs for various objects. Segmentation can occur by class, instance, or geom, and is set during environment construction with the camera_segmentations argument.

In addition to the above modalities, the following modalities are supported by a subset of renderers:

  • Surface Normals: [NVISII, iGibson] 3-channel (x,y,z) normalized direction vectors.
  • Texture Coordinates: [NVISII] 3-channel (x,y,z) coordinate texture mappings for each element.
  • Texture Positioning: [NVISII, iGibson] 3-channel (x,y,z) global coordinates of each pixel.

Specific modalities can be set during environment and renderer construction. We provide a demo script for testing the different modalities supported by NVISII and a demo script for testing the different modalities supported by iGibson.
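
A sketch of requesting these modalities at construction time (argument names as given above; the segmentation level string is an assumption based on the class/instance/geom options mentioned):

```python
import robosuite as suite

env = suite.make(
    "Lift",
    robots="Panda",
    use_camera_obs=True,              # RGB frames
    camera_names="agentview",
    camera_depths=True,               # 1-channel normalized depth
    camera_segmentations="instance",  # or segment by class / geom
)
obs = env.reset()
rgb = obs["agentview_image"]    # conventional robosuite observation keys
depth = obs["agentview_depth"]
```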

Camera Utilities

We provide a set of general-purpose camera utilities intended to enable easy manipulation of environment cameras. Of note, we include transform utilities for mapping between pixel, camera, and world frames, and a CameraMover class for dynamically moving a camera during simulation. The latter can serve many purposes; for example, the DemoPlaybackCameraMover subclass enables smooth visualization during demonstration playback.
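
A hedged sketch of CameraMover (assuming it lives in robosuite.utils.camera_utils and exposes move_camera / rotate_camera; check the utilities module for exact signatures):

```python
import robosuite as suite
from robosuite.utils.camera_utils import CameraMover

env = suite.make("Lift", robots="Panda", has_renderer=True)
env.reset()

mover = CameraMover(env, camera="agentview")
mover.move_camera(direction=[0.0, 0.0, 1.0], scale=0.05)         # nudge the camera up
mover.rotate_camera(point=None, axis=[0.0, 0.0, 1.0], angle=5.0) # yaw by 5 degrees
env.render()
```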

Improvements

The following briefly describes other changes that improve on the pre-existing structure. This is not an exhaustive list, but a highlighted list of changes.

  • Standardize EEF frames (#204). Now, all grippers have identical conventions for plug-and-play usage across types.

  • Add OSC_POSITION control option for spacemouse (#209).

  • Improve model class hierarchy for robots. Now, robots own a subset of models (gripper(s), mount(s), etc.), allowing easy external access to the robot's internal model hierarchy.

  • robosuite docs updated

  • Add new papers

Critical Bug Fixes

  • Fix OSC global orientation limits (#228)

Other Bug Fixes

  • Fix default OSC orientation control (valid default rotation matrix) (#232)

  • Fix Jaco self-collisions (#235)

  • Fix joint velocity controller clipping and tune default kp (#236)


Contributor Spotlight

A big thank you to the following community members for spearheading the renderer PRs for this release!
@awesome-aj0123
@divyanshj16
@fxia22

v1.2 - Observable Sensor Models and Dynamics Randomization

18 Feb 06:56
e439e96

robosuite 1.2.0 Release Notes

  • Highlights
  • New Features
  • Improvements
  • Critical Bug Fixes
  • Other Bug Fixes

Highlights

This release of robosuite tackles a major challenge of using simulators: real-world transfer! (Sim2Real)

We present two features to significantly improve sim2real transferability -- realistic sensor modeling (observables) and control over physics modeling parameters (dynamics randomization).

Observables

This standardizes and modularizes how observations are computed and gathered within a given env. Now, all observations received from the env.step() call can be modified programmatically, in either a deterministic or stochastic way. Sensor realism has been increased with added support for individual sensor sampling rates, corruption, delay, and filtering. The out-of-the-box behavior (obs dict structure; no corruption, delay, or filtering by default) has been preserved for backwards compatibility.

Each Observable owns its own _sensor, _corrupter, _delayer, and _filter functions, which are used to process new data computed during its update() call. update() is called after every simulation timestep, NOT every policy step! (Note, however, that a new value is only computed once per sampling period, NOT at every update() call.) Their functionality is briefly described below:

  • _sensor: Arbitrary function that takes in an observation cache and computes the "raw" (potentially ground truth) value for the given observable. It can potentially leverage pre-computed values from the observation cache to compute its output. The @sensor decorator is provided to denote this type of function, and guarantees a modality for this sensor as well.
  • _corrupter: Arbitrary function that takes in the output from _sensor and outputs the corrupted data.
  • _delayer: Arbitrary function that takes no arguments and outputs a float time value (in seconds), denoting how much delay should be applied to the next sampling cycle.
  • _filter: Arbitrary function that takes in the output of _corrupter and outputs the filtered data.

All of the above can either be (re-)defined at initialization or during runtime. Utility functions have been provided in the base.py environment module to easily interact with all observables owned by the environment.

Some standard corrupter and delayer function generators are provided ([deterministic / uniform / gaussian] [corruption / delay]), including dummy no-ops for standard functions. All of this can be found in observables.py, and has been heavily documented.
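
As a concrete sketch, here is how the pieces above might be combined (assumes the Observable class, @sensor decorator, and create_gaussian_noise_corrupter generator from observables.py; exact names should be checked against the module):

```python
from robosuite.utils.observables import (
    Observable,
    sensor,
    create_gaussian_noise_corrupter,
)

# A raw sensor: leverages a pre-computed entry in the observation cache
@sensor(modality="object")
def cube_height(obs_cache):
    return obs_cache["cube_pos"][2] if "cube_pos" in obs_cache else 0.0

observable = Observable(
    name="cube_height",
    sensor=cube_height,
    corrupter=create_gaussian_noise_corrupter(mean=0.0, std=0.01),
    sampling_rate=20,  # Hz; a new value is computed once per sampling period
)
```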

An example script demonstrating the new functionality can be found in demo_sensor_corruption.py.

Dynamics Randomization

Physical parameters governing the underlying physics model can now be modified in real time via the DynamicsModder class in mjmod.py. This modder allows mid-sim randomization of the following supported properties, sorted by element group (for more information, please see the Mujoco XML Reference):

Opt (Global) Parameters

  • density: Density of the medium (i.e.: air)
  • viscosity: Viscosity of the medium (i.e.: air)

Body Parameters

  • position: (x, y, z) Position of the body relative to its parent body
  • quaternion: (qw, qx, qy, qz) Quaternion of the body relative to its parent body
  • inertia: (ixx, iyy, izz) diagonal components of the inertia matrix associated with this body
  • mass: mass of the body

Geom Parameters

  • friction: (sliding, torsional, rolling) friction values for this geom
  • solref: (timeconst, dampratio) contact solver values for this geom
  • solimp: (dmin, dmax, width, midpoint, power) contact solver impedance values for this geom

Joint Parameters

  • stiffness: Stiffness for this joint
  • frictionloss: Friction loss associated with this joint
  • damping: Damping value for this joint
  • armature: Gear inertia for this joint

The new DynamicsModder follows the same basic API as the other Modder classes, and allows randomization to be enabled per parameter and per group. Apart from randomization, this modder can also be used to selectively modify values at runtime. Detailed information can be found on our docs page, along with an informative example script.
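
A hedged sketch of runtime modification with DynamicsModder (the mod / update method names follow the docs example mentioned above; the "cube_main" body name is hypothetical):

```python
import robosuite as suite
from robosuite.utils.mjmod import DynamicsModder

env = suite.make("Lift", robots="Panda")
env.reset()

modder = DynamicsModder(sim=env.sim)
modder.mod("cube_main", "mass", 5.0)                   # heavier cube
modder.mod("cube_main", "friction", [2.0, 0.2, 0.04])  # stickier contacts
modder.update()  # propagate the new values into the active simulation
```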

Improvements

The following briefly describes other changes that improve on the pre-existing structure. This is not an exhaustive list, but a highlighted list of changes.

  • robosuite docs have been completely overhauled! Hopefully no more broken links or outdated APIs (#159)

  • Tuned parallel jaw grippers (PandaGripper and RethinkGripper) for improved grasp stability and significantly reduced sliding

  • Tuned default gains for teleoperation when using keyboard device

  • Added small frictionloss, damping, and armature values to all robot joints by default to improve stability and reduce no-op drift over time

  • Simulation now uses realistic values for air density and viscosity

  • tune_camera.py is more flexible: can now take string names as inputs and also modify any camera active in the simulation (e.g.: eye_in_hand!)

  • Add macros for automatically concatenating image frames from different cameras

  • Add new papers

  • Improve documentation semantics

Critical Bug Fixes

  • Fix interpolator dimension bug (#181)

  • Fixed xml object naming bug (#150)

  • Fixed deterministic action playback (with a caveat; a test script is provided to verify this) (#178)

Other Bug Fixes

  • Fix contact geom bug with Jaco, and add a test script to check contact geoms for all robots (#180)

  • Fix OSC position default orientation and when to update orientation goals

  • Fix OSC orientation to be consistent with global coordinate axes frame of reference

  • Fix spacemouse import bug (#186)


Contributor Spotlight

A big thank you to the following community members for contributing PRs for this release!
@hermanjakobsen
@zhuyifengzju

v1.1 - Improved Interfaces and Bug Fixes

18 Dec 06:20
e0982ca

robosuite 1.1.0 Release Notes

  • Highlights
  • New Features
  • Improvements
  • Critical Bug Fixes
  • Other Bug Fixes

Highlights

While most surface-level functionality hasn't changed, the underlying infrastructure has been heavily reworked to reduce redundancy, improve standardization and ease-of-usage, and future-proof against expected expansions. Specifically, the following standards were pursued:

  • Pretty much everything should have a name (no name = no reference in sim)
  • All models should have a standardized interface (MujocoModel)
  • Any manipulation-specific properties or methods should be abstracted away to a subclass, to accommodate novel robotic domains that might be added in the future.
  • Associated attributes should be kept to a single object reference wherever possible, to prevent silent errors caused by partially modified objects. For example, instead of having self.object and self.object_name, just have self.object, since it already includes its own name reference in self.object.name.

New Features

This is not an exhaustive list, but includes the key features / changes in this release that are most relevant to the common user and should greatly streamline environment prototyping and debugging.

Standardized Model Class Hierarchy

Now, all (robot, gripper, object) models inherit from the MujocoModel class, which defines many useful properties and methods, including references to the model joints, contact geoms, important sites, etc. This allows much more standardized usage of these models when designing environments.
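
A sketch of querying this standardized interface (the Panda robot model import path and the joints / contact_geoms / important_sites properties are assumptions consistent with the description above):

```python
from robosuite.models.robots import Panda

robot_model = Panda()
print(robot_model.joints)           # ordered joint names
print(robot_model.contact_geoms)    # collision geom names
print(robot_model.important_sites)  # named reference sites
```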

Modularized Environment Class Hierarchy

We do not expect robosuite to remain solely manipulation-based. Therefore, all environment properties and methods common to manipulation-based domains were ported to ManipulationEnv, allowing future robot task domains to be added with little reworking. Similarly, properties / methods common to single-arm or two-arm environments were ported to SingleArmEnv and TwoArmEnv, respectively. This both (a) removes much redundant code between top-level env classes, and (b) frees users to focus exclusively on the environment prototyping unique to their use case, without having to duplicate much boilerplate code. So, for example, Lift now has a class hierarchy of MujocoEnv --> RobotEnv --> ManipulationEnv --> SingleArmEnv --> Lift. Note that similar changes were made to the Robot and RobotModel base classes.

Standardized and Streamlined Object Classes

All object classes are now derived from MujocoObject, which is itself a subclass of MujocoModel. This standardizes the interface across all object source modalities (Generated vs. XML-based), and provides the user with an expected set of properties that can be leveraged when prototyping custom environments. Additionally, complex, procedural object generation has been added with the CompositeObject class, of which HammerObject and PotWithHandlesObject are now subclasses (as examples of how to design custom composite objects).

Greater Procedural Object Generation Support

CompositeObject and CompositeBodyObject classes have now been added. A CompositeObject is composed of multiple geoms, and a CompositeBodyObject is composed of multiple objects (bodies). Together, this allows for complex, procedural generation of arbitrary object shapes with potentially dynamic joint interactions. The HammerObject and PotWithHandlesObject are examples of the CompositeObject class, and HingedBoxObject is an example of the CompositeBodyObject class.
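
A quick sketch of instantiating the bundled examples named above (import paths assumed; constructor kwargs beyond name are left at their defaults here):

```python
from robosuite.models.objects import (
    HammerObject,
    PotWithHandlesObject,
    HingedBoxObject,
)

hammer = HammerObject(name="hammer")    # CompositeObject: many geoms, one body
pot = PotWithHandlesObject(name="pot")  # CompositeObject with handle geoms
box = HingedBoxObject(name="box")       # CompositeBodyObject: bodies + hinge joint
```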

Standardized Geom Groups

All collision geoms now belong to group 0, while visual geoms belong to group 1. This means that methods can automatically check a geom's type by polling its group attribute, either from its XML element or during sim. Moreover, all collision geoms are assigned solid rgba colors based on their semantic role (e.g.: robot vs. gripper vs. arena vs. objects). If rendering onscreen, you can easily toggle visualizing the visual and collision geoms by pressing 1 or 0, respectively. This can be useful for debugging environments and making sure collision bodies are formed / interacting as expected.

High-Utility Methods for Environment Prototyping

Because of this improved structure, many methods can now take advantage of this standardization. Some especially relevant methods are discussed briefly below:

  • env.get_contacts(model) (any env): This method will return the set of all geoms currently in contact with the inputted model. This is useful for debugging environments, or checking to see if certain conditions are met when designing rewards / interactions.

  • env._check_grasp(gripper, object_geoms) (only manipulation envs): This method will return True if the inputted gripper is grasping the object specified by object_geoms, which could be a MujocoModel or simply a list of geoms that define the object. This makes it very easy to design environments that depend on certain grasping requirements being met.

  • env._gripper_to_target(gripper, target, ...) and env._visualize_gripper_to_target(gripper, target, ...) (only manipulation envs): Methods to help streamline getting relevant distance info between a gripper and a target. The target can be a MujocoModel or any specific element (body, geom, site) name. The former calculates the distance, while the latter sets the gripper eef site sphere's color to be proportional to the distance to the target. Both are useful for environment prototyping and debugging.

  • model.set_sites_visibility(sim, visible) (any MujocoModel): This method will set all the sites belonging to model in the current sim to either be visible or not depending on the visible arg. This is useful for quick debugging or teleoperation, to aid the user in visualizing specific points of reference in sim.
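
A sketch of these helpers inside a custom manipulation env's reward function (assumes a self.cube object model, as in the bundled Lift task; keyword names follow the methods listed above):

```python
import numpy as np

def reward(self, action=None):
    # True if the gripper's fingers are grasping the cube's geoms
    grasping = self._check_grasp(
        gripper=self.robots[0].gripper,
        object_geoms=self.cube,
    )

    # distance-based shaping toward the cube body
    dist = self._gripper_to_target(
        gripper=self.robots[0].gripper,
        target=self.cube.root_body,
        target_type="body",
        return_distance=True,
    )
    return float(grasping) + (1.0 - np.tanh(10.0 * dist))
```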

Improvements

The following briefly describes other changes that improve on the pre-existing structure. Again, this is not an exhaustive list, but a highlighted list of changes.

  • MountModel class added; pedestals used by robots are now assigned to this class and added to a RobotModel in a similar fashion to how the GripperModel is added. This allows abstraction of the robot model from its base mount model.

  • Abstracted site visualizations to a wrapper (VisualizationWrapper). This wrapper provides fine-grained control over sites being visualized within the environment: can specify whether to visualize site groups belonging to the wrapped env. This is controlled via keywords provided by a given environment. For example, for ManipulationEnv classes, this includes gripper, robot, and env keys, each of which control its associated site visualization.

  • Added openGL and openCV image convention option as a macro

  • Added macros.py in robosuite.utils and single file to store all macros for our repo. This includes numba macros and now includes instance randomization and image convention macros. Users can modify these macros mid-script by importing the macros module and modifying the module-level vars directly.

  • Placement samplers no longer belong to the Task class, but are separate. This is more intuitive, and allows for more modularity when designing future Task subclasses. Moreover, the placement sampler classes were refactored for more intuitive usage.

  • Refactor all top-level environments in a standardized fashion

  • Add functionality to modify cameras from Arena class; tuned cameras for Door, TwoArmHandover/PegInHole tasks

  • Renamed / modified a bunch of stuff so it's more semantically accurate / intuitive

  • Tuned Wipe environment with alternate compressed object observation space (this is enabled by default) and default environment parameters, such as table height / size and wipe marker sampling locations.

  • Update GymWrapper class to be more robust to general usage -- it now automatically flattens image observations so that it is Gym-compatible, and also extends from the Gym Env class directly (see the sketch after this list).

  • Add GPU device arg in environments for setups with multiple GPUs

  • Add new papers (#118)

  • Improve documentation
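
A minimal sketch of the updated GymWrapper (import path as in robosuite.wrappers; the Gym-style step/reset API follows from it extending Gym's Env class):

```python
import robosuite as suite
from robosuite.wrappers import GymWrapper

env = GymWrapper(suite.make("Lift", robots="Panda", use_camera_obs=False))
obs = env.reset()
obs, reward, done, info = env.step(env.action_space.sample())
```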

Critical Bug Fixes

  • Fixed grasping bug where a grasp is incorrectly inferred if a robot's two fingers are touching an object. This resulted in incorrect rewards being received which could negatively impact reward-based training. Grasps are now inferred correctly so robot cannot "cheat" a grasping-based reward.

  • Fixed singular value problem with OSC controller (#136). Control loop computations now utilize numpy's pinv instead of our own implementation.

  • Fix absolute control and control limits setting for OSC controller.

  • Fix model XML saving method. We now use env.sim.model.get_model() instead of env.model.get_model() so that we don't save a stale version of the current simulation snapshot.

Other Bug Fixes

  • Fix wiping gripper mass (too high before, leading to bad force-torque sensor readings)

  • Fix agentview cameras for Door, TwoArmHandover, and TwoArmPegInHole environments (#123)

  • Fix OSC controller bug that doesn't automatically re-update initial goal orientation upon reset

v1.0 major release

28 Sep 02:28
e4e4f0d

The first major version of robosuite. For more information, please check out https://robosuite.ai

v0.3.0 release with MuJoCo 2.0 support

28 Sep 02:24
5178d88

v0.2.0 release with MuJoCo 1.5

09 Dec 06:32
Pre-release

v0.2.0 release with MuJoCo 1.5 support

Initial release

27 Oct 00:37
4855b83

Initial release of Surreal Robotics Suite