
fixed osc position controller bug [updated] #204

Closed

Conversation

snasiriany
Contributor

New PR, ignore the last identical request. I think the OSC_POSITION controller has incorrect values for the fixed orientation. I believe this PR fixes that issue. I verified that this is a valid rotation matrix -- the determinant is equal to 1, and it was generated using the euler2mat function we have.
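For reference, a minimal sketch of the kind of validity check described above, using `euler2mat` from `robosuite.utils.transform_utils` (the Euler angles here are placeholders, not the values in this PR):

```python
import numpy as np
from robosuite.utils.transform_utils import euler2mat

# Placeholder Euler angles -- not the fixed-orientation values in this PR.
R = euler2mat(np.array([np.pi, 0.0, 0.0]))

# A proper rotation matrix is orthonormal with determinant +1.
assert np.isclose(np.linalg.det(R), 1.0)
assert np.allclose(R @ R.T, np.eye(3), atol=1e-8)
```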

@hermanjakobsen
Contributor

In my experience, the "correct" values depend on the type of gripper used, or more precisely, on the gripper's reference frame. Have you tested your values with all the different types of grippers?

@roberto-martinmartin
Member

roberto-martinmartin commented Apr 9, 2021 via email

@snasiriany
Contributor Author

Thanks @hermanjakobsen and @roberto-martinmartin for the comments! So in that case, will there be a separate PR on unifying so that all frames are aligned? Is there anyone actively working on that task?

@cremebrule
Member

> Thanks @hermanjakobsen and @roberto-martinmartin for the comments! So in that case, will there be a separate PR on unifying so that all frames are aligned? Is there anyone actively working on that task?

Hi @snasiriany, yes I've submitted a PR for this (#213).

@JieFeng-cse

This perfectly solved my problem! cheers

yukezhu pushed a commit that referenced this pull request Apr 27, 2021
This PR standardizes EEF frames, such that all grippers have identical conventions for better plug-and-play behavior. This also fixes some nuanced bugs related to orientations, such as issues brought up in #204 .

Specifically, the convention is now as follows:

EEF Sites are located at the grasping contact point for a given gripper. For parallel jaw / three finger grippers, this is the approximate point where the fingers touch together. For the wiping gripper, this is the bottom surface of the wiper.

For parallel jaw / three finger grippers, the Z-axis aligns with the prior robot arm's wrist joint rotation axis, and the Y-axis always points perpendicular to the gripper actuation axis.

Default initial qpos for robots have also been standardized, such that all robots will start with their gripper oriented in the same way, regardless of the robot / gripper combination.

Breaking Changes: A couple of small but significant changes have been introduced in this PR, and are described below:

The wiping gripper EEF site has been moved from the middle of the wiper to the bottom surface of the wiper.

The important_sites property in manipulator robot models used to include ee, ee_x, ee_y, and ee_z. Since these sites have now been moved to the gripper files, they no longer exist in the robot model class but now are in the gripper model class.
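In practical terms, downstream code that previously looked up these sites on the robot model would now read them from the gripper model. A hedged sketch (the environment and robot choice are illustrative):

```python
import robosuite as suite

env = suite.make("Lift", robots="Panda", has_renderer=False, use_camera_obs=False)

# ee / ee_x / ee_y / ee_z now live in the gripper model's important_sites,
# not in the robot model class.
gripper = env.robots[0].gripper
print(gripper.important_sites)
```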
@yukezhu
Member

yukezhu commented Apr 27, 2021

This has been merged into master in PR #213. Thanks.

@yukezhu yukezhu closed this Apr 27, 2021
rojas70 pushed a commit to learningLogisticsLab/robosuite that referenced this pull request Jul 28, 2021
@yukezhu yukezhu mentioned this pull request Oct 19, 2021
yukezhu added a commit that referenced this pull request Oct 19, 2021
# robosuite 1.3.0 Release Notes
- Highlights
- New Features
- Improvements
- Critical Bug Fixes
- Other Bug Fixes

# Highlights
This release of robosuite brings powerful rendering functionalities including new renderers and multiple vision modalities, in addition to some general-purpose camera utilities. Below, we discuss the key details of these new features:

## Renderers
In addition to the native Mujoco renderer, we present two new renderers, [NVISII](https://github.com/owl-project/NVISII) and [iGibson](http://svl.stanford.edu/igibson/), and introduce a standardized rendering interface API to enable easy swapping of renderers.

NVISII is a high-fidelity ray-tracing renderer originally developed by NVIDIA, and adapted for plug-and-play usage in **robosuite**. It is primarily used for training perception models and visualizing results in high quality. It can run at up to ~0.5 fps using a GTX 1080Ti GPU. Note that NVISII must be installed (`pip install nvisii`) in order to use this renderer.

iGibson is a much faster renderer that additionally supports physics-based rendering (PBR) and direct rendering to pytorch tensors. While not as high-fidelity as NVISII, it is incredibly fast and can run at up to ~1500 fps using a GTX 1080Ti GPU. Note that iGibson must be installed (`pip install igibson`) in order to use this renderer.

With the addition of these new renderers, we also introduce a standardized [renderer interface](https://github.com/ARISE-Initiative/robosuite/blob/master/robosuite/renderers/base.py) for easy usage and customization of the various renderers. During each environment step, the renderer updates its internal state by calling `update()` and renders by calling `render(...)`. The resulting visual observations can be polled by calling `get_pixel_obs()` or other methods specific to individual renderers. We provide a [demo script](https://github.com/ARISE-Initiative/robosuite/blob/master/robosuite/demos/demo_segmentation.py) for testing each new renderer, and our docs also provide [additional information](http://robosuite.ai/docs/modules/renderers.md) on specific renderer details and installation procedures.
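As a rough illustration of that cycle (the environment construction arguments and the attribute used to reach the renderer are assumptions here; the demo scripts show exact usage):

```python
import numpy as np
import robosuite as suite

# The "renderer" argument shown here is an assumption; see the demo
# scripts for how each renderer is actually selected.
env = suite.make("Lift", robots="Panda", renderer="igibson")
env.reset()

low, high = env.action_spec
for _ in range(10):
    # Per the interface above, update() and render() run as part of the
    # step; the rendered frames can then be polled.
    env.step(np.random.uniform(low, high))
    pixels = env.viewer.get_pixel_obs()  # renderer handle path is an assumption
```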

## Vision Modalities
In addition to new renderers, we also provide broad support for multiple vision modalities across all (Mujoco, NVISII, iGibson) renderers:

- **RGB**: Standard 3-channel color frames with values in range `[0, 255]`. This is set during environment construction with the `use_camera_obs` argument.
- **Depth**: 1-channel frame with normalized values in range `[0, 1]`. This is set during environment construction with the `camera_depths` argument.
- **Segmentation**: 1-channel frames with pixel values corresponding to integer IDs for various objects. Segmentation can occur by class, instance, or geom, and is set during environment construction with the `camera_segmentations` argument.
    
In addition to the above modalities, the following modalities are supported by a subset of renderers:

- **Surface Normals**: [NVISII, iGibson] 3-channel (x,y,z) normalized direction vectors.
- **Texture Coordinates**: [NVISII] 3-channel (x,y,z) coordinate texture mappings for each element.
- **Texture Positioning**: [NVISII, iGibson] 3-channel (x,y,z) global coordinates of each pixel.

Specific modalities can be set during environment and renderer construction. We provide a [demo script](https://github.com/ARISE-Initiative/robosuite/blob/master/robosuite/demos/demo_nvisii_modalities.py) for testing the different modalities supported by NVISII and a [demo script](https://github.com/ARISE-Initiative/robosuite/blob/master/robosuite/demos/demo_igibson_modalities.py) for testing the different modalities supported by iGibson.
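A hedged sketch combining the construction arguments named above (the observation key names in the comment are assumptions tied to the camera name):

```python
import robosuite as suite

env = suite.make(
    "Lift",
    robots="Panda",
    has_renderer=False,
    use_camera_obs=True,              # RGB frames
    camera_depths=True,               # normalized [0, 1] depth frames
    camera_segmentations="instance",  # per-pixel instance IDs
)

obs = env.reset()
# Observation keys follow the camera name, e.g. "agentview_image" /
# "agentview_depth" (key names are an assumption here).
print(sorted(obs.keys()))
```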

## Camera Utilities
We provide a set of general-purpose [camera utilities](https://github.com/ARISE-Initiative/robosuite/blob/master/robosuite/utils/camera_utils.py) intended to enable easy manipulation of environment cameras. Of note, we include transform utilities for mapping between pixel, camera, and world frames, and a [CameraMover](https://github.com/ARISE-Initiative/robosuite/blob/master/robosuite/utils/camera_utils.py#L244) class for dynamically moving a camera during simulation. This can be used for many purposes, such as the [DemoPlaybackCameraMover](https://github.com/ARISE-Initiative/robosuite/blob/master/robosuite/utils/camera_utils.py#L419) subclass, which enables smooth visualization during demonstration playback.
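A minimal sketch of attaching a `CameraMover` to an environment camera (the method name and arguments are assumptions based on the class's stated purpose; see `camera_utils.py` for the actual interface):

```python
import robosuite as suite
from robosuite.utils.camera_utils import CameraMover

env = suite.make("Lift", robots="Panda", has_renderer=False, use_camera_obs=False)

# Attach a mover to an existing camera, then nudge it during simulation.
mover = CameraMover(env, camera="agentview")
mover.move_camera(direction=[0.0, 0.0, 1.0], scale=0.05)  # assumed method
```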

# Improvements
The following briefly describes other changes that improve on the pre-existing structure. This is not an exhaustive list, but highlights the most notable changes.

- Standardize EEF frames (#204). Now, all grippers have identical conventions for plug-and-play usage across types.

- Add OSC_POSITION control option for spacemouse (#209).

- Improve model class hierarchy for robots. Now, robots own a subset of models (gripper(s), mount(s), etc.), allowing easy external access to the robot's internal model hierarchy (see the sketch after this list).

- Update robosuite docs

- Add new papers
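A short sketch of the kind of access the new model hierarchy allows (attribute names are assumptions for illustration):

```python
import robosuite as suite

env = suite.make("Lift", robots="Panda", has_renderer=False, use_camera_obs=False)

robot = env.robots[0]
# The robot owns its component models, so they can be reached directly.
print(type(robot.robot_model))  # the arm model
print(type(robot.gripper))      # the gripper model owned by this robot
```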


# Critical Bug Fixes
- Fix OSC global orientation limits (#228)


# Other Bug Fixes
- Fix default OSC orientation control (valid default rotation matrix) (#232)

- Fix Jaco self-collisions (#235)

- Fix joint velocity controller clipping and tune default kp (#236)

-------

## Contributor Spotlight
A big thank you to the following community members for spearheading the renderer PRs for this release!
@awesome-aj0123 
@divyanshj16