Hi,

In your paper you write:

"we present simple example metrics for offline evaluation: action similarity to a validation set of expert demonstrations using both joint angle error and end effector pose error"

and in cloud-data-pouring.zip one can find the additional keys eef_pose_observations and eef_pose_actions. The toto_benchmark/assets/ directory says there will be more information on how to use these keys, but I could not find it anywhere.

So, in the end, how should one recreate your evaluation protocol? Is it simply the MSE between the dataset actions and the predicted actions, or should the keys mentioned above be used in some way?
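To make the question concrete, this is roughly what I had in mind for "MSE on the actions plus the eef keys". The array shapes and the xyz-plus-quaternion pose layout are my assumptions, not something stated in the repo:

```python
import numpy as np

def action_similarity(pred_actions, expert_actions, pred_eef, expert_eef):
    """Offline metrics against a validation set of expert demonstrations.

    pred_actions / expert_actions: (N, D) joint-angle actions (D assumed, e.g. 7).
    pred_eef / expert_eef: (N, 7) end-effector poses, laid out as
    xyz position + unit quaternion (my assumption about the eef_pose_* keys).
    """
    # Joint-angle error: mean squared error over all timesteps and joints.
    joint_mse = np.mean((pred_actions - expert_actions) ** 2)

    # End-effector position error: mean Euclidean distance on the xyz part.
    pos_err = np.mean(np.linalg.norm(pred_eef[:, :3] - expert_eef[:, :3], axis=1))

    # End-effector orientation error: geodesic angle between unit quaternions.
    q1 = pred_eef[:, 3:] / np.linalg.norm(pred_eef[:, 3:], axis=1, keepdims=True)
    q2 = expert_eef[:, 3:] / np.linalg.norm(expert_eef[:, 3:], axis=1, keepdims=True)
    dot = np.clip(np.abs(np.sum(q1 * q2, axis=1)), 0.0, 1.0)
    rot_err = np.mean(2.0 * np.arccos(dot))

    return joint_mse, pos_err, rot_err
```

Is this close to what the paper intends, or does the protocol weight or combine these errors differently?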