
How to do offline evaluation? #6

Open
suessmann opened this issue Jul 14, 2023 · 0 comments

@suessmann

Hi,

In your paper you write:

"we present simple example metrics for offline evaluation: action similarity to a validation set of expert demonstrations using both joint angle error and end effector pose error"

and the cloud-data-pouring.zip contains the additional keys eef_pose_observations and eef_pose_actions. In toto_benchmark/assets/ it is said that there will be more information on how to use these keys, but I couldn't find any anywhere.

So, in the end, how should one recreate your evaluation protocol? Is it just a simple MSE on actions_dataset vs. predicted_actions, or should one use the previously mentioned keys in some way?
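
For concreteness, this is the kind of protocol I am currently guessing at; a minimal sketch, assuming the actions are joint angles in radians and that eef_pose_actions holds end effector poses as xyz position plus quaternion (the shapes and the agent.predict call below are my assumptions, not from your code):

```python
import numpy as np

def joint_angle_error(pred_actions, expert_actions):
    """Mean absolute joint-angle error (radians) between predicted and
    expert actions; both arrays assumed to have shape (N, num_joints)."""
    return np.mean(np.abs(pred_actions - expert_actions))

def eef_position_error(pred_eef, expert_eef):
    """Mean Euclidean distance between predicted and expert end effector
    positions; poses assumed to have shape (N, 7) = xyz + quaternion,
    so only the first three columns (position) are compared here."""
    return np.mean(np.linalg.norm(pred_eef[:, :3] - expert_eef[:, :3], axis=1))

# Hypothetical usage on a held-out validation split of expert demonstrations:
# pred_actions = np.stack([agent.predict(obs) for obs in val_observations])
# print("joint angle MAE:", joint_angle_error(pred_actions, val_actions))
# print("eef position error:", eef_position_error(pred_eef, val_eef_pose_actions))
```

Is that roughly what you had in mind, or does the protocol differ (e.g., MSE instead of MAE, or orientation error on the quaternion part as well)?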
