
Running the model on 2 RGBD images #2

Open
seann999 opened this issue Aug 17, 2022 · 1 comment

@seann999

I am trying to run this model on 2 RGBD images taken from a RealSense camera.
Is there any code or documentation I can look at to learn how to feed these images into the model?
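(For anyone else landing here: the model's exact input format is what this issue is asking about, so the following is only a starting point, not this repo's pipeline. A RealSense depth frame can be back-projected into a camera-frame point cloud using the camera intrinsics. A minimal sketch, assuming depth is already in meters; `depth_to_points` and the `fx, fy, cx, cy` parameters are hypothetical stand-ins for values you would read from the camera:)

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a (H, W) depth image in meters into an (N, 3) point cloud.

    fx, fy, cx, cy are the pinhole intrinsics of the depth camera.
    Pixels with zero depth (invalid measurements) are dropped.
    """
    h, w = depth.shape
    # Pixel coordinate grids: u runs over columns, v over rows.
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]  # keep only pixels with a valid depth reading
```

With two RGBD frames you would back-project each depth image this way, then transform both clouds into a common frame using the known extrinsics between the two viewpoints.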

@seann999
Author

seann999 commented Sep 1, 2022

I was able to understand some of the data format by reading the preprocessing scripts, but I still do not understand how to obtain sample_pts in a real-world setting.

It seems to come from sim, which for simulated data is extracted from the physics engine. How do you obtain this in a real setting? Is it even required?

```python
def get_scene_state_plush(self, raw=False, convert_to=None):
    sim, vis = self._get_plush_points()
    loc, rot, scale = self._get_plush_loc(), self._get_plush_rot(), self._get_plush_scale()
    if not raw:
        loc, rot, scale = tuple(loc), eval(str(rot)), tuple(scale)
    state = {'sim': sim, 'vis': vis,
             'loc': loc, 'rot': rot, 'scale': scale}
    if convert_to is not None:
        for k, v in state.items():
            state[k] = np.array(v, convert_to)
    return state
```
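For what it's worth, the second positional argument to np.array in that loop is the dtype, so convert_to simply casts every field of the state dict to a common numeric type. A small standalone sketch of the same pattern (the values here are made up, not from the repo):

```python
import numpy as np

# Hypothetical state dict mirroring the keys used above.
state = {'loc': (0.1, 0.2, 0.3),
         'rot': [0.0, 0.0, 0.0, 1.0],
         'scale': (1.0, 1.0, 1.0)}

# np.array(v, dtype) casts each field; this is what convert_to drives.
for k, v in state.items():
    state[k] = np.array(v, np.float32)
```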
