
OpenCV Spatial AI Competition Feature Discovery and Requests #183

Closed
Luxonis-Brandon opened this issue Aug 19, 2020 · 12 comments
Labels
enhancement New feature or request


Luxonis-Brandon commented Aug 19, 2020

1. Saving JPEG images [DONE]

My understanding is this is a pre-canned example.
So this is actually supposed to work out of the box. It seems we broke it with some other feature. Theoretically, all you should need to do to get JPEGs is hit the c key while running the python3 test.py script.

Edit:

You can do this with -v -s jpegout and then hit c on the keyboard to capture a JPEG in the depthai_demo.py example script. We will get this updated in the documentation.

And MJPEG is also now supported in develop; more details/status here: #127

EDIT: 11 March 2021:

We also have a Gen2 example that shows how to automatically produce/save a JPEG when a certain class is seen in the image:
https://github.com/luxonis/depthai-experiments/tree/master/gen2-class-saver-jpeg
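
For reference, a minimal Gen2-style sketch of the basic plumbing (not the class-saver example itself): a color camera feeding an MJPEG VideoEncoder, with the encoded bitstream written to .jpeg files on the host. The stream name, FPS, and resolution here are illustrative.

```python
import depthai as dai

pipeline = dai.Pipeline()

# Color camera -> MJPEG encoder -> XLink output named "jpeg"
cam = pipeline.create(dai.node.ColorCamera)
cam.setResolution(dai.ColorCameraProperties.SensorResolution.THE_1080_P)

enc = pipeline.create(dai.node.VideoEncoder)
enc.setDefaultProfilePreset(30, dai.VideoEncoderProperties.Profile.MJPEG)
cam.video.link(enc.input)

xout = pipeline.create(dai.node.XLinkOut)
xout.setStreamName("jpeg")
enc.bitstream.link(xout.input)

with dai.Device(pipeline) as device:
    q = device.getOutputQueue("jpeg", maxSize=4, blocking=False)
    while True:
        pkt = q.tryGet()
        if pkt is not None:
            # Each packet is a complete JPEG; write it straight to disk
            with open(f"{pkt.getSequenceNum()}.jpeg", "wb") as f:
                f.write(pkt.getData())
            break  # one capture is enough for this sketch
```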

2. Syncing various streams [DONE]

Implementing a sequence number that corresponds to time-synced frames across streams (assuming they run at the same frame rate) will make it much easier to know, on the host, which frames from a given stream correspond to which frames from the other streams.

See #211 for more details.

And for an existing host-side synchronization example from before #211 was implemented, see #178.
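
As a rough illustration of the host-side approach, here is a minimal pairing loop keyed on getSequenceNum(). The queue names ("left", "right") and the pipeline are assumptions for this sketch, and frames orphaned by drops are not garbage-collected here.

```python
import depthai as dai

# `pipeline` with XLinkOut streams "left" and "right" is assumed to exist.
buffers = {"left": {}, "right": {}}  # seq -> frame, per stream

with dai.Device(pipeline) as device:
    queues = {n: device.getOutputQueue(n, maxSize=8, blocking=False)
              for n in buffers}
    while True:
        for name, q in queues.items():
            msg = q.tryGet()
            if msg is None:
                continue
            seq = msg.getSequenceNum()
            buffers[name][seq] = msg
            if all(seq in b for b in buffers.values()):
                left = buffers["left"].pop(seq)
                right = buffers["right"].pop(seq)
                # left/right were captured at the same time; process the pair
```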

3. Obtaining intrinsic params [DONE]

Thoughts on this are collected in the issue below. Please do feel free to add there, or to respond here as well.

More details here: #182 and now implemented in #190
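
For what this looks like today with the Gen2 API, a minimal sketch, assuming a depthai release recent enough that Device can be opened without a pipeline; the socket and resolution arguments are illustrative:

```python
import depthai as dai

with dai.Device() as device:
    calib = device.readCalibration()
    # 3x3 intrinsic matrix for the RGB camera, scaled to 1920x1080
    K = calib.getCameraIntrinsics(dai.CameraBoardSocket.RGB, 1920, 1080)
    dist = calib.getDistortionCoefficients(dai.CameraBoardSocket.RGB)
    print("intrinsics:", K)
    print("distortion:", dist)
```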

4. Readily available smooth point clouds [DONE]

This one will likely involve the combination of two parallel efforts we have going on.
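
Independent of how the device-side efforts land, the host-side math is standard pinhole back-projection. A minimal numpy sketch, assuming a depth frame in millimeters and the intrinsic matrix K obtained as in item 3:

```python
import numpy as np

def depth_to_pointcloud(depth_mm, K):
    """Back-project an HxW uint16 depth map (mm) into an Nx3 point cloud (m)."""
    fx, fy = K[0][0], K[1][1]
    cx, cy = K[0][2], K[1][2]
    h, w = depth_mm.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_mm.astype(np.float32) / 1000.0
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]  # drop invalid (zero-depth) pixels
```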

5. Out-of-box Stereo Neural Inference Support [DONE]

As a first step on this, we are making it so that stereo neural inference runs directly on the rectified_left and rectified_right streams.

And as the next step we are implementing an all-included version of this per #216

6. Support for rectified_left and rectified_right [DONE]

The why here is to allow the neural inference to be done directly on the rectified frames, so that the key points or features returned from the stereo neural inference can be used directly to infer the 3D position of these features/neural-network results.

Implemented in #190 and soaking in develop.
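
A minimal Gen2 sketch of exposing the rectified streams (covering items 5 and 6). A NeuralNetwork node, or an ImageManip node for resizing, could be linked to the same stereo.rectifiedLeft / stereo.rectifiedRight outputs for on-device stereo inference; the stream names here are illustrative.

```python
import depthai as dai

pipeline = dai.Pipeline()

mono_l = pipeline.create(dai.node.MonoCamera)
mono_l.setBoardSocket(dai.CameraBoardSocket.LEFT)
mono_r = pipeline.create(dai.node.MonoCamera)
mono_r.setBoardSocket(dai.CameraBoardSocket.RIGHT)

stereo = pipeline.create(dai.node.StereoDepth)
mono_l.out.link(stereo.left)
mono_r.out.link(stereo.right)

# Ship both rectified streams to the host (or feed them to NN nodes instead)
for name, out in (("rectified_left", stereo.rectifiedLeft),
                  ("rectified_right", stereo.rectifiedRight)):
    xout = pipeline.create(dai.node.XLinkOut)
    xout.setStreamName(name)
    out.link(xout.input)
```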

7. Add Sequence Number and Time Stamp to jpegout stream [DONE]

For recording ground-truth data, it is important to have time-synced data, particularly for training models on multiple sets of data together.

So it is beneficial for the jpegout stream to have a sequence number and a time stamp.

More details and tracking of this issue here: #212
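
Once in place, the metadata rides on each encoded frame (dai.ImgFrame), so captures can be named for later pairing. A small sketch, reusing the "jpeg" queue `q` from the encoder example under item 1:

```python
pkt = q.get()  # dai.ImgFrame from the "jpeg" queue
stamp = pkt.getTimestamp().total_seconds()  # timestamp as a timedelta
with open(f"{pkt.getSequenceNum()}_{stamp:.6f}.jpeg", "wb") as f:
    f.write(pkt.getData())
```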

8. Capability to get 1920x1080 resolution RAW RGB output. [DONE]

This helps with recording ground-truth.

The following PR (#191) in depthai now supports -s color, which opens a window with decoded yuv420p frames coming directly from the ISP of the color camera.

Implemented in #191

And other resolutions are possible by changing the RGB sensor resolution.
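
In Gen2 terms, the equivalent is reading the un-encoded ISP output of the ColorCamera node. A minimal sketch; the stream name is illustrative:

```python
import depthai as dai

pipeline = dai.Pipeline()
cam = pipeline.create(dai.node.ColorCamera)
cam.setBoardSocket(dai.CameraBoardSocket.RGB)
cam.setResolution(dai.ColorCameraProperties.SensorResolution.THE_1080_P)

xout = pipeline.create(dai.node.XLinkOut)
xout.setStreamName("isp")
cam.isp.link(xout.input)  # YUV420 frames straight off the ISP, no encoding

with dai.Device(pipeline) as device:
    frame = device.getOutputQueue("isp").get()
    bgr = frame.getCvFrame()  # converted to BGR for OpenCV on the host
```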

9. Creating two homographies during calibration to obtain better stereo depth map [DONE]

Thoughts on this are collected in the issue below. Please do feel free to add there, or to respond here as well.

More details here: #187

Implemented in #190 and now soaking in the develop branch on its way to main.

10. Subpixel support [DONE]

See https://github.com/luxonis/depthai-experiments/tree/master/gen2-camera-demo#gen2-camera-demo.

See #163 to track the implementation of this (including LR-check), and #184 for more on the why of subpixel support.
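
On the Gen2 StereoDepth node these modes are simple toggles. A minimal sketch; note that depthai's subpixel mode uses 3 fractional disparity bits by default, so the raw disparity must be divided by 2^3 = 8:

```python
import depthai as dai

pipeline = dai.Pipeline()
stereo = pipeline.create(dai.node.StereoDepth)
stereo.setSubpixel(True)        # adds 3 fractional disparity bits by default
stereo.setLeftRightCheck(True)  # LR-check invalidates occluded pixels

# Host side: with subpixel on, divide the raw disparity by 2**3 = 8 to get
# disparity in pixels before converting to depth
# (depth = focal_length_px * baseline / disparity).
```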

11. Fix the sequence number offset between the color camera and the grayscale cameras. [DONE]

As it stands now, the color camera starts receiving and streaming frames before the grayscale cameras activate, so it has some +N on the frame sequence number compared to the grayscale cameras. This makes it so that the grayscale and color cameras cannot be synchronized on the host by sequence number, but instead have to be synched by time-stamp, which is more difficult.

So changing the sequence numbers so that they match for images that were taken at the same time will make host-side synchronizing when doing data collection easier.

See this issue #211 for more details.

12. Add integrated Bilateral or WLS filter internal to DepthAI. [DONE]

This post shows some good comparisons: fixstars/libSGM#20 (comment)

It seems like Bilateral often outperforms WLS.
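
Until the integrated filter lands, the usual host-side fallback is the WLS filter from OpenCV contrib (cv2.ximgproc). A minimal sketch; `disparity` and `rectified_right` would come from the device, and the parameter values are only starting points:

```python
import cv2
import numpy as np

# Dummy arrays stand in for device output so the sketch runs end to end.
disparity = np.zeros((400, 640), dtype=np.int16)       # CV_16S disparity
rectified_right = np.zeros((400, 640), dtype=np.uint8)  # 8-bit guide image

wls = cv2.ximgproc.createDisparityWLSFilterGeneric(False)  # no confidence map
wls.setLambda(8000)     # regularization strength: higher = smoother
wls.setSigmaColor(1.5)  # edge sensitivity w.r.t. the guide image
# depthai disparity is computed against the right camera, so guide with it
filtered = wls.filter(disparity, rectified_right)
```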

14. Capability to Send Double for Host Time that is added to the device timestamps [DONE]

See #214 for details on this one.

15. Give coordinates for tracked object when depth is enabled. [DONE]

See #213 for tracking on this (no pun intended).
Code sample for this here: https://docs.luxonis.com/projects/api/en/latest/samples/spatial_object_tracker/
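
Reading the result on the host looks roughly like this, following the linked spatial_object_tracker sample; the queue name "tracklets" is illustrative and `device` is opened as in the earlier sketches:

```python
tracklets_q = device.getOutputQueue("tracklets", maxSize=4, blocking=False)
track = tracklets_q.get()
for t in track.tracklets:
    c = t.spatialCoordinates  # millimeters, relative to the camera
    print(f"id={t.id} status={t.status.name} "
          f"x={c.x:.0f} y={c.y:.0f} z={c.z:.0f}")
```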

16. RGB Alignment with Depth [DONE]

The purpose of this is to provide a per-pixel mapping between the RGB camera and the depth map. The depth map is centered on the right grayscale camera. So this alignment will serve two purposes:

  • Provide a per-pixel alignment between the RGB camera and the depth map.
  • Provide a per-pixel alignment between the RGB cameras and the right grayscale camera.

Progress is in #284
Code sample here: https://docs.luxonis.com/projects/api/en/latest/samples/rgb_depth_aligned/
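
The key call on the Gen2 StereoDepth node is a one-liner; a minimal sketch:

```python
import depthai as dai

pipeline = dai.Pipeline()
stereo = pipeline.create(dai.node.StereoDepth)
# Re-project depth into the RGB camera's frame for a per-pixel mapping
stereo.setDepthAlign(dai.CameraBoardSocket.RGB)
```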

Luxonis-Brandon added the enhancement label Aug 19, 2020

Luxonis-Brandon commented Aug 19, 2020

Jagadish Mahendran is the one who helped put the list above together. He also took the time to put together a proposal on this functionality, below:
(architecture diagrams attached as images; see the PDF below)

oakd_highlevel_architecture2.pdf


jaggiK commented Aug 21, 2020

I have implemented some of the above APIs (except metadata and alignment) here: https://github.com/jaggiK/depthai_apis_synced_frames, feel free to use it. I used a buffer management system to sync frames internally. The resulting code size is significantly reduced, and it is easy to use.

@Luxonis-Brandon

Thanks a TON for doing this @jaggiK! We are taking a look now!


jaggiK commented Aug 21, 2020

Update: added support to save and load calibration information (intrinsics and distortion coefficients).

@Luxonis-Brandon

Sweet, thanks!

@Luxonis-Brandon

As an update on item 6: we got initial rectified output support for the right and left streams. There is still some work to get these back to the host.


jaggiK commented Aug 24, 2020

Added jpegout, but the frames are out of sync and at a lower FPS. This is because jpegout requests slow down the entire pipeline, reducing FPS significantly. Also, jpegout stream packets do not have sequence numbers, so syncing is not possible. Hoping there will be support for obtaining seamlessly synced jpegout streams.

@Luxonis-Brandon

Thanks @jaggiK. We will be adding RAW output for the RGB sensor as well, likely next week if all goes well. This should solve the problem you are seeing.


Luxonis-Brandon commented Sep 10, 2020

Hi @jaggiK

The color PR (#191) in depthai now supports -s color, which opens a window with decoded yuv420p frames coming directly from the ISP of the color camera. I updated this on the main issue above as well.

@themarpe

@Luxonis-Brandon @jaggiK
Note: the raw-color branch was updated to rename the stream from raw_color to color, as that is more in line with the operation of the left and right streams (image frames after ISP processing).

@Luxonis-Brandon

Subpixel, LR-check, and extended disparity are now initially implemented:
#163

@Luxonis-Brandon

All features in this grouping of requests have been implemented, with the exception of generating the point cloud on DepthAI directly, which is now tracked separately in #180.
