OpenCV Spatial AI Competition Feature Discovery and Requests #183
I have implemented some of the above APIs (except metadata and alignment) here - https://github.com/jaggiK/depthai_apis_synced_frames, feel free to use them. I used a buffer-management system to sync frames internally. The resulting code is significantly smaller and easy to use.
Thanks a TON for doing this @jaggiK! We are taking a look now!
Update: added support to save and load calibration information - intrinsics and distortion coefficients.
Sweet, thanks!
As an update on item 6: we have initial rectified output support for the right and left streams. There is still some work to get these back to the host.
Added
Thanks @jaggiK. We will be adding RAW output for the RGB sensor as well, likely next week if all goes well. This should solve the problem you are seeing.
@Luxonis-Brandon @jaggiK
Subpixel, LR-check, and extended disparity are now initially implemented:
All features in this grouping of requests have been implemented, with the exception of generating the point cloud on DepthAI directly, which is now tracked separately in #180.
1. Saving JPEG images [DONE]
My understanding is this is a pre-canned example.
So this is actually supposed to be working out of the box. It seems we broke it with some other feature. Theoretically, all you should need to do to get JPEGs is hit the `c` key when running the `python3 test.py` script.
Edit:
This can be done with `-v -s jpegout`, and then hitting `c` on the keyboard captures the JPEG in the `depthai_demo.py` example script. We will get this updated in the documentation. MJPEG is also now supported in `develop`; more details/status here: #127

EDIT: 11 March 2021:
We also have a Gen2 example that shows how to automatically produce/save JPEG when a certain class is seen in the image:
https://github.com/luxonis/depthai-experiments/tree/master/gen2-class-saver-jpeg
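For illustration, here is a minimal sketch of the host-side capture flow (not the exact internals of the demo script): JPEG-encode the current frame whenever `c` is pressed. The `get_frame` callable is hypothetical, standing in for whatever stream the pipeline provides.

```python
import time
import cv2  # pip install opencv-python

def capture_loop(get_frame):
    """Show frames and save a JPEG whenever 'c' is pressed.

    `get_frame` is a hypothetical callable returning a BGR numpy array,
    standing in for whatever stream the depthai pipeline provides.
    """
    while True:
        frame = get_frame()
        cv2.imshow("preview", frame)
        key = cv2.waitKey(1) & 0xFF
        if key == ord("c"):
            # Timestamped filename so repeated captures don't overwrite.
            cv2.imwrite(f"capture_{int(time.time() * 1000)}.jpg", frame)
        elif key == ord("q"):
            break
    cv2.destroyAllWindows()
```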
2. Syncing various streams [DONE]
The implementation of a sequence number that corresponds to time-synced frames across the various streams (assuming they run at the same framerate) will make it much easier to know, on the host, which frames from a given stream correspond to which frames from the other streams.
See #211 for more details.
And for an existing host-side synchronization example before #211 is implemented, see #178.
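As a rough sketch of the kind of host-side buffering this enables (stream names here are illustrative, not a specific API): frames from each stream are held in per-stream dictionaries keyed by sequence number, and a synced set is emitted once every stream has a frame with that number.

```python
from collections import defaultdict

class SeqSync:
    """Match frames from multiple streams by sequence number."""

    def __init__(self, streams=("left", "right", "color")):
        self.streams = streams
        self.buffers = defaultdict(dict)  # stream name -> {seq: frame}

    def add(self, stream, seq, frame):
        """Buffer a frame; return {stream: frame} once all streams have `seq`."""
        self.buffers[stream][seq] = frame
        if all(seq in self.buffers[s] for s in self.streams):
            synced = {s: self.buffers[s].pop(seq) for s in self.streams}
            # Drop older frames that can no longer be fully matched.
            for s in self.streams:
                for old in [k for k in self.buffers[s] if k < seq]:
                    del self.buffers[s][old]
            return synced
        return None
```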
3. Obtaining intrinsic params [DONE]
Thoughts on this list here? Please do feel free to add there, or to respond here as well.
More details here: #182 and now implemented in #190
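Once the intrinsics are on the host, the standard pinhole back-projection applies. A small sketch; the intrinsic matrix values below are placeholders, not values read from a real device:

```python
import numpy as np

# Example 3x3 intrinsic matrix (fx, fy, cx, cy are placeholders).
K = np.array([[860.0,   0.0, 640.0],
              [  0.0, 860.0, 360.0],
              [  0.0,   0.0,   1.0]])

def back_project(u, v, depth_m, K):
    """Convert a pixel (u, v) with metric depth into a 3D camera-space point."""
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return np.array([x, y, depth_m])

print(back_project(700, 400, 1.5, K))  # a point ~1.5 m in front of the camera
```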
4. Readily available smooth point clouds [DONE]
This one will likely involve the combination of two parallel efforts we have going on:
5. Out-of-box Stereo Neural Inference Support [DONE]
As a first step, we are making it so that stereo neural inference runs directly on the `rectified_left` and `rectified_right` streams.
And as the next step we are implementing this all-included per #216
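The payoff of running inference on both rectified streams is simple triangulation: a keypoint found at `x_left` in `rectified_left` and `x_right` in `rectified_right` (same row, since the frames are rectified) yields depth directly. A sketch with illustrative focal length and baseline, not values from a real device:

```python
def keypoint_depth(x_left, x_right, focal_px, baseline_m):
    """Depth from a keypoint matched across rectified left/right frames.

    focal_px and baseline_m are illustrative; real values come from the
    device calibration.
    """
    disparity = x_left - x_right  # pixels; positive for points in front
    if disparity <= 0:
        raise ValueError("non-positive disparity: bad match or point at infinity")
    return focal_px * baseline_m / disparity

# e.g. a landmark detected at x=412.0 (left) and x=398.5 (right):
print(keypoint_depth(412.0, 398.5, focal_px=860.0, baseline_m=0.075))  # ~4.8 m
```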
6. Support for `rectified_left` and `rectified_right` [DONE]
The *why* here is to allow neural inference to be done directly on the rectified frames, so that the key points or features returned from the stereo neural inference can be used directly to infer the 3D position of those features/neural-network results.
Implemented in #190 and soaking in `develop`.
7. Add Sequence Number and Time Stamp to `jpegout` stream [DONE]
For recording ground-truth data, it is important to have time-synced data, particularly for training models on multiple sets of data together.
So it is beneficial for the jpegout stream to have a sequence number and a time stamp.
More details and tracking of this issue here: #212
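A sketch of why this matters for recording: with the sequence number and timestamp attached, each saved JPEG can be named so datasets from multiple streams line up afterwards. The `jpeg_bytes`, `seq`, and `timestamp_ms` parameters are placeholders for the packet payload and metadata; exact accessor names depend on the API version.

```python
import os

def save_jpeg(jpeg_bytes, seq, timestamp_ms, out_dir="dataset/jpegout"):
    """Write an encoded JPEG named by its sequence number and timestamp."""
    os.makedirs(out_dir, exist_ok=True)
    path = os.path.join(out_dir, f"{seq:08d}_{timestamp_ms}.jpg")
    with open(path, "wb") as f:
        f.write(jpeg_bytes)
    return path
```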
8. Capability to get 1920x1080 resolution RAW RGB output. [DONE]
This helps with recording ground-truth.
PR #191 in depthai now supports `-s color`, which opens a window with decoded yuv420p coming directly from the ISP of the color camera.
Implemented in #191.
And other resolutions are possible by changing the RGB sensor resolution.
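For reference, decoding a raw yuv420p buffer on the host is a one-liner with OpenCV; a sketch assuming a 1920x1080 frame delivered as a flat byte buffer:

```python
import numpy as np
import cv2

def yuv420p_to_bgr(buf, width=1920, height=1080):
    """Decode a flat yuv420p (I420) buffer into a BGR image.

    A yuv420p frame is height*3/2 rows of `width` bytes: the full-resolution
    Y plane followed by the quarter-resolution U and V planes.
    """
    yuv = np.frombuffer(buf, dtype=np.uint8).reshape(height * 3 // 2, width)
    return cv2.cvtColor(yuv, cv2.COLOR_YUV2BGR_I420)
```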
9. Creating two homographies during calibration to obtain a better stereo depth map [DONE]
Thoughts on this here? Please do feel free to add there, or to respond here as well.
More details here: #187
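For context, this is the standard stereo-rectification step: each camera gets its own rectifying rotation/homography so that epipolar lines become horizontal. A host-side sketch with OpenCV; all calibration values below are placeholders, not values from a real device:

```python
import cv2
import numpy as np

# Placeholder calibration: per-camera intrinsics/distortion plus the
# rotation R and translation T between the two cameras.
K_l = K_r = np.array([[860.0, 0.0, 640.0], [0.0, 860.0, 360.0], [0.0, 0.0, 1.0]])
D_l = D_r = np.zeros(5)
R = np.eye(3)
T = np.array([-0.075, 0.0, 0.0])  # illustrative ~7.5 cm baseline along x

# stereoRectify produces one rectifying rotation per camera (R1, R2),
# i.e. the "two homographies" applied to left and right respectively.
R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(
    K_l, D_l, K_r, D_r, (1280, 720), R, T)

# Remap tables for warping the left image into its rectified frame.
map_lx, map_ly = cv2.initUndistortRectifyMap(
    K_l, D_l, R1, P1, (1280, 720), cv2.CV_32FC1)
```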
Implemented in #190 and now soaking in the `develop` branch on its way to `main`.
10. Subpixel support [DONE]
See https://github.com/luxonis/depthai-experiments/tree/master/gen2-camera-demo#gen2-camera-demo.
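A quick back-of-the-envelope sketch of why subpixel matters: with integer disparity the representable depths are coarsely quantized, while fractional disparity (e.g. 1/8-pixel steps) fills the gaps, which matters most at long range. The focal length and baseline below are illustrative only:

```python
FOCAL_PX = 860.0    # illustrative focal length in pixels
BASELINE_M = 0.075  # illustrative ~7.5 cm baseline

def depth_m(disparity_px):
    return FOCAL_PX * BASELINE_M / disparity_px

# Integer disparity: the two nearest representable depths around ~20 m.
print(depth_m(3), depth_m(4))          # ~21.5 m vs ~16.1 m: a ~5 m jump

# Subpixel (1/8-pixel steps) fills in that gap.
print(depth_m(3.125), depth_m(3.25))   # ~20.6 m, ~19.8 m, ...
```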
See #163 to track the implementation of this (including LR-check) and #184 for more of the *why* of subpixel support.
11. Fix the sequence number offset between the color camera and the grayscale cameras. [DONE]
As it stands now, the color camera starts receiving and streaming frames before the grayscale cameras activate, so it has some +N offset on the frame sequence number compared to the grayscale cameras. This means the grayscale and color cameras cannot be synchronized on the host by sequence number, and instead have to be synced by timestamp, which is more difficult.
So changing the sequence numbers so that they match for images that were taken at the same time will make host-side synchronizing when doing data collection easier.
See this issue #211 for more details.
12. Add integrated Bilateral or WLS filter internal to DepthAI. [DONE]
This post shows some good comparisons: fixstars/libSGM#20 (comment)
It seems like Bilateral often outperforms WLS.
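To get a rough host-side feel for what such filtering does before using the integrated version, here is a sketch applying OpenCV's bilateral filter to a disparity map; the parameter values are just starting points for tuning:

```python
import cv2
import numpy as np

def smooth_disparity(disparity):
    """Edge-preserving smoothing of a disparity map with a bilateral filter.

    The bilateral filter smooths within surfaces while keeping depth
    discontinuities sharp, which is why it compares well against WLS.
    """
    disp = disparity.astype(np.float32)
    # d=9 pixel neighborhood; sigmaColor/sigmaSpace are tuning knobs.
    return cv2.bilateralFilter(disp, d=9, sigmaColor=30, sigmaSpace=9)
```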
14. Capability to send a double for the host time that is added to the device timestamps [DONE]
See #214 for details on this one.
15. Give coordinates for tracked object when depth is enabled. [DONE]
See #213 for tracking on this (no pun intended).
Code sample for this here: https://docs.luxonis.com/projects/api/en/latest/samples/spatial_object_tracker/
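Conceptually, the coordinates come from combining the tracked bounding box with the depth map, along the lines of this host-side sketch; the intrinsic values are placeholders, not from a real device:

```python
import numpy as np

def bbox_spatials(depth_m, bbox, fx=860.0, fy=860.0, cx=640.0, cy=360.0):
    """Estimate the 3D position of a tracked object from a depth map.

    depth_m: HxW array of metric depth; bbox: (x0, y0, x1, y1) in pixels.
    Intrinsics are placeholders, not values from a real device.
    """
    x0, y0, x1, y1 = bbox
    roi = depth_m[y0:y1, x0:x1]
    z = np.median(roi[roi > 0])          # robust to holes and outliers
    u, v = (x0 + x1) / 2, (y0 + y1) / 2  # bbox center pixel
    return ((u - cx) * z / fx, (v - cy) * z / fy, z)
```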
16. RGB Alignment with Depth [DONE]
The purpose of this is to provide a per-pixel mapping between the RGB camera and the depth map. The depth map is centered on the `right` grayscale camera. So this alignment will serve two purposes:
Progress is in #284
Code sample here: https://docs.luxonis.com/projects/api/en/latest/samples/rgb_depth_aligned/
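The underlying mapping is a reprojection: back-project each depth pixel into 3D in the right-camera frame, transform it by the right-to-RGB extrinsics, and project it into the RGB camera. A sketch with placeholder calibration parameters:

```python
import numpy as np

def align_point_to_rgb(u, v, z, K_right, K_rgb, R, t):
    """Map one depth pixel (centered on the right camera) into RGB pixel coords.

    K_right/K_rgb are 3x3 intrinsics; R (3x3) and t (3,) are the extrinsics
    from the right grayscale camera to the RGB camera. All placeholders here.
    """
    # Back-project into 3D in the right-camera frame.
    p = z * np.linalg.inv(K_right) @ np.array([u, v, 1.0])
    # Move into the RGB camera frame and project.
    q = K_rgb @ (R @ p + t)
    return q[0] / q[2], q[1] / q[2]
```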