Distributed Pipeline, Recursive Services Tree, Mobile (iOS/Android) #1
Hi Richard,
The graphics part of CoCo (not on GitHub for the moment) is based on OpenGL 3.3/4.5 and is therefore not compatible with Android/iOS, which rely on OpenGL ES. Thanks to the composable nature of CoCo, however, the current graphics components could be replaced with OpenGL ES equivalents. -E
nanomsg is a revamped version of zeromq by the same author (article), and a newer, safer implementation is under development in a project called nng.

For the graphics part, I was wondering whether it would not be wiser to create a cross-platform pipeline that is agnostic of 3D engines and their dependencies, and to focus only on memory management, performance optimizations, and network-related extensions (RPC, IPC, websockets). The ultimate goal: extract as much information as possible from a visual scene using a set of local or remote services. One example that might interest you too is the yarrar project, which uses Unity3D for that part.

Ideally, I was wondering whether building a native plugin for Unity3D, iOS, and Android from CoCo, focusing only on binding several trackers such as dlib for facial landmarks, tiny-dnn for category matching, and some SLAM-related frameworks (orb-slam2, dso, or lsd-slam) with distributed, scalable protocols, built around a zero-copy strategy for the camera input flow with timeouts and pre-processing filters, would be the best first step toward a worthy AR/MR pipeline.

Have an awesome day! Cheers,
Dear Richard, thanks for the answer. Good to know about the connection between nanomsg and zeromq. I am fine with either of them, but the key performance element is the exchange of large memory blocks (image frames / point clouds). On a single machine, socket transports are not a good fit when aiming at zero-copy approaches. And this is not even taking into account GPU inter-process pipelines, which are reasonable only under Windows/OSX and are coming to Linux via Vulkan.

CoCoMR can indeed be used without any graphics part; Graphics is only one group of components among the others. A Unity3D plugin is something we have thought of implementing, in particular due to the current status of VR APIs, e.g. cross-platform support for Oculus and Vive. Some graphical components could already be made graphics-backend agnostic (e.g. URDF from robot). On the point of the other trackers, here are my takes:
For many of our applications we also rely on ROS services that provide/wrap many of them. Best Regards,
Hi,
Hope you are all well!
I found your interesting repos while searching for a project to help build an augmented/mixed-reality pipeline POC that uses any web camera input (iOS, Android, or Desktop) and distributes requests.
I wanted to create some kind of flow orchestration where you can pre-process x pictures per second (e.g. every 5 frames), check whether a picture is blurry or dark, and prevent such frames from being sent to local processes or remote micro-services for face detection, markerless tracking, or wikipedia-search.
The key idea is to define some validity checks for the input before making distributed requests, and to use a shared, efficient strategy for exposing the input to several respondents.
I think that creating such a recursive tree of services, tagged by specific topologies of services/processes, would help complete the visual semantic extraction.
**Distributed Detections on mobile devices:**
- Distributed pipelines, with nanomsg or libmill, for video multi-processing (to re-iterate the idea with those two libraries: parallel SLAM + markerless + facial landmarks, with performance optimizations such as sharing the same input matrix from the webcam to request several recognition services over IPC, RPC, or TCP protocols): https://github.com/daniel-j-h/DistributedSearch, https://daniel-j-h.github.io/post/distributed-search-nanomsg-bond/
- Image pre-processing pipeline: https://github.com/halide/Halide
- GPGPU acceleration: https://github.com/hunter-packages/ogles_gpgpu/tree/hunter.develop
- Cross-platform C++ camera with Qt5: https://github.com/headupinclouds/gatherer
Goals:
Have a good week!
Cheers,
Richard