Start with the why:

Discoverability is very important with new products. Since DepthAI is its own new category of product, discoverability is even more important than normal.
The reason discoverability is so important is that very few people have the time to fully learn/understand something just to know whether it can do what they want. They have to be able to spend as little time as possible to see for themselves if this works for them. They can’t be required to learn APIs, piece code together, etc. That would artificially block many people, especially busy engineers who have tons to do already and who bought this to try out in parallel with existing development architectures/efforts.
So we need to start with the ‘wow, that does effectively what I need’ moment, where the customer is running mostly what they need - prior to having had to learn anything about how or why it works. We must do everything we can to prevent errors occurring in this experience, as each error means the person may give up and move on, which is especially unfortunate as they may then have missed the opportunity to discover that this platform is perfect for them.
Then, after this ‘wow, this thing works for me’ moment (which hopefully takes effectively no time), the user can choose to learn more about what makes the system tick and get into the deep configurability of the system.
So discoverability means that the user can discover layers of capability and customizability after the thing is working. The configurability and flexibility of the device does not get in their way; it’s something they can discover after the system is already up and running - and after they are confident it can help them.
So in short, we don’t waste their time forcing them to learn a new codebase or system of thought just to be able to try out the device.
Trying out the device should be as easy/fast as possible.
Move to the how:
Have the first thing the users experience be various use-case examples that they can just run, without having to go download a model, figure out where to place files, copy/paste code.
Ideally the user should be able to just run what they want by copying and pasting a couple lines of code, plug in the device, and presto, they have an example that is already doing close to what they want (e.g. locating an object in 3D space and tracking it, or doing gaze estimation in 3D coordinates, etc.).
Then after this, users can move on to tutorials, code samples, and the actual API reference. And advanced users can of course just skip to the API - so this allows a good flow for the brand-new, uninitiated user, but doesn’t force that flow, so folks can skip to wherever they want, if they want.
Move to the what:
We have a flow from least difficult (just works, requiring the least know-how/learning) to most difficult (requiring the most know-how/learning):
Examples. These just work, and allow the user to discover whether this platform works for them. After that they can learn how the platform works, and its flexibility/customizability/etc. So these examples should have easy flexibility on options/control, say via command-line arguments or interactive GUIs (see the sketch after this list), so that the flexibility is discoverable but does not get in their way before they discover it can work for them.
Tutorials. By the time users reach tutorials, they have already discovered that this platform is useful for their problem. So now the focus is showing, as clearly as possible, how to do various custom things with the platform.
Code Samples. These are clear, small snippets of code that work, which are building blocks that folks can use to make their own custom pipelines in Gen2 Pipeline Builder (DepthAI Pipeline Builder Gen2 depthai#136).
API Reference. Gives the full details on what is possible with each function/etc. so that folks can take example code from above, and then tweak calls/settings/etc. to their exact needs as they get deeper into implementation.
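As a purely illustrative sketch of that "discoverable but out of the way" flexibility (the flag names and defaults below are made up, not an existing interface), an example could expose its knobs as optional command-line arguments so it still runs with zero arguments:

```python
# Sketch: an example that runs with sensible defaults, but exposes its
# options through optional command-line flags (all names are illustrative).
import argparse

def parse_args():
    parser = argparse.ArgumentParser(
        description="Social-distancing example (illustrative options)")
    parser.add_argument("--threshold", type=float, default=2.0,
                        help="alert when two people are closer than this many meters")
    parser.add_argument("--camera", default="color",
                        help="which camera stream to preview")
    parser.add_argument("--no-gui", action="store_true",
                        help="run headless and only print alerts")
    return parser.parse_args()

if __name__ == "__main__":
    args = parse_args()
    # A first-time user just runs `python3 main.py` and gets the defaults;
    # the flags are only discovered (via --help) once they want to customize.
    print(f"threshold={args.threshold} m, camera={args.camera}, gui={not args.no_gui}")
```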
1. Examples
This would cover both the demo(s) and the examples. An additional menu option would be presented (Demos / examples), as well as an item on the main page (as is the case right now with DepthAI API and DepthAI GUI). Following that link would present a subpage (still under depthai-docs-website) listing the various subprojects/repositories that contain demos / examples, with short descriptions of what users can find there. E.g.: DepthAI Demo ('A demo application showcasing various functionality of DepthAI') -> links to the depthai repository documentation and application description. E.g.: DepthAI Examples ('Examples which can be used as a starting point for building new applications or just to check out') -> links to the depthai/examples documentation where these are listed, etc.
Add Demos under SUBPROJECTS: (to be renamed to Menu or merged with Content) here:
This would be above DepthAI API (to be renamed to Library (API) Python/C++) there (which itself houses its own TUTORIALS and CODE SAMPLES).
To start with, we want to make clear what is possible without the user even having to run anything. To do this, we should have GIFs of each use-case right next to a clear title and a succinct description of the example.
Example for https://github.com/luxonis/depthai-experiments/tree/master/social-distancing#social-distancing would be:

Title: Social Distancing
Description: Detects the 3D location of people and measures the distance between them, overlaying the distance on the preview and producing alerts if they are too close. This code can readily be repurposed for any type of object.
GIF: Something like this:
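To make the description above concrete, the core of such an example is just a pairwise distance check over the detected people's 3D positions - something along these lines (a minimal sketch, independent of any particular detection API; the 2.0 m threshold and the input format are assumptions):

```python
# Minimal sketch of the social-distancing check: given the 3D position
# (x, y, z) in meters of each detected person, flag any pair closer than
# a chosen threshold. The threshold and input format are assumptions.
import itertools
import math

def too_close_pairs(people_xyz, threshold_m=2.0):
    """Return (i, j, distance) for every pair of people closer than threshold_m."""
    alerts = []
    for (i, a), (j, b) in itertools.combinations(enumerate(people_xyz), 2):
        dist = math.dist(a, b)  # Euclidean distance in 3D
        if dist < threshold_m:
            alerts.append((i, j, dist))
    return alerts

# Example usage with made-up detections (meters, camera coordinates):
detections = [(0.1, 0.0, 2.4), (0.6, 0.1, 2.6), (3.0, 0.0, 5.0)]
for i, j, dist in too_close_pairs(detections):
    print(f"ALERT: person {i} and person {j} are {dist:.2f} m apart")
```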
And for each example there should be an "understand it right away" sort of GIF of the pipeline running (e.g. gaze estimation, or vehicle detection -> type -> license plate OCR) that allows folks to just browse and see what each use-case is, such that the title and description only have to be read after understanding what the thing does from the GIF.
So some of the depthai-experiments (or many/most of them) that are there now fall into these Example Use Cases - like gaze estimation, vehicle-security-barrier, the interactive face demo (from ArduCam, here), etc. But we should also add examples like using DepthAI with ROS1 and ROS2, and other use-cases, including using the BW1092 to upload neural network metadata to graphs on AWS (say people counting - in which case we'd show a side-by-side GIF of people going by and a graph from an Azure webpage).
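For that cloud-graphing use-case, the device-side piece could be as small as periodically POSTing the current count to whatever dashboard endpoint is used - roughly like the sketch below (the endpoint URL, payload fields, and function name are purely hypothetical placeholders, not a real API):

```python
# Hypothetical sketch of pushing people-count metadata to a cloud dashboard.
# The endpoint URL, API shape, and payload fields are placeholders.
import json
import time
import urllib.request

ENDPOINT = "https://example.com/api/people-count"  # placeholder endpoint

def post_count(count, timestamp=None):
    """POST one people-count sample as JSON and return the HTTP status."""
    payload = json.dumps({
        "count": count,
        "timestamp": timestamp if timestamp is not None else time.time(),
    }).encode("utf-8")
    req = urllib.request.Request(
        ENDPOINT, data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

# e.g. call post_count(3) each time the people counter updates
```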
And just like depthai-experiments and the Gen1 depthai_demo.py, the models (and all other requisite binaries/etc.) should either be directly included or automatically downloaded, so that users don’t have to muck around with downloading blob files (or other resources), moving them into the right directories, renaming them, etc. They shouldn’t have to do any of this, as each one of these steps is a give-up point where someone may give up on the platform and never discover that it could be perfect for their use-case - purely because they're already busy and don't have time to fight code/debugging just to find out whether this thing is useful.
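A minimal sketch of what "automatically downloaded" could look like inside an example (the model URL and filename below are placeholders for wherever the example's blob is actually hosted):

```python
# Sketch: fetch the model blob on first run so the user never has to
# download, move, or rename files by hand. URL and filename are placeholders.
import os
import urllib.request

MODEL_URL = "https://example.com/models/person-detection.blob"  # placeholder
MODEL_PATH = "models/person-detection.blob"

def ensure_model(url=MODEL_URL, path=MODEL_PATH):
    """Download the model blob once and reuse it on subsequent runs."""
    if not os.path.exists(path):
        os.makedirs(os.path.dirname(path), exist_ok=True)
        print(f"Downloading model to {path} ...")
        urllib.request.urlretrieve(url, path)
    return path

blob_path = ensure_model()  # the example then loads blob_path as usual
```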
We need broad coverage with these 'just work' Example Use Cases - covering pose estimation, hand-tracking, etc. The broader the better, as this is what validates, for someone who finds us on the internet and snags a unit, that this is for them. So for example we should have both the geax hand-tracker (here) and the 3D body-pose (here) examples in this Example Use Cases section.
This way, folks across all sorts of applications can get to a basic working proof of concept with as little time investment as possible. The more points we allow for something to go wrong (like having an incorrectly labeled blob file, or missing file, etc.) the higher the probability that a user will get blocked and not be able to discover the power of this platform, purely because of some file-structure issue.
The key thing to remember is that to these new users, we’re just some company off the internet. We have to make our stuff just work on the first impression, otherwise there’s a high probability of them simply giving up. Everyone is busy, and if we can’t make a good first impression, there’s a high chance that engineers will get back to their other pile of work and label us as ‘doesn’t work’ or ‘too hard’ - even if we would have been perfect for their use-case. We would end up hiding whether our platform is useful behind the complexity of getting set up and running. That's what we want to avoid.
The final structure of demos / examples from the depthai-docs-website perspective would look like:
docs.luxonis.com
        │
        ▼
 Demos / examples
        │
    ┌───┴────┐
    ▼        ▼
 depthai   depthai/examples
  ....        ....
     List of entries
Then after folks have tried these out, and said ‘cool, this thing will work for my application’, they can move on to the following:
2. Tutorials
Just like demos / examples, we should have an overview page that lists the tutorials for each respective repository/group, and following that link should present a list of the tutorials in that section/group, with a GIF next to the title and description of each tutorial. That way the user can discover whether a tutorial is applicable to what they want to accomplish before even having to click on it, just by browsing this page of GIFs/titles/descriptions.
Tutorials are much more in-depth on everything they cover (example here), and are intended to really dig in. So after a user has gone through a tutorial, they should thoroughly understand every line of code and the options thereof.
As such, there will be fewer tutorials to start (as they're much more time-consuming to produce), and the tutorial format will not be for everyone: some programmers will not want this depth, and will actually find tutorials annoying compared to a Code Sample that just gives the code to pull something off.
So Code Samples come after Tutorials, but Tutorials come first because they allow the person who is very interested in learning something deeply to do so without confusion, and more advanced users can always skip to the code samples or the API reference directly.
3. Code Samples
Code Samples will live in the Library (API) Python / C++ section as they directly relate to that repository.
These are for those who have made it past the tutorials and now understand the system deeply, or otherwise for those for whom this is all they need.
Similar to Tutorials, we should have an overview page with GIFs showing what each code sample does, along with a title and a succinct description for each.
4. API Reference
The API Reference will live in the Library (API) Python / C++ section as it directly relates to that repository.
This is for those who are fully bought in to making something - folks who are now at the point where they are doing something beyond what we have even done with the platform, and who either came in with substantial know-how or have since learned it so they can deftly use the API. The API reference should be thorough, hold no detail back, and can be incredibly long.
Code Locations:
So we are thinking that 1 (Demos / examples) will link to the "subprojects": https://github.com/luxonis/depthai (to be relabeled to depthai-demo), which is where these demos of example use-cases will live and be maintained (just like in Gen1, with depthai_demo.py); https://github.com/luxonis/depthai-experiments; as well as some potential 3rd-party demos / examples that would like to be showcased here.
Second (Tutorials) will present groups of various tutorials, touching on topics from OpenVINO and model training to library tutorials (links to depthai-python subprojects -> tutorials), and so on.
Code samples and the API reference will be specific to the Library (API) Python / C++ and will be accessible by visiting the Library link and then navigating to the samples or the reference.