TIDL is a comprehensive software product for acceleration of Deep Neural Networks (DNNs) on TI's embedded devices. It supports heterogeneous execution of DNNs across cortex-A based MPUs, TI’s latest generation C7x DSP and TI's DNN accelerator (MMA). TIDL is released as part of TI's Software Development Kit (SDK) along with additional computer vision functions and optimized libraries including OpenCV. TIDL is available on a variety of embedded devices from Texas Instruments.
TIDL is a fundamental software component of TI’s Edge AI solution. TI's Edge AI solution simplifies the whole product life cycle of DNN development and deployment by providing a rich set of tools and optimized libraries. DNN based product development requires two main streams of expertise:
- Data Scientists, who can design and train DNNs for targeted applications
- Embedded System Engineers, who can design and develop inference solutions for real-time execution of DNNs on low-power embedded devices
TI's Edge AI solution provides the right set of tools for both of these categories:
- Edge AI Studio: Integrated development environment for developing AI applications for edge processors, hosting tools like Model Composer, which lets you train, compile and deploy models with the click of a mouse button, and Model Analyzer, which lets you evaluate and analyze deep learning model performance on TI devices from your browser in minutes
- Model zoo: A large collection of pre-trained models for data scientists, which along with TI's Model Selection Tool enables picking the ideal model for TI's embedded devices
- Training and quantization tools for popular frameworks, allowing data scientists to make DNNs more suitable for TI devices
- Edge AI Benchmark: A Python based framework for accuracy and performance benchmarking. Accuracy benchmarking can be performed without a development board, but performance benchmarking requires one.
- Edge AI TIDL Tools: The Edge AI TIDL Tools provided in this repository are used for model compilation on X86. Artifacts from the compilation process can then be used for model inference, either on an X86 machine (host emulation mode) or on a development board with a TI SOC. This repository also provides examples that can be used directly on an X86 target or on a development board with a TI SOC. For deployment and execution on the development board, one has to use this package.
The figure below illustrates the work flow of DNN development and deployment on TI devices:
TIDL provides multiple deployment options with industry-defined inference engines, as listed below. These inference engines are referred to as Open Source Runtimes (OSRT) in this document.
- TFLite Runtime: TensorFlow Lite based inference with heterogeneous execution on cortex-A** + C7x-MMA, using the TFLite Delegate API
- ONNX Runtime: ONNX Runtime based inference with heterogeneous execution on cortex-A** + C7x-MMA
- TVM/Neo-AI Runtime: TVM/Neo-AI-DLR based inference with heterogeneous execution on cortex-A** + C7x-MMA
** AM68PA has a cortex-A72 as its MPU; refer to the device TRM to know which cortex-A MPU your device contains.
This heterogeneous execution enables:
- OSRT as the top level inference API for user applications
- Offloading subgraphs to C7x/MMA for accelerated execution with TIDL
- Running optimized code on the ARM core for layers that are not supported by TIDL
Edge AI TIDL Tools provided in this repository support model compilation and model inference. The diagram below illustrates the TFLite based workflow as an example; ONNX Runtime and TVM/Neo-AI Runtime follow a similar workflow.
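As a rough illustration of this workflow from the Python API, the sketch below compiles a TFLite model with TIDL offload via the TFLite Delegate API. This is not the repository's exact example code: the delegate library name and the option keys are assumptions, so consult the Python examples in this repository for the options that are actually supported.

```python
# Minimal sketch of TFLite Delegate based TIDL offload, not the repository's
# exact example code. The delegate library name ("tidl_model_import_tflite.so")
# and the option keys below are assumptions.
import numpy as np
import tflite_runtime.interpreter as tflite

delegate_options = {
    "tidl_tools_path": "/path/to/tidl_tools",        # assumed option key
    "artifacts_folder": "/path/to/model-artifacts",  # assumed option key
}

# On X86_PC, a compilation/import delegate generates TIDL artifacts for the
# subgraphs that can be offloaded; unsupported layers stay on the host core.
tidl_delegate = tflite.load_delegate("tidl_model_import_tflite.so", delegate_options)

interpreter = tflite.Interpreter(
    model_path="model.tflite",
    experimental_delegates=[tidl_delegate],
)
interpreter.allocate_tensors()

# Running representative inputs drives calibration and artifact generation.
inp = interpreter.get_input_details()[0]
interpreter.set_tensor(inp["index"], np.zeros(inp["shape"], dtype=inp["dtype"]))
interpreter.invoke()
```

At inference time, the compiled artifacts are loaded through the corresponding runtime delegate in the same call pattern, either in host emulation on X86_PC or on the TI SOC.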
The table below covers the operations supported by this repository on X86_PC and TI's development board.
Operation | X86_PC | TI SOC | Python API | CPP API |
---|---|---|---|---|
Model Compilation | ✔️ | ❌ | ✔️ | ❌ |
Model Inference | ✔️ | ✔️ | ✔️ | ✔️ |
This repository supports:
- Benchmarking latency and memory bandwidth of the out-of-box example models (10+)
- Compiling a user / custom model for deployment with TIDL (a minimal sketch of this flow is shown after this list)
- Inference of compiled models on X86_PC or TI SOC using file based input and output

The following are not covered by this repository:
- Camera, display and DL runtime based end-to-end pipeline development or benchmarking
  - Please refer to Processor SDK Linux for Edge AI for such applications
- Benchmarking accuracy of models using TIDL acceleration with standard datasets, e.g. accuracy benchmarking using the MS COCO dataset for object detection models
  - Please refer to edgeai-benchmark for the same
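As a hedged sketch of the compile-then-infer flow referenced above, using ONNX Runtime as the OSRT: the TIDL provider names and option keys shown are assumptions based on typical usage of this repository (they are not part of stock onnxruntime), so refer to the Python examples for the exact options.

```python
# Illustrative two-phase ONNX Runtime flow: compile on X86_PC, then run the
# generated artifacts. Provider names and option keys are assumptions.
import numpy as np
import onnxruntime as ort

compile_options = {
    "tidl_tools_path": "/path/to/tidl_tools",        # assumed option key
    "artifacts_folder": "/path/to/model-artifacts",  # assumed option key
}

# Phase 1 (X86_PC only): compilation. The TIDL provider partitions the graph
# and generates artifacts for the subgraphs it can offload to C7x/MMA.
sess = ort.InferenceSession(
    "model.onnx",
    providers=["TIDLCompilationProvider", "CPUExecutionProvider"],  # assumed provider names
    provider_options=[compile_options, {}],
)
inp = sess.get_inputs()[0]
shape = [d if isinstance(d, int) else 1 for d in inp.shape]  # assumes batch 1 for dynamic dims
calib = np.zeros(shape, dtype=np.float32)                    # assumes a float32 input
sess.run(None, {inp.name: calib})  # calibration pass; real data should be used here

# Phase 2 (X86_PC host emulation or TI SOC): inference with the compiled artifacts.
sess = ort.InferenceSession(
    "model.onnx",
    providers=["TIDLExecutionProvider", "CPUExecutionProvider"],    # assumed provider names
    provider_options=[compile_options, {}],
)
outputs = sess.run(None, {inp.name: calib})
```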
- The following table shows the devices supported by this repository
- Devices with hardware acceleration have a TI DSP and MMA (Matrix Multiplier Accelerator) for faster execution
Device Family(Product) | Environment Variable | Hardware Acceleration |
---|---|---|
AM62 | am62 | ❌ |
AM62A | am62a | ✔️ |
AM68PA | am68pa | ✔️ |
AM68A | am68a | ✔️ |
AM69A | am69a | ✔️ |
J721E (TDA4VM) | am68pa | ✔️ |
J721S2 (TDA4AL, TDA4VL) | am68a | ✔️ |
J784S4 (TDA4AP, TDA4VP, TDA4AH, TDA4VH) | am69a | ✔️ |
Note: Please check out the tag compatible with the SDK version that you are using with TI's evaluation board before continuing with the steps below. Refer to the SDK version compatibility table for the tag corresponding to your SDK version.
- X86_PC mode for this repository is validated with the configuration below.
OS | Python Version |
---|---|
Ubuntu 22.04 | 3.10 |
- We have also validated inside a Docker container on PC. See the Dockerfile for the list of dependencies installed on top of the Ubuntu 22.04 baseline.
- We recommend the Docker based X86_PC setup to avoid running into dependency related issues
- Run the one-time setup below for system level packages. This needs sudo permission; ask your system administrator to install it if required.
sudo apt-get install libyaml-cpp-dev
- Make sure you have the required permissions for the current directory before proceeding
- Run the commands below to install the dependent components on your machine and set all the required environment variables
Note: `source` in the setup command is important, as this script exports all required environment variables. Without it, the user may encounter compilation/runtime issues.
git clone https://github.com/TexasInstruments/edgeai-tidl-tools.git
cd edgeai-tidl-tools
git checkout <TAG Compatible with your SDK version>
# Supported SOC name strings am62, am62a, am68a, am68pa, am69a
export SOC=<Your SOC name>
source ./setup.sh
- Docker Based X86_PC Setup - Detailed steps to prepare a Docker container based environment for X86_PC mode
- When opening a new terminal on a system where the above setup has already been done for a given SDK version, set the environment variables below:
cd edgeai-tidl-tools
export SOC=<Your SOC name>
export TIDL_TOOLS_PATH=$(pwd)/tidl_tools
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$TIDL_TOOLS_PATH
export ARM64_GCC_PATH=$(pwd)/gcc-arm-9.2-2019.12-x86_64-aarch64-none-linux-gnu
- We provide 10+ out-of-box examples for model compilation on X86_PC and inference on X86_PC and TI SOC in the categories of tasks below. Refer to the Model zoo for the complete set of validated models across multiple categories
- Image classification
- Object detection
- Pixel-level semantic segmentation
- Execute the commands below to compile the models and run inference on X86_PC
- Inference is validated with both Python and CPP APIs
mkdir build && cd build
cmake ../examples && make -j && cd ..
source ./scripts/run_python_examples.sh
python3 ./scripts/gen_test_report.py
- The execution of the above step will generate compiled-model artifacts and output images at ./edgeai-tidl-tools/output_images. These output images can be compared against the expected outputs in ./edgeai-tidl-tools/test_data/refs-pc-{soc}; this confirms successful installation / setup on PC (a scripted comparison sketch follows the sample outputs below)
model-artifacts/
models/
output_images/
test_report_pc.csv
- An output image can be found for each model in the 'output_images' folder, similar to what's shown below
Image Classification | Object detection | Semantic Segmentation |
---|---|---|
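If a scripted check is preferred over visual comparison, something along these lines can be used. This is a minimal sketch using only the Python standard library; it assumes the reference files in test_data/refs-pc-{soc} use the same file names as the generated outputs, and byte-exact equality may not hold for every model, in which case visual comparison is still the fallback.

```python
# Hypothetical helper: byte-compare generated output images against the
# reference outputs shipped in test_data/refs-pc-{soc}.
import filecmp
import os

soc = os.environ.get("SOC", "am68a")  # e.g. am62, am62a, am68a, am68pa, am69a
out_dir = "output_images"
ref_dir = os.path.join("test_data", f"refs-pc-{soc}")

for name in sorted(os.listdir(out_dir)):
    out = os.path.join(out_dir, name)
    ref = os.path.join(ref_dir, name)
    if os.path.isfile(ref):
        status = "MATCH" if filecmp.cmp(out, ref, shallow=False) else "DIFFERS"
    else:
        status = "NO REFERENCE"
    print(f"{name}: {status}")
```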
- Prepare the development board by following the steps below
git clone https://github.com/TexasInstruments/edgeai-tidl-tools.git
cd edgeai-tidl-tools
git checkout <TAG Compatible with your SDK version>
export SOC=<Your SOC name>
export TIDL_TOOLS_PATH=$(pwd)
- Copy the compiled artifacts from the X86_PC to the development board's file system at ./edgeai-tidl-tools/
- Execute the commands below to run inference on the target development board with both Python and CPP APIs
# scp -r <pc>/edgeai-tidl-tools/model-artifacts/ <dev board>/edgeai-tidl-tools/
# scp -r <pc>/edgeai-tidl-tools/models/ <dev board>/edgeai-tidl-tools/
mkdir build && cd build
cmake ../examples && make -j && cd ..
source ./scripts/run_python_examples.sh
python3 ./scripts/gen_test_report.py
- The execution of the above step will generate output images at ./edgeai-tidl-tools/output_images. These output images can be compared against the expected outputs in ./edgeai-tidl-tools/test_data/refs-{soc}. This confirms successful installation / setup on the board.
- New Model Evaluation: Refer to this if the custom model to be evaluated falls into one of the supported out-of-box example task categories
- Custom Model Evaluation: Refer to this if the custom model's task category or input and output format differs from the supported list of tasks
- Reporting issues with model deployment - Refer to the notes here for reporting issues in custom model deployment
- Python examples - Detailed documentation on all the compile and inference options for TIDL offload for each runtime session
- CPP examples - Detailed documentation on compiling the CPP examples on X86_PC as well as on the development board
- Jupyter Notebooks - Interactive, step-by-step documented notebooks for inference with pre-compiled models
- Supported Operators and Runtimes - List of operators supported for TIDL offload and their limitations for each runtime
- Advanced Setup Options - Setup options for advanced users to optimize setup time
- Feature Specific Guides
Please see the license under which this repository is made available: LICENSE