Note: This method only supports FastDeploy C++ SDK
Step 1: Open the CMake GUI and initialize the FastDeploy project:
Step 2: After clicking Configure, select the "x64" architecture for compilation in the pop-up window.
Once initialization is complete, the interface looks as follows:
Step 3: As FastDeploy currently only supports Release builds, first change "CMAKE_CONFIGURATION_TYPES" to "Release".
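For developers who prefer the command line, the GUI steps above roughly correspond to the following cmake invocation (a sketch; `D:\FastDeploy` is a placeholder for wherever you cloned the repository):

```shell
:: Placeholder source path; adjust to your checkout location.
cd D:\FastDeploy
mkdir build && cd build

:: -G and -A select the Visual Studio 2019 generator and x64 architecture
:: (Steps 1-2); CMAKE_CONFIGURATION_TYPES=Release matches Step 3.
cmake .. -G "Visual Studio 16 2019" -A x64 ^
      -DCMAKE_CONFIGURATION_TYPES=Release
```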
Developers can customize compilation options and generate the sln solution according to their needs. We provide two examples: compiling the CPU version and the GPU version of the SDK.
Step 1: Select the compilation options for the CPU version. Please do not select WITH_GPU or ENABLE_TRT_BACKEND.
In this example, we enable the ORT, Paddle, and OpenVINO inference backends, and enable compilation of the TEXT and VISION APIs.
Step 2: Customize the SDK installation path by modifying CMAKE_INSTALL_PREFIX
As the default installation path is the C drive, we can modify CMAKE_INSTALL_PREFIX to specify our own installation path. Here we set it to the build\fastdeploy-win-x64-0.2.1 directory.
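The CPU-version options above can be expressed as a single command-line configure step. This is a sketch: the option names follow FastDeploy's CMake options, and the install path assumes the project lives under `D:\FastDeploy`.

```shell
:: CPU-only configuration: WITH_GPU and ENABLE_TRT_BACKEND stay OFF (the default).
cmake .. -G "Visual Studio 16 2019" -A x64 ^
      -DCMAKE_CONFIGURATION_TYPES=Release ^
      -DENABLE_ORT_BACKEND=ON ^
      -DENABLE_PADDLE_BACKEND=ON ^
      -DENABLE_OPENVINO_BACKEND=ON ^
      -DENABLE_TEXT=ON ^
      -DENABLE_VISION=ON ^
      -DCMAKE_INSTALL_PREFIX="D:\FastDeploy\build\fastdeploy-win-x64-0.2.1"
```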
Step 1: Select the compilation options for the GPU version. Please do select WITH_GPU.
In this example, we enable the ORT, Paddle, OpenVINO and TRT inference backends, and enable compilation of the TEXT and VISION APIs. Since GPU and TensorRT are enabled, we also need to specify CUDA_DIRECTORY and TRT_DIRECTORY. Find these two variables in the GUI, click the value box on the right of each, and select the paths where CUDA and TensorRT are installed, respectively.
Step 2: Customize the SDK installation path by modifying CMAKE_INSTALL_PREFIX
As the default installation path is the C drive, we can modify CMAKE_INSTALL_PREFIX to specify our own installation path. Here we set it to the build\fastdeploy-win-x64-gpu-0.2.1 directory.
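The GPU-version options can likewise be sketched as one configure command. The CUDA and TensorRT paths below are placeholder examples; substitute the actual locations on your machine.

```shell
:: GPU configuration: WITH_GPU and ENABLE_TRT_BACKEND are ON, so
:: CUDA_DIRECTORY and TRT_DIRECTORY must point at your local installs.
cmake .. -G "Visual Studio 16 2019" -A x64 ^
      -DCMAKE_CONFIGURATION_TYPES=Release ^
      -DWITH_GPU=ON ^
      -DENABLE_ORT_BACKEND=ON ^
      -DENABLE_PADDLE_BACKEND=ON ^
      -DENABLE_OPENVINO_BACKEND=ON ^
      -DENABLE_TRT_BACKEND=ON ^
      -DENABLE_TEXT=ON ^
      -DENABLE_VISION=ON ^
      -DCUDA_DIRECTORY="C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.2" ^
      -DTRT_DIRECTORY="D:\TensorRT-8.4.1.5" ^
      -DCMAKE_INSTALL_PREFIX="D:\FastDeploy\build\fastdeploy-win-x64-gpu-0.2.1"
```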
Step 1: Click "Generate" to generate the sln solution, then open it with Visual Studio 2019
During this process, CMake will by default download some resources needed for compilation. Developers can ignore the dev warnings from CMake. After generation completes, the following interface is displayed.
CPU version SDK:
GPU Version SDK:
On the left, developers can see that all the include and lib paths needed for compilation have been set; these paths can be recorded for later development. On the right is the generated fastdeploy.sln solution file. Please open this solution file with Visual Studio 2019 (VS2022 can also compile it, but VS2019 is recommended for now).
Step 2: In Visual Studio 2019, right-click "ALL_BUILD" and choose "Build" to start compiling
CPU version SDK compiled successfully!
GPU version SDK compiled successfully!
Step 3: After compiling, right-click "INSTALL" and choose "Build" in Visual Studio 2019 to install the compiled SDK to the previously specified directory
SDK successfully installed to the specified directory!
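Steps 2 and 3 can also be driven from the command line without opening the IDE. This is a sketch using CMake's generic build driver, run from the build directory created earlier:

```shell
:: Build the Release configuration (equivalent to building ALL_BUILD in VS).
cmake --build . --config Release

:: Install the SDK into CMAKE_INSTALL_PREFIX (equivalent to building INSTALL).
cmake --build . --config Release --target INSTALL
```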
Developers can select the BUILD_EXAMPLES option in the CMake GUI to compile all the examples as well. All example executables will be saved in the build/bin/Release directory after compilation finishes.
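On the command line, the same option is a single extra flag added to the configure step, after which the built example binaries can be listed from the build directory:

```shell
:: Re-configure with examples enabled (keeps previously cached options).
cmake .. -DBUILD_EXAMPLES=ON

:: After building, the example executables are collected here:
dir .\bin\Release
```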
For self-compilation of the SDK, we support Windows 10/11, VS 2019/2022, CUDA 11.x and TensorRT 8.x. We recommend the default configuration of Windows 10, VS 2019, CUDA 11.2 and TensorRT 8.4.x.
Moreover, if there are problems encoding Chinese characters during compilation (e.g. the UIE example requires Chinese-character input for inference), please refer to the official Visual Studio documentation and set the source character set to /utf-8 to solve this problem.
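One way to apply the /utf-8 MSVC option across the whole build, rather than per project in the IDE, is to pass it through the compiler flags at configure time. This is a sketch; /utf-8 sets both the source and execution character sets to UTF-8:

```shell
:: Append /utf-8 to the C++ compiler flags so MSVC treats all
:: source files and string literals as UTF-8.
cmake .. -DCMAKE_CXX_FLAGS="/utf-8"
```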