build minimal library
If, for some reason, you're not happy with the binary size of the ncnn library, here is a cheatsheet that helps you build a minimal ncnn :P
cmake -DNCNN_DISABLE_RTTI=ON -DNCNN_DISABLE_EXCEPTION=ON ..
- Cannot use RTTI and Exceptions when ncnn functions are called.
cmake -DNCNN_VULKAN=OFF ..
- Cannot use GPU acceleration.
cmake -DNCNN_STDIO=OFF ..
- Cannot load models from files, but can still load models from memory or from Android Assets (see the sketch below). Read more here.
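As a rough illustration, here is a minimal sketch of loading a model entirely from memory with stdio disabled. It assumes the param and weights have been converted to byte arrays (for example with ncnn2mem) and linked into the application; the array names mynet_param_bin / mynet_bin are hypothetical.

```cpp
#include "net.h"

// hypothetical byte arrays produced by ncnn2mem and linked into the application
extern const unsigned char mynet_param_bin[];
extern const unsigned char mynet_bin[];

int load_from_memory()
{
    ncnn::Net net;
    net.load_param(mynet_param_bin); // binary param loaded from memory
    net.load_model(mynet_bin);       // weights loaded from memory
    return 0;
}
```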
cmake -DNCNN_STRING=OFF ..
- Cannot load human-readable param files with visible strings, but can still load binary param.bin files. Read more here.
- Cannot identify blobs by string name when calling Extractor::input / extract, but can identify them by enum value in id.h, as sketched below. Read more here.
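A sketch of the enum-based workflow follows; the mynet.id.h header, the mynet_param_id namespace, and the BLOB_data / BLOB_output names are hypothetical and would come from running ncnn2mem on your own model.

```cpp
#include "net.h"
#include "mynet.id.h" // generated by ncnn2mem; the model name "mynet" is hypothetical

extern const unsigned char mynet_param_bin[];
extern const unsigned char mynet_bin[];

int run_with_enum_ids(const ncnn::Mat& in, ncnn::Mat& out)
{
    ncnn::Net net;
    net.load_param(mynet_param_bin);
    net.load_model(mynet_bin);

    ncnn::Extractor ex = net.create_extractor();
    ex.input(mynet_param_id::BLOB_data, in);      // blobs addressed by enum value instead of string
    ex.extract(mynet_param_id::BLOB_output, out);
    return 0;
}
```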
cmake -DNCNN_BF16=OFF ..
- Cannot use bf16 storage type in inference.
cmake -DNCNN_INT8=OFF ..
- Cannot use quantized int8 inference.
cmake -DNCNN_PIXEL_DRAWING=OFF ..
- Cannot use the basic shape and text drawing functions such as ncnn::draw_rectangle_xx / ncnn::draw_circle_xx / ncnn::draw_text_xx, but functions like Mat::from_pixels / from_pixels_resize are still available (see the sketch below).
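For reference, a small sketch of the pixel conversion that stays available with drawing disabled; the packed BGR buffer and the 227x227 target size are hypothetical example values.

```cpp
#include "mat.h"

// convert a packed BGR image buffer into a network input Mat, resizing on the fly
ncnn::Mat to_net_input(const unsigned char* bgr, int img_w, int img_h)
{
    return ncnn::Mat::from_pixels_resize(bgr, ncnn::Mat::PIXEL_BGR, img_w, img_h, 227, 227);
}
```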
cmake -DNCNN_PIXEL_ROTATE=OFF -DNCNN_PIXEL_AFFINE=OFF ..
- Cannot use the rotation and affine transformation functions such as ncnn::kanna_rotate_xx / ncnn::warpaffine_bilinear_xx, but functions like Mat::from_pixels / from_pixels_resize are still available.
cmake -DNCNN_PIXEL=OFF ..
- Cannot use the functions that convert between pixel buffers and Mat such as Mat::from_pixels / from_pixels_resize / to_pixels / to_pixels_resize; you need to create a Mat and fill in the data by hand, as sketched below.
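A rough sketch of filling a Mat by hand when the pixel helpers are compiled out; the packed RGB buffer is a hypothetical input, and the loop does roughly what Mat::from_pixels would otherwise do.

```cpp
#include "mat.h"

// de-interleave a packed RGB buffer into a planar float Mat by hand
ncnn::Mat make_input(const unsigned char* rgb, int w, int h)
{
    ncnn::Mat in(w, h, 3); // 3-channel float Mat
    for (int c = 0; c < 3; c++)
    {
        float* ptr = in.channel(c);
        for (int i = 0; i < w * h; i++)
        {
            ptr[i] = (float)rgb[i * 3 + c];
        }
    }
    return in;
}
```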
cmake -DNCNN_OPENMP=OFF ..
- Cannot use OpenMP multi-threading acceleration. If you only want to run a model in a single thread on your target machine, it is recommended to disable this option.
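If you keep OpenMP enabled but still want a single-threaded run, the thread count can also be limited at runtime through the net option; a minimal sketch:

```cpp
#include "net.h"

void configure_single_thread(ncnn::Net& net)
{
    net.opt.num_threads = 1; // restrict inference for this net to a single thread
}
```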
cmake -DNCNN_AVX2=OFF -DNCNN_ARM82=OFF ..
- Do not compile the optimized kernels that use the avx2 / arm82 instruction set extensions. If your target machine does not support some of them, it is recommended to disable the related options.
cmake -DNCNN_RUNTIME_CPU=OFF ..
- Cannot detect the supported cpu instruction set extensions at runtime and dispatch to the related optimized kernels.
- If you know which instruction set extensions your target machine supports, such as avx2 / arm82, you can enable the related options like -DNCNN_AVX2=ON / -DNCNN_ARM82=ON by hand, and then the baseline sse2 / arm8 kernels will not be compiled.
cmake -DWITH_LAYER_absval=OFF -DWITH_LAYER_bnll=OFF ..
- If your model does not include certain layers, taking absval / bnll above as an example, you can drop them.
- Some key or dependency layers should not be dropped, such as convolution / innerproduct, their dependencies like padding / flatten, and activations like relu / clip.
cmake -DNCNN_SIMPLESTL=ON ..
- The STL provided by the compiler is no longer a dependency; the simplestl provided by ncnn is used as a replacement. Users can also use only simplestl when calling ncnn functions.
- Usually used together with the compiler parameters -nodefaultlibs -fno-builtin -nostdinc++ -lc
- Needs the cmake parameters -DCMAKE_TOOLCHAIN_FILE=$ANDROID_NDK/build/cmake/android.toolchain.cmake -DANDROID_STL=system to avoid STL conflicts when compiling for Android.
- Modify the source code under ncnn/src/layer/arm/ to delete unnecessary optimized kernels or replace them with empty functions.
- You can also drop layers and their related optimized kernels with -DWITH_LAYER_absval=OFF as mentioned above.
- Modify ncnn/src/layer/binaryop.cpp / unaryop.cpp and ncnn/src/layer/arm/binaryop_arm.cpp / unaryop_arm.cpp by hand to delete unnecessary operators.