Releases: microsoft/onnxruntime
ONNX Runtime v1.20.0
Release Manager: @apsonawane
Announcements
- All ONNX Runtime Training packages have been deprecated. ORT 1.19.2 was the last release for which onnxruntime-training (PyPI), onnxruntime-training-cpu (PyPI), Microsoft.ML.OnnxRuntime.Training (Nuget), onnxruntime-training-c (CocoaPods), onnxruntime-training-objc (CocoaPods), and onnxruntime-training-android (Maven Central) were published.
- ONNX Runtime packages will stop supporting Python 3.8 and Python 3.9. This decision aligns with NumPy Python version support. To continue using ORT with Python 3.8 and Python 3.9, you can use ORT 1.19.2 and earlier.
- ONNX Runtime 1.20 CUDA packages will include new dependencies that were not required in 1.19 packages. The following dependencies are new: libcudnn_adv.so.9, libcudnn_cnn.so.9, libcudnn_engines_precompiled.so.9, libcudnn_engines_runtime_compiled.so.9, libcudnn_graph.so.9, libcudnn_heuristic.so.9, libcudnn_ops.so.9, libnvrtc.so.12, and libz.so.1.
Build System & Packages
- Python 3.13 support is included in PyPI packages.
- ONNX 1.17 support will be delayed until a future release, but the ONNX version used by ONNX Runtime has been patched to include a shape inference change to the Einsum op.
- DLLs in the Maven build are now digitally signed (fix for issue reported here).
- (Experimental) vcpkg support added for the CPU EP. The DML EP does not yet support vcpkg, and other EPs have not been tested.
Core
- MultiLoRA support.
- Reduced memory utilization.
- Fixed alignment that was causing mmap to fail for external weights.
- Eliminated double allocations when deserializing external weights.
- Added ability to serialize pre-packed weights so that they don’t cause an increase in memory utilization when the model is loaded.
- Added support for bfloat16 and float8 data types in the Python I/O binding API.
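For context, a minimal sketch of the Python I/O binding path these data types flow through. The model path and tensor names ("x", "y") are hypothetical, and a float32 tensor is used here because bfloat16/float8 have no native NumPy dtype:

```python
import numpy as np
import onnxruntime as ort

# Hypothetical model with input "x" and output "y", run on the CUDA EP.
sess = ort.InferenceSession("model.onnx", providers=["CUDAExecutionProvider"])

# Place the input on the GPU up front; per this release, the same binding
# path now also accepts bfloat16 and float8 element types.
x = ort.OrtValue.ortvalue_from_numpy(np.ones((1, 4), dtype=np.float32), "cuda", 0)

io = sess.io_binding()
io.bind_ortvalue_input("x", x)
io.bind_output("y", "cuda")       # let ORT allocate the output on device
sess.run_with_iobinding(io)
y = io.copy_outputs_to_cpu()[0]   # NumPy array back on the host
```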
Performance
- INT4 quantized embedding support on CPU and CUDA EPs.
- Miscellaneous performance improvements and bug fixes.
EPs
CPU
- FP16 support for MatMulNbits, Clip, and LayerNormalization ops.
CUDA
- Added support for cuDNN Flash Attention and Lean Attention in the MultiHeadAttention op.
TensorRT
QNN
- QNN HTP support for weight sharing across multiple ORT inference sessions. (See ORT QNN EP documentation for more information.)
- Support for QNN SDK 2.27.
OpenVINO
- Added support for up to OpenVINO 2024.4.1.
- Compile-time memory optimizations.
- Enhanced the ORT EPContext session option for optimized first-inference latency.
- Added remote tensors to ensure direct memory access for inference on NPU.
DirectML
- DirectML 1.15.2 support.
Mobile
- Improved Android QNN support, including a pre-built Maven package and various performance improvements.
- FP16 support for ML Program models with CoreML EP.
- FP16 XNNPACK kernels to provide a fallback option if CoreML is not available at runtime.
- Initial support for using the native WebGPU EP on Android and iOS. Note: The set of initial operators is limited, and the code is available from the main branch only, not from ORT 1.20 packages. See #22591 for more information.
Web
- Quantized embedding support.
- On-demand weight loading support (offloads weights from the Wasm32 heap, enabling 8B-parameter LLMs).
- Integrated Intel GPU performance improvements.
- Opset-21 support (Reshape, Shape, Gelu).
GenAI
- MultiLoRA support.
- Generations can now be terminated mid-loop (see the sketch after this list).
- Logit soft capping support in Group Query Attention (GQA).
- Additional model support, including Phi-3.5 Vision Multi-Frame, ChatGLM3, and Nemotron-Mini.
- Python package now available for Mac.
- Mac / iOS now available in NuGet packages.
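As a sketch of the mid-loop termination noted above, using the onnxruntime-genai Python API. The model folder is hypothetical, `user_requested_stop` is a stand-in for an app-level cancellation signal, and exact call names vary somewhat across generate() API versions:

```python
import onnxruntime_genai as og

def user_requested_stop() -> bool:
    return False  # stand-in for an application-level cancellation signal

model = og.Model("phi-3.5-mini")            # hypothetical model folder
tokenizer = og.Tokenizer(model)
stream = tokenizer.create_stream()

params = og.GeneratorParams(model)
params.set_search_options(max_length=256)

generator = og.Generator(model, params)
generator.append_tokens(tokenizer.encode("Why is the sky blue?"))

while not generator.is_done():
    generator.generate_next_token()
    token = generator.get_next_tokens()[0]
    print(stream.decode(token), end="", flush=True)
    if user_requested_stop():               # generation can now be abandoned mid-loop
        break
```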
Full release notes for ONNX Runtime generate() API v0.5.0 can be found here.
Extensions
- Tokenization performance improvements.
- Support for latest Hugging Face tokenization JSON format (transformers>=4.45).
- Unigram tokenization model support.
- OpenCV dependency removed from C API build.
Full release notes for ONNX Runtime Extensions v0.13 can be found here.
Olive
- Olive command line interface (CLI) now available, with support for executing well-defined, concrete workflows without manually creating or editing configs.
- Additional improvements, including support for YAML-based workflow configs, streamlined DataConfig management, simplified workflow configuration, and more.
- Llama and Phi-3 model updates, including an updated MultiLoRA example using the ORT generate() API.
Full release notes for Olive v0.7.0 can be found here.
Contributors
Big thank you to the release manager @apsonawane, as well as @snnn, @jchen351, @sheetalarkadam, and everyone else who made this release possible!
Tianlei Wu, Yi Zhang, Yulong Wang, Scott McKay, Edward Chen, Adrian Lizarraga, Wanming Lin, Changming Sun, Dmitri Smirnov, Jian Chen, Jiajia Qin, Jing Fang, George Wu, Caroline Zhu, Hector Li, Ted Themistokleous, mindest, Yang Gu, jingyanwangms, liqun Fu, Adam Pocock, Patrice Vignola, Yueqing Zhang, Prathik Rao, Satya Kumar Jandhyala, Sumit Agarwal, Xu Xing, aciddelgado, duanshengliu, Guenther Schmuelling, Kyle, Ranjit Ranjan, Sheil Kumar, Ye Wang, kunal-vaishnavi, mingyueliuh, xhcao, zz002, 0xdr3dd, Adam Reeve, Arne H Juul, Atanas Dimitrov, Chen Feiyue, Chester Liu, Chi Lo, Erick Muñoz, Frank Dong, Jake Mathern, Julius Tischbein, Justin Chu, Xavier Dupré, Yifan Li, amarin16, anujj, chenduan-amd, saurabh, sfatimar, sheetalarkadam, wejoncy, Akshay Sonawane, AlbertGuan9527, Bin Miao, Christian Bourjau, Claude, Clément Péron, Emmanuel, Enrico Galli, Fangjun Kuang, Hann Wang, Indy Zhu, Jagadish Krishnamoorthy, Javier Martinez, Jeff Daily, Justin Beavers, Kevin Chen, Krishna Bindumadhavan, Lennart Hannink, Luis E. P., Mauricio A Rovira Galvez, Michael Tyler, PARK DongHa, Peishen Yan, PeixuanZuo, Po-Wei (Vincent), Pranav Sharma, Preetha Veeramalai, Sophie Schoenmeyer, Vishnudas Thaniel S, Xiang Zhang, Yi-Hong Lyu, Yufeng Li, goldsteinn, mcollinswisc, mguynn-intc, mingmingtasd, raoanag, shiyi, stsokolo, vraspar, wangshuai09
Full changelog: v1.19.2...v1.20.0
ONNX Runtime v1.19.2
Announcements
- ORT 1.19.2 is a small patch release that fixes broken workflows and includes several bug fixes.
Build System & Packages
- Fixed the signing of native DLLs.
- Disabled absl symbolize in Windows Release build to avoid dependency on dbghelp.dll.
Training
- Restored support for CUDA compute capability 7.0 and 7.5 with CUDA 12, and 6.0 and 6.1 with CUDA 11.
- Several fixes for training CI pipelines.
Mobile
- Fixed ArgMaxOpBuilder::AddToModelBuilderImpl() nullptr Node access for CoreML EP.
Generative AI
- Added CUDA kernel for Phi3 MoE.
- Added smooth softmax support in CUDA and CPU kernels for the GroupQueryAttention operator.
- Fixed the number-of-splits calculation in the GroupQueryAttention CUDA operator.
- Enabled causal support in the MultiHeadAttention CUDA operator.
Contributors
@prathikr, @mszhanyi, @edgchen1, @tianleiwu, @wangyems, @aciddelgado, @mindest, @snnn, @baijumeswani, @MaanavD
Thanks to everyone who helped ship this release smoothly!
Full Changelog: v1.19.0...v1.19.2
ONNX Runtime v1.19.0
Announcements
- Note that the wrong commit was initially tagged with v1.19.0. The final commit has since been correctly tagged: 26250ae. This shouldn't affect much, but sorry for the inconvenience!
Build System & Packages
- NumPy 2.x support has been added
- Qualcomm SDK has been upgraded to 2.25
- ONNX has been upgraded from 1.16 → 1.16.1
- Default GPU packages use CUDA 12.x and cuDNN 9.x (previously CUDA 11.x/cuDNN 8.x). CUDA 11.x/cuDNN 8.x packages have moved to the aiinfra VS feed.
- TensorRT 10.2 support added
- Introduced Java CUDA 12 packages on Maven.
- Discontinued support for Xamarin. (Xamarin reached EOL on May 1, 2024)
- Discontinued support for macOS 11 and increased the minimum supported macOS version to 12. (macOS 11 reached EOL in September 2023)
- Discontinued support for iOS 12 and increased the minimum supported iOS version to 13.
Core
- Implemented DeformConv
- Fixed big-endian issues and added build support on AIX
Performance
- Added QDQ support for INT4 quantization in CPU and CUDA Execution Providers
- Implemented FlashAttention on CPU to improve performance for GenAI prompt cases
- Improved INT4 performance on CPU (X64, ARM64) and NVIDIA GPUs
Execution Providers
TensorRT
- Updated to support TensorRT 10.2
- Removed calls to deprecated APIs
- Enabled refittable embedded engines when an ONNX model is provided as a byte stream
CUDA
- Upgraded cutlass to 3.5.0 for performance improvement of memory efficient attention.
- Updated MultiHeadAttention and Attention operators to be thread-safe.
- Added sdpa_kernel provider option to choose the kernel for Scaled Dot-Product Attention (see the sketch after this list).
- Expanded op support - Tile (bf16)
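A minimal sketch of passing that option as a CUDA provider option; the integer value and model path are illustrative only (value semantics are defined by the CUDA EP docs):

```python
import onnxruntime as ort

# EP-specific settings are passed as (name, options) provider tuples.
providers = [
    ("CUDAExecutionProvider", {"device_id": 0, "sdpa_kernel": 2}),  # illustrative kernel id
    "CPUExecutionProvider",
]
sess = ort.InferenceSession("model.onnx", providers=providers)
```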
CPU
- Expanded op support - GroupQueryAttention, SparseAttention (for Phi-3 small)
QNN
- Updated to support QNN SDK 2.25
- Expanded op support - HardSigmoid, ConvTranspose 3D, Clip (int32 data), MatMul (int4 weights), Conv (int4 weights), PRelu (fp16)
- Expanded fusion support - Conv + Clip/Relu fusion
OpenVINO
- Added support for OpenVINO 2024.3
- Support for enabling EpContext using session options
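A hedged sketch of enabling EPContext through session options, assuming the `ep.context_*` session config keys from the EPContext design (model path hypothetical):

```python
import onnxruntime as ort

so = ort.SessionOptions()
# Dump a context model on first load so later sessions can skip compilation.
so.add_session_config_entry("ep.context_enable", "1")
so.add_session_config_entry("ep.context_file_path", "model_ctx.onnx")
sess = ort.InferenceSession("model.onnx", sess_options=so,
                            providers=["OpenVINOExecutionProvider"])
```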
DirectML
- Updated DirectML from 1.14.1 → 1.15.1
- Updated ONNX opset from 17 → 20
- Opset 19 and Opset 20 are supported with known caveats:
- GridSample-20: 5D not supported
- DeformConv not supported
Mobile
- Additional CoreML ML Program operators were added
- See supported operators list here
- Fixed packaging issue with macOS framework in onnxruntime-c cocoapod
- Removed Xamarin support
- Xamarin EOL was May 1, 2024
- Xamarin official support policy | .NET (microsoft.com)
Web
- Updated JavaScript packaging to align with best practices; note that this may introduce slight incompatibilities for apps that bundle onnxruntime-web
- Improved CPU operator coverage for WebNN (now supported by Chrome)
Training
- No specific updates
GenAI
- Support for new models: Qwen, Llama 3.1, Gemma 2, and Phi-3 Small
- Support for building quantized models with the AWQ and GPTQ methods
- Performance improvements for Intel and Arm CPU
- Packaging and language bindings
- Added Java bindings (build from source)
- Separated OnnxRuntime.dll and directml.dll out of the GenAI package to improve usability
- Published packages for Windows Arm
- Support for Android (build from source)
- Bug fixes, such as the long-prompt correctness issue for Phi-3.
Extensions
- Added C APIs for language, vision and audio processors including new FeatureExtractor for Whisper
- Support for Phi-3 Small Tokenizer and new OpenAI tiktoken format for fast loading of BPE tokenizers
- Added new CUDA custom operators such as MulSigmoid, Transpose2DCast, ReplaceZero, AddSharedInput and MulSharedInput
- Enhanced Custom Op Lite API on GPU and fused kernels for DORT
- Bug fixes, including null bos_token for Qwen2 tokenizer and SentencePiece converted FastTokenizer issue on non-ASCII characters, as well as necessary updates for MSVC 19.40 and numpy 2.0 release
Contributors
Changming Sun, Baiju Meswani, Scott McKay, Edward Chen, Jian Chen, Wanming Lin, Tianlei Wu, Adrian Lizarraga, Chester Liu, Yi Zhang, Yulong Wang, Hector Li, kunal-vaishnavi, pengwa, aciddelgado, Yifan Li, Xu Xing, Yufeng Li, Patrice Vignola, Yueqing Zhang, Jing Fang, Chi Lo, Dmitri Smirnov, mingyueliuh, cloudhan, Yi-Hong Lyu, Ye Wang, Ted Themistokleous, Guenther Schmuelling, George Wu, mindest, liqun Fu, Preetha Veeramalai, Justin Chu, Xiang Zhang, zz002, vraspar, kailums, guyang3532, Satya Kumar Jandhyala, Rachel Guo, Prathik Rao, Maximilian Müller, Sophie Schoenmeyer, zhijiang, maggie1059, ivberg, glen-amd, aamajumder, Xavier Dupré, Vincent Wang, Suryaprakash Shanmugam, Sheil Kumar, Ranjit Ranjan, Peishen Yan, Frank Dong, Chen Feiyue, Caroline Zhu, Adam Louly, Ștefan Talpalaru, zkep, winskuo-quic, wejoncy, vividsnow, vivianw-amd, moyo1997, mcollinswisc, jingyanwangms, Yang Gu, Tom McDonald, Sunghoon, Shubham Bhokare, RuomeiMS, Qingnan Duan, PeixuanZuo, Pavan Goyal, Nikolai Svakhin, KnightYao, Jon Campbell, Johan MEJIA, Jake Mathern, Hans, Hann Wang, Enrico Galli, Dwayne Robinson, Clément Péron, Chip Kerchner, Chen Fu, Carson M, Adam Reeve, Adam Pocock.
Big thank you to everyone who contributed to this release!
Full Changelog: v1.18.1...v1.19.0
ONNX Runtime v1.18.1
What's new?
Announcements:
- ONNX Runtime Python packages now have numpy dependency >=1.21.6, <2.0. Support for numpy 2.0 will be added in a future release.
- CUDA 12.x ONNX Runtime GPU packages are now built against cuDNN 9.x (1.18.0 packages previously depended on cuDNN 8.x). CUDA 11.x ONNX Runtime GPU packages continue to depend on CuDNN 8.x.
- Windows packages require installation of Microsoft Visual C++ Redistributable Runtime 14.38 or newer.
TensorRT EP:
- TensorRT Weightless API integration.
- Support for TensorRT hardware compatible engines.
- Support for INT64 types in TensorRT constant layer calibration.
- Now using latest commit of onnx-tensorrt parser, which includes several issue fixes.
- Additional TensorRT support and performance improvements.
Packages:
- Publish CUDA 12 Java packages to Azure DevOps feed.
- Various packaging pipeline fixes.
This patch release also features various other bug fixes, including a CUDA 12.5 build error fix.
Big thank you to @yf711 for driving this release as the release manager and to all our contributors!
@yf711 @jchen351 @mszhanyi @snnn @wangyems @jywu-msft @skottmckay @chilo-ms @moraxu @kevinch-nv @pengwa @wejoncy @pranavsharma @Craigacp @jslhcl @adrianlizarraga @inisis @jeffbloo @mo-ja @kunal-vaishnavi @sumitsays @neNasko1 @yufenglee @dhruvbird @wangshuai09 @xiaoyu-work @axinging @yuslepukhin @YUNQIUGUO @shubhambhokare1 @fs-eire @afantino951 @tboby @HectorSVC @baijumeswani
ONNX Runtime v1.18.0
Announcements
- Windows ARM32 support has been dropped at the source code level.
- Python version >=3.8 is now required for build.bat/build.sh (previously >=3.7). Note: If you have Python version <3.8, you can bypass the tools and use CMake directly.
- The onnxruntime-mobile Android package and onnxruntime-mobile-c/onnxruntime-mobile-objc iOS cocoapods are being deprecated. Please use the onnxruntime-android Android package, and onnxruntime-c/onnxruntime-objc cocoapods, which support ONNX and ORT format models and all operators and data types. Note: If you require a smaller binary size, a custom build is required. See details on creating a custom Android or iOS package on Custom build | onnxruntime.
Build System & Packages
- CoreML execution provider now depends on coremltools.
- Flatbuffers has been upgraded from 1.12.0 → 23.5.26.
- ONNX has been upgraded from 1.15 → 1.16.
- EMSDK has been upgraded from 3.1.51 → 3.1.57.
- Intel neural_speed library has been upgraded from v0.1.1 → v0.3 with several important bug fixes.
- There is a new onnxruntime_CUDA_MINIMAL CMake option for building ONNX Runtime CUDA execution provider without any operations apart from memcpy ops.
- Added Mac Catalyst build support.
- Added initial support for RISC-V and three new build options for it: --rv64, --riscv_toolchain_root, and --riscv_qemu_path.
- Now you can build TensorRT EP with protobuf-lite instead of the full version of protobuf.
- Some security-related compile/link flags have been moved from the default setting → new build option: --use_binskim_compliant_compile_flags. Note: All our release binaries are built with this flag, but when building ONNX Runtime from source, this flag is OFF by default.
- Windows ARM64 build now depends on PyTorch CPUINFO library.
- Windows OneCore build now uses “Reverse forwarding” apisets instead of “Direct forwarding”, so onnxruntime.dll in our Nuget packages will depend on kernel32.dll. Note: Windows systems without kernel32.dll need to have reverse forwarders (see API set loader operation - Win32 apps | Microsoft Learn for more information).
Core
- Added ONNX 1.16 support.
- Added additional optimizations related to Dynamo-exported models.
- Improved testing infrastructure for EPs developed as shared libraries.
- Exposed Reserve() in OrtAllocator to allow custom allocators to work when session.use_device_allocator_for_initializers is specified (see the sketch after this list).
- Reduced lock contention caused by memory allocations.
- Improved session creation time (graph and graph transformer optimizations).
- Added new SessionOptions config entry to disable specific transformers and rules.
- [C# API] Exposed SessionOptions.DisablePerSessionThreads to allow sharing of threadpool between sessions.
- [Java API] Added CUDA 12 Java support.
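For the Reserve() item above, a minimal sketch of the session config entry that routes initializer allocations through the device allocator (model path hypothetical):

```python
import onnxruntime as ort

so = ort.SessionOptions()
# With this set, EPs may satisfy initializer allocations via OrtAllocator,
# where Reserve() is now exposed for custom allocators.
so.add_session_config_entry("session.use_device_allocator_for_initializers", "1")
sess = ort.InferenceSession("model.onnx", sess_options=so,
                            providers=["CUDAExecutionProvider"])
```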
Performance
- Improved 4bit quant support:
- Added HQQ quantization support to improve accuracy.
- Implemented general GEMM kernel and improved GEMV kernel performance on GPU.
- Improved GEMM kernel quality and performance on x64.
- Implemented general GEMM kernel and improved GEMV performance on ARM64.
- Improved MultiheadAttention performance on CPU.
Execution Providers
TensorRT
- Added support for TensorRT 10.
- Finalized support for DDS ops.
- Added Python support for user provided CUDA stream.
- Fixed various bugs.
CUDA
- Added support for multiple CUDA graphs.
- Added a provider option to disable TF32 (see the sketch after this list).
- Added Python support for user-provided CUDA streams.
- Extended MoE to support Tensor Parallelism and int4 quantization.
- Fixed bugs in BatchNorm and TopK kernels.
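A hedged sketch of the two CUDA provider options mentioned above; the stream is taken from PyTorch purely for illustration:

```python
import torch
import onnxruntime as ort

# Hand ORT an existing CUDA stream (as a pointer string) and disable TF32.
stream_ptr = torch.cuda.current_stream().cuda_stream
providers = [("CUDAExecutionProvider", {
    "user_compute_stream": str(stream_ptr),
    "use_tf32": "0",
})]
sess = ort.InferenceSession("model.onnx", providers=providers)
```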
QNN
- Added support for up to QNN SDK 2.22.
- Upgraded support from A16W8 → mixed 8/16-bit precision configurability per layer.
- Added fp16 execution support via the enable_htp_fp16 option (see the sketch after this list).
- Added multiple partition support for QNN context binary.
- Expanded operator support and fixed various bugs.
- Added support for per-channel quantized weights for Conv.
- Integration with Qualcomm’s AIHub.
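A sketch of QNN EP provider options for the items above; the fp16 option key follows these notes and may differ by SDK/ORT version, so treat it as an assumption:

```python
import onnxruntime as ort

providers = [("QNNExecutionProvider", {
    "backend_path": "QnnHtp.dll",   # HTP backend library on Windows
    "enable_htp_fp16": "1",         # option name per these notes (assumed)
})]
sess = ort.InferenceSession("model.onnx", providers=providers)
```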
OpenVINO
- Added support for up to OpenVINO 2024.1.
- Added support for importing pre-compiled blob as EPContext blob.
- Separated device and precision as inputs by removing support for device_id in provider options and adding precision as separate CLI option.
- Deprecated CPU_FP32 and GPU_FP32 terminology and introduced CPU and GPU terminology.
- AUTO:GPU,CPU will now create only a GPU blob, not a CPU blob.
DirectML
- Additional ONNX operator support: Resize-18 and Resize-19, Col2Im-18, IsNaN-20, IsInf-20, and ReduceMax-20.
- Additional contrib op support: SimplifiedLayerNormalization, SkipSimplifiedLayerNormalization, QLinearAveragePool, MatMulIntegerToFloat, GroupQueryAttention, DynamicQuantizeMatMul, and QAttention.
Mobile
- Improved performance of ARM64 4-bit quantization.
- Added support for building with QNN on Android.
- Added MacCatalyst support.
- Added visionOS support.
- Added initial support for creating ML Program format CoreML models.
- Added support for 1D Conv and ConvTranspose to XNNPACK EP.
Web
- Added WebNN EP preview.
- Improved WebGPU performance (MHA, ROE).
- Added more WebGPU and WebNN examples.
- Increased generative model support.
- Optimized Buffer management to reduce memory footprint.
Training
- Large Model Training
- Added optimizations for Dynamo-exported models.
- Added Mixtral integration using ORT backend.
- On-Device Training
- Added support for models >2GB to enable SLM training on edge devices.
GenAI
- Added additional model support: Phi-3, Gemma, LLama-3.
- Added DML EP support.
- Improved tokenizer quality.
- Improved sampling method and ORT model performance.
Extensions
- Created Java packaging pipeline and published to Maven repository.
- Added support for conversion of Hugging Face FastTokenizer into ONNX custom operator.
- Unified the SentencePiece tokenizer with other Byte Pair Encoding (BPE) based tokenizers.
- Fixed Whisper large model pre-processing bug.
- Enabled eager execution for custom operator and refactored the header file structure.
Contributors
Yi Zhang, Yulong Wang, Adrian Lizarraga, Changming Sun, Scott McKay, Tianlei Wu, Peng Wang, Hector Li, Edward Chen, Dmitri Smirnov, Patrice Vignola, Guenther Schmuelling, Ye Wang, Chi Lo, Wanming Lin, Xu Xing, Baiju Meswani, Peixuan Zuo, Vincent Wang, Markus Tavenrath, Lei Cao, Kunal Vaishnavi, Rachel Guo, Satya Kumar Jandhyala, Sheil Kumar, Yifan Li, Jiajia Qin, Maximilian Müller, Xavier Dupré, Yi-Hong Lyu, Yufeng Li, Alejandro Cid Delgado, Adam Louly, Prathik Rao, wejoncy, Zesong Wang, Adam Pocock, George Wu, Jian Chen, Justin Chu, Xiaoyu, guyang3532, Jingyan Wang, raoanag, Satya Jandhyala, Hariharan Seshadri, Jiajie Hu, Sumit Agarwal, Peter Mcaughan, Zhijiang Xu, Abhishek Jindal, Jake Mathern, Jeff Bloomfield, Jeff Daily, Linnea May, Phoebe Chen, Preetha Veeramalai, Shubham Bhokare, Wei-Sheng Chin, Yang Gu, Yueqing Zhang, Guangyun Han, inisis, ironman, Ivan Berg, Liqun Fu, Yu Luo, Rui Ren, Sahar Fatima, snadampal, wangshuai09, Zhenze Wang, Andrew Fantino, Andrew Grigorev, Ashwini Khade, Atanas Dimitrov, AtomicVar, Belem Zhang, Bowen Bao, Chen Fu, Dhruv Matani, Fangrui Song, Francesco, Frank Dong, Hans Chen, He Li, Heflin Stephen Raj, Jambay Kinley, Masayoshi Tsutsui, Matttttt, Nanashi, Phoebe Chen, Pranav Sharma, Segev Finer, Sophie Schoenmeyer, TP Boudreau, Ted Themistokleous, Thomas Boby, Xiang Zhang, Yongxin Wang, Zhang Lei, aamajumder, danyue, Duansheng Liu, enximi, fxmarty, kailums, maggie1059, mindest, mo-ja, moyo1997
Big thank you to everyone who contributed to this release!
ONNX Runtime v1.17.3
What's new?
General:
- Update copying API header files to make Linux logic consistent with Windows (#19736) - @mszhanyi
- Pin ONNX version to fix DML and Python packaging pipeline exceptions (#20073) - @mszhanyi
Build System & Packages:
Core:
CUDA EP:
- Fix onnxruntime_test_all build break with CUDA (#19673) - @gedoensmax
- Fix broken pooling CUDA NHWC ops and ensure NCHW / NHWC parity (#19889) - @mtavenrath
TensorRT EP:
- Fix TensorRT build break caused by image update (#19880) - @jywu-msft
- Fix TensorRT custom op list concurrency bug (#20093) - @chilo-ms
Web:
- Add hardSigmoid op support and hardSigmoid activation for fusedConv (#19215, #19233) - @qjia7
- Add support for WebNN async API with Asyncify (#19415) - @Honry
- Add uniform support for conv, conv transpose, conv grouped, and fp16 (#18753, #19098) - @axinging
- Add capture and replay support for JS EP (#18989) - @fs-eire
- Add LeakyRelu activation for fusedConv (#19369) - @qjia7
- Add FastGelu custom op support (#19392) - @fs-eire
- Allow uint8 tensors for WebGPU (#19545) - @satyajandhyala
- Add and optimize MatMulNBits (#19852) - @satyajandhyala
- Enable ort-web with any Float16Array polyfill (#19305) - @fs-eire
- Allow multiple EPs to be specified in backend resolve logic (#19735) - @fs-eire
- Various bug fixes: (#19258) - @gyagp, (#19201, #19554) - @hujiajie, (#19262, #19981) - @guschmue, (#19581, #19596, #19387) - @axinging, (#19613) - @satyajandhyala
- Various improvements for performance and usability: (#19202) - @qjia7, (#18900, #19281, #18883) - @axinging, (#18788, #19737) - @satyajandhyala, (#19610) - @segevfiner, (#19614, #19702, #19677, #19857, #19940) - @fs-eire, (#19791) - @gyagp, (#19868) - @guschmue, (#19433) - @martholomew, (#19932) - @ibelem
Windows:
- Fix Windows memory mapping bug affecting some larger models (#19623) - @yufenglee
Kernel Optimizations:
- Fix GQA and Rotary Embedding bugs affecting some models (#19801, #19874) - @aciddelgado
- Update replacement of MultiHeadAttention (MHA) and GroupQueryAttention (GQA) (#19882) - @kunal-vaishnavi
- Add support for packed QKV input and Rotary Embedding with sm<80 using Memory Efficient Attention kernel (#20012) - @aciddelgado
Models:
- Add support for benchmarking LLaMA model end-to-end performance (#19985, #20033, #20149) - @kunal-vaishnavi
- Add example to demonstrate export of Open AI Whisper implementation with batched prompts (#19854) - @shubhambhokare1
This patch release also includes additional fixes by @spampana95 and @enximi. Big thank you to all our contributors!
ONNX Runtime v1.17.1
This patch release includes the following updates:
General
Build System and Packages
- Fix bug that was breaking arm64 build by disabling __cpuid check on arm64 builds since intrinsic is not available (#19574) - @smk2007
Core
- Add capturestate / rundown ETW support logging for session and provider options (#19397) - @ivberg
- Restrict L2 cache core check on Intel devices (#19483) - @smk2007
Performance
- Optimize KahnsTopologicalSort and PriorityNodeCompare to fix performance degradation in session creation time that was affecting many models (#19475) - @smk2007
EPs
QNN
- Fix split index bugs uncovered by QNN SDK 2.19 release (#19381) - @adrianlizarraga
- Add job that builds x64 Python wheels for QNN EP so cached QNN models can be created on Windows x64 (#19499) - @adrianlizarraga
OpenVINO
- Fix bugs for API backwards compatibility (#19482) - @preetha-intel
DirectML
- Fix bug in external data packing that was causing crash (#19415) - @PatriceVignola
- Fix bug in allocation planner by disabling streams for DML EP (#19481) - @PatriceVignola
Web
Training
- Reduce onnxruntime-training package size so it can be published on PyPI (#19486) - @baijumeswani
- Update default std flag used during torch extensions compilation (#19516) - @baijumeswani
- Add ATen fallback support for bicubic interpolation algorithm (#19380) - @prathikr
Quantization
- Update Q/DQ quantization to ensure Microsoft opset (#19335) - @adrianlizarraga
- Add contrib Q/DQ ops to symbolic shape inference tool (#19340) - @adrianlizarraga; see the sketch after this list
- Fix subgraph quantization regression (#19421) - @fxmarty
- Add DefaultTensorType option to specify the default tensor type to quantize (#19455) - @yufenglee
- Fix bug with command line argparse to process --symmetric [True|False] correctly (#19577) - @satyajandhyala
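For the symbolic shape inference item above, a minimal sketch of running the tool from Python (model paths hypothetical):

```python
import onnx
from onnxruntime.tools.symbolic_shape_infer import SymbolicShapeInference

model = onnx.load("model_qdq.onnx")
# Contrib Q/DQ ops are now recognized by the tool per this release.
inferred = SymbolicShapeInference.infer_shapes(model)
onnx.save(inferred, "model_qdq_shaped.onnx")
```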
Whisper Model
- Fix bug in BeamSearch implementation of Whisper model that was causing a crash in some scenarios (#19345) - @petermcaughan
- Fix bug in Whisper model timestamps and temperature (#19509) - @kunal-vaishnavi
ONNX Runtime v1.17.0
Announcements
In the next release, we will completely drop support for Windows ARM32.
General
- Added support for new ONNX 1.15 opsets: IsInf-20, IsNaN-20, DFT-20, ReduceMax-20, ReduceMin-20, AffineGrid-20, GridSample, ConstantOfShape-20, RegexFullMatch, StringConcat, StringSplit, and ai.onnx.ml.LabelEncoder-4.
- Updated C/C++ libraries: abseil, date, nsync, googletest, wil, mp11, cpuinfo, safeint, and onnx.
- Added vector optimization code for the LoongArch architecture.
Build System and Packages
- Dropped CentOS 7 support. All Linux binaries now require glibc version >=2.28, but users can still build the source code for a lower glibc version.
- Added CUDA 12 packages for Python and Nuget.
- Added Python 3.12 packages for ONNX Runtime Inference. ONNX Runtime Training Python 3.12 packages cannot be provided at this time since training packages depend on PyTorch, which does not support Python 3.12 yet.
- Linux binaries (except those in AMD GPU packages) are built in a more secure way that is compliant with BinSkim's default policy (e.g., the binaries no longer have an executable stack).
- Added support for Windows ARM64X for users who build ONNX Runtime from source. No prebuilt package provided yet.
- Removed Windows ARM32 binaries from official packages. Users who still need these binaries can build them from source.
- Added AMD GPU package with ROCm and MiGraphX (Python + Linux only).
- Split ONNX Runtime GPU Nuget package into two packages.
- When building the source code for Linux ARM64 or Android, the C/C++ compiler must support BFloat16. Support for Android NDK 24.x has been removed. Please use NDK 25.x or 26.x instead.
- Link time code generation (LTCG or LTO) is now disabled by default when building from source. To re-enable it, users can add "--enable_lto" to the build command. All prebuilt binaries are still built with LTO.
Core
- Optimized graph inlining.
- Allow custom op to invoke internal thread-pool for parallelism.
- Added support for supplying a custom logger at the session level.
- Added new logging and tracing of session and execution provider options.
- Added new dynamic ETW provider that can trace/diagnose ONNX internals while maintaining great performance.
Performance
- Added 4bit quant support on NVIDIA GPU and ARM64.
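As an illustration, a hedged sketch of 4-bit weight-only quantization with the MatMul4BitsQuantizer utility from onnxruntime.quantization (available in recent releases; arguments shown are typical, not exact for this release, and paths are hypothetical):

```python
import onnx
from onnxruntime.quantization.matmul_4bits_quantizer import MatMul4BitsQuantizer

model = onnx.load("model.onnx")
quant = MatMul4BitsQuantizer(model, block_size=32, is_symmetric=True)
quant.process()                                    # rewrites MatMuls to 4-bit ops
quant.model.save_model_to_file("model_int4.onnx")
```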
EPs
TensorRT EP
- Added support for direct load of precompiled TensorRT engines and customizable engine prefix.
- Added Python support for TensorRT plugins via ORT custom ops.
- Fixed concurrent Session::Run bugs.
- Updated calls to deprecated TensorRT APIs (e.g., enqueue_v2 → enqueue_v3).
- Fixed various memory leak bugs.
QNN EP
- Added support for QNN SDK 2.18.
- Added context binary caching and model initialization optimizations.
- Added mixed precision (8/16 bit) quantization support.
- Added device-level session options (soc_model, htp_arch, device_id), extreme_power_saver for htp_performance_mode, and vtcm_mb settings.
- Fixed multi-threaded inference bug.
- Fixed various other bugs and added performance improvements.
- QNN profiling of the NPU can be enabled dynamically with ETW or written out to CSV.
OpenVINO EP
- Added support for OpenVINO 2023.2.
- Added AppendExecutionProvider_OpenVINO_V2 API for supporting new OpenVINO EP options.
DirectML EP
- Updated to DirectML 1.13.1.
- Updated operators LpPool-18 and AveragePool-19 with dilations.
- Improved Python I/O binding support.
- Added RotaryEmbedding.
- Added support for fusing subgraphs into DirectML execution plans.
- Added new Python API to choose a specific GPU on multi-GPU devices with the DirectML EP.
Mobile
- Added initial support for 4bit quantization on ARM64.
- Extended CoreML/NNAPI operator coverage.
- Added support for YOLOv8 pose detection pre/post processing.
- Added support for macOS in CocoaPods package.
Web
- Added support for external data format.
- Added support for I/O bindings.
- Added support for training.
- Added WebGPU optimizations.
- Transitioned WebGPU out of experimental.
- Added FP16 support for WebGPU.
Training
Large Model Training
- Enabled support for QLoRA (with support for BFloat16).
- Added symbolic shape support for Triton codegen (see PR).
- Made improvements to recompute optimizer with easy ON/OFF to allow layer-wise recompute (see PR).
- Enabled memory-efficient gradient management. For Mistral, we see ~10GB drop in memory consumption when this feature is ON (see PR).
- Enabled embedding sparsity optimizations.
- Added support for Aten efficient attention and Triton Flash Attention (see PR).
- Packages now available for CUDA 11.8 and 12.1.
On Device Training
- On-Device training will now support training on the web. This release focuses on federated learning and developer exploration scenarios. More features coming soon in future releases.
Extensions
- Modified gen_processing_model tokenizer model to output int64, unifying the output datatype of all tokenizers (see the sketch after this list).
- Implemented support for post-processing of YOLO v8 within the Python extensions package.
- Introduced 'fairseq' flag to enhance compatibility with certain Hugging Face tokenizers.
- Incorporated 'added_token' attribute into the BPE tokenizer to improve CodeGen tokenizer functionality.
- Enhanced the SentencePiece tokenizer by integrating token indices into the output.
- Added support for the custom operator implemented with CUDA kernels, including two example operators.
- Added more tests on the Hugging Face tokenizer and fixed identified bugs.
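For the tokenizer item above, a hedged sketch using the gen_processing_models helper from onnxruntime-extensions (the Hugging Face model name is just an example):

```python
import onnx
from transformers import AutoTokenizer
from onnxruntime_extensions import gen_processing_models

hf_tok = AutoTokenizer.from_pretrained("gpt2")
# Export the tokenizer as an ONNX pre-processing model; token ids are int64.
pre_model, _ = gen_processing_models(hf_tok, pre_kwargs={})
onnx.save(pre_model, "gpt2_tokenizer.onnx")
```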
Known Issues
- The onnxruntime-training package is not yet available in PyPI but can be accessed in ADO as follows:
Installation instructions can also be accessed here.

    python -m pip install cerberus flatbuffers h5py "numpy>=1.16.6" onnx packaging protobuf sympy "setuptools>=41.4.0"
    pip install -i https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/ORT/pypi/simple/ onnxruntime-training
    pip install torch-ort
    python -m torch_ort.configure
- For models with int4 kernel only:
- A crash may occur when int4 is applied on Intel CPUs with hybrid cores if the E-cores are disabled in BIOS. A fix is in progress.
- The "neural-speed" library used by int4 kernels has a bug that could lead to out-of-bounds memory read/write.
- A performance regression in the int4 kernel on x64 makes the op following MatMulNBits much slower. A fix is in progress.
- A current bug in the BeamSearch implementation of T5, GPT, and Whisper may break models under heavy inference load using BeamSearch on CUDA. See #19345. A fix is in progress.
- Full support of ONNX 1.15 opsets is still in progress. A list of new ONNX 1.15 opset support that has been included in this release can be found above in the 'General' section.
- Some Cast nodes will not be removed (see #17953): a Cast node from higher precision to lower precision (like fp32 to fp16) will be kept. If a model's results differ between ORT 1.16 and 1.17, check whether some Cast nodes were removed in 1.16 but kept in 1.17.
- When running ONNX Runtime's python 3.12 package on Windows 11, you may see a warning like: “Unsupported Windows version (11). ONNX Runtime supports Windows 10 and above, only.” You may safely ignore it.
Contributions
Contributors to ONNX Runtime include members across teams at Microsoft, along with our community members:
Changming Sun, Yulong Wang, Tianlei Wu, Yi Zhang, Jian Chen, Jiajia Qin, Adrian Lizarraga, Scott McKay, Wanming Lin, pengwa, Hector Li, Chi Lo, Dmitri Smirnov, Edward Chen, Xu Xing, satyajandhyala, Rachel Guo, PeixuanZuo, RandySheriffH, Xavier Dupré, Patrice Vignola, Baiju Meswani, Guenther Schmuelling, Jeff Bloomfield, Vincent Wang, cloudhan, zesongw, Arthur Islamov, Wei-Sheng Chin, Yifan Li, raoanag, Caroline Zhu, Sheil Kumar, Ashwini Khade, liqun Fu, xhcao, aciddelgado, kunal-vaishnavi, Aditya Goel, Hariharan Seshadri, Ye Wang, Adam Pocock, Chen Fu, Jambay Kinley, Kaz Nishimura, Maximilian Müller, Yang Gu, guyang3532, mindest, Abhishek Jindal, Justin Chu, Numfor Tiapo, Prathik Rao, Yufeng Li, cao lei, snadampal, sophies927, BoarQing, Bowen Bao, George Wu, Jiajie Hu, MistEO, Nat Kershaw (M...
ONNX Runtime v1.16.3
ONNX Runtime v1.16.2
The patch release includes updates on:
- Performance optimizations for Llama2 on CUDA EP and DirectML EP
- Performance optimizations for Stable Diffusion XL model for CUDA EP
- Demos for text to image generation
- Mobile bug fixes for a crash on some older 64-bit ARM devices and an AOT inlining issue on iOS with C# bindings
- TensorRT EP bug fixes for user provided compute stream and stream synchronization