
Releases: microsoft/nnfusion

NNFusion v0.4 Release Candidate

16 Mar 04:44
118f33f
Pre-release
Fix variable name typo (#334)

Co-authored-by: Wenxiang Hu <8460860+wenxcs@users.noreply.github.com>

NNFusion v0.3 Release

25 May 02:52
1b7f529

Major Feature

  • Support end-to-end BERT model training (in ONNX format) on real datasets
  • Add new operator fusion passes for transformer-based model optimization
  • Provide C++ and JSON interfaces for extending custom operators
  • Add a new HLSL code generator
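The JSON interface above lets users describe a custom operator declaratively. As a minimal sketch of what such a descriptor might look like (the field names and schema here are assumptions for illustration, not NNFusion's actual format):

```python
import json

# Hypothetical JSON descriptor for a custom operator. The exact schema
# NNFusion expects may differ; field names here are illustrative only.
custom_op = {
    "ops": [
        {
            "op": "MyGelu",                   # operator name (assumed field)
            "input_shapes": [[64, 1024]],     # one input tensor
            "output_shapes": [[64, 1024]],    # one output tensor, same shape
            # Placeholder compute expression; a real descriptor would carry
            # the operator's actual definition or kernel reference.
            "expression": "output0[N, C] = input0[N, C] * 0.5",
        }
    ]
}

descriptor = json.dumps(custom_op, indent=2)
print(descriptor)
```

A tool consuming such a file would parse it with `json.loads` and register one kernel entry per item in `"ops"`.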

Others

  • Update related documentation
  • Fix bugs

Shortcut to the Chinese-language release notes --> #105 (comment)

NNFusion v0.2 Release

25 Dec 08:46
7c96540

Major Features

  • Support a Python interface to accelerate training and inference of PyTorch models
  • Support low-precision and mixed-precision model compilation, e.g., fp16
  • Provide auto kernel tuner integration:
    • Add Antares IR for 60+ ops
    • Support auto tuning via Antares tuning service
  • Support parallel training via SuperScaler
  • Enable local kernel cache through kernel database
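The local kernel cache keeps previously tuned kernels in a kernel database so they can be reused across compilations. A minimal sketch of the idea, using an in-memory SQLite database (the table layout and key format are assumptions for illustration, not NNFusion's actual schema):

```python
import sqlite3

# Toy kernel cache backed by SQLite. Entries are keyed by an operator
# signature string (op name + shapes + dtype) plus a target device.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE IF NOT EXISTS kernel_cache ("
    "  identifier TEXT PRIMARY KEY,"   # e.g. op name + shapes + dtype
    "  device     TEXT,"               # e.g. 'CUDA_GPU'
    "  source     TEXT"                # cached kernel source code
    ")"
)

def put_kernel(identifier, device, source):
    """Store (or replace) a tuned kernel for a given op signature."""
    conn.execute(
        "INSERT OR REPLACE INTO kernel_cache VALUES (?, ?, ?)",
        (identifier, device, source),
    )

def get_kernel(identifier):
    """Look up a cached kernel; returns None on a cache miss."""
    row = conn.execute(
        "SELECT source FROM kernel_cache WHERE identifier = ?",
        (identifier,),
    ).fetchone()
    return row[0] if row else None

# A compiler pass would consult the cache before invoking the tuner.
put_kernel("MatMul[1024x1024]fp16", "CUDA_GPU",
           "__global__ void matmul(...) { /* tuned kernel body */ }")
```

On a cache hit the compiler emits the stored source directly; on a miss it falls back to kernel selection or the auto-tuner and stores the result.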

Others

  • Update related documentation
  • Various user-experience enhancements and bug fixes

Shortcut to the Chinese-language release notes --> #105 (comment)

NNFusion v0.1 Release

05 Nov 04:42
8ae20c6
  • Build and Installation:

    • Support out-of-the-box installation with a Docker image
    • Support source-code installation on a native system or in Docker
    • Support devices such as CUDA GPUs and ROCm GPUs
  • Models, Framework and Operators:

    • Support DNN model formats including TensorFlow and ONNX
    • Support commonly used models including AlexNet, VGG11, ResNet50, seq2seq, BERT, etc.
    • Support more than 100 commonly used operators.
  • Model Compilation and Execution:

    • Provide a full-stack optimization mechanism, including data-flow graph optimizations, model-specific kernel selection, kernel co-scheduling, etc.
    • Provide ahead-of-time and source-to-source (model-to-code) compilation to reduce runtime overhead
    • Remove third-party library or framework dependencies
  • Usability:

    • Provide a command-line tool, nnfusion
    • Provide tools for users to freeze TensorFlow and PyTorch models
    • Provide a flexible way to customize optimizations by directly modifying the generated code

Shortcut to the Chinese-language release notes --> #72 (comment)