
E2E HuggingFace Bert using LTC Backend #912

Merged

Commits on Jun 6, 2022

  1. Commit 80f61a1
  2. Add ops to support bert lowering

    - Add empty_strided and as_strided
    
    - Restore zeros_like to the op blacklist (without this, tensors are unintentionally created on the CPU device rather than the lazy device)
    
    - Check for composite implicit ops and add device data IR
    
    - Also fix codegen for functionalization
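
    The composite-implicit check above can be sketched roughly as follows. This is a hypothetical, simplified Python illustration of the filtering idea (the real codegen is driven by torchgen and C++); the function name, blacklist contents beyond zeros_like, and dispatch-key strings are illustrative assumptions, not the actual implementation.

    ```python
    # Hypothetical sketch: deciding whether the lazy (LTC) backend should
    # generate IR for an op. CompositeImplicitAutograd ops decompose into
    # other ops automatically, so the backend can skip them; blacklisted
    # ops (e.g. zeros_like, per the commit note) are also excluded.
    BLACKLIST = {"zeros_like"}  # ops forced to fall back, per the commit note

    def should_codegen(op_name, dispatch_keys):
        if "CompositeImplicitAutograd" in dispatch_keys:
            return False  # decomposes implicitly; no device-data IR needed
        if op_name in BLACKLIST:
            return False  # explicitly blacklisted
        return True
    ```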
    antoniojkim committed Jun 6, 2022 · 67237df
  3. Add autogen to CMakeList

    antoniojkim committed Jun 6, 2022 · 31b2d33
  4. Remove PyTorch submodule

    antoniojkim committed Jun 6, 2022 · a601b9e
  5. Reduced BERT model size

    henrytwo authored and antoniojkim committed Jun 6, 2022 · 950e0ec
  6. Commit 0187a26
  7. Apply fixes to work with latest upstream/main

    - Pass importOptions into getMlirTypeFromTorchType during NodeImporter::importNode
    
      Without this, the created tensor type may be mismatched, since ImportOptions can cause vtensor to be used instead of tensor
    antoniojkim committed Jun 6, 2022 · dac0269
  8. Update shape inference functions

    - Fixed compute_shape_native_batch_norm when mean and var are uninitialized
    
      Previously, fewer than 3 shapes would be returned if either mean or var didn't exist. Instead, we now initialize them with a vector matching the number of channels.
    
    - Implemented compute_shape_mul
    
    - Fixed bug in reshape shape inference error message
    henrytwo authored and antoniojkim committed Jun 6, 2022 · f6e3761
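
    The batch-norm shape fix described above can be sketched in Python. The actual implementation is a C++ function in LazyShapeInference; the signature and shape-list layout here are simplified assumptions for illustration only.

    ```python
    # Illustrative sketch (not the real C++ code) of the
    # compute_shape_native_batch_norm fix: when running_mean/running_var are
    # uninitialized, fall back to a 1-D shape matching the channel count so
    # that three shapes are always returned.

    def compute_shape_native_batch_norm(input_shape, running_mean_shape, running_var_shape):
        num_channels = input_shape[1]  # assuming NCHW layout: channels at dim 1
        shapes = [list(input_shape)]   # output has the same shape as the input
        # Previously a missing mean/var meant fewer than 3 shapes came back;
        # now we substitute [num_channels] when the stats are uninitialized.
        shapes.append(list(running_mean_shape) if running_mean_shape is not None
                      else [num_channels])
        shapes.append(list(running_var_shape) if running_var_shape is not None
                      else [num_channels])
        return shapes
    ```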

Commits on Jun 7, 2022

  1. Get MLIR backend more consistent with TS backend

    - Remove LazyNativeFunctions::_unsafe_view from autogen
    
    - Blacklist ops to make JIT graph more like output of TS backend
    
    - Print the graph when an SSA value has a mismatch between its types and results
    
    - Remove normalize_index from LazyShapeInference
    
    - Fix seeds for LTC example models
    henrytwo authored and antoniojkim committed Jun 7, 2022 · 38ee102
  2. Update and clean up shape inference functions

    - Prune shape inference functions
    
    - Add shape inference function for GenerateSlice
    
    - Add shape inference function for GenerateCopy
    henrytwo committed Jun 7, 2022 · ec3c9a9
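
The slice shape inference mentioned above boils down to counting how many indices a (start, end, step) range selects along one dimension. The real function is C++ and its exact signature is not shown in this log; the Python sketch below uses hypothetical names purely to illustrate the shape arithmetic.

```python
# Illustrative sketch of slice shape inference (hypothetical names; the
# actual function lives in the C++ shape-inference code for ops like
# GenerateSlice). The output shape equals the input shape except along
# `dim`, which keeps ceil((end - start) / step) elements.

def compute_shape_slice(base_shape, dim, start, end, step):
    out = list(base_shape)
    # Ceiling division without floats: -(-n // d)
    out[dim] = max(0, -(-(end - start) // step))
    return out
```

For example, slicing indices 2, 4, 6 out of a dimension of size 10 (start=2, end=8, step=2) keeps 3 elements.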