This repository has been archived by the owner on Aug 26, 2022. It is now read-only.

Releases: tunib-ai/oslo

v2.0.2

25 Aug 19:28
  • Revert oslo to 1.1.2.

v2.0.1

20 Feb 23:03
a7ab69c
  • Merge changes from functorch upstream.
  • Fix documentation and tutorials

v2.0.0

14 Feb 18:46
0582b8a

Official release of OSLO 2.0.0 🎉🎉

This version of OSLO provides the following features:

  • Tensor model parallelism
  • Efficient activation checkpointing
  • Kernel fusion

We plan to add pipeline model parallelism and ZeRO optimization in upcoming versions.


New feature: Kernel Fusion

{
  "kernel_fusion": {
    "enable": "bool",
    "memory_efficient_fusion": "bool",
    "custom_cuda_kernels": "list"
  }
}

For more information, please see the kernel fusion tutorial.
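
As a rough usage sketch (not a definitive API reference), kernel fusion can be enabled through the same oslo.initialize call documented in the releases below; the option values here are illustrative only.

import oslo

# Wrap an existing Transformers model with OSLO and enable kernel fusion.
# The field names follow the kernel_fusion schema above; the specific
# values (and the empty custom CUDA kernel list) are examples, not defaults.
model = oslo.initialize(
    model,
    config={
        "kernel_fusion": {
            "enable": True,
            "memory_efficient_fusion": True,
            "custom_cuda_kernels": [],
        },
    },
)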

v2.0.0a2

02 Feb 12:57

Quick fix for the CUDA RNG state tracker

v2.0.0a1

02 Feb 12:42

Add activation checkpointing

You can enable efficient activation checkpointing in OSLO with the following configuration.

import oslo

model = oslo.initialize(
    model,
    config={
        "model_parallelism": {
            "enable": True,
            "tensor_parallel_size": YOUR_TENSOR_PARALLEL_SIZE,
        },
        "activation_checkpointing": {
            "enable": True,
            "cpu_checkpointing": True,
            "partitioned_checkpointing": True,
            "contiguous_checkpointing": True,
        },
    },
)

Tutorial: https://tunib-ai.github.io/oslo/TUTORIALS/activation_checkpointing.html

v2.0.0a0

30 Jan 05:32
b1854b3

New API

  • We paid homage to DeepSpeed's API design. OSLO is now simpler and easier to use.
import oslo

model = oslo.initialize(model, config="oslo-config.json")
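
For reference, the oslo-config.json file holds the same keys as the dictionary configuration documented in the v2.0.0 and v2.0.0a1 notes above. The sketch below is illustrative only: the keys are taken from those notes, while the values (parallel size, checkpointing flags) are example choices, not recommendations.

import json
import oslo

# Write an example OSLO config file. `model` is assumed to be an existing
# Hugging Face Transformers model, as in the other examples on this page.
config = {
    "model_parallelism": {
        "enable": True,
        "tensor_parallel_size": 2,
    },
    "activation_checkpointing": {
        "enable": True,
        "cpu_checkpointing": True,
    },
}
with open("oslo-config.json", "w") as f:
    json.dump(config, f, indent=2)

model = oslo.initialize(model, config="oslo-config.json")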

Add new models

  • Albert
  • Bert
  • Bart
  • T5
  • GPT2
  • GPTNeo
  • GPTJ
  • Electra
  • Roberta

Add documentation

Remove old pipeline parallelism and kernel fusion code

  • We'll refurbish them using the latest methods
    • Kernel fusion: AOTAutograd
    • Pipeline parallelism: SageMaker PP

v1.1.2

15 Jan 22:03
4b33288

Updates

[#7] Selective Kernel Fusion
[#9] Fix argument bug

New Feature: Selective Kernel Fusion

Since version 1.1.2, you can fuse only selected kernels instead of all of them. Currently, only the Attention and MLP classes are supported.

from oslo import GPT2MLP, GPT2Attention

# MLP only fusion
model.fuse([GPT2MLP])

# Attention only fusion
model.fuse([GPT2Attention])

# MLP + Attention fusion
model.fuse([GPT2MLP, GPT2Attention])

v1.1

29 Dec 22:48

[#3] Add the deployment launcher from Parallelformers to OSLO.

from oslo import GPTNeoForCausalLM

model = GPTNeoForCausalLM.from_pretrained_with_parallel(
    "EleutherAI/gpt-neo-2.7B",
    tensor_parallel_size=2,
    pipeline_parallel_size=2,
    deployment=True  # <-- new feature !
)

You can enable the deployment launcher simply by passing deployment=True. Please refer to USAGE.md for more details.

v1.0.1

22 Dec 12:31

Quick Fix

  • Support Megatron-LM style (.jsonl) file preprocessing.

v1.0

21 Dec 05:27


O S L O

Open Source framework for Large-scale transformer Optimization





What is OSLO about?

OSLO is a framework that provides various GPU-based optimization features for large-scale modeling. As of 2021, Hugging Face Transformers is considered the de facto standard.
However, it does not yet fit the purposes of large-scale modeling well.
This is where OSLO comes in. OSLO is designed to make it easier to train large models with Transformers.
For example, you can fine-tune GPTJ on the Hugging Face Model Hub without much extra effort using OSLO. Currently, GPT2, GPTNeo, and GPTJ are supported, but we plan to support more models soon.

Installation

OSLO can be easily installed using the pip package manager.
All the dependencies, such as torch, transformers, dacite,
ninja, and pybind11, are installed automatically with the following command.
Note the 'core' in the PyPI project name.

pip install oslo-core

Some features rely on C++ code, so we provide an option, CPP_AVAILABLE, to decide whether or not to install them.

  • If the C++ is available:
CPP_AVAILABLE=1 pip install oslo-core
  • If the C++ is not available:
CPP_AVAILABLE=0 pip install oslo-core

Note that the default value of CPP_AVAILABLE is 0 in Windows and 1 in Linux.

Key Features

import deepspeed 
from oslo import GPTJForCausalLM

# 1. 3D Parallelism
model = GPTJForCausalLM.from_pretrained_with_parallel(
    "EleutherAI/gpt-j-6B", tensor_parallel_size=2, pipeline_parallel_size=2,
)

# 2. Kernel Fusion
model = model.fuse()

# 3. DeepSpeed Support
engines = deepspeed.initialize(
    model=model.gpu_modules(), model_parameters=model.gpu_parameters(), ...,
)

# 4. Data Processing
from oslo import (
    DatasetPreprocessor, 
    DatasetBlender, 
    DatasetForCausalLM, 
    ...    
)

OSLO offers the following features.

  • 3D Parallelism: The state-of-the-art technique for training a large-scale model with multiple GPUs.
  • Kernel Fusion: A GPU optimization method to increase training and inference speed.
  • DeepSpeed Support: We support DeepSpeed, which provides ZeRO data parallelism.
  • Data Processing: Various utilities for efficient large-scale data processing.

See USAGE.md to learn how to use them.

Administrative Notes

Citing OSLO

If you find our work useful, please consider citing:

@misc{oslo,
  author       = {Ko, Hyunwoong and Kim, Soohwan and Park, Kyubyong},
  title        = {OSLO: Open Source framework for Large-scale transformer Optimization},
  howpublished = {\url{https://github.com/tunib-ai/oslo}},
  year         = {2021},
}

Licensing

The code of the OSLO project is licensed under the terms of the Apache License 2.0.

Copyright 2021 TUNiB Inc. (http://www.tunib.ai) All Rights Reserved.

Acknowledgements

The OSLO project is built with GPU support from the AICA (Artificial Intelligence Industry Cluster Agency).