
You Only Cut Once (YOCO)

YOCO is a simple method/strategy for performing augmentations. It is parameter-free, easy to use, and boosts almost all augmentations for free (negligible computation & memory cost).

You Only Cut Once: Boosting Data Augmentation with a Single Cut
Junlin Han, Pengfei Fang, Weihao Li, Jie Hong, Ali Armin, Ian Reid, Lars Petersson, Hongdong Li
DATA61-CSIRO and Australian National University and University of Adelaide
International Conference on Machine Learning (ICML), 2022

@inproceedings{han2022yoco,
  title={You Only Cut Once: Boosting Data Augmentation with a Single Cut},
  author={Junlin Han and Pengfei Fang and Weihao Li and Jie Hong and Mohammad Ali Armin and Ian Reid and Lars Petersson and Hongdong Li},
  booktitle={International Conference on Machine Learning (ICML)},
  year={2022}
}

YOCO cuts one image into two equal pieces, either in the height or the width dimension. The same data augmentations are performed independently within each piece. Augmented pieces are then concatenated together to form the final augmented image.  
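For a single image tensor, the cut-augment-concatenate idea can be sketched as follows (a minimal illustration, not code from this repo; the tensor `img` and the choice of a height-dimension cut are assumptions for the example). The batch-level version used in this repo is shown under Easy usages below.

import torch
from torchvision import transforms

# Hypothetical single-image illustration of the cut-augment-concatenate idea.
img = torch.rand(3, 224, 224)                 # one image, (c, h, w)
aug = transforms.RandomHorizontalFlip(p=1.0)  # any augmentation works here

c, h, w = img.shape
top, bottom = img[:, 0:h // 2, :], img[:, h // 2:h, :]  # cut along the height dimension
img_yoco = torch.cat((aug(top), aug(bottom)), dim=1)    # augment each piece independently, then concatenate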

Results

Overall, YOCO benefits almost all augmentations in multiple vision tasks (classification, contrastive learning, object detection, instance segmentation, image deraining, image super-resolution). Please see our paper for more.

Easy usage

Applying YOCO is quite easy; here is demo code for performing YOCO at the batch level.

import torch
from torchvision import transforms

def YOCO(images, aug, h, w):
    """
    images: images to be augmented, a tensor of shape (b, c, h, w)
    aug: composed augmentation operations, here a horizontal flip
    h: height of the images
    w: width of the images
    """
    # Randomly pick the cut dimension: width (dim=3) or height (dim=2).
    # Each half is augmented independently, then the halves are concatenated back.
    if torch.rand(1) > 0.5:
        images = torch.cat((aug(images[:, :, :, 0:int(w / 2)]),
                            aug(images[:, :, :, int(w / 2):w])), dim=3)
    else:
        images = torch.cat((aug(images[:, :, 0:int(h / 2), :]),
                            aug(images[:, :, int(h / 2):h, :])), dim=2)
    return images
    
for i, (images, target) in enumerate(train_loader):
    aug = torch.nn.Sequential(
        transforms.RandomHorizontalFlip(),
    )
    _, _, h, w = images.shape
    # perform augmentations with YOCO
    images = YOCO(images, aug, h, w)

You may use any PyTorch built-in augmentation operation in place of the horizontal flip, as in the sketch below.
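For example, a heavier augmentation pipeline can be dropped in the same way (a sketch, not taken from this repo; the specific transforms and their parameters are only illustrative):

aug = torch.nn.Sequential(
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(0.4, 0.4, 0.4, 0.1),
    transforms.RandomGrayscale(p=0.2),
)
# Same call as before: each half of the batch images is augmented independently.
images = YOCO(images, aug, h, w)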

Prerequisites

This repo aims to make minimal modifications to the official PyTorch ImageNet training code and MoCo. Follow their instructions to install the environment and prepare the datasets.

timm is also required for ImageNet classification; simply run

pip install timm

Images augmented with YOCO

For each quadruplet, we show the original input image, the augmented image from image-level augmentation, and two images from the different cut dimensions produced by YOCO.

Contact

junlinhcv@gmail.com

If you try YOCO on other tasks/datasets/augmentations, please feel free to let me know the results. They will be collected and presented in this repo, whether positive or negative. Many thanks!

Acknowledgments

Our code is developed based on the official PyTorch ImageNet training code and MoCo. We thank the anonymous reviewers for their invaluable feedback!