SMAI Project

Understanding Deep Learning Requires Rethinking Generalization (arXiv)

Aim:

To understand what differentiates neural networks that generalize well from those that do not

Datasets:

CIFAR10, ImageNet

Models:

MLP-512, Inception (tiny), Wide ResNet, AlexNet, Inception_v3

Experiments performed:

  • Effect of explicit regularization: data augmentation, weight decay, and dropout (see the training-setup sketch after this list)
  • Effect of implicit regularization, e.g. BatchNorm
  • Input data corruption: pixel shuffle, Gaussian pixels, and random pixels (see the corruption sketch after this list)
  • Label corruption at levels ranging from 1% to 100%
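For the explicit-regularization comparisons, the knobs are the standard ones. A minimal sketch, assuming a typical PyTorch training script (the function name and default values are hypothetical, not taken from this repository):

```python
import torch
import torch.nn as nn
import torchvision.transforms as T


def make_training_setup(model_params, use_augmentation=True,
                        use_weight_decay=True, use_dropout=True):
    # Data augmentation: random crop + horizontal flip, the usual CIFAR10 recipe.
    train_tf = [T.RandomCrop(32, padding=4), T.RandomHorizontalFlip()] if use_augmentation else []
    transform = T.Compose(train_tf + [T.ToTensor()])

    # Weight decay enters through the optimizer's L2 penalty term.
    optimizer = torch.optim.SGD(model_params, lr=0.1, momentum=0.9,
                                weight_decay=5e-4 if use_weight_decay else 0.0)

    # Dropout lives inside the model; p=0 disables it without changing the graph.
    dropout = nn.Dropout(p=0.5 if use_dropout else 0.0)
    return transform, optimizer, dropout
```

Each experiment variant then toggles one of these flags while holding the rest of the setup fixed.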
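The corruption modes follow the paper's setup: "pixel shuffle" applies one fixed random permutation of pixels to every image, "random pixels" draws a fresh permutation per image, "Gaussian pixels" replaces the image with noise matching its statistics, and label corruption replaces a fixed fraction of labels with uniformly random ones. Below is a minimal sketch of how these can be applied to a CIFAR10-style dataset; this is not the repository's actual code, and the wrapper class and its arguments are hypothetical:

```python
import numpy as np
import torch
from torch.utils.data import Dataset


class CorruptedCIFAR10(Dataset):
    """Hypothetical wrapper that applies one corruption mode to a base dataset."""

    def __init__(self, base, mode="none", label_corrupt_frac=0.0,
                 num_classes=10, seed=0):
        self.base = base
        self.mode = mode
        rng = np.random.RandomState(seed)
        n = len(base)
        # Label corruption: a fixed fraction of labels is replaced,
        # independently and uniformly at random, before training starts.
        self.random_labels = rng.randint(num_classes, size=n)
        self.corrupt_mask = rng.rand(n) < label_corrupt_frac
        # Pixel shuffle: one fixed permutation shared by all images.
        self.shared_perm = torch.from_numpy(rng.permutation(32 * 32))

    def __len__(self):
        return len(self.base)

    def __getitem__(self, i):
        x, y = self.base[i]  # x: float tensor of shape (3, 32, 32)
        c, h, w = x.shape
        if self.mode == "shuffle":          # same permutation for every image
            x = x.reshape(c, -1)[:, self.shared_perm].reshape(c, h, w)
        elif self.mode == "random_pixels":  # fresh permutation per image
            x = x.reshape(c, -1)[:, torch.randperm(h * w)].reshape(c, h, w)
        elif self.mode == "gaussian":       # noise matching the image statistics
            x = torch.randn_like(x) * x.std() + x.mean()
        if self.corrupt_mask[i]:
            y = int(self.random_labels[i])
        return x, y
```

With label_corrupt_frac=1.0 every label is random, the setting in which the paper shows standard architectures still reach zero training error.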

Results

Data corruption experiments

Label corruption experiments

Regularization experiments

Checkpoint files for the model trained on ImageNet (explicit regularization experiments):

  • w/o Augmentation; with Learning Rate Scheduler and Dropout: checkpoint
  • w/o Augmentation; w/o Learning Rate Scheduler; with Dropout: checkpoint
  • with Augmentation, Learning Rate Scheduler, and Dropout: checkpoint

Team Members:
