
Robust Learning and Robust Inference in the context of deep learning: noisy examples, outliers, adversaries, etc.


Why is it important?

  1. DNNs can fit training examples with random labels well ("Understanding deep learning requires rethinking generalization", https://arxiv.org/abs/1611.03530); see the first sketch after this list.

  2. Large-scale training datasets generally contain noisy data points: an observation and its assigned semantic label may not match ("Emphasis Regularisation by Gradient Rescaling for Training Deep Neural Networks with Noisy Labels", https://arxiv.org/pdf/1905.11233.pdf).

  3. Fortunately, the concept of adversarial examples has now become universal/unrestricted: any example that fools a model can be viewed as an adversary, e.g., examples with noisy labels that are fitted during training, outliers that are fitted during training or receive high confidence scores at test time, and examples with small, perceptually negligible pixel perturbations that fool the model (see the second sketch after this list).
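
For item 1, below is a minimal sketch of the random-label fitting experiment, using synthetic random inputs in place of a real image dataset; the architecture, sizes, and hyperparameters are illustrative assumptions, not the setup of the cited paper.

```python
# Minimal sketch of the random-label experiment (Zhang et al., 2017).
# Synthetic inputs stand in for real images; all sizes and
# hyperparameters here are illustrative assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)

n, d, num_classes = 1024, 256, 10
x = torch.randn(n, d)                    # fixed random "images"
y = torch.randint(0, num_classes, (n,))  # labels drawn independently of x

model = nn.Sequential(
    nn.Linear(d, 512), nn.ReLU(),
    nn.Linear(512, 512), nn.ReLU(),
    nn.Linear(512, num_classes),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Full-batch training: an over-parameterised network memorises
# even labels that carry no information about the inputs.
for epoch in range(200):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()

with torch.no_grad():
    acc = (model(x).argmax(dim=1) == y).float().mean()
print(f"training accuracy on random labels: {acc:.3f}")  # approaches 1.0
```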
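For item 3, here is a hedged sketch of one classic small-perturbation adversary, the Fast Gradient Sign Method (FGSM; Goodfellow et al., "Explaining and Harnessing Adversarial Examples"). The `fgsm` helper and the `epsilon` value are illustrative assumptions; FGSM is just one instance of the perturbation-based adversaries described above.

```python
# Sketch of FGSM: one signed-gradient step of size epsilon in the
# direction that increases the loss. Works with any differentiable
# classifier `model` mapping inputs to logits.
import torch
import torch.nn as nn

def fgsm(model, x, y, epsilon=0.03):
    """Return an adversarial copy of x with ||x_adv - x||_inf <= epsilon."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Clamp to keep a valid pixel range [0, 1].
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0, 1).detach()

# Usage (hypothetical `model`, `images`, `labels`):
#   x_adv = fgsm(model, images, labels)
#   model(x_adv) often predicts differently from model(images),
#   even though the perturbation is perceptually negligible.
```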

Paper reading: https://drive.google.com/file/d/1fU3N_u-_puOwEbupK6aOENerP2S45tZX/view?usp=sharing