K-Sparse-AutoEncoder

What we have done

  • We built an MNIST classification solver trained with mini-batches.
  • After training, the network reaches 96% accuracy on the test set.
  • The classifier network has 3 layers (a code sketch follows this list):
    • 784 to 30 neurons
    • 30 to 10 neurons
    • 10 to 10 neurons
  • We then reused the same network to build an autoencoder:
    • 784 to 100 neurons (encode layer)
    • 100 to 784 neurons (decode layer)
  • We implemented the k-sparse logic on the middle layer (the encode layer).
  • We visualized the weights of the encode layer for different k values.
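
A minimal numpy sketch of a classifier like the one above, using the layer sizes from the list; the sigmoid activation, squared-error loss, and hyperparameters are assumptions for illustration, not necessarily what the repository uses:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
sizes = [784, 30, 10, 10]                      # the 3-layer architecture above
weights = [rng.normal(0, 0.1, (m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

def forward(x):
    """Return the activations of every layer for a mini-batch x of shape (B, 784)."""
    acts = [x]
    for W, b in zip(weights, biases):
        acts.append(sigmoid(acts[-1] @ W + b))
    return acts

def train_step(x, y, lr=0.5):
    """One mini-batch step of plain SGD with squared-error loss."""
    acts = forward(x)
    delta = (acts[-1] - y) * acts[-1] * (1 - acts[-1])   # output-layer error
    for i in reversed(range(len(weights))):
        grad_W = acts[i].T @ delta / len(x)
        grad_b = delta.mean(axis=0)
        if i > 0:                                        # propagate before updating
            delta = (delta @ weights[i].T) * acts[i] * (1 - acts[i])
        weights[i] -= lr * grad_W
        biases[i] -= lr * grad_b
```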

Sparse AutoEncoder

  • A k-sparse autoencoder finds the k highest activations in z (the hidden layer) and zeros out the rest.
  • The error is then backpropagated only through the k active nodes in z (see the sketch after the figure).
  • In the following example, k equals 2.

[Figure: a k-sparse hidden layer with k = 2; only the two largest activations survive]
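
To make the mechanism concrete, here is a minimal numpy sketch of the k-sparse step; the function name and the mask-based backward rule are illustrative, not the repository's actual API:

```python
import numpy as np

def k_sparse(z, k):
    """Keep the k largest activations in each row of z; zero the rest."""
    top_k = np.argpartition(z, -k, axis=1)[:, -k:]   # indices of the k largest
    mask = np.zeros_like(z)
    np.put_along_axis(mask, top_k, 1.0, axis=1)
    return z * mask, mask

# With k = 2, as in the figure above:
z = np.array([[0.1, 0.9, 0.3, 0.7]])
z_sparse, mask = k_sparse(z, k=2)
print(z_sparse)  # [[0.  0.9 0.  0.7]]

# In the backward pass the same mask gates the error, so the gradient
# reaches only the k active hidden units:
#   delta_hidden = (delta_out @ W_decode.T) * mask
```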

Our results for different k values

[Figures: the learned encode-layer weights for the different k values]
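
A sketch of how such weight images can be produced: each column of the 784 × 100 encode matrix is reshaped to 28 × 28 and tiled in a grid. The use of matplotlib and the function name are our choices for illustration; the notebook may plot differently:

```python
import matplotlib.pyplot as plt
import numpy as np

def plot_encoder_weights(W_encode, grid=(10, 10)):
    """W_encode has shape (784, 100): one 28x28 filter per hidden unit."""
    fig, axes = plt.subplots(*grid, figsize=(8, 8))
    for ax, column in zip(axes.ravel(), W_encode.T):
        ax.imshow(column.reshape(28, 28), cmap="gray")
        ax.axis("off")
    plt.show()
```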

How to use

  • main_mnist.py - the main runnable example; you can easily choose between a simple MNIST classification run and a k-sparse autoencoder run.
  • auto_encoder_3.ipynb - the Jupyter notebook we used to present the k-sparse code and graphs in an easy-to-follow fashion.
