The ultimate goal is to implement reference and optimized (BLAS, CUDA, etc.) algorithms in C++ for the following major classes of machine learning / deep learning / AI graph models:
- Feed-forward Neural Networks, including the Convolutional Neural Network (CNN) model.
- Recurrent Neural Networks, including the Long Short-Term Memory (LSTM) model.
- Probabilistic Graphical Models.
- Spiking Neural Networks.
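To give a feel for what a reference (non-optimized) implementation in this spirit might look like, here is a minimal sketch of a fully-connected layer forward pass; the name `fc_forward` and the row-major weight layout are assumptions for illustration, not this library's API:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Minimal sketch of a fully-connected layer forward pass: y = relu(W*x + b).
// W is stored row-major with shape (out x in). Illustrative only; the name
// fc_forward and the data layout are assumptions, not this library's API.
std::vector<float> fc_forward(const std::vector<float>& W,
                              const std::vector<float>& b,
                              const std::vector<float>& x) {
    const std::size_t out = b.size();
    const std::size_t in = x.size();
    std::vector<float> y(out);
    for (std::size_t i = 0; i < out; ++i) {
        float acc = b[i];
        for (std::size_t j = 0; j < in; ++j)
            acc += W[i * in + j] * x[j];  // dot product of row i with the input
        y[i] = acc > 0.0f ? acc : 0.0f;   // ReLU nonlinearity
    }
    return y;
}
```

An optimized build would replace the inner loops with a BLAS matrix-vector call (e.g. `sgemv`) or a CUDA kernel; that is the kind of specialization the reference/optimized split above refers to.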
The following models are implemented:
- Deep Recurrent Neural Networks: vanilla RNN and Long Short-Term Memory (LSTM).
  - Character generation example: text_gen
- Reinforcement learning: Q-Learning with function approximation by a Deep Neural Network.
  - PuckWorld example: reinforce
- Deep Feed-forward Neural Networks: fully-connected and Convolutional Neural Networks (CNN).
  - Fully-connected example (~98% accuracy in about 1 min on an i7 CPU): mnist-fc
  - Convolutional example (~99% accuracy in about 3 min on an i7 CPU): mnist-conv
  - ImageNet classification (tiny model trained for only ~4 days, ~100 hours, on a 3 GB GeForce GTX 780; center-crop accuracy: top-1 47.7%, top-5 72.9%): imagenet_tiny
- Evaluation of the fully-connected model as a Spiking Neural Network (97.5% accuracy): mnist-fc-spike_eval
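For the Q-Learning model listed above, the core of the method is the one-step TD target toward which the approximating network is regressed. A hedged sketch (the function name and signature are illustrative, not the repository's code):

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Sketch of the one-step Q-learning target used with function approximation:
//   target = r + gamma * max_a' Q(s', a'),
// with no bootstrap term when the episode terminates. Illustrative only;
// q_target is a hypothetical helper, not this repository's API.
float q_target(float reward, float gamma,
               const std::vector<float>& next_q,  // Q(s', a') for each action a'
               bool terminal) {
    if (terminal || next_q.empty())
        return reward;  // terminal state: the target is just the reward
    const float best = *std::max_element(next_q.begin(), next_q.end());
    return reward + gamma * best;
}
```

The network's prediction Q(s, a) is then fitted to this target by gradient descent on the squared error.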
References:
- Y. LeCun, L. Bottou, Y. Bengio, P. Haffner. Gradient-based learning applied to document recognition, 1998.
- S. Hochreiter, J. Schmidhuber. Long Short-Term Memory, 1997.
- A. Krizhevsky, I. Sutskever, G.E. Hinton. ImageNet Classification with Deep Convolutional Neural Networks, 2012.
- G.E. Hinton, N. Srivastava, A. Krizhevsky, I. Sutskever, R.R. Salakhutdinov. Improving neural networks by preventing co-adaptation of feature detectors, 2012.
- V. Nair, G.E. Hinton. Rectified linear units improve restricted Boltzmann machines, 2010.
- S. Ioffe, C. Szegedy. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift, 2015.
- P.U. Diehl, D. Neil, J. Binas, M. Cook, S.C. Liu, M. Pfeiffer. Fast-Classifying, High-Accuracy Spiking Deep Networks Through Weight and Threshold Balancing, 2015.
Acknowledgments:
- Andrej Karpathy's JavaScript libraries: recurrentjs, reinforcejs, convnetjs