
Releases: samsja/rusty-grad

Release 0.3.0

04 Oct 14:11

This release adds:

  • backward support for the dot product, softmax, linear perceptron, MLP, and MSE loss
  • a toy dataset: the moon dataset
  • the SGD optimization algorithm
  • a nice example that trains a full MLP on the moon dataset (a minimal sketch follows this list)
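
A minimal sketch of what such a training loop could look like. The names `Mlp`, `Sgd`, `moon_dataset`, and `mse_loss` are illustrative assumptions, not necessarily the crate's actual API:

```rust
// Hypothetical sketch: `Mlp`, `Sgd`, `moon_dataset`, and `mse_loss` are
// illustrative names, not necessarily the crate's real API.
fn main() {
    let (xs, ys) = moon_dataset(200);          // toy 2-D, two-class dataset
    let mut model = Mlp::new(&[2, 16, 1]);     // 2 inputs -> 16 hidden -> 1 output
    let optim = Sgd::new(model.parameters(), 0.05);

    for epoch in 0..100 {
        let pred = model.forward(&xs);
        let loss = mse_loss(&pred, &ys);
        optim.zero_grad();                     // reset accumulated gradients
        loss.backward();                       // backprop through the dynamic graph
        optim.step();                          // SGD update: w -= lr * grad
        println!("epoch {epoch}: loss {}", loss.value());
    }
}
```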

What needs to be done next

  • Use ndarray as a backend and extend it to array composition.
  • Implement a full MLP
  • Implement basic optimization algorithms (SGD)
  • Plug everything together and train a simple network on a simple task!
  • Work directly on batches of data
  • Add more grad functions, more loss functions, and array manipulations like max
  • Allow the use of views in the autograd graph
  • Monitor memory leaks and performance
  • Have fun and implement state-of-the-art techniques: dropout, etc.
  • Use GPU acceleration

0.2.0 Release

28 Sep 12:50

This is still a prototype of a deep learning library.

This release adds support for Rust's ndarray crate as a backend.

All four operations (add, sub, div, mul) and ReLU now support the backward pass.
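
To illustrate what a backward pass for these element-wise operations amounts to, here is a self-contained example over plain ndarray arrays (hand-rolled gradients for illustration, not the crate's internals):

```rust
use ndarray::{array, Array1};

// Manual backward pass for y = relu(a * b) over ndarray arrays.
fn main() {
    let a: Array1<f32> = array![1.0, -2.0, 3.0];
    let b: Array1<f32> = array![4.0, 5.0, -6.0];

    // Forward: y = relu(a * b)
    let prod = &a * &b;
    let y = prod.mapv(|v| v.max(0.0));

    // Backward, assuming dL/dy = 1 for every element.
    let grad_y: Array1<f32> = Array1::ones(y.len());
    // ReLU: the gradient passes through only where its input was positive.
    let grad_prod = &grad_y * &prod.mapv(|v| if v > 0.0 { 1.0 } else { 0.0 });
    // Mul: d(a*b)/da = b, d(a*b)/db = a.
    let grad_a = &grad_prod * &b;
    let grad_b = &grad_prod * &a;

    println!("y = {y}, dL/da = {grad_a}, dL/db = {grad_b}");
}
```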

TODO for the next releases (hopefully)

  • Use ndarray as a backend and extend it to array composition.
  • Implement a full MLP
  • Implement basic optimization algorithms (SGD)
  • Plug everything together and train a simple network on a simple task!

First release

27 Sep 13:17

This is still a prototype of a deep learning library.

So far it is an autograd library for the float type. It builds a dynamic computation graph using Rust's reference counting (Rc) and interior mutability (RefCell) patterns; a minimal sketch of this pattern follows the list below.

So far you can compose any combination of the following operations:

  • Add
  • Sub
  • Mul
  • Div
  • Relu

and then run a backward pass to get the gradients.
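
Here is a self-contained sketch of the pattern described above, using hypothetical types rather than the crate's actual ones: each node holds its value, a gradient behind a `RefCell`, and `Rc` pointers to its parents, so a backward pass can walk the graph and accumulate gradients through shared references.

```rust
use std::cell::RefCell;
use std::rc::Rc;

// Illustrative node type (not the crate's actual API).
struct Node {
    value: f32,
    grad: RefCell<f32>,     // interior mutability: backward writes here
    parents: Vec<Rc<Node>>, // reference counting keeps the graph alive
    local_grads: Vec<f32>,  // gradient w.r.t. each parent, captured at forward time
}

impl Node {
    fn leaf(value: f32) -> Rc<Node> {
        Rc::new(Node { value, grad: RefCell::new(0.0), parents: vec![], local_grads: vec![] })
    }

    fn mul(a: &Rc<Node>, b: &Rc<Node>) -> Rc<Node> {
        Rc::new(Node {
            value: a.value * b.value,
            grad: RefCell::new(0.0),
            parents: vec![Rc::clone(a), Rc::clone(b)],
            local_grads: vec![b.value, a.value], // d(ab)/da = b, d(ab)/db = a
        })
    }

    // Propagate an upstream gradient to this node and its parents.
    fn backward(&self, upstream: f32) {
        *self.grad.borrow_mut() += upstream;
        for (parent, local) in self.parents.iter().zip(&self.local_grads) {
            parent.backward(upstream * local);
        }
    }
}

fn main() {
    let x = Node::leaf(3.0);
    let y = Node::leaf(4.0);
    let z = Node::mul(&x, &y); // z = x * y
    z.backward(1.0);           // dL/dz = 1
    println!("dz/dx = {}, dz/dy = {}", x.grad.borrow(), y.grad.borrow());
}
```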

TODO for the next release (hopefully)

  • Use ndarray as a backend to extend it to array composition.
  • Implement a full MLP
  • Implement basic optimization algorithms (SGD)
  • Plug everything together and train a simple network on a simple task!