
Nvidia Apex for FP16 calculations #36

Merged 2 commits on Jul 24, 2019
Commits on Jul 23, 2019

  1. Nvidia Apex for FP16 calculations

    Added compatibility with Nvidia's Apex library, which performs FP16 (half-precision) calculations. This gives a significant speedup in training. The code has been tested on a single RTX 2070. If the Nvidia Apex library is not found, the code should run as normal (see the integration sketch after this commit entry).
    
    To install Apex: https://github.com/NVIDIA/apex#quick-start
    
    Known bugs:
    - Does not work with the adam parameter
    - Gradient overflow warnings appear at the start of training; the loss scale is automatically reduced to 8192, after which the warnings stop
    
    Examples:
    Loading: https://i.imgur.com/3nZROJz.png
    Training: https://i.imgur.com/Q2w52m7.png
    YacobBY committed Jul 23, 2019
    Commit a0f43fa
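
    For context, a minimal sketch of the integration pattern this commit describes: try to import apex.amp, wrap the model and optimizer with amp.initialize, and scale the loss before backward() so that training falls back to plain FP32 when Apex is not installed. The model, criterion, optimizer, and function names below are hypothetical stand-ins, not taken from the repository.

    ```python
    import torch
    import torch.nn as nn

    # Graceful fallback: only enable mixed precision when Apex is installed
    try:
        from apex import amp
        APEX_AVAILABLE = True
    except ImportError:
        APEX_AVAILABLE = False

    # Hypothetical stand-ins for the benchmark's model, loss, and optimizer
    model = nn.Linear(512, 10).cuda()
    optimizer = torch.optim.Adadelta(model.parameters(), lr=1.0)
    criterion = nn.CrossEntropyLoss()

    if APEX_AVAILABLE:
        # "O1" mixed precision keeps FP32 master weights, casts ops to FP16 where safe,
        # and uses dynamic loss scaling (the source of the loss-scale messages above)
        model, optimizer = amp.initialize(model, optimizer, opt_level="O1")

    def train_step(images, labels):
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        if APEX_AVAILABLE:
            # amp.scale_loss applies the current loss scale before backward()
            with amp.scale_loss(loss, optimizer) as scaled_loss:
                scaled_loss.backward()
        else:
            loss.backward()
        optimizer.step()
        return loss.item()
    ```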

Commits on Jul 24, 2019

  1. use amp grad clipping

    YacobBY committed Jul 24, 2019
    Commit 2d45ba2
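
    A hedged sketch of what "use amp grad clipping" typically looks like with Apex: the gradient norm is clipped on amp.master_params(optimizer) (the FP32 master copies) rather than on the FP16 model parameters. The helper name and the grad_clip value are assumptions for illustration, not taken from this PR.

    ```python
    import torch

    def clip_gradients(model, optimizer, grad_clip=5.0, use_apex=False):
        """Clip gradient norms; route through Apex's master params when mixed precision is active."""
        if use_apex:
            from apex import amp
            # amp.master_params yields the FP32 master copies the optimizer actually updates
            torch.nn.utils.clip_grad_norm_(amp.master_params(optimizer), grad_clip)
        else:
            torch.nn.utils.clip_grad_norm_(model.parameters(), grad_clip)
    ```

    In the training-loop sketch above, this would be called between backward() and optimizer.step(), e.g. clip_gradients(model, optimizer, use_apex=APEX_AVAILABLE).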