
Adapt warpctc grad op for gradient checking #7414

Merged: 7 commits, Jan 15, 2018

Commits on Jan 10, 2018

  1. 1. Fix warpctc grad op
     2. Add check grad test
     wanghaoshuang committed Jan 10, 2018 · b1af5e4
  2. 9eb3fb2

Commits on Jan 11, 2018

  1. 89de5d5
  2. fd24e19

Commits on Jan 13, 2018

  1. 1. Fix warpctc grad tensor initialization bug.
     2. Remove num_seq arguments.
     3. Refine CUDA kernel of ScaleLoDTensorFunctor.
     4. Change max_relative_error of the gradient unittest to 0.007 (see the sketch after this list).
     wanghaoshuang committed Jan 13, 2018 · 137f0df
  2. 45cf234
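The actual Paddle unittest is not shown in this history; as a rough illustration of what a check-grad test verifies, here is a minimal numpy sketch of finite-difference gradient checking against an analytic gradient, using a max-relative-error tolerance like the 0.007 above. The helper name `check_grad`, the step size, and the error floor are assumptions for the sketch, not Paddle's API.

```python
import numpy as np

def check_grad(f, grad_f, x, max_relative_error=0.007, delta=0.005):
    """Compare an analytic gradient against a central finite difference.

    f: scalar-valued function of a 1-D numpy array.
    grad_f: its analytic gradient (same shape as x).
    Raises AssertionError if the max relative error exceeds the bound.
    """
    analytic = grad_f(x)
    numeric = np.zeros_like(x)
    for i in range(x.size):
        x_plus, x_minus = x.copy(), x.copy()
        x_plus[i] += delta
        x_minus[i] -= delta
        # Central difference approximates df/dx_i.
        numeric[i] = (f(x_plus) - f(x_minus)) / (2.0 * delta)
    # Relative error with a floor so near-zero gradients do not blow up.
    rel_err = np.abs(analytic - numeric) / np.maximum(np.abs(numeric), 1e-3)
    assert rel_err.max() <= max_relative_error, rel_err.max()

# Example: f(x) = sum(x**2) has analytic gradient 2x.
x = np.random.rand(8).astype(np.float64)
check_grad(lambda v: (v ** 2).sum(), lambda v: 2.0 * v, x)
```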

Commits on Jan 15, 2018

  1. Fix sequence scale functor CUDA kernel (its per-sequence scaling is sketched below)

     1. Fix kernel
     2. Add more test cases
     wanghaoshuang committed Jan 15, 2018 · 8f37c3c
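The CUDA kernel itself is not reproduced here; the following numpy sketch shows the semantics a sequence-scale functor like ScaleLoDTensorFunctor implements, assuming LoD offsets mark sequence boundaries in a packed tensor and each sequence is scaled by its own factor (e.g. the upstream loss gradient). The function name `scale_lod_tensor` is hypothetical.

```python
import numpy as np

def scale_lod_tensor(grad, lod, scales):
    """Scale every timestep of each sequence by that sequence's scale.

    grad:   (total_timesteps, num_classes) gradients of all sequences,
            packed back to back as in a LoDTensor.
    lod:    offsets marking sequence boundaries, e.g. [0, 3, 7].
    scales: one scale per sequence.
    """
    out = grad.copy()
    for seq, (start, end) in enumerate(zip(lod[:-1], lod[1:])):
        out[start:end] *= scales[seq]
    return out

# Two sequences of lengths 3 and 4, packed into one tensor.
grad = np.ones((7, 5), dtype=np.float32)
scaled = scale_lod_tensor(grad, [0, 3, 7],
                          np.array([0.5, 2.0], dtype=np.float32))
assert scaled[:3].max() == 0.5 and scaled[3:].min() == 2.0
```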