
Lrelu layer #717

Closed
wants to merge 35 commits into from

Conversation

@qipeng (Contributor) commented Jul 17, 2014

Implemented the Leaky ReLU unit described in this paper

Maas, Andrew L., Awni Y. Hannun, and Andrew Y. Ng. "Rectifier nonlinearities improve neural network acoustic models." ICML Workshop on Deep Learning for Audio, Speech, and Language Processing. 2013.

which shares the sparse activation properties of the ReLU but was shown to be easier to optimize.
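Concretely, the Leaky ReLU replaces the hard zero that the ReLU applies to negative inputs with a small fixed slope. A minimal sketch (the slope value 0.01 and these function names are illustrative, not taken from the paper or this PR):

```cpp
// Leaky ReLU as described by Maas et al.: identity for positive inputs,
// a small fixed slope (0.01 here is an assumed value) for negative ones.
float leaky_relu(float x, float negative_slope = 0.01f) {
  return x > 0.0f ? x : negative_slope * x;
}

// Its derivative: 1 in the positive region, negative_slope otherwise,
// so the gradient never vanishes entirely on the negative side.
float leaky_relu_grad(float x, float negative_slope = 0.01f) {
  return x > 0.0f ? 1.0f : negative_slope;
}
```

The nonzero negative slope is what makes the unit easier to optimize: unlike the ReLU, a unit that lands in the negative region still receives a gradient and can recover.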

@jeffdonahue (Contributor)

Thanks for this contribution @qipeng!

Unless there is a noticeable performance impact on the existing ReLU layer, I think this would be better implemented as a simple generalization of ReLU rather than as a new layer. In particular, you could add to caffe.proto a message ReLUParameter with a field optional float negative_coeff = ... [default = 0]. This also needs unit tests (see src/caffe/test/test_neuron_layer.cpp). Will merge if these changes are made and the tests pass. Thanks!
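The suggested caffe.proto addition might look like the following sketch (the field number is arbitrary here, and `negative_coeff` is the name proposed above, not necessarily what was ultimately merged):

```protobuf
// Hypothetical sketch of the suggested message; the field number
// is a placeholder, not taken from the actual caffe.proto.
message ReLUParameter {
  // Slope applied to negative inputs; the default of 0
  // reproduces the standard ReLU exactly.
  optional float negative_coeff = 1 [default = 0];
}
```

With the default of 0, existing models that use the plain ReLU layer would be unaffected by the generalization.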

@qipeng (Contributor, Author) commented Jul 18, 2014

Hi Jeff (@jeffdonahue), I've merged the code back into ReLULayer and added unit tests. Please let me know if anything further needs to be done.
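A generalized ReLU forward pass along these lines might look like the following sketch; the function signature is illustrative, not Caffe's actual layer API. Setting `negative_slope` to 0 reproduces the standard ReLU, so the merged layer stays backward compatible:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Sketch of a generalized ReLU forward pass: with negative_slope == 0
// this is the standard ReLU; a nonzero slope gives the leaky variant.
// (Names here are illustrative, not Caffe's actual member names.)
std::vector<float> relu_forward(const std::vector<float>& bottom,
                                float negative_slope) {
  std::vector<float> top(bottom.size());
  for (std::size_t i = 0; i < bottom.size(); ++i) {
    top[i] = std::max(bottom[i], 0.0f) +
             negative_slope * std::min(bottom[i], 0.0f);
  }
  return top;
}
```

Writing the activation as `max(x, 0) + slope * min(x, 0)` lets one expression cover both the positive and negative branches without an explicit conditional.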

@bhack (Contributor) commented Jul 19, 2014

I think you need to rebase from dev.

@qipeng (Contributor, Author) commented Jul 19, 2014

Closing and moving to #740

@qipeng closed this Jul 19, 2014
4 participants