
Behavior when not specifying the blobs_lr in a layer #100

Closed
tdomhan opened this issue Feb 13, 2014 · 7 comments

@tdomhan (Contributor) commented Feb 13, 2014

Is it intended that the default behavior, when blobs_lr is not specified in a layer, is to skip backpropagation for that layer?
I just spent a couple of hours trying to figure out why my network wasn't working until I realized that this was the cause.
I personally think this is very dangerous behavior: the default should be to set the learning rate multiplier for the blob to 1, and backpropagation should only be deactivated when someone explicitly sets blobs_lr to 0.
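
For reference, a rough sketch of the explicit setting in a layer definition, written in the early-2014 prototxt layout (the nested layers/layer structure and field names such as kernelsize changed in later Caffe releases; the layer values are illustrative):

```
layers {
  layer {
    name: "conv1"
    type: "conv"
    num_output: 96
    kernelsize: 11
    stride: 4
    # One multiplier per parameter blob: first the weights, then the biases.
    # Before the fix discussed here, omitting these lines silently disabled
    # backpropagation for this layer instead of defaulting the multipliers to 1.
    blobs_lr: 1.
    blobs_lr: 2.
  }
  bottom: "data"
  top: "conv1"
}
```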

@huangjunshi

I think that if blobs_lr is 0, one can also set force_backward (a member of NetParameter) to true to force back propagation in the training net.
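
As a sketch of where that switch lives: force_backward is a net-level field, set at the top of the prototxt rather than inside a layer (the surrounding fields are illustrative):

```
name: "example_net"
# Net-level switch: run the backward pass for every layer,
# even when no parameter blob requests a gradient (e.g. all blobs_lr set to 0).
force_backward: true
layers {
  # ... layer definitions as usual ...
}
```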

@Yangqing (Member)

@tdomhan That is indeed a bad design. Will change default to 1.

Yangqing added a commit that referenced this issue Feb 13, 2014
Fixed bug raised by @tdomhan in issue #100.
@Yangqing (Member)

Fixed in a302daf. Congratulations! You filed issue #100.

@sguada (Contributor) commented Feb 13, 2014

Would that affect the Test phase? Or the finetuning?

Sergio

@Yangqing (Member)

My bad, fixing things in afternoon drowsy mode is apparently not the best idea. I just realized that the current fix has one caveat: when a layer has no trainable parameters, it will still always trigger need_backward. We need to add a check for whether the layer actually has parameters to train.

Reopened and I will fix later.

Yangqing reopened this Feb 13, 2014
@shelhamer (Member)

Perhaps we should have a dev branch so we can keep master a little more stable: #101

@Yangqing (Member)

Fixed the bug in #103.

@sguada The testing phase will not be affected, since backward won't actually be carried out. Finetuning will not be affected either: if we set blobs_lr to 0, we will still skip backpropagation for that layer, as intended.
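
To make the finetuning case concrete, a hedged sketch of freezing one pretrained layer with explicit zeros, in the same early-2014 layout as above (illustrative values):

```
layers {
  layer {
    name: "conv1"
    type: "conv"
    num_output: 96
    kernelsize: 11
    stride: 4
    # Explicit zeros keep the pretrained weights and biases fixed;
    # with the new default of 1, layers that omit blobs_lr keep learning.
    blobs_lr: 0.
    blobs_lr: 0.
  }
  bottom: "data"
  top: "conv1"
}
```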

I am going to abuse my power a little bit and simply merge that pull request...
