Faster R-CNN (work from Ross Girshick) #4163
base: master
Conversation
TYPED_TEST_CASE(ROIPoolingLayerTest, TestDtypes);

TYPED_TEST(ROIPoolingLayerTest, TestGradient) {
  typedef typename TypeParam::Dtype Dtype;
This line is where the compiler is complaining. TypeParam here is the actual data type, since you specified GPUDeviceTest. Remove this line, and change Dtype to TypeParam on lines 87 and 88.
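In other words, the corrected test body would look roughly like this (a sketch, assuming the usual fixture members; the pooled_h/pooled_w values are illustrative, not from the diff):

```cpp
// With GPUDeviceTest<Dtype>, TypeParam is already float or double,
// so the typedef is dropped and Dtype becomes TypeParam below.
TYPED_TEST(ROIPoolingLayerTest, TestGradient) {
  LayerParameter layer_param;
  ROIPoolingParameter* roi_pooling_param =
      layer_param.mutable_roi_pooling_param();
  roi_pooling_param->set_pooled_h(6);
  roi_pooling_param->set_pooled_w(6);
  ROIPoolingLayer<TypeParam> layer(layer_param);
  GradientChecker<TypeParam> checker(1e-4, 1e-2);
  // Only check gradients w.r.t. the data blob (bottom 0); the ROIs
  // blob is not differentiable.
  checker.CheckGradientExhaustive(&layer, this->blob_bottom_vec_,
      this->blob_top_vec_, 0);
}
```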
If you want to merge in code from another project, please handle attribution properly, honoring the original license. From what I can tell, the current attribution is not sufficient and doesn't satisfy the original license, which requires that it appear in any copies (https://github.com/rbgirshick/py-faster-rcnn/blob/master/LICENSE#L14-15).
@happyharrycn Thank you!
@seanbell Updated the commit message with the proper LICENSE.
@rbgirshick any comment on trying to merge your code into caffe?
@rbgirshick and caffe developers out there -
@yossibiton I don't really understand your question. But if you want to add new layers, you have to do it in your own fork of caffe before being able to merge it into the main caffe repo.
The question is why one has to fork caffe in order to write one layer, instead of extending it as a separate, lighter project.
@yossibiton there is no way to make a "light project" extending caffe. Caffe doesn't have a system for adding in your own layers, other than forking the entire project, adding in the layers, and editing the central net schema. It would be great to make such a system, but currently one does not exist.
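For context, "adding in the layers" in a fork concretely means implementing the layer class and registering it with caffe's layer factory; a minimal sketch using caffe's registration macros, with the header name following this PR and the method bodies elided:

```cpp
// src/caffe/layers/roi_pooling_layer.cpp (abridged sketch)
#include "caffe/fast_rcnn_layers.hpp"

namespace caffe {

// ... LayerSetUp / Reshape / Forward_cpu / Backward_cpu ...

// Instantiate float/double versions and register the layer under the
// type string "ROIPooling" so prototxt files can refer to it.
INSTANTIATE_CLASS(ROIPoolingLayer);
REGISTER_LAYER_CLASS(ROIPooling);

}  // namespace caffe
```

Editing the "central net schema" is the caffe.proto change discussed further down in this thread.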
Thanks for the clarification. I'm new to caffe (worked with torch before).
just a comment - I get:
Error in proposal_im_detect (line 22)
Error in script_faster_rcnn_demo (line 54)
@yossibiton I didn't test it yet on faster_rcnn. I am actually working on the Python version. In the PR I didn't include all the changes to the Python layer, since some of them were already implemented, and I took the commit used by the fork of py-faster-rcnn. I didn't check if it is the same as faster_rcnn. There is some refactoring to do for the Python version:

- self_.attr("param_str") = bp::str(
+ self_.attr("param_str_") = bp::str(

So in your Python layers you have to change param_str to param_str_. I had a quick look: the fork used by faster_rcnn has some modifications for the MATLAB implementation; I didn't include them.
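For reference, that one-line diff lives in pycaffe's C++ binding; a sketch of the surrounding code, assuming the python_layer.hpp of that era (abridged, not verbatim from the PR):

```cpp
// include/caffe/python_layer.hpp (abridged sketch)
virtual void LayerSetUp(const vector<Blob<Dtype>*>& bottom,
    const vector<Blob<Dtype>*>& top) {
  // After the rename, Python layer code reads self.param_str_
  // instead of self.param_str to get its configuration string.
  self_.attr("param_str_") = bp::str(
      this->layer_param_.python_param().param_str());
  self_.attr("setup")(bottom, top);
}
```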
@seanbell I'm happy with merging, though it comes with the caveat that I might not be very active in terms of helping to maintain the code.
@rbgirshick Great news! I have tested the PR with an updated version of py-faster-rcnn which works with Python 3.4 and the code of the PR. @yossibiton I am sorry, I don't have MATLAB, so I won't be able to work on the MATLAB part of the fast-rcnn. Is it possible to relaunch the Travis build? It failed at the beginning!
Hi,
Hi,
@saiprabhakar Hi, I am not working on the CPU implementation of the smooth L1 loss layer. But if you do, I would be glad to add it to the PR.
@Austriker Hi, I tried to use smooth L1 loss to train a regression network, but I always got "iteration 0, loss = 0", while the train net output loss is okay. When I use Euclidean loss, it works just fine. I wonder how to use smooth L1 loss for a regression task, and what iteration loss = 0 means?
Shouldn't you update the LICENSE file itself, instead of putting the license in the commit message?
@koenvandesande I followed the guideline: the BVLC caffe license allows you to list the various authors using the commit history: https://github.com/BVLC/caffe/blob/master/LICENSE#L11-L16 @mariolew I never tried it. I have been using the nets included in py-faster-rcnn: https://github.com/Austriker/py-faster-rcnn/blob/master/models/pascal_voc/VGG16/faster_rcnn_end2end/train.prototxt
src/caffe/proto/caffe.proto
Outdated
@@ -393,8 +393,10 @@ message LayerParameter {
  optional ReductionParameter reduction_param = 136;
  optional ReLUParameter relu_param = 123;
  optional ReshapeParameter reshape_param = 133;
+ optional ROIPoolingParameter roi_pooling_param = 146;
146 is now taken by recurrent_param in the latest master. Also, be sure to update the "last added parameter" at the top of the structure.
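Concretely, that means picking the next free field ID and bumping the bookkeeping comment; a sketch with hypothetical IDs (verify the actual next free ID against master before committing):

```proto
// LayerParameter next available layer-specific ID: 148 (last added: roi_pooling_param)
message LayerParameter {
  // ... existing parameters ...
  optional RecurrentParameter recurrent_param = 146;     // already taken in master
  optional ROIPoolingParameter roi_pooling_param = 147;  // moved off 146
}
```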
@Austriker Thanks for the faster_rcnn_end2end/train.prototxt. Can you give me more information, such as the format of the training set? It would be better to give some examples.
@koenvandesande Thanks for the comment! I have updated the PR!
I tried to implement the CPU mode, but this still can not pass the runtest. Could you please give some suggestion? :)

```cpp
// ------------------------------------------------------------------
#include "caffe/fast_rcnn_layers.hpp"

namespace caffe {

// ... LayerSetUp / Reshape elided in the paste ...

template <typename Dtype>
void SmoothL1LossLayer<Dtype>::Forward_cpu(/* ... */) {
  caffe_sub(/* ... */);
  for (int index = 0; index < count; ++index) {
    // ...
  }
  if (has_weights_) {
    // ...
  }
  Dtype loss = caffe_cpu_dot(count, ones_.cpu_data(), errors_.cpu_data());
  // ...
}

template <typename Dtype>
void SmoothL1LossLayer<Dtype>::Backward_cpu(/* ... */) {
  for (int index = 0; index < count; ++index) {
    // ...
  }
  for (int i = 0; i < 2; ++i) {
    // ...
  }
}

#ifdef CPU_ONLY
// ...
#endif

INSTANTIATE_CLASS(SmoothL1LossLayer);

}  // namespace caffe
```
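Since the paste above lost most of its lines, here is a minimal sketch of the element-wise math a CPU port has to implement, mirroring the GPU kernels. SmoothL1Forward and SmoothL1Backward are illustrative helper names, sigma2 stands for sigma squared, and this is not the author's code:

```cpp
#include <cmath>

// f(x)  = 0.5 * (sigma * x)^2   if |x| < 1 / sigma^2
//       = |x| - 0.5 / sigma^2   otherwise
template <typename Dtype>
void SmoothL1Forward(const int count, const Dtype* in, Dtype* out,
                     const Dtype sigma2) {
  for (int index = 0; index < count; ++index) {
    const Dtype val = in[index];
    const Dtype abs_val = std::abs(val);
    if (abs_val < Dtype(1) / sigma2) {
      out[index] = Dtype(0.5) * val * val * sigma2;
    } else {
      out[index] = abs_val - Dtype(0.5) / sigma2;
    }
  }
}

// f'(x) = sigma^2 * x           if |x| < 1 / sigma^2
//       = sign(x)               otherwise
template <typename Dtype>
void SmoothL1Backward(const int count, const Dtype* in, Dtype* out,
                      const Dtype sigma2) {
  for (int index = 0; index < count; ++index) {
    const Dtype val = in[index];
    const Dtype abs_val = std::abs(val);
    if (abs_val < Dtype(1) / sigma2) {
      out[index] = sigma2 * val;
    } else {
      out[index] = (Dtype(0) < val) - (val < Dtype(0));  // sign(val)
    }
  }
}
```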
@zhouphd which error do you get at test? The tests for the smooth L1 layer are only written for GPU mode. Could you also share the test code?

#ifndef CPU_ONLY
template <typename Dtype>
class SmoothL1LossLayerTest : public GPUDeviceTest<Dtype> {
protected:
SmoothL1LossLayerTest()
: blob_bottom_data_(new Blob<Dtype>(10, 5, 1, 1)),
blob_bottom_label_(new Blob<Dtype>(10, 5, 1, 1)),
blob_bottom_inside_weights_(new Blob<Dtype>(10, 5, 1, 1)),
blob_bottom_outside_weights_(new Blob<Dtype>(10, 5, 1, 1)),
blob_top_loss_(new Blob<Dtype>()) {
// fill the values
FillerParameter const_filler_param;
const_filler_param.set_value(-1.);
ConstantFiller<Dtype> const_filler(const_filler_param);
FillerParameter filler_param;
GaussianFiller<Dtype> filler(filler_param);
filler.Fill(this->blob_bottom_data_);
blob_bottom_vec_.push_back(blob_bottom_data_);
filler.Fill(this->blob_bottom_label_);
blob_bottom_vec_.push_back(blob_bottom_label_);
filler.Fill(this->blob_bottom_inside_weights_);
blob_bottom_vec_.push_back(blob_bottom_inside_weights_);
filler.Fill(this->blob_bottom_outside_weights_);
blob_bottom_vec_.push_back(blob_bottom_outside_weights_);
blob_top_vec_.push_back(blob_top_loss_);
}
virtual ~SmoothL1LossLayerTest() {
delete blob_bottom_data_;
delete blob_bottom_label_;
delete blob_bottom_inside_weights_;
delete blob_bottom_outside_weights_;
delete blob_top_loss_;
}
Blob<Dtype>* const blob_bottom_data_;
Blob<Dtype>* const blob_bottom_label_;
Blob<Dtype>* const blob_bottom_inside_weights_;
Blob<Dtype>* const blob_bottom_outside_weights_;
Blob<Dtype>* const blob_top_loss_;
vector<Blob<Dtype>*> blob_bottom_vec_;
vector<Blob<Dtype>*> blob_top_vec_;
};
TYPED_TEST_CASE(SmoothL1LossLayerTest, TestDtypes);
TYPED_TEST(SmoothL1LossLayerTest, TestGradient) {
LayerParameter layer_param;
SmoothL1LossParameter* loss_param =
layer_param.mutable_smooth_l1_loss_param();
loss_param->set_sigma(2.4);
const TypeParam kLossWeight = 3.7;
layer_param.add_loss_weight(kLossWeight);
SmoothL1LossLayer<TypeParam> layer(layer_param);
layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);
GradientChecker<TypeParam> checker(1e-2, 1e-2, 1701);
checker.CheckGradientExhaustive(&layer, this->blob_bottom_vec_,
this->blob_top_vec_, 0);
checker.CheckGradientExhaustive(&layer, this->blob_bottom_vec_,
this->blob_top_vec_, 1);
}
#endif
@Austriker
./include/caffe/test/test_gradient_check_util.hpp:175: Failure
...
[ FAILED ] 2 tests, listed below:
#include "gtest/gtest.h" #include "caffe/blob.hpp" #include "caffe/test/test_caffe_main.hpp" namespace caffe { //typedef ::testing::Types<GPUDevice, GPUDevice > TestDtypesGPU; template protected:
} Blob* const blob_bottom_data_; //TYPED_TEST_CASE(SmoothL1LossLayerTest, TestDtypesGPU); TYPED_TEST(SmoothL1LossLayerTest, TestGradient) { const Dtype kLossWeight = 3.7; } // namespace caffe |
Might I ask what the progress is for this PR? |
@hgaiser it's waiting to be merged. |
@Austriker I implemented a CPU version of smooth_l1_loss and made a pull request to you. You can add it to this PR.
Hope this PR gets merged soon; it is very important to many researchers using the Faster R-CNN framework.
As an alternative that avoids maintaining multiple versions of Caffe for Faster-RCNN, modules can be used with #5294. This branch shows an example for Faster-RCNN.
Any update on this? |
@rbgirshick @Austriker Hi, I want to train my dataset on CPU. Can you let me know where I can find the CPU implementations of smooth_L1_loss_layer and roi_pooling_layer? I am getting confused by the conversation above; can you let me know how to train on CPU?
Would you rebase this PR onto the master branch? I found that this PR doesn't support NCCL.
Any update? |
This commit is a port from the following [fork](https://github.com/rbgirshick/caffe-fast-rcnn/tree/0dcd397b29507b8314e252e850518c5695efbb83)

It adds:
- smooth l1 loss layer
- roi pooling layer
- dropout scaling at test time (needed for MSRA-trained ZF network)

LICENSE: Faster R-CNN

The MIT License (MIT)

Copyright (c) 2015 Microsoft Corporation

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
Added tests for ROI Pooling Layer

Author: Ronghang Hu
@Noiredd I have rebased the PR against the latest master.
Any reason why this was never merged? |
Hi,
I have been working on py-faster-rcnn. I would like to port the layers from the fork of caffe to avoid having two versions of caffe to maintain.
I have created a fork of py-faster-rcnn that adds support for Python 3 and links to an updated caffe. There is still some work to be done.
The PR adds:
- expose phase in pycaffe (already implemented)