[docs] fix contrastive loss eq
Make the documented equation match the correct implementation of the
`max(margin - d, 0)^2` term in the loss. See #2321.
shelhamer committed Jul 30, 2015
1 parent e4aed04 commit 7f70854
Showing 1 changed file with 4 additions and 4 deletions.
include/caffe/loss_layers.hpp
@@ -128,9 +128,9 @@ class LossLayer : public Layer<Dtype> {
 /**
  * @brief Computes the contrastive loss @f$
  * E = \frac{1}{2N} \sum\limits_{n=1}^N \left(y\right) d +
- * \left(1-y\right) \max \left(margin-d, 0\right)
+ * \left(1-y\right) \max \left(margin-d, 0\right)^2
  * @f$ where @f$
- * d = \left| \left| a_n - b_n \right| \right|_2^2 @f$. This can be
+ * d = \left| \left| a_n - b_n \right| \right|_2 @f$. This can be
  * used to train siamese networks.
  *
  * @param bottom input Blob vector (length 3)
@@ -144,9 +144,9 @@ class LossLayer : public Layer<Dtype> {
  * -# @f$ (1 \times 1 \times 1 \times 1) @f$
  * the computed contrastive loss: @f$ E =
  * \frac{1}{2N} \sum\limits_{n=1}^N \left(y\right) d +
- * \left(1-y\right) \max \left(margin-d, 0\right)
+ * \left(1-y\right) \max \left(margin-d, 0\right)^2
  * @f$ where @f$
- * d = \left| \left| a_n - b_n \right| \right|_2^2 @f$.
+ * d = \left| \left| a_n - b_n \right| \right|_2 @f$.
  * This can be used to train siamese networks.
  */
 template <typename Dtype>
