Check labels in SoftmaxWithLoss #3043

Open

seanbell wants to merge 1 commit into master
Conversation

seanbell commented Sep 8, 2015

If you have an invalid ground-truth label, SoftmaxWithLoss will silently access invalid memory rather than raising an error. This adds a lightweight check to the forward pass (for both CPU and GPU). Since the labels are already in CPU memory from the data layer, this should not introduce any extra transfers.

The old check ran only in DEBUG mode, and only on the CPU path.

longjon (Contributor) commented Sep 8, 2015

> Since the labels are already in CPU memory from the data layer, this should not introduce any extra transfers.

This is not true in general, and I don't think it's even true in master for the DataLayer: if I'm reading correctly, #2903 introduced async transfer of data, so in GPU mode only the data tops' GPU buffers are written to. In general, labels may come from anywhere; e.g., in R-CNN style training, positives are determined by ground-truth overlap, which may be computed in-network on the GPU.

That said, I'd be happy to have more checks for this situation, so long as we're confident there are no performance implications. It might be reasonable to always check (in either mode); we just need a performance argument/profile.

shaibagon (Member) commented

Can we add a net parameter flag that controls these checks? For example, a field in LayerParameter like `optional bool debug = 12 [default = false];`. One could then set it to true to enable extra checks in a layer, such as this "label in range" check.

As you can see from this SO thread, it can be hard to track down these run-time errors.
