The values of MarginInnerProduct params #124
The training accuracy in the training prototxt is not the true accuracy of the model. When you train the network with A-Softmax (say with m=4), you are using a harder classification rule than standard classification: the network counts a sample as correctly classified only if it achieves the large-margin criterion, but a sample can still be classified correctly at test time even if it does not fully satisfy the m=4 large-margin criterion. If you want to show the training accuracy, you need to modify how it is computed in the training prototxt so that it is comparable with the computation of the testing accuracy. Overall, it is unlikely that the A-Softmax loss will reduce classification accuracy (if you train it successfully).
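To make the two rules concrete, here is a minimal NumPy sketch (not from the repo) of the gap the reply describes: the training-time rule replaces the target-class logit with the angular margin term psi(theta) = (-1)^k cos(m*theta) - 2k, while the test-time rule is a plain argmax over cos(theta). The function names and shapes are assumptions for illustration, and the lambda blending used by the actual layer is omitted for clarity.

```python
import numpy as np

def class_angles(features, weights):
    """Cosines and angles between each sample (N, D) and each L2-normalized class weight (D, C)."""
    w = weights / np.linalg.norm(weights, axis=0, keepdims=True)
    f_norm = np.linalg.norm(features, axis=1, keepdims=True)
    cos = np.clip(features @ w / f_norm, -1.0, 1.0)
    return np.arccos(cos), cos, f_norm

def psi(theta, m=4):
    """Margin function psi(theta) = (-1)^k cos(m*theta) - 2k for theta in [k*pi/m, (k+1)*pi/m]."""
    k = np.floor(theta * m / np.pi)
    return (-1.0) ** k * np.cos(m * theta) - 2.0 * k

def margin_rule_accuracy(features, weights, labels, m=4):
    """Accuracy under the harder training rule: the target logit uses psi(theta) instead of cos(theta)."""
    theta, cos, f_norm = class_angles(features, weights)
    logits = f_norm * cos                          # standard logits ||x|| * cos(theta_j)
    rows = np.arange(len(labels))
    logits[rows, labels] = f_norm[:, 0] * psi(theta[rows, labels], m)
    return float(np.mean(np.argmax(logits, axis=1) == labels))

def standard_accuracy(features, weights, labels):
    """Accuracy under the ordinary rule used at test time: argmax over cos(theta)."""
    _, cos, _ = class_angles(features, weights)
    return float(np.mean(np.argmax(cos, axis=1) == labels))
```

Since psi(theta) <= cos(theta) everywhere, `margin_rule_accuracy` can only be lower than or equal to `standard_accuracy` on the same data, which is why a 0.6 training accuracy does not imply the model actually misclassifies 40% of samples.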
Thank you for your reply!
Thanks for open-sourcing the code!
I noticed that there are some parameters in the MarginInnerProduct layer, such as base and gamma. In your prototxt file, the values of base and gamma are 1000 and 0.12 respectively. In my task the dataset has 1256 classes; how should I set these values?
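For context on what these parameters control: in the released SphereFace Caffe layer, base and gamma (together with power and lambda_min, which the example prototxt sets to 1 and 5) parameterize the annealing schedule of the weight lambda that blends the standard softmax logit with the margin term, so they govern training dynamics rather than depending on the number of classes (num_output is the parameter tied to the 1256 classes). A sketch, assuming that schedule; the function name is hypothetical.

```python
def lambda_at(iteration, base=1000.0, gamma=0.12, power=1.0, lambda_min=5.0):
    """Annealed blending weight: lambda = max(lambda_min, base * (1 + gamma * iter) ** (-power))."""
    return max(lambda_min, base * (1.0 + gamma * iteration) ** (-power))

# lambda decays from base toward lambda_min as training progresses,
# gradually shifting the loss from plain softmax toward the full margin:
for it in (0, 100, 1000, 10000):
    print(it, round(lambda_at(it), 2))   # 1000.0, 76.92, 8.26, 5.0
```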
In addition, I added an accuracy layer to the training prototxt file and found that the classification accuracy is only about 0.6. Does that mean A-Softmax improves recognition accuracy but reduces classification accuracy?