
A small question about the code for training on SOP #2

Open
Dyfine opened this issue Apr 22, 2021 · 1 comment
Dyfine commented Apr 22, 2021

Hi, thanks very much for sharing the code. When I use it to train models on the SOP dataset, I get unexpectedly low results. I checked the code and found that in ProxyGML/loss/ProxyGML.py one line (line 36) has been commented out in the function below.

def scale_mask_softmax(self, tensor, mask, softmax_dim, scale=1.0):
    # scale = 1.0 if self.opt.dataset != "online_products" else 20.0
    scale_mask_exp_tensor = torch.exp(tensor * scale) * mask.detach()
    scale_mask_softmax_tensor = scale_mask_exp_tensor / (1e-8 + torch.sum(scale_mask_exp_tensor, dim=softmax_dim)).unsqueeze(softmax_dim)
    return scale_mask_softmax_tensor

I uncommented this line and got the expected results, i.e., 78.0 R@1. So I think this line may have been commented out by mistake?
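For reference, the computation in question is a masked softmax with a temperature-like scale factor: masked-out entries are zeroed before normalization, and a larger scale (20.0 for SOP, per the commented line) sharpens the distribution over the remaining entries. A minimal NumPy sketch of the same computation (the function name and the `1e-8` guard follow the snippet above; this is an illustration, not the repo's code):

```python
import numpy as np

def scale_mask_softmax(tensor, mask, axis, scale=1.0):
    # Exponentiate the scaled logits, then zero out masked-off entries.
    exp = np.exp(tensor * scale) * mask
    # Normalize along `axis`; the 1e-8 guard avoids division by zero
    # when an entire slice is masked out.
    return exp / (1e-8 + exp.sum(axis=axis, keepdims=True))

logits = np.array([[1.0, 2.0, 3.0]])
mask = np.array([[1.0, 1.0, 0.0]])  # third entry excluded from the softmax
soft = scale_mask_softmax(logits, mask, axis=1, scale=20.0)
# With scale=20.0 the surviving largest logit takes nearly all the mass.
```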

YuehuaZhu (Owner) commented

Thank you for your attention; you are right.
