
replicating the results #4

Open
majidhosseini87 opened this issue Jan 14, 2024 · 3 comments

Comments

@majidhosseini87

Hi,
Thank you for your amazing efforts. I've been trying to replicate the results of the paper titled "PSAC: Proactive Sequence-Aware Content Caching via Deep Learning at the Network Edge" using your code. Unfortunately, I am facing challenges in achieving the results described in the paper. Specifically, the results from the PSAC_gen framework in your code differ significantly from the QoE score reported in the paper. Could you provide any guidance or updates that might help in accurately replicating the results?

Sincerely,

@DarriusL
Owner

Hi,
PSAC_gen in CoCheLab is mainly used for comparison with contrastive learning models (CL4SRec, etc.), so PSAC_gen here is a modified version:

  1. The user request sequence in the original paper was not clipped or padded to a fixed length. To keep it consistent with the other models, the PSAC_gen here is also trained on fixed-length sequences (a minimal padding/truncation sketch follows this list).
  2. The QoE in the original paper is calculated as follows: [formula image], where \theta is the average length of the sequence. Because of the fixed-length sequences from point 1, the original formula would always yield a QoE of 0. Therefore, I made some modifications to it: [formula image], where \theta is instead the user satisfaction rate and is a hyperparameter.
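
For illustration, here is a minimal sketch of what clipping/padding to a fixed length could look like; max_len and the pad value of 0 are placeholder assumptions, not CoCheLab's actual preprocessing code:

import torch

def to_fixed_length(seq, max_len, pad_value=0):
    # Truncate or left-pad a 1-D request sequence to max_len.
    # Placeholder sketch only; not the repo's actual preprocessing.
    seq = seq[-max_len:]  # keep the most recent requests
    if len(seq) < max_len:
        pad = torch.full((max_len - len(seq),), pad_value, dtype=seq.dtype)
        seq = torch.cat([pad, seq])  # left-pad up to the fixed length
    return seq

# toy example: a 4-item history padded to length 6
print(to_fixed_length(torch.tensor([3, 7, 1, 4]), max_len=6))
# tensor([0, 0, 3, 7, 1, 4])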

@Farzad-Mehrabi

Hey, in the PSAC framework's forward pass there is a tensor called su. Could you define exactly what it is, and what slide_len and L represent?
def forward(self, su):
    #su: (batch_size, slide_len, L)
    #Ec: (batch_size, slide_len, L, d)
    Ec = self.encoder(su);
    #Eu: (batch_size, 1, d)
    Eu = self.encoder(su.reshape(su.shape[0], -1)).mean(dim = 1).unsqueeze(1);
    #o: (batch_size, slide_len, 1, n)
    o = self.VrtConv(Ec.transpose(0,1)).transpose(0,1).transpose(-1, -2);
    #attn: (batch_size, slide_len, 1, L*d)
    attn = self.self_attn(Ec);
    #pro_logits: (batch_size, slide_len, req_set_len)
    pro_logits = self.LSTFcNet(o, attn, Eu);
    return pro_logits

@DarriusL
Owner

DarriusL commented Jan 16, 2024


Sure. I usually use su to represent the user sequence, and the input to PSAC_gen ([batch, slide_len, L]) is the original user sequence ([batch, n]) processed by a sliding window of length L.
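
For reference, a minimal sketch of that kind of sliding-window preprocessing; the function name, window length, and stride here are illustrative assumptions, not the repo's actual data pipeline:

import torch

def slide_window(su, L, stride=1):
    # Split a batch of user request sequences [batch, n] into overlapping
    # windows [batch, slide_len, L] using torch.Tensor.unfold, where
    # slide_len = (n - L) // stride + 1.
    return su.unfold(1, L, stride)

# toy example: batch of 2 users, each with n = 6 requested item ids
su = torch.tensor([[3, 7, 1, 4, 9, 2],
                   [5, 5, 8, 0, 6, 1]])
windows = slide_window(su, L=3)
print(windows.shape)  # torch.Size([2, 4, 3])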
