
Add topk arg to return topk items and scores at inference step #678

Merged · 10 commits · Apr 28, 2023

Conversation

@rnyak (Contributor) commented Apr 18, 2023

This PR adds functionality for returning the top-k most relevant item ids (those with the highest scores) from Triton Inference Server, for the NextItemPrediction task.

Current blocker:

The code is designed to return the top_k item ids (int64 dtype), but model.output_schema returns next_item as float32 dtype, which causes an error from Triton.

Shall we change the code base so that model.output_schema matches the output and output dtype that Triton expects? Or shall we return the top_k item-id scores instead of the item ids?

Status update:

After modifying model.output_schema, we can now return two outputs (item_scores, item_ids) from Triton.
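For illustration, the dtype split between the two outputs follows directly from torch.topk, which returns float32 values alongside int64 indices (a toy sketch; the tensors are made up, not from the repo):

import torch

# Toy scores over a 5-item catalog for a batch of 2 sessions (illustrative only).
scores = torch.tensor([[0.10, 0.70, 0.05, 0.90, 0.20],
                       [0.30, 0.10, 0.80, 0.05, 0.60]])

# torch.topk returns float32 values and int64 indices, which is why
# model.output_schema needs an int64 entry for the item-id output while
# the score output stays float32.
item_scores, item_ids = torch.topk(scores, k=2, dim=-1)
print(item_scores.dtype, item_ids.dtype)  # torch.float32 torch.int64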

Remaining tasks:

  • make sure the dtype of the categorical item-id in model.output_schema matches model.input_schema
  • add a unit test (see the sketch after this list)
  • add an example notebook to showcase the top-k layer, or modify one of the existing examples: this will be taken care of by PR update end-to-end example to use systems api #680
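A hypothetical shape/dtype test along the lines of the second task (the function name and random logits are placeholders, not the repo's actual test helpers; a real test would run the model):

import torch

def test_top_k_outputs_shapes_and_dtypes():
    batch_size, num_items, top_k = 4, 1000, 20
    # Stand-in for NextItemPredictionTask logits.
    logits = torch.rand(batch_size, num_items)

    item_scores, item_ids = torch.topk(logits, k=top_k, dim=-1)

    assert item_scores.shape == (batch_size, top_k)
    assert item_ids.shape == (batch_size, top_k)
    assert item_scores.dtype == torch.float32  # score output stays float
    assert item_ids.dtype == torch.int64       # categorical item-id dtype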

@rnyak added this to the Merlin 23.05 milestone Apr 18, 2023
@rnyak added the chore (Maintenance for the repository) label Apr 18, 2023
outputs[name] = task(
    body_outputs, targets=targets, training=training, testing=testing, **kwargs
)
if isinstance(task, NextItemPredictionTask):
A Member commented on this diff:

Since the NextItemPredictionTask is always created by user code, an alternative to passing top_k to the forward method could be accepting it in the constructor (__init__). That way we wouldn't need to have this condition here.
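A sketch of that alternative, with a simplified stand-in class (not the repo's actual NextItemPredictionTask):

import torch

class TopKPredictionTaskSketch(torch.nn.Module):
    """Simplified stand-in: top_k is fixed at construction time."""

    def __init__(self, top_k: int = -1):
        super().__init__()
        self.top_k = top_k

    def forward(self, scores: torch.Tensor):
        # The task itself decides whether to truncate, so the caller no
        # longer needs an isinstance() check.
        if self.top_k == -1:
            return scores
        return torch.topk(scores, k=self.top_k, dim=-1)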

@rnyak (Contributor, Author) replied Apr 19, 2023:

Good point. Adding it to the NextItemPredictionTask constructor might be easier (as I show below), but the reason we did not add it there is that once the model is trained with the top_k arg, there is no way to change the top_k value at the inference step. So to keep it flexible, we decided to add it to the forward method of NextItemPredictionTask and to the constructor of the Model class.

head = tr.Head(
    body,
    tr.NextItemPredictionTask(weight_tying=True,
                              metrics=metrics, top_k=20),
    inputs=inputs,
)

@sararb wanna add something else?
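For contrast, a hypothetical illustration of the flexibility argument: with top_k supplied at call time (via the Model constructor or forward) rather than trained in, the cutoff can change per inference run without retraining. The call signature below follows the design described above and is assumed, not quoted from the merged code:

# Hypothetical inference calls with the same trained weights.
item_scores, item_ids = model(batch, top_k=20)   # serve top 20 today
item_scores, item_ids = model(batch, top_k=100)  # top 100 tomorrow, no retraining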

@@ -342,7 +344,11 @@ def forward(self, inputs: torch.Tensor, targets=None, training=False, testing=Fa
     # Compute predictions probs
     x, _ = self.pre(x)  # type: ignore

-    return x
+    if top_k == -1:
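The hunk is cut off at the new condition; a plausible reading of the full branch, given the two outputs described in the PR description (the else body is an assumption, not the merged code):

x, _ = self.pre(x)  # type: ignore

if top_k == -1:
    return x
else:
    # Assumed continuation: keep only the k highest-scoring items.
    item_scores, item_ids = torch.topk(x, k=top_k, dim=-1)
    return item_scores, item_ids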
A Member commented:
Does top_k need to be a numeric value? Would it work with None as the default value, for example?

@rnyak (Contributor, Author) replied:

I think -1 is a common convention, but we could set it to None instead. @sararb, will setting it to None create any issue?

A Contributor replied:
You're right, it is just a convention. Setting it to None won't create any issue.
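With None as the default, the sentinel check would just become (a sketch of the variant being discussed, not the merged code):

from typing import Optional
import torch

def maybe_top_k(x: torch.Tensor, top_k: Optional[int] = None):
    """None-default variant: return full scores unless a cutoff is given."""
    if top_k is None:
        return x
    return torch.topk(x, k=top_k, dim=-1)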

@rnyak changed the title from "[DRAFT] add topk_layer" to "Add topk arg to return topk items and scores at inference step" Apr 19, 2023

@rnyak (Contributor, Author) commented Apr 26, 2023:

rerun tests

@rnyak (Contributor, Author) commented Apr 27, 2023:

rerun tests

Labels: area/inference, chore