
[EPIC] Abstractions for monitoring outputs #241

Open
5 of 7 tasks
shreyashankar opened this issue Oct 23, 2021 · 0 comments
Assignees
Labels
L Large task, maybe somewhat dreading (multiple day & refactor)


shreyashankar commented Oct 23, 2021

We want to monitor ML pipelines to see when things might be going wrong. We need a way to encode "SLAs" into mltrace, then monitor outputs and metrics to make sure SLAs are met.

To Dos:

Logging

  • Add 2 new tables (outputs, feedback) that store ts, key, val, and identifiers/labels, indexed on (ts, key)
  • Add functions to write to and read from those tables
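The logging tasks above could be sketched roughly as follows. This is a hypothetical illustration, not mltrace's actual schema or API: the table/column names (`outputs`, `feedback`, `ts`, `key`, `val`, `label`) and the helper functions (`create_tables`, `log_row`, `read_rows`) are assumptions based only on the bullet points, using sqlite3 for a self-contained example.

```python
import sqlite3
import time

def create_tables(conn):
    # Two tables with the same shape: ts, key, val, plus a label column,
    # indexed on (ts, key) as described in the to-do list.
    for table in ("outputs", "feedback"):
        conn.execute(
            f"CREATE TABLE IF NOT EXISTS {table} ("
            "ts REAL NOT NULL, key TEXT NOT NULL, "
            "val TEXT NOT NULL, label TEXT)"
        )
        conn.execute(
            f"CREATE INDEX IF NOT EXISTS idx_{table}_ts_key "
            f"ON {table} (ts, key)"
        )

def log_row(conn, table, key, val, label=None, ts=None):
    # Write one row; default the timestamp to "now".
    conn.execute(
        f"INSERT INTO {table} (ts, key, val, label) VALUES (?, ?, ?, ?)",
        (ts if ts is not None else time.time(), key, val, label),
    )

def read_rows(conn, table, key, start_ts=0.0, end_ts=float("inf")):
    # Read all rows for a key within a time window, oldest first.
    cur = conn.execute(
        f"SELECT ts, key, val, label FROM {table} "
        "WHERE key = ? AND ts BETWEEN ? AND ? ORDER BY ts",
        (key, start_ts, end_ts),
    )
    return cur.fetchall()
```

Keeping both tables identical in shape makes the windowed reads for the "Monitor" view symmetric: predictions come from outputs and ground truth from feedback, joined on key and time window.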

Querying

Interface

  • Metric class with log_pred, log_true and compute_metric functions
  • "Monitor" view that accepts the group name(s), window size, and metric name
  • chartjs graph to show the above
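The Metric interface above might look something like this minimal sketch. The method names (log_pred, log_true, compute_metric) come from the to-do list; everything else, including the constructor signature and the pairing of predictions with labels by identifier, is an assumption for illustration.

```python
class Metric:
    """Hypothetical Metric: log predictions and ground truth by
    identifier, then compute a score over the matched pairs."""

    def __init__(self, name, fn):
        self.name = name
        self.fn = fn  # fn(preds, trues) -> float
        self.preds = {}
        self.trues = {}

    def log_pred(self, identifier, value):
        self.preds[identifier] = value

    def log_true(self, identifier, value):
        self.trues[identifier] = value

    def compute_metric(self):
        # Score only identifiers that have both a prediction and a label.
        ids = sorted(self.preds.keys() & self.trues.keys())
        preds = [self.preds[i] for i in ids]
        trues = [self.trues[i] for i in ids]
        return self.fn(preds, trues)

# Example: accuracy over the logged pairs
acc = Metric(
    "accuracy",
    lambda p, t: sum(x == y for x, y in zip(p, t)) / len(p),
)
```

A "Monitor" view would then call compute_metric over rows read back for a given group and window size, and feed the resulting time series to the chartjs graph.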
@shreyashankar shreyashankar added the L Large task, maybe somewhat dreading (multiple day & refactor) label Oct 23, 2021
@shreyashankar shreyashankar self-assigned this Oct 23, 2021