
interpreting results of performance tests #149

Closed

ayrat555 opened this issue Oct 19, 2020 · 4 comments

@ayrat555
Contributor

related issues/prs:

#59
#141
#145
omgnetwork/elixir-omg#1745

During code review discussions, the question of how to interpret the results of performance tests was raised several times.

Currently, by default, a successful outcome is defined as a mean error_rate of 0. Optionally, you can also check that the error_rate percentiles (10%–90%) are 0.
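For illustration, a minimal Elixir sketch of that default criterion, assuming the per-window error rates have been collected into a plain list. `PerfCheck`, `error_rates`, and the nearest-rank percentile helper are hypothetical names for this sketch, not the repo's actual API:

```elixir
defmodule PerfCheck do
  # Default success criterion: the mean error rate is exactly 0.
  def mean_ok?(error_rates) when length(error_rates) > 0 do
    Enum.sum(error_rates) / length(error_rates) == 0
  end

  # Optional stricter check: every 10%..90% error-rate percentile is 0.
  def percentiles_ok?(error_rates) do
    sorted = Enum.sort(error_rates)
    Enum.all?([10, 20, 30, 40, 50, 60, 70, 80, 90], &(percentile(sorted, &1) == 0))
  end

  # Nearest-rank percentile on a pre-sorted list.
  defp percentile(sorted, p) do
    rank = max(ceil(p / 100 * length(sorted)) - 1, 0)
    Enum.at(sorted, rank)
  end
end
```

With error rates sampled per request window, `PerfCheck.mean_ok?/1` would reproduce the default check and `PerfCheck.percentiles_ok?/1` the optional one.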

@boolafish

boolafish commented Oct 21, 2020

I think several tests will need:

  1. server-side API latency (one metric per API), using a P9x percentile rather than the max
  2. client-side error rate
  3. server-side error rate (optional?)

Personally, I think this is test-specific, so it might be better to let each test inject the logic it wants to check and let developers add new checks per test (e.g. the checks for a deposit test would differ from those for a watcher test).
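A minimal Elixir sketch of that idea, assuming a behaviour-based design. `PerfTest.Check`, the `passed?/1` callback, and the metric names and thresholds are all assumptions for illustration, not the project's actual API:

```elixir
defmodule PerfTest.Check do
  # Each test implements its own pass/fail logic over collected metrics.
  @callback passed?(metrics :: map()) :: boolean()
end

defmodule DepositTest.Check do
  @behaviour PerfTest.Check

  # A deposit test might care about client-side errors and p95 API latency.
  @impl true
  def passed?(metrics) do
    metrics.client_error_rate == 0 and metrics.latency_p95_ms < 500
  end
end

defmodule WatcherTest.Check do
  @behaviour PerfTest.Check

  # A watcher test could apply a different threshold on its own endpoint.
  @impl true
  def passed?(metrics) do
    metrics.client_error_rate == 0 and metrics.latency_p90_ms < 1_000
  end
end
```

The runner would then dispatch to the check module a test registers, so adding a new test only means adding a new implementation of the behaviour.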

@InoMurko
Contributor

I think this is all done
