
Kevin/load test erc20 token #1577

Merged
merged 8 commits into master from kevin/load-test-erc20-token
Jul 2, 2020

Conversation

kevsul
Contributor

@kevsul kevsul commented Jun 11, 2020

#1093 Overview

Adds the ability to use an ERC-20 token when running load tests.

Changes


  • Make load tests use an ERC-20 token via config, e.g. test_currency: "0x942f123b3587EDe66193aa52CF2bF9264C564F87"
  • Configure Tesla.Middleware to not retry on error. It can be useful to retry, but usually when testing you don't want to. (A sketch of both changes follows below.)
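
A minimal sketch of what these two changes could look like, assuming a `:load_test` config scope and Tesla's `Retry` middleware; the module and key names are assumptions, not taken from the diff:

```elixir
# config/dev.exs (hypothetical): point the load tests at an ERC-20 token.
config :load_test,
  test_currency: "0x942f123b3587EDe66193aa52CF2bF9264C564F87"
```

```elixir
# Hypothetical Tesla connection module: never retry, so every failure is
# surfaced to the test instead of being masked by a silent retry.
defmodule LoadTest.Connection.ChildChain do
  use Tesla

  plug(Tesla.Middleware.Retry, should_retry: fn _result -> false end)
end
```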

```
make init
```
(You need to run this any time you change the env vars above)
Contributor

I don't think this is true. And if any change causes this, we should try to see if we can avoid it. The new LoadTest.Connection.XXX is designed so that URL changes don't need make init.
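
A minimal sketch of that design, assuming the connection module resolves its URL from the application environment at call time (module and config key names are assumptions):

```elixir
# Hypothetical LoadTest.Connection module: because the base URL is read at
# runtime rather than baked in at compile time, changing the env var does
# not require re-running `make init`.
defmodule LoadTest.Connection.ChildChain do
  def client() do
    Tesla.client([
      {Tesla.Middleware.BaseUrl, Application.fetch_env!(:load_test, :child_chain_url)},
      Tesla.Middleware.JSON
    ])
  end
end
```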

Contributor Author

Ah yes, you're right - this was fixed

```
### 4. Run the test
`MIX_ENV=dev mix test apps/load_test/test/load_tests/runner/childchain_test.exs`
```
Contributor

MIX_ENV=<your_env> mix test. Or mix test if you want to run against local services.

I think the original description is better though; this line is pretty specific to a single test. Unless that is the only one we need. 🤷‍♂️

Or you could use this line as an example of running a single test, placed after a more generalized test-running instruction. Since this is probably the most frequently run single test, people could then copy-paste the instruction more easily. (See the sketch below.)
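
A concrete version of that suggestion, using the command already quoted in this thread (`<your_env>` left as a placeholder):

```
# Generalized instruction; use plain `mix test` to run against local services.
MIX_ENV=<your_env> mix test

# Example: run the single child chain load test.
MIX_ENV=dev mix test apps/load_test/test/load_tests/runner/childchain_test.exs
```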

defp retry?() do
  fn
-   {:ok, %{status: status}} when status in 500..599 -> true
+   {:ok, %{status: status}} when status in 500..599 -> false
Contributor

I think instead of doing this `-> false` thing, better ways are:

  1. Restrict this to 502, if you agree that we want to ignore infra failures in our load tests and really don't want a test to stop in the middle (especially if it takes some time to run). I'm guessing you're avoiding retries because of the previous issue where the child chain accepts the tx but times out, in which case a retry would just not make sense. That would be a 500, if I understand correctly.
  2. Or, if you don't agree with the above, make this a config/env var, and describe how we use it in the readme instead of in a comment here. Default it to false if your experience is that most of the time we don't want to retry. (See the sketch after this list.)
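
A minimal sketch of option 2, assuming a hypothetical `:retry_on_error` config key backed by an env var (the names are illustrative, not from the PR):

```elixir
# e.g. config :load_test,
#        retry_on_error: System.get_env("RETRY_ON_ERROR") == "true"
defp retry?() do
  # Default to false: most of the time we don't want to retry.
  retry_on_error = Application.get_env(:load_test, :retry_on_error, false)

  fn
    {:ok, %{status: status}} when status in 500..599 -> retry_on_error
    {:error, _} -> retry_on_error
    {:ok, _} -> false
  end
end
```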

Contributor Author

My thinking is that by default we don't want to ignore any failures, because finding failures is the main reason for running the tests.

I get that we might want to keep the tests running regardless of errors, but if there is a known infra problem (such as the 502 error) and you want to ignore it and run the tests anyway, then I think you need to explicitly do that and understand how the error will affect the test results.

I'm not sure how to set that via config though, because you may want to retry on a status code (or a range of status codes):

`{:ok, %{status: status}} when status == 502 -> true`

or on a particular error, e.g.:

`{:error, "reason: timeout"} -> true`

Do you have any idea?
Or should I just put a better explanation with some examples into the readme? (One possible approach is sketched below.)
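
One hedged way to cover both cases via config, reading a list of retryable statuses and errors (all names here are illustrative, not from the PR):

```elixir
# e.g. config :load_test,
#        retry_statuses: [502],
#        retry_errors: ["reason: timeout"]
defp retry?() do
  statuses = Application.get_env(:load_test, :retry_statuses, [])
  errors = Application.get_env(:load_test, :retry_errors, [])

  fn
    {:ok, %{status: status}} -> status in statuses
    {:error, reason} -> reason in errors
  end
end
```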

Contributor

@boolafish boolafish commented Jun 15, 2020

> because finding failures is the main reason for running the tests.

Out of scope for this PR, I would say, but a better way I can think of is to collect this as a failure metric and let the test continue running. Usually the test should run for a certain period, so the cost of a re-run is quite high....for an engineer to wait 😛 We just want to know whether it has failed or not, and we're not fixing anything with the load test anyway, so we can continue to test the performance. Also, I think it might be interesting to monitor how our system behaves after an error occurs: does the error rate explode after the first one, or does it auto-recover well? (A rough sketch of the idea is below.)
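
A rough sketch of that failure-metric idea, assuming the standard `:telemetry` library; the event name and helper are hypothetical, nothing like this is in the PR:

```elixir
# Hypothetical helper: record the failure as a telemetry event and return the
# result unchanged, so the load test keeps running instead of stopping.
defp record_result(result) do
  case result do
    {:ok, %{status: status}} when status in 500..599 ->
      :telemetry.execute([:load_test, :tx, :failure], %{count: 1}, %{status: status})

    {:error, reason} ->
      :telemetry.execute([:load_test, :tx, :failure], %{count: 1}, %{reason: inspect(reason)})

    {:ok, _response} ->
      :ok
  end

  result
end
```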

> Do you have any idea?

lol, okay, you're definitely thinking of something harder than I expected haha. I was just suggesting to either allow retrying or not with an env var or config. I guess we can start by simply adding a flag to turn retry on/off first....

What are the common `:error` values you've encountered? Probably change the current `{:error, _}` to whichever ones you find useful as a default.

> put a better explanation with some examples into the readme

I do like this one though, if we cannot come up with a useful default; I think we (probably aside from you, as you're the load-testing master 😅) don't have enough experience to say what should be put in there.

Contributor

Failure rate! Yeah, that would be really cool!

@kevsul kevsul requested a review from boolafish June 16, 2020 07:35
@kevsul kevsul merged commit ca108e0 into master Jul 2, 2020
@kevsul kevsul deleted the kevin/load-test-erc20-token branch July 2, 2020 07:15
@unnawut unnawut added the chore Technical work that does not affect service behaviour label Aug 28, 2020