
808/about executors #974

Merged (29 commits, Jan 16, 2023)
bb72e32
Scenarios, make new About section to explain
MattDodsonEnglish Jan 5, 2023
f970d1a
Scenarios, move other explanatory topics to about
MattDodsonEnglish Jan 5, 2023
702149a
Scenarios, rename arrival rate as open vs closed
MattDodsonEnglish Jan 5, 2023
d46d5bb
Scenarios, clean up the list pages
MattDodsonEnglish Jan 5, 2023
6bddcc5
punctuation fixes
MattDodsonEnglish Jan 5, 2023
94aa2d9
Scenarios, clean up new explanation texts
MattDodsonEnglish Jan 5, 2023
99aacd9
Formatting fixes to ul
MattDodsonEnglish Jan 5, 2023
c92b6dd
List fixes
MattDodsonEnglish Jan 5, 2023
0c12d52
Fix language in iterations dropped doc
MattDodsonEnglish Jan 5, 2023
8596a32
Better explain how duration affects allocation
MattDodsonEnglish Jan 10, 2023
ed10eb2
Add note arrival rate jitter
MattDodsonEnglish Jan 10, 2023
cb02cef
Merge branch 'main' into 808/about-executors
MattDodsonEnglish Jan 11, 2023
2becf58
Proofreading
MattDodsonEnglish Jan 11, 2023
b547cbc
typo fix.
MattDodsonEnglish Jan 12, 2023
020260c
Fix admonitions and simplify sentence.
MattDodsonEnglish Jan 12, 2023
b068f6f
Update src/data/markdown/translated-guides/en/02 Using k6/14 Scenario…
MattDodsonEnglish Jan 12, 2023
98a361d
Update src/data/markdown/translated-guides/en/02 Using k6/14 Scenario…
MattDodsonEnglish Jan 12, 2023
872b78f
Update src/data/markdown/translated-guides/en/02 Using k6/14 Scenario…
MattDodsonEnglish Jan 12, 2023
c658116
Update src/data/markdown/translated-guides/en/02 Using k6/14 Scenario…
MattDodsonEnglish Jan 12, 2023
4b7b088
Fix jitter admonition
MattDodsonEnglish Jan 12, 2023
977c45d
redo introductory sections
MattDodsonEnglish Jan 12, 2023
f580420
Explain why maxVUs counts againts rate.
MattDodsonEnglish Jan 13, 2023
5fa0a38
Apply suggestions from code review
MattDodsonEnglish Jan 13, 2023
6c0e924
Update src/data/markdown/translated-guides/en/02 Using k6/14 Scenario…
MattDodsonEnglish Jan 13, 2023
11e7c43
Last suggestions from Neds review:
MattDodsonEnglish Jan 13, 2023
19784d4
Move parenthetical to admonition
MattDodsonEnglish Jan 13, 2023
5c6f9f8
Change files names and add redirects
MattDodsonEnglish Jan 13, 2023
9aaf8b8
Link arrival-rate reference docs to explanation
MattDodsonEnglish Jan 13, 2023
714d037
Merge branch 'main' into 808/about-executors
MattDodsonEnglish Jan 16, 2023
@@ -56,12 +56,13 @@ export const options = {

For each k6 scenario, the VU workload is scheduled by an _executor_.
For example, executors configure:
- How long the test runs
- Whether VU traffic stays constant or changes
- Whether to model traffic by iteration number or by VU arrival rate

Your scenario object must define the `executor` property with one of the predefined executor names.
Along with the generic scenario options, each executor object has additional options specific to its workload.
For the list of the executors, refer to the [Executor guide](/using-k6/scenarios/executors/).
For the list of executors, refer to [Executors](/using-k6/scenarios/executors/).
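For instance, a scenario object that selects the `shared-iterations` executor might look like this (the scenario name and values here are illustrative, not from the PR):

```javascript
export const options = {
  scenarios: {
    // arbitrary, user-chosen scenario name
    my_scenario: {
      // required: one of the predefined executor names
      executor: 'shared-iterations',
      // executor-specific options
      vus: 10,
      iterations: 200,
    },
  },
};
```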

## Scenario options {#options}

@@ -0,0 +1,18 @@
---
title: "About scenarios"
excerpt: High-level explanations about how your executor configuration can change the test execution and test results
---

These topics explain the essential concepts of how scenarios and their executors work.

Different scenario configurations can affect many different aspects of your system,
including the generated load, utilized resources, and emitted metrics.
If you know a bit about how scenarios work, you'll design better tests and interpret test results with more understanding.
**Member:**
I am not sure I understand "design better tests for resources and goals"

**Contributor Author:**
The thinking was that if you understand scenarios better, you can:

  • Make better decisions for your test goals, because certain scenarios correspond better to certain test design. To spike test a single component for raw throughput probably requires an arrival rate executor. To just see how quickly your system can churn through x number of iterations, shared iterations is a simpler choice.
  • Use resources better, because understanding not to use maxVUs means you'll use CPU cycles more efficiently.

Of course now I realize that this is an enormous amount of implied information.

Will change to:

If you know a bit about how scenarios work, you'll design better tests and interpret test results with more understanding.


| On this page | Read about |
|-------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------|
| [Open and closed models](/using-k6/scenarios/about-scenarios/open-vs-closed/) | Different ways k6 can schedule VUs, their effects on test results, and how k6 implements the open model in its arrival-rate executors |
| [VU allocation](/using-k6/scenarios/about-scenarios/vu-allocation/) | How k6 allocates VUs in arrival-rate executors |
| [Dropped iterations](/using-k6/scenarios/about-scenarios/dropped-iterations/) | Possible reasons k6 might drop a scheduled iteration |
| [Graceful Stop](/using-k6/scenarios/about-scenarios/graceful-stop) | A configurable period to let iterations finish or ramp down after the test has reached its scheduled duration |

@@ -1,10 +1,11 @@
---
title: 'Arrival rate'
excerpt: 'In k6, we have implemented this open model with our two arrival rate executors: constant-arrival-rate and ramping-arrival-rate.'
title: 'Open and closed models'
slug: '/using-k6/scenarios/about-scenarios/open-vs-closed/'
excerpt: 'k6 has two ways to schedule VUs, which can affect test results. k6 implements the open model in its arrival-rate executors.'
---

Different k6 executors have different ways of scheduling VUs.
Some executors use the _closed model_, while the arrival rate executors use the _open model_.
Some executors use the _closed model_, while the arrival-rate executors use the _open model_.

In short, in the closed model, a VU starts a new iteration only when its previous one finishes.
In the open model, on the other hand, VUs arrive independently of iteration completion.
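As a concrete sketch, a closed-model scenario (illustrative values, not from the PR) fixes a pool of looping VUs; each VU starts a new iteration only after its previous one completes:

```javascript
export const options = {
  scenarios: {
    closed_model: {
      // closed model: a fixed pool of VUs loops over the iteration code
      executor: 'constant-vus',
      vus: 10,        // each VU starts its next iteration only
      duration: '1m', // after its previous one finishes
    },
  },
};
```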
@@ -83,7 +84,7 @@ To fix this problem of coordination, you can use an open model,
which decouples the start of new VU iterations from the iteration duration.
This reduces the influence of the target system's response time.

![Arrival rate closed/open models](../images/Scenarios/arrival-rate-open-closed-model.png)
![Arrival rate closed/open models](../../images/Scenarios/arrival-rate-open-closed-model.png)

k6 implements the open model with two _arrival rate_ executors:
[constant-arrival-rate](/using-k6/scenarios/executors/constant-arrival-rate) and [ramping-arrival-rate](/using-k6/scenarios/executors/ramping-arrival-rate):
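As a sketch, an open-model scenario (illustrative values, not from the PR) sets an iteration start rate that is independent of how long iterations take:

```javascript
export const options = {
  scenarios: {
    open_model: {
      executor: 'constant-arrival-rate',
      rate: 20,            // start 20 iterations...
      timeUnit: '1s',      // ...per second, regardless of completions
      duration: '1m',
      preAllocatedVUs: 50, // VUs initialized up front to sustain the rate
    },
  },
};
```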
@@ -0,0 +1,89 @@
---
title: VU allocation
**Member:**
"VU allocation" seems a bit misleading, since we are only talking about arrival-rate scenarios

Suggested change
title: VU allocation
title: Arrival-rate VU allocation

or maybe even

Suggested change
title: VU allocation
title: Arrival-rate configuration

**Contributor Author:**
Need to think about this.
"Arrival-rate configuration" probably opens the door to more topics, like using options. That's not bad, but it may mean there's a better place to put this (not blocking for this PR). Is there any reason readers should know about non-arrival rate allocation? If so, maybe we could add it. If not, maybe it doesn't matter to document.

Not sure. It's a good point though.

**Contributor Author:**
Now that I've thought about it, I don't want to go with "Arrival-rate configuration" because that should include Graceful stop and maybe more, which opens a whole new round of content structure. It's nice to keep info atomic.

Is VU allocation in non-arrival-rate ever important? If so, we could just add it to the doc later.

Either way, I choose these new titles, ranked by preference. You can pick and that's what will go with:

  1. VU allocation.
  2. VU pre-allocation
  3. Arrival-rate VU allocation

**Member:**
because that should include Graceful stop and maybe more

graceful stop is not specific to arrival-rate executors and already has its own dedicated page that explains it: https://k6.io/docs/using-k6/scenarios/graceful-stop/

Is VU allocation in non-arrival-rate ever important? If so, we could just add it to the doc later.

Well, considering the configuration of non-arrival-rate executors is specified in terms of VUs, there isn't really anything complicated to explain there 😅

Again, the complexity with arrival-rate is not just how VUs are allocated, but how to balance the VUs and the desired rate and how to find the right values for the former based on the latter and on iteration duration.

**Contributor Author:**
Again, the complexity with arrival-rate is not just how VUs are allocated, but how to balance the VUs and the desired rate and how to find the right values for the former based on the latter and on iteration duration.

Is this not encompassed in VU Pre-allocation? Basically, I'm looking for the shortest way to say the most in the most accurate way.

**Member:**
"VU Pre-allocation" only makes sense to you because you already know it applies for arrival-rate executors. A new user won't know that fact and won't click on that menu entry at all, even if this is exactly the information they are looking for.

excerpt: How k6 allocates VUs in the open-model, arrival-rate executors
---

This document explains how k6 allocates VUs in the arrival-rate executors.

In arrival-rate executors, three properties determine the iteration rate:
- `rate` determines how many iterations k6 starts.
- `timeUnit` determines the period over which k6 starts those iterations.
- `preAllocatedVUs` sets the number of VUs k6 initializes to reach the target iteration rate.

In short, while `rate` and `timeUnit` set the target iteration rate, **you must allocate enough VUs to reach this target.**
By pre-allocating VUs, k6 can initialize the VUs necessary for the test before the test runs,
which ensures that the CPU cost of allocation doesn't interfere with test execution.
**Member (na--):**
Some more explanation is probably needed here 🤔 Something like this?

Suggested change
This document explains how k6 allocates VUs in the arrival-rate executors.
In arrival-rate executors, three properties determine the iterations per second:
- `rate` determines how many iterations k6 starts.
- `timeUnit` determines how frequently it starts the number of iterations.
- `preAllocatedVUs` sets the number of VUs to use to reach the target iterations per second.
In short, while `rate` and `timeUnit` set the target iterations per second, **you must allocate enough VUs to reach this target.**
By pre-allocating VUs, k6 can initialize the VUs necessary for the test before the test runs,
which ensures that the CPU cost of allocation doesn't interfere with test execution.
The desired load in scenarios with arrival-rate executors is not configured in terms of how many VUs will loop through the iteration code in parallel. Instead, we need to specify at what interval k6 should try to start a new iteration, regardless of how many other iterations (and VUs) are currently running - the so-called ["open model"](/using-k6/scenarios/about-scenarios/open-vs-closed/#open-model). Or, put another way, we need to specify how many iterations per second (or per minute, hour, etc.) k6 should try and start.
Still, iterations need VUs to run on and k6 VUs, like other JavaScript runtimes, are single-threaded and can only run a single iteration (with its event loop) at any given time. This makes configuring scenarios with arrival-rate executors more complicated. We need to configure both the desired load, and to ensure we have enough pre-allocated initialized VUs to handle it. Which may be tricky to calculate, considering that the iteration duration depends entirely on what the script does and how quickly or slowly the system-under-test (SUT) responds. Slower SUT means slower iterations, which means more VUs will be needed, since some will be busy with running old iterations.
You can specify the desired arrival rate with the `rate` and `timeUnit` scenario options - k6 will try to start `rate` iterations evenly spread across a `timeUnit` (default `1s`) interval of time. So, given a scenario with a `constant-arrival-rate` executor and `rate: 10`, k6 will try to start a new iteration every 100 milliseconds. However, if the scenario had `rate: 10, timeUnit: '1m'`, k6 will try to start a new iteration every 6 seconds.
VUs can be configured via the `preAllocatedVUs` option - it sets the number of VUs k6 will initialize before the test runs. There is also the `maxVUs` option, which allows k6 to initialize VUs mid-test, though that is mostly useful during development and generally is a bad practice to use in actual tests. VU initialization is somewhat CPU intensive and doing it in the middle of the test could potentially interfere with test execution or skew results.

**Contributor Author:**
My initial thought is that the info is good...but dense.

One thing, I see that I was just wrong to type per second, it should be iteration rate.

I also think a short bulleted intro should stay. Maybe not. This creates a progressive disclosure, where readers who quickly scan could get the minimal info they need. Opening with four paragraphs in a row might scare many off.

I've tried to synthesize your text and mine with:
977c45d

I wrote it sleepy, need to proofread, but let me know what you think.

**Member:**
bullet points are fine, except when they force you to unnecessarily break up explanations so much that they stop making sense and are harder to understand, e.g. https://github.com/grafana/k6-docs/pull/974/files#r1069127202


```javascript
export const options = {
  scenarios: {
    constant_load: {
      executor: "constant-arrival-rate",
      preAllocatedVUs: 4, // VUs initialized before the test starts
      rate: 8,            // start 8 iterations...
      timeUnit: "1m",     // ...per minute
    },
  },
};
```
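To check what the configuration above asks for, the start interval is simply `timeUnit / rate`. A quick calculation (plain JavaScript arithmetic, not a k6 API):

```javascript
// rate: 8 iterations per timeUnit of '1m'
const rate = 8;
const timeUnitMs = 60 * 1000; // '1m' expressed in milliseconds

// k6 spreads iteration starts evenly across the timeUnit,
// so a new iteration begins every timeUnit / rate:
const startIntervalMs = timeUnitMs / rate;

console.log(startIntervalMs); // 7500, i.e. one iteration every 7.5 seconds
```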


<Blockquote mod="attention" title="">

In cloud tests, **both `preAllocatedVUs` and `maxVUs` count against your subscription.**
**Member:**
Maybe this can be moved in the maxVUs section? 🤔

Suggested change
In cloud tests, **both `preAllocatedVUs` and `maxVUs` count against your subscription.**
In cloud tests, **both `preAllocatedVUs` and `maxVUs` count against your subscription.** This is necessary because we must allocate sufficient resources for `maxVUs` to be able to be initialized, even if they never are.

**Contributor Author:**
I don't want to move the whole block down, because it's in an admonition and I think it's most respectful if we alert readers about things that can affect subscription use up front. But I'll move your addition to that section.


**Member:**
I just realized this might be confusing in another way, "both preAllocatedVUs and maxVUs count against your subscription" may confuse users that preAllocatedVUs + maxVUs will be counted against their subscription, while the reality is that just the bigger number between the two (i.e. max(preAllocatedVUs, maxVUs)) will be counted.

And, given that maxVUs will always be >= preAllocatedVUs, and if you don't specify maxVUs explicitly, it's implicitly equal to preAllocatedVUs, maxVUs is what will be counted against the subscription.

**Contributor Author:**
Maybe:

The number of VUs you allocate (including maxVUs) counts against your subscription.

**Member:**
Another problem - you mention maxVUs here, but we haven't mentioned maxVUs anywhere before in this document... This was one of the reasons for my longer explanation that you discarded, it had a paragraph with a cohesive explanation for both preAllocatedVUs and maxVUs

**Contributor Author:**
I'm still kind of the opinion that maxVUs shouldn't be mentioned anywhere :-). What I'll do is just make the first admonition only about preAllocatedVUs, and then a second admonition in the maxVUs section. It's not very elegant, but it doesn't cram so much information together.

**Member:**
Again, if I could rewrite history, maxVUs probably would not exist 😅 But now that it exists, we should try to make it as clear as possible how it functions and why it might not be a good idea to use.

An admonition only for preAllocatedVUs doesn't make sense, this is how cloud subscriptions normally work, i.e. no admonition needed just for it.

And if we want to tuck maxVUs only at the end of the document, somewhat out of sight (which I don't necessarily mind), then we should only have an admonition there.

**Contributor Author:**
Here's how I've handled it: two admonitions. No points for subtlety, but I don't think anyone can say we're being sneaky.

EDIT: These are at the top and bottom of the page. In the GitHub UI it looks joined together.


**Member (na--):**
Nobody will read the text you have highlighted in the middle of that paragraph 😅 People will just see the two admonitions and get the very wrong impression that in the cloud we will charge them for preAllocatedVUs + maxVUs

**Contributor Author:**
Point taken. With 19784d4, it looks like this:



When planning a test, consider doing a trial initialization on a local machine to ensure you're allocating VUs efficiently.

</Blockquote>

## How k6 uses allocated VUs

Before an arrival-rate scenario starts, k6 first initializes the number of `preAllocatedVUs`.
When the test runs,
the number of available `preAllocatedVUs` determines how many iterations k6 can start.
k6 tries to reach the target iteration rate, and one of two things can happen.

| If the executor...    | Then...                                                                                                                                  |
|-----------------------|----------------------------------------------------------------------------------------------------------------------------------------|
| Has enough VUs        | The extra VUs are "idle," ready to be used when needed.                                                                                  |
| Has insufficient VUs  | k6 emits a [`dropped_iterations` metric](/using-k6/scenarios/about-scenarios/dropped-iterations) for each iteration that it can't run.   |

## Iteration duration affects the necessary allocation

Imagine that you have a goal of 50 iterations per second.
The executor `rate` is `50`, and the `timeUnit` is `1s`.

How many `preAllocatedVUs` do you need?
This depends on your iteration duration.
If you could be sure that every iteration had a duration of exactly `1s`, then 50 pre-allocated VUs would be enough&mdash;but that happens only in a perfect world.

In the real world, iteration durations fluctuate with load and random statistical variance.
You could _approximate_ the number of VUs you need by multiplying the desired rate by the expected median iteration duration (perhaps determined by an early test).

```
preAllocatedVUs = median iteration duration * rate
```

In real load tests, the median iteration duration might be quite hard to predict.
And even if you knew the median value, the variance could be so high that some periods may happen where the number of pre-allocated VUs was insufficient anyway.
You should probably allocate some extra VUs as a buffer.
Finding the necessary amount for your case may require some experimentation.
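That approximation can be sketched in plain JavaScript; the median duration and the 25% head-room here are illustrative choices, not k6 rules:

```javascript
// target: 50 iterations per second (rate: 50, timeUnit: '1s')
const rate = 50;
const timeUnitSec = 1;

// median iteration duration measured in an earlier trial run (illustrative)
const medianIterationDurationSec = 0.8;

// approximation from the formula above: median duration * rate
const estimate = Math.ceil(medianIterationDurationSec * (rate / timeUnitSec));

// add head-room for variance in iteration duration
const preAllocatedVUs = Math.ceil(estimate * 1.25);

console.log(estimate, preAllocatedVUs); // 40 50
```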

## You probably don't need `maxVUs`

The arrival-rate executors also have a `maxVUs` property.
If you set it, k6 runs in this sequence:
1. Pre-allocate the `preAllocatedVUs`.
1. Run the test, trying to reach the target iteration rate.
1. If the target rate exceeds what the available VUs can sustain, initialize another VU.
1. Continue initializing VUs as needed, up to the number set by `maxVUs`.

Though it seems convenient, you should avoid using `maxVUs` in most cases.
Allocating VUs has CPU and memory costs, and allocating VUs as the test runs **can overload the load generator and skew results**.
If you're running in k6 Cloud, `maxVUs` will count against your subscription.
In almost all cases, the best thing to do is to pre-allocate the number of VUs you need beforehand.

Some cases where it might make sense to use `maxVUs` include:
- To determine necessary allocation in first-time tests
- To add a little "cushion" to the pre-allocated VUs that you expect the test needs
- In huge, highly distributed tests, in which you need to carefully scale load generators as you increment VUs.
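For those cases, a configuration with a small cushion might look like this (all values are illustrative):

```javascript
export const options = {
  scenarios: {
    first_time_test: {
      executor: 'constant-arrival-rate',
      rate: 100,
      timeUnit: '1s',
      duration: '5m',
      preAllocatedVUs: 100, // the expected need
      maxVUs: 120,          // small cushion; mid-test initialization
                            // costs CPU and can skew results
    },
  },
};
```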
@@ -0,0 +1,41 @@
---
title: Dropped iterations
excerpt: Explanations about how your scenario configuration or SUT performance can lead to dropped iterations
---

Sometimes, a scenario can't run the expected number of iterations.
k6 tracks this number in a counter metric, `dropped_iterations`.
The number of dropped iterations can be valuable data when you debug executors or analyze results.

Dropped iterations usually happen for one of two reasons:
- The executor configuration is insufficient.
- The SUT can't handle the configured VU arrival rate.

### Configuration-related iteration drops

Dropped iterations happen for different reasons in different types of executors.

With `shared-iterations` and `per-vu-iterations`, iterations drop if the scenario reaches its `maxDuration` before all iterations finish.
To mitigate this, you likely need to increase the value of `maxDuration`.

With `constant-arrival-rate` and `ramping-arrival-rate`, iterations drop if there are no free VUs.
**If it happens at the beginning of the test, you likely just need to allocate more VUs.**
If this happens later in the test, the dropped iterations might happen because of the SUT.

### SUT-related iteration drops

If iterations drop later in the test run, your SUT might have become slow to respond to requests, slow to process iterations, or both.

At a certain point of high latency or long iteration durations, k6 can no longer start iterations at the configured rate.
There could be a variety of causes for these dropped iterations:
- The SUT response time has become so long that k6 starts dropping scheduled iterations from the queue.
- Iteration durations have become so long that k6 needs more VUs than it has allocated to reach the target arrival rate.
- Some network errors between the generator and the SUT have caused iterations to drop.

As the causes vary, dropped iterations might mean different things.
A few dropped iterations might indicate a quick network error.
Many dropped iterations might indicate that your SUT has completely stopped responding.

When you design your test, consider what an acceptable rate of dropped iterations is (the _error budget_).
To assert that the SUT responds within this error budget, you can use the `dropped_iterations` metric in a [Threshold](/using-k6/thresholds).
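For example, a threshold that enforces such an error budget might look like this (the budget of 10 dropped iterations is an illustrative value):

```javascript
export const options = {
  thresholds: {
    // fail the test if more than 10 iterations are dropped in total
    dropped_iterations: ['count<=10'],
  },
  // ...plus an arrival-rate scenario that generates the load
};
```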