808/about executors #974

Merged
merged 29 commits on Jan 16, 2023
Changes from 21 commits

Commits (29)
bb72e32
Scenarios, make new About section to explain
MattDodsonEnglish Jan 5, 2023
f970d1a
Scenarios, move other explanatory topics to about
MattDodsonEnglish Jan 5, 2023
702149a
Scenarios, rename arrival rate as open vs closed
MattDodsonEnglish Jan 5, 2023
d46d5bb
Scenarios, clean up the list pages
MattDodsonEnglish Jan 5, 2023
6bddcc5
punctuation fixes
MattDodsonEnglish Jan 5, 2023
94aa2d9
Scenarios, clean up new explanation texts
MattDodsonEnglish Jan 5, 2023
99aacd9
Formatting fixes to ul
MattDodsonEnglish Jan 5, 2023
c92b6dd
List fixes
MattDodsonEnglish Jan 5, 2023
0c12d52
Fix language in iterations dropped doc
MattDodsonEnglish Jan 5, 2023
8596a32
Better explain how duration affects allocation
MattDodsonEnglish Jan 10, 2023
ed10eb2
Add note arrival rate jitter
MattDodsonEnglish Jan 10, 2023
cb02cef
Merge branch 'main' into 808/about-executors
MattDodsonEnglish Jan 11, 2023
2becf58
Proofreading
MattDodsonEnglish Jan 11, 2023
b547cbc
typo fix.
MattDodsonEnglish Jan 12, 2023
020260c
Fix admonitions and simplify sentence.
MattDodsonEnglish Jan 12, 2023
b068f6f
Update src/data/markdown/translated-guides/en/02 Using k6/14 Scenario…
MattDodsonEnglish Jan 12, 2023
98a361d
Update src/data/markdown/translated-guides/en/02 Using k6/14 Scenario…
MattDodsonEnglish Jan 12, 2023
872b78f
Update src/data/markdown/translated-guides/en/02 Using k6/14 Scenario…
MattDodsonEnglish Jan 12, 2023
c658116
Update src/data/markdown/translated-guides/en/02 Using k6/14 Scenario…
MattDodsonEnglish Jan 12, 2023
4b7b088
Fix jitter admonition
MattDodsonEnglish Jan 12, 2023
977c45d
redo introductory sections
MattDodsonEnglish Jan 12, 2023
f580420
Explain why maxVUs counts againts rate.
MattDodsonEnglish Jan 13, 2023
5fa0a38
Apply suggestions from code review
MattDodsonEnglish Jan 13, 2023
6c0e924
Update src/data/markdown/translated-guides/en/02 Using k6/14 Scenario…
MattDodsonEnglish Jan 13, 2023
11e7c43
Last suggestions from Neds review:
MattDodsonEnglish Jan 13, 2023
19784d4
Move parenthetical to admonition
MattDodsonEnglish Jan 13, 2023
5c6f9f8
Change files names and add redirects
MattDodsonEnglish Jan 13, 2023
9aaf8b8
Link arrival-rate reference docs to explanation
MattDodsonEnglish Jan 13, 2023
714d037
Merge branch 'main' into 808/about-executors
MattDodsonEnglish Jan 16, 2023
@@ -56,12 +56,13 @@ export const options = {

For each k6 scenario, the VU workload is scheduled by an _executor_.
For example, executors configure:
- How long the test runs
- Whether VU traffic stays constant or changes
- Whether to model traffic by iteration number or by VU arrival rate

Your scenario object must define the `executor` property with one of the predefined executor names.
Along with the generic scenario options, each executor object has additional options specific to its workload.
For the list of the executors, refer to the [Executor guide](/using-k6/scenarios/executors/).
For the list of the executors, refer to [Executors](/using-k6/scenarios/executors/).
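
For instance, a minimal scenario definition that sets the `executor` property might look like the following sketch (the scenario name `my_scenario` and the option values are illustrative):

```javascript
export const options = {
  scenarios: {
    // "my_scenario" is an arbitrary, user-chosen scenario name
    my_scenario: {
      executor: "constant-vus", // one of the predefined executor names
      vus: 10,                  // executor-specific option
      duration: "30s",          // executor-specific option
    },
  },
};
```
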
Collaborator

@MattDodsonEnglish I think we should list here all the executors. If not, readers might move to Concepts without getting the "general" idea of the different executor options.

I suggest adding something similar to the Executors table. For example:

Contributor Author

@ppcano I deleted that list in #936. I didn't like duplicating content in this way.

I'll make a new PR to put it back in, maybe just as a summary:

You can configure executors to distribute workload according to:

  • Iterations. Either shared by the VUs, or distributed across them.
  • VUs. Either a constant number or a ramping number.
  • Iterations per second. Either constant or ramping.

Contributor Author

I'll make a new PR to put it back in

#985


## Scenario options {#options}

@@ -0,0 +1,18 @@
---
title: "About scenarios"
excerpt: High-level explanations about how your executor configuration can change the test execution and test results
---

These topics explain the essential concepts of how scenarios and their executors work.

Different scenario configurations can affect many different aspects of your system,
including the generated load, utilized resources, and emitted metrics.
If you know a bit about how scenarios work, you'll design better tests and interpret test results with more understanding.

| On this page | Read about |
|-------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------|
| [Open and closed models](/using-k6/scenarios/about-scenarios/open-vs-closed/) | Different ways k6 can schedule VUs, their effects on test results, and how k6 implements the open model in its arrival-rate executors |
| [VU allocation](/using-k6/scenarios/about-scenarios/vu-allocation/) | How k6 allocates VUs in arrival-rate executors |
| [Dropped iterations](/using-k6/scenarios/about-scenarios/dropped-iterations/) | Possible reasons k6 might drop a scheduled iteration |
| [Graceful Stop](/using-k6/scenarios/about-scenarios/graceful-stop) | A configurable period to let iterations finish or ramp down after the test has reached its scheduled duration |

@@ -1,10 +1,11 @@
---
title: 'Arrival rate'
excerpt: 'In k6, we have implemented this open model with our two arrival rate executors: constant-arrival-rate and ramping-arrival-rate.'
title: 'Open and closed models'
slug: '/using-k6/scenarios/about-scenarios/open-vs-closed/'
excerpt: 'k6 has two ways to schedule VUs, which can affect test results. k6 implements the open model in its arrival-rate executors.'
---

Different k6 executors have different ways of scheduling VUs.
Some executors use the _closed model_, while the arrival rate executors use the _open model_.
Some executors use the _closed model_, while the arrival-rate executors use the _open model_.

In short, in the closed model, VU iterations start only when the last iteration finishes.
In the open model, on the other hand, VUs arrive independently of iteration completion.
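
As a rough sketch (the scenario names and values are illustrative), the two models look like this in a scenario configuration:

```javascript
export const options = {
  scenarios: {
    // Closed model: 10 VUs loop for one minute; a VU starts a new iteration
    // only after its previous iteration finishes.
    closed_model: {
      executor: "constant-vus",
      vus: 10,
      duration: "1m",
    },
    // Open model: k6 starts 10 iterations per second for one minute,
    // regardless of how long earlier iterations take to finish.
    open_model: {
      executor: "constant-arrival-rate",
      rate: 10,
      timeUnit: "1s",
      duration: "1m",
      preAllocatedVUs: 20,
    },
  },
};
```
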
@@ -83,7 +84,7 @@ To fix this problem of coordination, you can use an open model,
which decouples the start of new VU iterations from the iteration duration.
This reduces the influence of the target system's response time.

![Arrival rate closed/open models](../images/Scenarios/arrival-rate-open-closed-model.png)
![Arrival rate closed/open models](../../images/Scenarios/arrival-rate-open-closed-model.png)

k6 implements the open model with two _arrival rate_ executors:
[constant-arrival-rate](/using-k6/scenarios/executors/constant-arrival-rate) and [ramping-arrival-rate](/using-k6/scenarios/executors/ramping-arrival-rate):
@@ -0,0 +1,102 @@
---
title: VU allocation
Member

"VU allocation" seems a bit misleading, since we are only talking about arrival-rate scenarios

Suggested change
title: VU allocation
title: Arrival-rate VU allocation

or maybe even

Suggested change
title: VU allocation
title: Arrival-rate configuration

Contributor Author

Need to think about this.
"Arrival-rate configuration" probably opens the door to more topics, like using options. That's not bad, but it may mean there's a better place to put this (not blocking for this PR). Is there any reason readers should know about non-arrival rate allocation? If so, maybe we could add it. If not, maybe it doesn't matter to document.

Not sure. It's a good point though.

Contributor Author

Now that I've thought about it, I don't want to go with "Arrival-rate configuration" because that should include Graceful stop and maybe more, which opens a whole new round of content structure. It's nice to keep info atomic.

Is VU allocation in non-arrival-rate ever important? If so, we could just add it to the doc later.

Either way, I chose these new titles, ranked by preference. You can pick one, and that's what I'll go with:

  1. VU allocation.
  2. VU pre-allocation
  3. Arrival-rate VU allocation

Member

because that should include Graceful stop and maybe more

graceful stop is not specific to arrival-rate executors and already has its own dedicated page that explains it: https://k6.io/docs/using-k6/scenarios/graceful-stop/

Is VU allocation in non-arrival-rate ever important? If so, we could just add it to the doc later.

Well, considering the configuration of non-arrival-rate executors is specified in terms of VUs, there isn't really anything complicated to explain there 😅

Again, the complexity with arrival-rate is not just how VUs are allocated, but how to balance the VUs and the desired rate and how to find the right values for the former based on the latter and on iteration duration.

Contributor Author

Again, the complexity with arrival-rate is not just how VUs are allocated, but how to balance the VUs and the desired rate and how to find the right values for the former based on the latter and on iteration duration.

Is this not encompassed in VU Pre-allocation? Basically, I'm looking for the shortest way to say the most in the most accurate way.

Member

"VU Pre-allocation" only makes sense to you because you already know it applies for arrival-rate executors. A new user won't know that fact and won't click on that menu entry at all, even if this is exactly the information they are looking for.

excerpt: How k6 allocates VUs in the open-model, arrival-rate executors
---

In arrival-rate executors, as long as k6 has VUs available, it starts iterations according to your target rate.
The ability to set iteration rate comes with a bit more configuration complexity: you must pre-allocate a sufficient number of VUs.
In other words, before the test runs, you must both:
- Configure load (as iterations per unit of time).
- Ensure that you've scheduled enough VUs.

Read on to learn about how k6 allocates VUs in the arrival-rate executors.

## Pre-allocation in arrival-rate executors

As [open-model](/using-k6/scenarios/about-scenarios/open-vs-closed/#open-model) scenarios, arrival-rate executors start iterations according to a configured rate.
For example, you can configure arrival-rate executors to start 10 iterations each second, or minute, or hour.
This behavior contrasts with closed-model scenarios, in which VUs wait for one iteration to finish before starting another.

Each iteration needs a VU to run it.
Because k6 VUs are single-threaded, like other JS runtimes, a VU can run the event loop of only a single iteration at a time.
To ensure that k6 has enough VUs available, you must pre-allocate a sufficient number.

In your arrival-rate configuration, three properties determine the iteration rate:
- `rate` determines how many iterations k6 starts.
- `timeUnit` determines the interval over which k6 evenly spreads those `rate` iteration starts (default `1s`).
Member

Having these 2 in different lines, with a separate explanation for each, is more confusing than helpful in my opinion. The current disjointed explanation is both longer, more confusing and less correct than "k6 will try to start rate iterations evenly spread across a timeUnit (default 1s) interval of time"

- `preAllocatedVUs` sets the number of VUs available to reach the target iteration rate.
In practice, determining the right number of VUs might take some trial and error,
as the necessary number depends entirely on how quickly the SUT can process iterations.

```javascript
export const options = {
  scenarios: {
    constant_load: {
      executor: "constant-arrival-rate",
      // initialize 4 VUs before the scenario starts
      preAllocatedVUs: 4,
      // start 8 iterations per minute, roughly one every 7.5 seconds
      rate: 8,
      timeUnit: "1m",
    },
  },
};
```


<Blockquote mod="attention" title="">

In cloud tests, **both `preAllocatedVUs` and `maxVUs` count against your subscription.**
Member

Maybe this can be moved in the maxVUs section? 🤔

Suggested change
In cloud tests, **both `preAllocatedVUs` and `maxVUs` count against your subscription.**
In cloud tests, **both `preAllocatedVUs` and `maxVUs` count against your subscription.** This is necessary because we must allocate sufficient resources for `maxVUs` to be able to be initialized, even if they never are.

Contributor Author

I don't want to move the whole block down, because it's in an admonition and I think it's most respectful if we alert readers about things that can affect subscription use up front. But I'll move your addition to that section.

[screenshot]

Member

I just realized this might be confusing in another way, "both preAllocatedVUs and maxVUs count against your subscription" may confuse users that preAllocatedVUs + maxVUs will be counted against their subscription, while the reality is that just the bigger number between the two (i.e. max(preAllocatedVUs, maxVUs)) will be counted.

And, given that maxVUs will always be >= preAllocatedVUs, and that if you don't specify maxVUs explicitly, it's implicitly equal to preAllocatedVUs, maxVUs is what will be counted against the subscription

Contributor Author

Maybe:

The number of VUs you allocate (including maxVUs) counts against your subscription.

Member

Another problem - you mention maxVUs here, but we haven't mentioned maxVUs anywhere before in this document... This was one of the reasons for my longer explanation that you discarded, it had a paragraph with a cohesive explanation for both preAllocatedVUs and maxVUs

Contributor Author

I'm still kind of of the opinion that maxVUs shouldn't be mentioned anywhere :-). What I'll do is just make the first admonition only about preAllocatedVUs, and then a second admonition in the maxVUs section. It's not very elegant, but it doesn't cram so much information together.

Member

Again, if I could rewrite history, maxVUs probably would not exist 😅 But now that it exists, we should try to make it as clear as possible how it functions and why it might not be a good idea to use it.

An admonition only for preAllocatedVUs doesn't make sense, this is how cloud subscriptions normally work, i.e. no admonition needed just for it.

And if we want to tuck maxVUs only at the end of the document, somewhat out of sight (which I don't necessarily mind), then we should only have an admonition there.

Contributor Author (MattDodsonEnglish, Jan 13, 2023)

Here's how I've handled it: two admonitions. No points for subtlety, but I don't think anyone can say we're being sneaky.

EDIT: These are at the top and bottom of the page. In the GitHub UI it looks joined together.

[screenshot: admonition near the top of the page]

[screenshot: admonition near the bottom of the page]

Member (na--, Jan 13, 2023)

Nobody will read the text you have highlighted in the middle of that paragraph 😅 People will just see the two admonitions and get the very wrong impression that in the cloud we will charge them for preAllocatedVUs + maxVUs

Contributor Author

Point taken. With 19784d4, it looks like this:

[screenshot]


When planning a test, consider doing a trial initialization on a local machine to ensure you're allocating VUs efficiently.

</Blockquote>

## How k6 uses allocated VUs

Before an arrival-rate scenario starts, k6 first initializes the number of VUs set by `preAllocatedVUs`.
When the test runs,
the number of available pre-allocated VUs determines how many iterations k6 can start.
As k6 tries to reach the target iteration rate,
one of two things can happen:

| If the executor      | Then...                                                                                                                                |
|----------------------|----------------------------------------------------------------------------------------------------------------------------------------|
| has enough VUs       | The extra VUs are "idle," ready to be used when needed.                                                                                 |
| has insufficient VUs | k6 emits a [`dropped_iterations` metric](/using-k6/scenarios/about-scenarios/dropped-iterations) for each iteration that it can't run. |

## Iteration duration affects the necessary allocation

The necessary allocation depends on the iteration duration:
Longer durations need more VUs.

In a perfect world, you could estimate the number of pre-allocated VUs with this formula:

```
preAllocatedVUs = [median_iteration_duration * rate] + constant_for_variance
```
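
As a rough worked example (the numbers are illustrative): with a median iteration duration of 3 seconds, a rate of 10 iterations per second, and a small cushion of 5 VUs, the estimate would be:

```
preAllocatedVUs = [3s * 10 iterations/s] + 5 = 35
```
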

In the real world, if you know _exactly_ how long an iteration takes, you likely don't need to run a test.
What's more, as the test goes on, iteration duration likely increases.
If response times slow so much that k6 lacks the VUs to start iterations at the desired rate,
the allocation might be insufficient and k6 will drop iterations.

To determine your strategy, you can run tests locally and gradually add more pre-allocated VUs.
As dropped iterations can also indicate that the system performance is degrading, this early experimentation can provide useful data on its own.

## You probably don't need `maxVUs`

The arrival-rate executors also have a `maxVUs` property.
If you set it, k6 runs in this sequence:
1. Pre-allocate the `preAllocatedVUs`.
1. Run the test, trying to reach the target iteration rate.
1. If the target rate exceeds what the available VUs can handle, allocate another VU.
1. If the target rate still exceeds the available VUs, continue allocating VUs until reaching the number set by `maxVUs`.

Though it seems convenient, you should avoid using `maxVUs` in most cases.
Allocating VUs has CPU and memory costs, and allocating VUs as the test runs **can overload the load generator and skew results**.
If you're running in k6 Cloud, `maxVUs` counts against your subscription.
In almost all cases, the best thing to do is to pre-allocate the number of VUs you need beforehand.

Some cases where it might make sense to use `maxVUs` include:
- To determine the necessary allocation in first-time tests
- To add a little "cushion" to the pre-allocated VUs that you expect the test needs (as in the sketch after this list)
- In huge, highly distributed tests, in which you need to carefully scale load generators as you increment VUs
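
For example, a configuration that uses `maxVUs` only as a small cushion on top of a realistic pre-allocation might look like this sketch (the values are illustrative):

```javascript
export const options = {
  scenarios: {
    cushioned_load: {
      executor: "constant-arrival-rate",
      rate: 50,
      timeUnit: "1s",
      duration: "10m",
      // enough VUs for the expected iteration duration...
      preAllocatedVUs: 100,
      // ...plus a small cushion in case iterations run longer than expected
      maxVUs: 120,
    },
  },
};
```
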
@@ -0,0 +1,40 @@
---
title: Dropped iterations
excerpt: Explanations about how your scenario configuration or SUT performance can lead to dropped iterations
---

Sometimes, a scenario can't run the expected number of iterations.
k6 tracks the number of iterations it couldn't run in a counter metric, `dropped_iterations`.
The number of dropped iterations can be valuable data when you debug executors or analyze results.

Dropped iterations usually happen for one of two reasons:
- The executor configuration is insufficient.
- The SUT can't handle the configured VU arrival rate.

### Configuration-related iteration drops

Dropped iterations happen for different reasons in different types of executors.

With `shared-iterations` and `per-vu-iterations`, iterations drop if the scenario reaches its `maxDuration` before all iterations finish.
To mitigate this, you likely need to increase the value of `maxDuration`.
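
For instance, in a sketch like the following (the values are illustrative), any of the 200 iterations that haven't finished when `maxDuration` elapses are dropped:

```javascript
export const options = {
  scenarios: {
    fixed_workload: {
      executor: "shared-iterations",
      vus: 10,
      iterations: 200,
      // if all 200 iterations don't finish within 30 seconds,
      // the remaining ones are counted as dropped_iterations
      maxDuration: "30s",
    },
  },
};
```
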

With `constant-arrival-rate` and `ramping-arrival-rate`, iterations drop if there are no free VUs.
**If it happens at the beginning of the test, you likely just need to allocate more VUs.**
If this happens later in the test, the dropped iterations might happen because SUT performance is degrading and iterations are taking longer to finish.

### SUT-related iteration drops

At a certain point of high latency or long iteration durations, k6 will no longer have free VUs to start iterations at the configured rate.
As a result, the executor will drop iterations.

The reasons for these dropped iterations vary:
- The SUT response time has become so long that k6 starts dropping scheduled iterations from the queue.
- The SUT iteration duration has become so long that k6 needs more VUs than it has allocated to reach the target arrival rate.

As the causes vary, dropped iterations might mean different things.
A few dropped iterations might indicate a quick network error.
Many dropped iterations might indicate that your SUT has completely stopped responding.

When you design your test, consider what an acceptable rate of dropped iterations is (the _error budget_).
To assert that the SUT responds within this error budget, you can use the `dropped_iterations` metric in a [Threshold](/using-k6/thresholds).
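
For example, a threshold that fails the test when more than 50 iterations are dropped might look like this (the budget of 50 is illustrative):

```javascript
export const options = {
  thresholds: {
    // fail the test if more than 50 scheduled iterations couldn't run
    dropped_iterations: ["count<=50"],
  },
};
```
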

@@ -15,6 +15,13 @@ useful for a more accurate representation of RPS, for example.

See the [arrival rate](/using-k6/scenarios/arrival-rate) section for details.

<Blockquote mod="Note" title="Iteration starts are spaced fractionally">

Iterations **do not** start at exactly the same time.
At a `rate` of `10` with a `timeUnit` of `1s`, a new iteration starts about every tenth of a second (that is, every 100ms).

</Blockquote>
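
As an illustrative sketch, the spacing described in the note corresponds to a configuration like this:

```javascript
export const options = {
  scenarios: {
    evenly_spaced: {
      executor: "constant-arrival-rate",
      rate: 10,        // 10 iteration starts...
      timeUnit: "1s",  // ...spread across each second, roughly one every 100ms
      duration: "30s",
      preAllocatedVUs: 20,
    },
  },
};
```
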

## Options

Besides the [common configuration options](/using-k6/scenarios#options),
@@ -11,6 +11,13 @@ k6 will attempt to dynamically change the number of VUs to achieve the configured

See the [arrival rate](/using-k6/scenarios/arrival-rate) section for details.

<Blockquote mod="Note" title="Iteration starts are spaced fractionally">

Iterations **do not** start at exactly the same time.
At a `rate` of `10` with a `timeUnit` of `1s`, a new iteration starts about every tenth of a second (that is, every 100ms).

</Blockquote>

## Options

Besides the [common configuration options](/using-k6/scenarios#options),