Misc scenario edits #936

Merged 17 commits on Dec 19, 2022
---
title: 'Advanced Examples'
excerpt: 'Advanced Examples using the k6 Scenario API - Using multiple scenarios, different environment variables and tags per scenario.'
---

## Using multiple scenarios
You can use multiple scenarios in one script, and these scenarios can be run in sequence or in parallel.
Some ways that you can combine scenarios include the following:
- Have different start times to sequence workloads
- Add per-scenario tags and environment variables
- Make scenario-specific thresholds
- Export multiple scenarios as functions in [VU code](/using-k6/test-lifecycle)


## Combine scenarios

With the `startTime` property, you can configure your script to start some scenarios later than others. If you combine this with the executor's duration options, you can sequence your scenarios (this is easiest with executors that have set durations, such as the arrival-rate executors).

This configuration first executes a scenario where 50 VUs try to run as many iterations as possible for 30 seconds. It then runs the next scenario, which executes 100 iterations per VU for a maximum duration of 1 minute.

Note the use of `startTime`, and different `exec` functions for each scenario.
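As a minimal sketch of this kind of sequencing (the scenario names, exec function names, and the second scenario's VU count are illustrative assumptions), the configuration shape looks like this:

```javascript
// Sketch of a k6 `options` object that sequences two scenarios.
// In a real k6 script this would be `export const options = {...}`,
// with matching `contacts` and `news` exec functions defined alongside it.
const options = {
  scenarios: {
    contacts: {
      executor: 'constant-vus', // 50 VUs loop for a fixed 30s window
      exec: 'contacts',
      vus: 50,
      duration: '30s',
    },
    news: {
      executor: 'per-vu-iterations', // each VU runs a fixed iteration count
      exec: 'news',
      vus: 50, // assumption: VU count for the second scenario
      iterations: 100,
      startTime: '30s', // start only after the first scenario's 30s window
      maxDuration: '1m',
    },
  },
};

console.log(Object.keys(options.scenarios).join(',')); // → contacts,news
```

Because `constant-vus` runs for exactly its `duration`, setting the second scenario's `startTime` to the same `30s` produces a clean handoff.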


## Use different environment variables and tags per scenario

The previous example sets tags on individual HTTP request metrics.
But you can also set tags per scenario, which applies them to other
[taggable](/using-k6/tags-and-groups#tags) objects as well.
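As a rough sketch of the shape (the scenario name, tag key, and `MYVAR` variable are illustrative assumptions), per-scenario `tags` and `env` sit directly on the scenario object:

```javascript
// Sketch: attach tags and environment variables to one scenario.
// In a k6 script this object would be `export const options`.
const options = {
  scenarios: {
    contacts: {
      executor: 'constant-vus',
      exec: 'contacts',
      vus: 10,
      duration: '30s',
      tags: { my_custom_tag: 'contacts' }, // applied to all taggable objects in this scenario
      env: { MYVAR: 'contacts' },          // readable as __ENV.MYVAR inside the scenario
    },
  },
};
```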

<CodeGroup labels={[ "multiple-scenarios-env-tags.js" ]} lineNumbers={[true]}>

</CodeGroup>

<Blockquote mod="note" title="">

By default, k6 applies a `scenario` tag to all metrics in each scenario (the value is the scenario name).
You can combine these tags with thresholds, or use them to simplify results filtering.

To disable scenario tags, use the [`--system-tags` option](/using-k6/options#system-tags).

</Blockquote>

## Run multiple scenario functions with different thresholds

You can also set different thresholds for different scenario functions.
To do this:
1. Set scenario-specific tags.
1. Set thresholds for these tags.
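The two steps above can be sketched like this (the tag name and threshold levels are illustrative assumptions):

```javascript
// Sketch: step 1 tags each scenario; step 2 keys thresholds on those tags.
const options = {
  scenarios: {
    contacts: {
      executor: 'constant-vus',
      exec: 'contacts',
      vus: 10,
      duration: '30s',
      tags: { my_custom_tag: 'contacts' }, // step 1: scenario-specific tag
    },
    news: {
      executor: 'constant-vus',
      exec: 'news',
      vus: 10,
      duration: '30s',
      tags: { my_custom_tag: 'news' },
    },
  },
  thresholds: {
    // step 2: thresholds scoped to each tag value
    'http_req_duration{my_custom_tag:contacts}': ['p(95)<300'],
    'http_req_duration{my_custom_tag:news}': ['p(95)<500'],
  },
};
```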

This test has 3 scenarios, each with different `exec` functions, tags and environment variables, and thresholds:

<CodeGroup labels={[ "multiple-scenarios-complex.js" ]} lineNumbers={[true]}>

---
title: 'Arrival rate'
excerpt: 'In k6, we have implemented this open model with our two arrival rate executors: constant-arrival-rate and ramping-arrival-rate.'
---

Different k6 executors have different ways of scheduling VUs.
Some executors use the _closed model_, while the arrival rate executors use the _open model_.

In short, in the closed model, VU iterations start only when the last iteration finishes.
In the open model, on the other hand, VUs arrive independently of iteration completion.
Different models suit different test aims.

## Closed model

In a closed model, the execution time of each iteration dictates the
number of iterations executed in your test.
The next iteration doesn't start until the previous one finishes.

Prior to v0.27.0, k6 only supported a closed model for the simulation of new VU arrivals.
In this closed model, a new VU iteration only starts when a VU's previous iteration has completed its execution.
Thus, in a closed model, the start or arrival rate of
new VU iterations is tightly coupled with the iteration duration (that is, time from start
to finish of the VU's `exec` function, by default the `export default function`):

```
closed_model ✓ [======================================] 1 VUs  1m0s

```
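The run above corresponds to a single closed-model scenario along these lines (a sketch; the exact executor settings are assumed from the `1 VUs 1m0s` summary):

```javascript
// Sketch: one closed-model scenario; the next iteration starts
// only after the previous one finishes.
const options = {
  scenarios: {
    closed_model: {
      executor: 'constant-vus',
      vus: 1,
      duration: '1m',
    },
  },
};
```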

### Drawbacks of using the closed model

When the duration of the VU iteration is tightly coupled to the start of new VU iterations,
the target system's response time can influence the throughput of the test.
Slower response times mean longer iterations and a lower arrival rate of new iterations, and vice versa for faster response times.
In some testing literature, this problem is known as _coordinated omission._

In other words, when the target system is stressed and starts to respond more
slowly, a closed model load test will wait, resulting in increased
iteration durations and a tapering off of the arrival rate of new VU iterations.

This effect is not ideal when the goal is to simulate a certain arrival rate of new VUs,
or more generally throughput (e.g. requests per second).
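As a back-of-the-envelope illustration (the numbers are made up), with a fixed pool of VUs the achievable iteration rate is bounded by the VU count divided by the iteration duration, so a slower target directly lowers throughput:

```javascript
// Upper bound on closed-model throughput: VUs / iteration duration.
function closedModelRate(vus, iterationSeconds) {
  return vus / iterationSeconds;
}

const fast = closedModelRate(50, 1); // 1s iterations → up to 50 iters/s
const slow = closedModelRate(50, 2); // target slows to 2s → rate halves

console.log(fast, slow); // → 50 25
```

The arrival-rate executors described below avoid this bound by scheduling iteration starts independently of iteration completions.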

## Open model

Compared to the closed model, the open model decouples VU iterations from
the iteration duration.
The response times of the target system no longer
influence the load on the target system.

To fix this problem of coordination, you can use an open model,
which decouples the start of new VU iterations from the iteration duration.
This reduces the influence of the target system's response time.

![Arrival rate closed/open models](../images/Scenarios/arrival-rate-open-closed-model.png)

k6 implements the open model with two _arrival rate_ executors:
[constant-arrival-rate](/using-k6/scenarios/executors/constant-arrival-rate) and [ramping-arrival-rate](/using-k6/scenarios/executors/ramping-arrival-rate):

<CodeGroup labels={[ "open-model.js" ]} lineNumbers={[true]}>