
Make iterations a per-VU parameter? #381

Closed
ragnarlonn opened this issue Nov 27, 2017 · 10 comments · Fixed by #1007

ragnarlonn commented Nov 27, 2017

I just saw the question in #support from @paulp about the fact that their execution with 10 VUs and 10 iterations created only 10 requests when the expectation was 100, and @ppcano explaining that iterations is the total for the whole test.

I think this is unintuitive; I would expect iterations to behave exactly as @paulp expected: i.e. to control how many times each VU executes the script. Practically all other tools that I've used work this way, but also, I think the word "iteration" (which means "repetition") implies that the exact same thing happens over and over again a certain number of times. If we have multiple threads executing with their own unique states, then thread B is not repeating what thread A just did - thread B is doing something new.

So I think this behaviour should be changed. I didn't in fact know k6 behaved like this.

What do you think?

ragnarlonn commented Nov 28, 2017

Addition: it seems the current behaviour is unintuitive to more than one person - @deepak also got bitten by this just now.

The docs currently describe this incorrectly, so even if we decide to keep the current behaviour, we need to update the docs.

@liclac @robingustafsson any opinions on this?

ppcano commented Nov 28, 2017

@ragnarlonn

The docs currently describe this incorrectly, so even if we decide to keep the current behaviour, we need to update the docs.

Where? Could you fix it? I have already changed it on https://docs.k6.io/docs/options

ppcano commented Nov 28, 2017

I am for updating the docs now and considering this issue later.

liclac commented Nov 29, 2017

The docs describe the old behaviour, which was changed because it got hella confusing around stages or any kind of scaling - for example, how many iterations will this produce?

export let options = {
  iterations: 10,
  stages: [
    {duration: "10s", target: 100},
    {duration: "5s"},
    {duration: "10s", target: 0},
  ],
};

With the current behaviour, the answer is: 10, on the assumption that 10 iterations fit within the test's 25s duration.

With the old behaviour, you'd be gradually spawning VUs that would all perform 10 iterations, but only if they could pull all of those off before being killed (the ramp-down would look very odd). The final iteration count would be impossible to predict.

But there's another reason why this was changed - the old scheduling code was an obtuse mess, and a test like the above actually at one point triggered a hang in it. Each VU had to do internal accounting, and exit conditions and progress reports were hacky because of it.

The new code uses a system where a VU reads from a channel, runs an iteration once it gets an "iteration credit" assigned to it, and writes the resulting samples back to another channel (whether it's successful or failed, completed or cancelled).

The 1:1 relationship between these two channels means the scheduler can simply track the number of "outstanding" and "completed" iterations: it can avoid even starting excessive iterations, wait for outstanding iterations at the end of a test, and quickly report both of these numbers (eg. to the CLI for progress). Pausing can be done by simply not writing anything more to the "start" channel, and ideas like rate limiting of VUs ("arrivals") can be implemented in the same way. And the algorithm is beautifully simple.
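
For illustration, here's a heavily simplified Go sketch of that credit/result scheme - this is not the actual k6 scheduler code, just the shape of the idea, with all names invented:

package main

import (
    "fmt"
    "sync"
)

func main() {
    const vus = 3
    const totalIters = 10

    // One "iteration credit" per planned iteration; one result per started iteration.
    credits := make(chan struct{})
    results := make(chan int)

    var wg sync.WaitGroup
    for id := 1; id <= vus; id++ {
        wg.Add(1)
        go func(id int) {
            defer wg.Done()
            // A VU only runs an iteration once it receives a credit...
            for range credits {
                // ...and always reports back exactly one result, successful or not.
                results <- id
            }
        }(id)
    }

    // The scheduler hands out exactly totalIters credits, so excessive
    // iterations are never even started; pausing would simply mean not
    // sending any more credits for a while.
    go func() {
        for i := 0; i < totalIters; i++ {
            credits <- struct{}{}
        }
        close(credits)
    }()

    // Because credits and results are 1:1, counting completed results is all
    // that's needed to track progress and to know when the test is done.
    for done := 0; done < totalIters; done++ {
        <-results
    }
    wg.Wait()
    fmt.Println("all", totalIters, "iterations completed")
}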

Yes, we could've reimplemented the same behaviour, but it'd have complicated the code significantly for seemingly no real gain (going by how I got no objections when I asked if I could make this breaking change), and implementing "arrivals" as they were discussed back then would require essentially doing something like this on top of that, complicating it even further.

(As a reminder, the old Engine code that handled this was in the range of 1000-1500 lines of code. The current core/local/local.go is 492 lines, and significantly more verbose and commented.)


TL;DR: We could do that, but it feels like it'd just complicate our code by roughly a heckton for something we're only doing "because everyone else does it", which is not (alone) an argument by which good design decisions are made.

@liclac liclac closed this as completed Dec 10, 2017

Aqueum commented Mar 6, 2019

Apologies if this is the wrong place to post this, but I have also been thoroughly confused by this and can't find a more appropriate page discussing how to have each VU run only once.

With 100 VUs, iterations: 1 only really runs 1 VU, and iterations: 100 doesn't run all 100 VUs - it iterates exactly as if the iterations line were omitted.

na-- commented Mar 6, 2019

@Aqueum, currently iterations (i.e. executions of the default function) are shared among all VUs (i.e. virtual users - independent JavaScript runtimes that execute your test script). Think of the iterations as a pile of work - when each VU starts, it takes a single piece of that pile and starts executing it. Once a VU finishes executing an iteration, it checks if there are any iterations left on the "pile", and if there are, it starts a new one. If you run k6 with --iterations 100, by the end of the test run the default function will have been executed exactly 100 times, regardless of how many --vus there are. More VUs just means that those 100 iterations will get done faster. And if you run k6 with --iterations 100 --vus 100, each VU will likely execute one iteration, unless the iterations are very fast. You can find out more details here.
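
To make that concrete, here is a minimal script using the current shared-iterations behavior (equivalent to running k6 run --vus 10 --iterations 100 script.js; the URL is just the public test API used elsewhere in this thread):

import http from 'k6/http';

// Shared iterations, as described above: the default function runs exactly
// 100 times in total, split across the 10 VUs on a first-come-first-served
// basis (so roughly, but not necessarily exactly, 10 iterations per VU).
export let options = {
    vus: 10,
    iterations: 100,
};

export default function () {
    http.get('https://test-api.k6.io/public/crocodiles/');
}

Running it with more VUs only changes how quickly those 100 iterations get used up, not the total count.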

What I described above is the current k6 behavior, but as you've seen, it can be somewhat confusing, and as described in this issue and in discussions in other GitHub issues and in Slack, it's not always ideal. So we're currently working on extending k6's test configuration and execution to also support iterations per VU (so in that mode, 5 VUs and 10 iterations will mean 50 iterations in total), among other improvements like the arrival-rate based execution. But that functionality is still a few weeks away from being available; there will be more information about it at a later time.

Also, GitHub issues, especially closed ones, aren't the most appropriate place for such questions. StackOverflow, the Slack chat or our new community Discourse forum are much better places to ask these types of questions.

@na-- na-- mentioned this issue May 15, 2019
@sirianni

Just throwing in my 2c agreeing with the other comments. Thanks for the detailed explanation, but this behavior is quite counter-intuitive.

Think of the iterations as a pile of work - when each VU starts, it takes a single piece of that pile and starts executing it.

This is a fine mental model, but it doesn't really align well with the terms "virtual user" and "iterations". In my scenario, I'd like to assign each virtual user a fixed set of attributes that stress the system differently (just like real users) and then test my system under load with X simultaneous virtual users, each running the same number of iterations. As it stands now, the "slow" VUs get far fewer iterations and the "fast" VUs get far more. This is not an accurate simulation of our real workload.

na-- commented May 29, 2020

@sirianni, thanks for sharing your 2c - it validates our decision to support per-VU iterations (and other execution models, including mixing and matching of different ones) in the upcoming #1007. I'll actually reopen this issue, since we've reconsidered the original premise and are going to implement it after all.

btw just saw this in my previous answer 😅

But that functionality is still a few weeks away from being available; there will be more information about it at a later time.

... So, now, a year and a few months later, this is, finally, pretty much ready... 😊 We're still doing final testing and fixing some minor issues, and we need to do some final renames for better UX (#1425), but you should be able to test it locally by compiling that branch or using the new-executors docker image tag.

If I understand your use case correctly, you should be able to run that new k6 version like this:

import http from 'k6/http';
import { sleep } from 'k6';

export let options = {
    execution: {
        per_vu_iters: {
            type: "per-vu-iterations",
            vus: 50,
            iterations: 10,
            maxDuration: "30m",
        },
    }
}

export default function () {
    if (__VU <= 20) {
        // 40% of VUs test API 1 for 10 iterations
        console.log(`VU ${__VU}, scenario 1, iteration ${__ITER}`);
        http.get("https://test-api.k6.io/public/crocodiles/");
        sleep(1);
    } else {
        // 60% of VUs test API 2 for 10 iterations, with longer iterations
        console.log(`VU ${__VU}, scenario 2, iteration ${__ITER}`);
        http.get("https://test-api.k6.io/public/crocodiles/1");
        http.get("https://test-api.k6.io/public/crocodiles/2");
        http.get("https://test-api.k6.io/public/crocodiles/3");
        sleep(2);
    }
}

or, to avoid that nasty if and have more sensible progress bars, something like this:

import http from 'k6/http';
import { sleep } from 'k6';

export let options = {
    execution: {
        scenario1: {
            type: "per-vu-iterations",
            vus: 20,
            iterations: 10,
            maxDuration: "30m",
            exec: "scenario1",
        },
        scenario2: {
            type: "per-vu-iterations",
            vus: 30,
            iterations: 10,
            maxDuration: "30m",
            exec: "scenario2",
        },
    }
}

export function scenario1() {
    console.log(`VU ${__VU}, scenario 1, iteration ${__ITER}`);
    http.get("https://test-api.k6.io/public/crocodiles/");
    sleep(1);
}

export function scenario2() {
    console.log(`VU ${__VU}, scenario 2, iteration ${__ITER}`);
    http.get("https://test-api.k6.io/public/crocodiles/1");
    http.get("https://test-api.k6.io/public/crocodiles/2");
    http.get("https://test-api.k6.io/public/crocodiles/3");
    sleep(2);
}

(for posterity, if someone stumbles on this in the future, keep in mind that the script above won't work in k6 v0.27.0, since execution will be renamed to scenarios soon (#1425), to better suit use cases like this one...)

@na-- na-- reopened this May 29, 2020
@ragnarlonn

Haha, @na-- is faster than I am :) I wrote the text below, but he managed to post a reply before me. This issue is one I've been concerned about before, and I think it was wrong of @liclac to close it, although I probably agree with her decision not to implement it in 2017, given that we had much less time to spend on k6 development back then.


@sirianni I think this might have been a case where we chose a UI solution that fit the architecture at the time, rather than the solution which would have been best for the user. If a behaviour is unintuitive, something isn't right about the design, period. However, I also think @liclac may have been right not to fix this back in 2017, as she was more or less the only person doing k6 development back then. If it was very tricky to fix due to architectural decisions, it was right to wait. The issue should just not have been closed, IMO, because it was still an issue that needed fixing. The problem with stages seems easy enough to solve by moving iterations into each stage when stages are used (making a global iterations parameter either ignored in that case, or applicable to the whole batch of stages).
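
Just to illustrate that last idea, per-stage iterations could hypothetically look something like this (this is made-up, unsupported syntax, purely a sketch of the proposal):

// Hypothetical, unsupported syntax - only a sketch of the "iterations per stage" idea above.
export let options = {
    stages: [
        {duration: "10s", target: 100, iterations: 10}, // hypothetical per-stage iteration cap
        {duration: "5s", iterations: 5},                 // hypothetical per-stage iteration cap
        {duration: "10s", target: 0},
    ],
};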

Like @na-- writes above, I think this problem will be fixed with the new executors, although it's been somewhat more than "a few weeks" since he wrote that, I guess, so we'll see when it happens :)

@na-- na-- added this to the v0.27.0 milestone May 29, 2020

sirianni commented May 29, 2020

Thanks for the quick responses and code snippets! 💥

My use case is pretty simple.

// Array of users which each have differing data on the backend that may stress the system in different ways
// For example, if I'm testing a TODO list app, some users have 5 lists, some users have 1000 lists
const users = [ 'alice', 'bob', 'carol' ];

export const options = {
  vus: users.length
};

export default function() {
  const user = users[(__VU - 1) % users.length];
  http.get(`https://my.app/users/${user}/lists`);
  sleep(1);
}

If I run this with

$ k6 run --iterations 9 ./myscript.js

I'd love for each user to get 9 iterations of the main loop in their own thread.
It would be counter-intuitive, but workable, if each user got 3 iterations of the main loop (since iterations represents the total).
Currently, each user gets however many iterations they can race through, depending on the latency of their individual HTTP requests.
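
For reference, combining the script above with the per-vu-iterations executor from @na--'s examples earlier in this thread would presumably look something like this (an untested sketch, subject to the same execution → scenarios rename caveat mentioned above):

import http from 'k6/http';
import { sleep } from 'k6';

const users = ['alice', 'bob', 'carol'];

export const options = {
    execution: {
        per_user: {
            type: "per-vu-iterations",
            vus: users.length, // one VU per user
            iterations: 9,     // each VU (i.e. each user) runs 9 iterations
            maxDuration: "30m",
        },
    },
};

export default function () {
    const user = users[(__VU - 1) % users.length];
    http.get(`https://my.app/users/${user}/lists`);
    sleep(1);
}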

I'll give the new-executors branch a shot. Thanks!
