Make iterations a per-VU parameter? #381
Addition: it seems the current behaviour is unintuitive to more than one person: @deepak also got bitten by this just now. The docs currently describe this wrong, so if we still decide to keep the current behaviour, we need to update them. @liclac @robingustafsson any opinions on this?
Where? Could you fix this? I already changed it on https://docs.k6.io/docs/options
I am for updating the docs.
The docs describe the old behaviour, which was changed because it got hella confusing around stages or any kind of scaling. For example, how many iterations will this produce?

```javascript
export let options = {
  iterations: 10,
  stages: [
    {duration: "10s", target: 100},
    {duration: "5s"},
    {duration: "10s", target: 0},
  ],
};
```

With the current behaviour, the answer is 10, on the assumption that 10 iterations fit within the test's 25s duration. With the old behaviour, you'd be gradually spawning VUs that would all perform 10 iterations, but only if they could pull all of those off before they were killed (the ramp down would look very odd). The final iteration count would be impossible to predict.

But there's another reason why this was changed: the old scheduling code was an obtuse mess, and a test like the above actually at one point triggered a hang in it. Each VU had to do internal accounting, and exit conditions and progress reports were hacky because of it.

The new code uses a system where a VU reads from a channel, runs an iteration once it gets an "iteration credit" assigned to it, and writes the resulting samples back to another channel (whether it's successful or failed, completed or cancelled). The 1:1 relationship between these two channels means that the scheduler can simply track the numbers of "outstanding" and "completed" iterations. It can avoid even starting excessive iterations, it can wait for outstanding iterations at the end of a test, it can quickly report both of these numbers (e.g. to the CLI for progress), pausing can be done by simply not writing anything more to the "start" channel, and ideas like rate limiting of VUs ("arrivals") can be implemented in the same way. And the algorithm is beautifully simple.

Yes, we could've reimplemented the same behaviour, but it'd have complicated the code significantly for seemingly no real gain (going by how I got no objections when I asked if I could make this breaking change), and implementing "arrivals" as they were discussed back then would require essentially doing something like this on top of that, complicating it even further. (As a reminder, the old Engine code that handled this was in the range of 1000-1500 lines of code.)
The current TL;DR: We could do that, but it feels like it'd just complicate our code by roughly a heckton for something we're only doing "because everyone else does it", which is not (alone) an argument by which good design decisions are made.
Apologies if this is the wrong place to post this, but I have also been thoroughly confused by this & can't find any more appropriate page discussing how you have each VU run once only. With 100 VUs,
@Aqueum, currently iterations (i.e. executions of the default function) are shared between all of the VUs.

What I described above is the current k6 behavior, but as you've seen, it can be somewhat confusing, and as described in this issue and in discussions in other GitHub issues and in Slack, it's not always ideal. So we're currently working on extending k6's test configuration and execution to also support iterations per VU (so in that mode, 5 VUs and 10 iterations will mean 50 iterations in total), among other improvements like the arrival-rate based execution. But that functionality is still a few weeks away from being available; there will be more information about it at a later time.

Also, GitHub issues, especially closed ones, aren't the most appropriate place for such questions. StackOverflow, the Slack chat or our new community Discourse forum are much better places to ask these types of questions.
Just throwing in my 2c agreeing with the other comments. Thanks for the detailed explanation, but this behavior is quite counter-intuitive.
This is a fine mental model, but it doesn't really align well with the terms "virtual user" and "iterations". In my scenario, I'd like to assign each virtual user a fixed set of attributes that stress the system differently (just like real users), and then test my system under load with X simultaneous virtual users each running the same number of iterations. As it stands now, the "slow" VUs get far fewer iterations and the "fast" VUs get far more. This is not an accurate simulation of our real workload.
@sirianni, thanks for sharing your 2c, it validates our decision to support per-VU iterations (and other execution models, including mixing and matching of different ones) in the upcoming #1007. I'll actually reopen this issue, since we've reconsidered the original premise and are going to implement it, after all. btw just saw this in my previous answer 😅
... So, now, a year and a few months later, this is, finally, pretty much ready... 😊 We're still doing final testing and fixing some minor issues, and we need to do some final renames for better UX (#1425), but you should be able to test it locally by compiling that branch or using the

If I understand your use case correctly, you should be able to run that new k6 version like this:

```javascript
import http from 'k6/http';
import { sleep } from 'k6';

export let options = {
  execution: {
    per_vu_iters: {
      type: "per-vu-iterations",
      vus: 50,
      iterations: 10,
      maxDuration: "30m",
    },
  }
}

export default function () {
  if (__VU <= 20) {
    // 40% of VUs test API 1 for 10 iterations
    console.log(`VU ${__VU}, scenario 1, iteration ${__ITER}`);
    http.get("https://test-api.k6.io/public/crocodiles/");
    sleep(1);
  } else {
    // 60% of VUs test API 2 for 10 iterations, with longer iterations
    console.log(`VU ${__VU}, scenario 2, iteration ${__ITER}`);
    http.get("https://test-api.k6.io/public/crocodiles/1");
    http.get("https://test-api.k6.io/public/crocodiles/2");
    http.get("https://test-api.k6.io/public/crocodiles/3");
    sleep(2);
  }
}
```

or, to avoid that nasty

```javascript
import http from 'k6/http';
import { sleep } from 'k6';

export let options = {
  execution: {
    scenario1: {
      type: "per-vu-iterations",
      vus: 20,
      iterations: 10,
      maxDuration: "30m",
      exec: "scenario1",
    },
    scenario2: {
      type: "per-vu-iterations",
      vus: 30,
      iterations: 10,
      maxDuration: "30m",
      exec: "scenario2",
    },
  }
}

export function scenario1() {
  console.log(`VU ${__VU}, scenario 1, iteration ${__ITER}`);
  http.get("https://test-api.k6.io/public/crocodiles/");
  sleep(1);
}

export function scenario2() {
  console.log(`VU ${__VU}, scenario 2, iteration ${__ITER}`);
  http.get("https://test-api.k6.io/public/crocodiles/1");
  http.get("https://test-api.k6.io/public/crocodiles/2");
  http.get("https://test-api.k6.io/public/crocodiles/3");
  sleep(2);
}
```

(for posterity, if someone stumbles on this in the future, keep in mind that the script above won't work in k6 v0.27.0, since
Haha, @na-- is faster than I am :) I wrote the below but he managed to post a reply before me.

This issue is one I've been concerned about earlier, and I think it was wrong of @liclac to close it, although I probably agree with her decision not to implement it in 2017, given that we had much less time to spend on k6 development back then.

@sirianni I think this might have been a case where we chose a UI solution that fit the architecture at the time, rather than the solution which would have been best for the user. If a behaviour is unintuitive, something isn't right about the design, period. However, I also think @liclac may have been right not to fix this back in 2017, as she was more or less the only person doing k6 development back then. If it was very tricky to fix, due to architectural decisions, it was right to wait. The issue should just not have been closed, IMO, because it was still an issue that needed fixing.

The problem with stages seems easy enough to solve by moving iterations into each stage, in case stages are used (making a global

Like @na-- writes above, I think this problem will be fixed with the new executors, although it's been somewhat more than "a few weeks" since he wrote that, I guess, so we'll see when it happens :)
Thanks for the quick responses and code snippets! 💥 My use case is pretty simple.

```javascript
import http from 'k6/http';
import { sleep } from 'k6';

// Array of users which each have differing data on the backend that may stress the system in different ways
// For example, if I'm testing a TODO list app, some users have 5 lists, some users have 1000 lists
const users = [ 'alice', 'bob', 'carol' ];

export const options = {
  vus: users.length
};

export default function() {
  const user = users[(__VU - 1) % users.length];
  http.get(`https://my.app/users/${user}/lists`);
  sleep(1);
}
```

If I run this with

I'd love for each user to get

I'll give the
I just saw the question in #support from @paulp about the fact that his/her execution with 10 VUs and 10 iterations created only 10 requests, when the expectation was 100 requests, and @ppcano explaining that iterations is the total for the whole test.

I think this is unintuitive; I would expect iterations to behave exactly like @paulp did, i.e. to control how many times each VU executes the script. Practically all other tools that I've used work this way, but also, I think the word "iteration" (which means "repetition") implies that the exact same thing happens over and over again a certain number of times. If we have multiple threads executing with their own unique states, then thread B is not repeating what thread A just did - thread B is doing something new. So I think this behaviour should be changed. I didn't in fact know k6 behaved like this.

What do you think?