Mismatch in check counts in the end-of-test summary #1033

Closed
na-- opened this issue May 30, 2019 · 0 comments · Fixed by #1007
na-- commented May 30, 2019

As reported in this forum topic, there's a difference between the two check counts in the end-of-test metrics.

It can be somewhat easily reproduced by running the following k6 script with k6 0.24.0:

import { check, sleep } from "k6";

export let options = {
    duration: "10s",
    vus: 5,
};

export default function () {
    sleep(Math.random() * 0.1);
    check(Math.random(), {
        "is below 99%": (r) => r < 0.99,
    });
}

That will result in something like the following output:

    duration: 10s, iterations: -
         vus: 5,   max: 5

    done [==========================================================] 10s / 10s

    ✗ is below 99%
     ↳  99% — ✓ 987 / ✗ 9

    checks...............: 99.09% ✓ 984 ✗ 9  
    data_received........: 0 B    0 B/s
    data_sent............: 0 B    0 B/s
    iteration_duration...: avg=50.15ms min=124.84µs med=50.51ms max=100.23ms p(90)=91.71ms p(95)=96.17ms
    iterations...........: 993    99.297203/s
    vus..................: 5      min=5 max=5
    vus_max..............: 5      min=5 max=5
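To make the mismatch in the output above concrete: the per-check breakdown reports ✓ 987 / ✗ 9 (996 samples), while the aggregated checks metric reports ✓ 984 / ✗ 9 (993 samples, which happens to match the iterations count). A quick sketch of that arithmetic, with the numbers copied from the summary:

```javascript
// Counts taken from the k6 end-of-test summary above.
const perCheck = { pass: 987, fail: 9 };     // "↳  99% — ✓ 987 / ✗ 9"
const checksMetric = { pass: 984, fail: 9 }; // "checks...: ✓ 984 ✗ 9"

// Samples present in the per-check breakdown but missing from the
// aggregated checks metric.
const dropped =
    (perCheck.pass + perCheck.fail) - (checksMetric.pass + checksMetric.fail);

console.log(dropped); // 3
```

Three passing check samples show up in one count but not the other, which is consistent with the metric-cutoff suspicion below.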

I suspect the issue stems from the current implementation of metric cutoff times; #652 has more information about that part of k6, and I suspect it describes a very similar problem. I'm almost positive this will be fixed by the changes in #1007, since it improves the way we pipe metrics through the application. Even when testing the k6 build from that pull request with gracefulStop: "0s", to emulate the current k6 iteration-interruption behavior, I couldn't reproduce the check count mismatch even once. This is the code I used:

import { check, sleep } from "k6";

export let options = {
    execution: {
        checktest: {
            type: "constant-looping-vus",
            vus: 5,
            duration: "10s",
            gracefulStop: "0s",
        },
    }
};

export default function () {
    sleep(Math.random() * 0.1);
    check(Math.random(), {
        "is below 99%": (r) => r < 0.99,
    });
}
@na-- na-- added the bug label May 30, 2019
@na-- na-- added this to the v1.0.0 milestone Aug 27, 2019
@na-- na-- self-assigned this Aug 27, 2019
@na-- na-- mentioned this issue Aug 27, 2019
@na-- na-- modified the milestones: v1.0.0, v0.27.0 May 21, 2020
@na-- na-- closed this as completed in #1007 Jul 6, 2020