
[cargo-nextest 0.9.72] When one test fails, every test is marked as failed in the Neotest summary #460

Closed · 7 tasks done
wvffle opened this issue Jul 27, 2024 · 16 comments · Fixed by #461
Labels
bug Something isn't working

Comments

wvffle commented Jul 27, 2024

Have you read the docs and searched existing issues?

Neovim version (nvim -v)

v0.10.0

Operating system/version

NixOS unstable

cargo 1.79.0 (ffa9cf99a 2024-06-03)
cargo-nextest 0.9.72

Output of :checkhealth rustaceanvim

rustaceanvim: require("rustaceanvim.health").check()

Checking for Lua dependencies ~
- WARNING dap not installed. Needed for debugging features [mfussenegger/nvim-dap](https://github.com/mfussenegger/nvim-dap)

Checking external dependencies ~
- OK rust-analyzer: found rust-analyzer 2024-07-15
- OK Cargo: found cargo 1.79.0 (ffa9cf99a 2024-06-03)
- OK rustc: found rustc 1.79.0 (129f3b996 2024-06-10)

Checking config ~
- OK No errors found in config.

Checking for conflicting plugins ~
- OK No conflicting plugins detected.

Checking for tree-sitter parser ~
- WARNING No tree-sitter parser for Rust detected. Required by 'Rustc unpretty' command.

How to reproduce the issue

1. `nvim -u minimal.lua`
2. `:e foo.rs`
3. `:Neotest summary`
4. `<C-w>l`
5. Navigate to a file containing multiple tests
6. Press `r`

Expected behaviour

Only failing tests should be marked as failed, and passing tests should be marked as passed.

Actual behaviour

All tests are marked as failed.

(screenshots: Neotest summary showing every test in the file marked as failed)

The minimal config used to reproduce this issue:

vim.env.LAZY_STDPATH = '.repro'
load(vim.fn.system('curl -s https://raw.githubusercontent.com/folke/lazy.nvim/main/bootstrap.lua'))()

require('lazy.minit').repro {
  spec = {
    {
      'mrcjkb/rustaceanvim',
      version = '^5',
      init = function()
        -- Configure rustaceanvim here
        vim.g.rustaceanvim = {}
      end,
      lazy = false,
    },
    {
      'nvim-neotest/neotest',
      dependencies = {
        "nvim-neotest/nvim-nio",
        "nvim-lua/plenary.nvim",
        "antoinemadec/FixCursorHold.nvim",
        "nvim-treesitter/nvim-treesitter"
      }
    },
  },
}

require("neotest").setup({
  adapters = {
    require("rustaceanvim.neotest"),
  },
})
wvffle added the bug (Something isn't working) label Jul 27, 2024
mrcjkb changed the title from "When one test fails, every test is marked as failed in the Neotest summary" to "[extest 0.9.72] When one test fails, every test is marked as failed in the Neotest summary" Jul 27, 2024
mrcjkb changed the title from "[extest 0.9.72] When one test fails, every test is marked as failed in the Neotest summary" to "[cargo-nextest 0.9.72] When one test fails, every test is marked as failed in the Neotest summary" Jul 27, 2024
mrcjkb commented Jul 27, 2024

@wvffle could you please also paste your test module?

wvffle commented Jul 27, 2024

> @wvffle could you please also paste your test module?

Let me write a new one; the one I tested was in a private project.

mrcjkb commented Jul 27, 2024

I've updated my project to use nextest 0.9.72 and I'm now getting similar output.
What I find confusing is the space between the namespace and the test name in your output.
In mine, they're separated with ::.

wvffle commented Jul 27, 2024

Those are integration tests marked with `#[tokio::test]`. Maybe that's the culprit?
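
For reference, the tests are of roughly this shape (a minimal sketch with made-up names, not the actual private code):

```rust
// tests/organization.rs -- hypothetical sketch of the failing tests' shape
// (requires tokio as a dev-dependency with the "macros" and "rt" features)
#[tokio::test]
async fn test_authorization() {
    // stand-in for a request made through poem's test client
    let status = 200;
    assert_eq!(status, 401); // fails just like the real assertion
}
```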

mrcjkb commented Jul 27, 2024

> maybe that's the culprit?

Nope. Mine are marked with #[tokio::test] too.
It could be because my project has multiple targets. But it's still a strange difference.

wvffle commented Jul 27, 2024

https://github.com/wvffle/rustaceanvim-460-repro

The repro uses devenv.sh, so enter the environment with `nix develop --impure`.

This is my output:
(screenshot of the Neotest summary)

mrcjkb commented Jul 27, 2024

@wvffle I've implemented a hotfix in the nextest-fix branch.

Would you mind giving it a test drive in your private project?
In the long run, it would probably be better to use cargo-nextest's JUnit output (or better, libtest-json-plus once it becomes stable), but I don't have time for that right now.

wvffle commented Jul 27, 2024

@mrcjkb strangely, it does not work in my private project, but it does work in the repro project.

This is my output:

    Finished `test` profile [unoptimized + debuginfo] target(s) in 0.34s
    Starting 14 tests across 1 binary (run ID: 37120dd7-4992-4197-bb44-2725f7054d71, nextest profile: default)
        FAIL [  19.954s] server::organization test_authorization

--- STDOUT:              server::organization test_authorization ---

running 1 test
test test_authorization ... FAILED

failures:

failures:
    test_authorization

test result: FAILED. 0 passed; 1 failed; 0 ignored; 0 measured; 13 filtered out; finished in 19.94s


--- STDERR:              server::organization test_authorization ---
thread 'test_authorization' panicked at /home/waff/.cargo/registry/src/index.crates.io-6f17d22bba15001f/poem-3.0.3/src/test/response.rs:21:9:
assertion `left == right` failed
  left: 200
 right: 401
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace

        PASS [  21.017s] server::organization test_get_by_id_not_found
        PASS [  21.050s] server::organization test_get_all
        FAIL [  21.145s] server::organization test_create_domain_already_taken

--- STDOUT:              server::organization test_create_domain_already_taken ---

running 1 test
test test_create_domain_already_taken ... FAILED

failures:

failures:
    test_create_domain_already_taken

test result: FAILED. 0 passed; 1 failed; 0 ignored; 0 measured; 13 filtered out; finished in 21.13s


--- STDERR:              server::organization test_create_domain_already_taken ---
thread 'test_create_domain_already_taken' panicked at /home/waff/.cargo/registry/src/index.crates.io-6f17d22bba15001f/poem-3.0.3/src/test/response.rs:21:9:
assertion `left == right` failed
  left: 500
 right: 409
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace

        PASS [  21.624s] server::organization test_delete_not_found
        PASS [  22.189s] server::organization test_create
        PASS [  22.258s] server::organization test_get_by_id
        PASS [  22.728s] server::organization test_delete
        PASS [  15.876s] server::organization test_update
        PASS [  15.514s] server::organization test_update_not_found
        PASS [  15.052s] server::organization test_uuid_is_not_nil
        PASS [  16.356s] server::organization test_get_users
        FAIL [  16.344s] server::organization test_update_domain_already_taken

--- STDOUT:              server::organization test_update_domain_already_taken ---

running 1 test
test test_update_domain_already_taken ... FAILED

failures:

failures:
    test_update_domain_already_taken

test result: FAILED. 0 passed; 1 failed; 0 ignored; 0 measured; 13 filtered out; finished in 16.33s


--- STDERR:              server::organization test_update_domain_already_taken ---
thread 'test_update_domain_already_taken' panicked at /home/waff/.cargo/registry/src/index.crates.io-6f17d22bba15001f/poem-3.0.3/src/test/response.rs:21:9:
assertion `left == right` failed
  left: 500
 right: 409
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace

        PASS [  17.624s] server::organization test_get_gyms
------------
     Summary [  37.579s] 14 tests run: 11 passed, 3 failed, 0 skipped
        FAIL [  19.954s] server::organization test_authorization
        FAIL [  21.145s] server::organization test_create_domain_already_taken
        FAIL [  16.344s] server::organization test_update_domain_already_taken
error: test run failed

Losing output color in the Neotest preview is a pity, but I guess I can't get both :/

mrcjkb commented Jul 27, 2024

I guess it's because of the space between the namespace and the test name. That wasn't present in the repro project's output.

I'll have to revisit this another time.

> Losing output color in the Neotest preview is a pity, but I guess I can't get both :/

ANSI escape codes would complicate the parsing logic.
It might work with JUnit output in the future, but I'm not sure.

mrcjkb commented Jul 27, 2024

@wvffle could it be that you don't have your tests in a tests module, such as:

#[cfg(test)]
mod tests {
    use super::*;
    //  test functions here
}

If you have all your tests at the top level, rustaceanvim will just run the tests for the whole module, because there's no such thing as a "test file" in Rust.
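
For contrast, an integration test file under tests/ usually declares its test functions at the top level of the file (minimal sketch):

```rust
// tests/auth.rs -- sketch of a top-level integration test file;
// each file directly under tests/ is compiled as its own test binary
#[test]
fn test_fail() {
    assert_eq!(1, 2);
}

#[test]
fn test_ok() {
    assert_eq!(1, 1);
}
```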

wvffle commented Jul 27, 2024

> there's no such thing as a "test file" in Rust

Maybe not, but there for sure is a tests/ directory: https://doc.rust-lang.org/rust-by-example/testing/integration_testing.html

> I guess it's because of the space between the namespace and the test name. That wasn't present in the repro project's output.

Maybe that's because the repro project wasn't using Cargo workspaces. With workspaces, the hotfix still works in that repo. I've pushed a workspace branch to the repro project.

(screenshot of the Neotest summary on the workspace branch)

And the output contains the space you are talking about, I believe:

    Finished `test` profile [unoptimized + debuginfo] target(s) in 0.10s
    Starting 2 tests across 1 binary (run ID: 009dd9df-bb3e-44ac-901f-a811501f3275, nextest profile: default)
        PASS [   0.006s] workspace_crate2::workspace_integration test_ok
        FAIL [   0.007s] workspace_crate2::workspace_integration test_fail

--- STDOUT:              workspace_crate2::workspace_integration test_fail ---

running 1 test
test test_fail ... FAILED

failures:

failures:
    test_fail

test result: FAILED. 0 passed; 1 failed; 0 ignored; 0 measured; 1 filtered out; finished in 0.00s


--- STDERR:              workspace_crate2::workspace_integration test_fail ---
thread 'test_fail' panicked at workspace_crate2/tests/workspace_integration.rs:3:5:
assertion `left == right` failed
  left: 1
 right: 2
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace

------------
     Summary [   0.007s] 2 tests run: 1 passed, 1 failed, 0 skipped
        FAIL [   0.007s] workspace_crate2::workspace_integration test_fail
error: test run failed


I guess I've found another bug:

With two failing tests, the second one seems to be marked as skipped, even though the output says that zero tests were skipped and two failed.

(screenshot: the second failed test shown as skipped in the Neotest summary)

Output
   Compiling workspace_crate v0.1.0 (/home/waff/workspace/rustaceanvim-460-repro/workspace_crate)
    Finished `test` profile [unoptimized + debuginfo] target(s) in 0.32s
    Starting 3 tests across 1 binary (run ID: b3cc95de-6d4d-4499-af98-8e92a04a193c, nextest profile: default)
        FAIL [   0.005s] workspace_crate::workspace_integration test_fail

--- STDOUT:              workspace_crate::workspace_integration test_fail ---

running 1 test
test test_fail ... FAILED

failures:

failures:
    test_fail

test result: FAILED. 0 passed; 1 failed; 0 ignored; 0 measured; 2 filtered out; finished in 0.00s


--- STDERR:              workspace_crate::workspace_integration test_fail ---
thread 'test_fail' panicked at workspace_crate/tests/workspace_integration.rs:3:5:
assertion `left == right` failed
  left: 1
 right: 2
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace

        FAIL [   0.005s] workspace_crate::workspace_integration test_fail2

--- STDOUT:              workspace_crate::workspace_integration test_fail2 ---

running 1 test
test test_fail2 ... FAILED

failures:

failures:
    test_fail2

test result: FAILED. 0 passed; 1 failed; 0 ignored; 0 measured; 2 filtered out; finished in 0.00s


--- STDERR:              workspace_crate::workspace_integration test_fail2 ---
thread 'test_fail2' panicked at workspace_crate/tests/workspace_integration.rs:13:5:
assertion `left == right` failed
  left: 1
 right: 2
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace

        PASS [   0.005s] workspace_crate::workspace_integration test_ok
------------
     Summary [   0.006s] 3 tests run: 1 passed, 2 failed, 0 skipped
        FAIL [   0.005s] workspace_crate::workspace_integration test_fail
        FAIL [   0.005s] workspace_crate::workspace_integration test_fail2
error: test run failed

wvffle commented Jul 27, 2024

Up until now, I've been running the tests from the Neotest summary for the organization.rs file. Now I've tried to do so with the entire tests/ directory, and it turns out that another module in that directory works, but organization.rs does not...

(screenshot of the Neotest summary)

This is strange.

Output of the auth.rs module
    Blocking waiting for file lock on package cache
    Blocking waiting for file lock on package cache
    Blocking waiting for file lock on package cache
   Compiling server v0.1.0 (/home/waff/workspace/flash-manager-api/server)
    Finished `test` profile [unoptimized + debuginfo] target(s) in 3.18s                                              
    Starting 3 tests across 1 binary (run ID: a8c27e16-d31a-4e21-b6f5-cf9cd1502c01, nextest profile: default)
        FAIL [   0.008s] server::auth test_fail

--- STDOUT:              server::auth test_fail ---

running 1 test
test test_fail ... FAILED

failures:

failures:
    test_fail

test result: FAILED. 0 passed; 1 failed; 0 ignored; 0 measured; 2 filtered out; finished in 0.00s


--- STDERR:              server::auth test_fail ---
thread 'test_fail' panicked at server/tests/auth.rs:6:5:
assertion `left == right` failed
  left: 1
 right: 2
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace

        PASS [  23.375s] server::auth test_hi
        PASS [  26.765s] server::auth test_client_login
------------
     Summary [  26.766s] 3 tests run: 2 passed, 1 failed, 0 skipped
        FAIL [   0.008s] server::auth test_fail
error: test run failed

wvffle commented Jul 27, 2024

@mrcjkb I guess I've cracked it:

I've commented out all the failing tests and added a basic test that fails, and it works correctly.

In the real test, I use an assert method on a test client from the poem framework and it fails like so:

thread 'test_authorization' panicked at /home/waff/.cargo/registry/src/index.crates.io-6f17d22bba15001f/poem-3.0.3/src/test/response.rs:21:9:
assertion `left == right` failed
  left: 200
 right: 401

This, I think, is also why the test failure reason does not show up as diagnostics in the file.

When I did a simple assert_eq!(1, 2), it worked well and failed like so:

thread 'test_fail' panicked at server/tests/organization.rs:10:5:
assertion `left == right` failed
  left: 1
 right: 2
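
The difference is where the panic is raised: poem's assertion panics inside the library, so the reported file/line points into ~/.cargo/registry instead of the test file. If the library marked its assertion method with `#[track_caller]`, the panic would be attributed to the calling test instead. A minimal sketch of that mechanism (hypothetical helper names, not poem's actual API):

```rust
// Sketch: #[track_caller] makes the panic location point at the caller,
// so the reported file/line is the test file rather than the library source.
#[track_caller]
fn assert_status(actual: u16, expected: u16) {
    // assert_eq! panics here, but the location attributed to the panic
    // is the line of this function's caller.
    assert_eq!(actual, expected);
}

#[test]
fn test_authorization() {
    assert_status(200, 401); // a failure would be reported at this line
}
```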

I've replicated the exact behavior in the repro project on the master branch:

(screenshot of the Neotest summary)

mrcjkb commented Jul 27, 2024

> maybe not, but there for sure is a tests/ directory

The thing is, you can't tell cargo to "run a test file", which is a thing neotest tries to do when you run from the summary or with the cursor outside of a test function or module.

Afaik, neotest with neotest-rust works around this by parsing the file and running all tests in it individually.
That doesn't work with rust-analyzer.
Any support for doing so would have to be implemented in rust-analyzer or cargo.

> With two failing tests, the second one seems to be marked as skipped, even though the output says that zero tests were skipped and two failed.

Could this be caused by neotest trying to run a test file too?

> In the real test, I use an assert method on a test client from the poem framework and it fails like so

Yep. That error message contains no information about the location of the failure, because the assertion is in a library. There's not much I can do there.

Does this relate to this issue or is it a separate one?

wvffle commented Jul 27, 2024

> The thing is, you can't tell cargo to "run a test file", which is a thing neotest tries to do when you run from the summary or with the cursor outside of a test function or module.
>
> Afaik, neotest with neotest-rust works around this by parsing the file and running all tests in it individually. That doesn't work with rust-analyzer. Any support for doing so would have to be implemented in rust-analyzer or cargo.

Thanks for the clarification.

> Could this be caused by neotest trying to run a test file too?

I don't know :/

> Yep. That error message contains no information about the location of the failure, because the assertion is in a library. There's not much I can do there.
>
> Does this relate to this issue or is it a separate one?

Well, it seems to be the reason why all tests in that specific module are marked as failed, even after the nextest 0.9.72 fix.

mrcjkb commented Jul 27, 2024

I think the only way to solve this reliably is by using nextest's JUnit output.

I'll merge #461 and close this issue for now, and I'll open a separate issue for using JUnit output.
