
[IATR](M1.0) Test runner filtering #24855

Closed

warrensplayer opened this issue Nov 28, 2022 · 3 comments

warrensplayer commented Nov 28, 2022

Add the ability to filter the tests for a spec in the runner down to only the tests that failed in the Cloud.

Solution for filtering tests in the runner

To filter the tests in a way that allows the most flexibility in the developer experience, the following pattern should be followed:

  • A runId will be passed to the spec runner to indicate which Cloud run should be used to filter the tests
  • The runner will query the Cloud with the runId and spec path and will get back a list of test names and their status (PASSED, FAILED, PENDING, etc.)
  • After the local tests are parsed by Mocha, the runner will match the Cloud tests against the parsed local tests. Matching is done on the full suite path of the test, including the test name. If multiple local tests match a Cloud test, all matching tests are included.
  • Every test that was not marked FAILED in the Cloud should be removed from the list of tests for Mocha to run (see the sketch after this list).
  • If the spec is triggered to run again, either by clicking the rerun button or because the file watcher detects a change, the local spec file should be parsed again and the Cloud test filter reapplied. The Cloud does not need to be queried again; reuse the test status information from the original query.
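
A minimal sketch of the matching and filtering step, in TypeScript, assuming hypothetical shapes for the Cloud response and the locally parsed tests (the types and function name here are illustrative, not the actual Cloud API or runner types):

```ts
// Hypothetical shapes -- the real Cloud response and parsed-test types may differ.
interface CloudTestResult {
  titlePath: string[] // full suite path plus the test name
  status: string // e.g. 'PASSED' | 'FAILED' | 'PENDING'
}

interface LocalTest {
  titlePath: string[] // produced when Mocha parses the local spec
}

// Keep only the local tests whose full title path matches a Cloud test marked FAILED.
// If several local tests share the same title path, all of them are kept.
function filterToCloudFailures (localTests: LocalTest[], cloudTests: CloudTestResult[]): LocalTest[] {
  const failedPaths = new Set(
    cloudTests
      .filter((t) => t.status === 'FAILED')
      .map((t) => t.titlePath.join(' > ')),
  )

  return localTests.filter((t) => failedPaths.has(t.titlePath.join(' > ')))
}
```

The ' > ' join is only a convenient key for comparing full title paths; any canonical serialization of the suite path plus test name would do.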

Background on this solution can be found in the recorded Zoom in the internal Cypress Slack initiative channel here.

Requirements

  • Add a new parameter to the /specs/runner URL to allow filtering tests by run
    • Only needed for open mode
    • The parameter should be runId={runId} to enable the filter
  • Show a badge in the reporter header (packages/reporter/src/header/header.tsx) to indicate that the tests are being filtered
    • Should show a bug icon
    • Should show the ratio of filtered tests being run versus the total tests in the spec
    • Should have an "x" icon that closes and removes the filter
      • When the filter is removed, the runId URL parameter should be removed and the page should refresh to run all tests for the given spec (a sketch of the URL handling follows this list)

(screenshot: filter badge in the reporter header)

  • Filter tests in the runner to only run the failed tests that match a run in the Cloud
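
A minimal sketch of the URL handling described in the requirements, assuming the parameter lives in the query string of the runner URL; the helper names and example value are hypothetical, not the actual Cypress implementation:

```ts
// Read the run used to filter tests, e.g. /specs/runner?runId=abc123
// (path and parameter name as described in the requirements above).
function getRunIdFromUrl (): string | null {
  return new URLSearchParams(window.location.search).get('runId')
}

// Called when the "x" icon on the filter badge is clicked: drop the parameter
// and reload so the full spec runs again without the filter applied.
function clearRunFilter (): void {
  const url = new URL(window.location.href)

  url.searchParams.delete('runId')
  window.location.replace(url.toString())
}
```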

Clickup: https://app.clickup.com/t/18033298/PM-2546

warrensplayer changed the title from [IATR](M1) Test runner filtering to [IATR](M1.0) Test runner filtering on Dec 1, 2022

lmiller1990 commented Dec 19, 2022

Note: we have some logic around this already for Cypress Studio, which does a kind of "only run this test" when we are in "Studio" mode. I worked on Studio, so I can definitely provide some ideas. I'll write some below.

This requires hacking into Mocha internals. Luckily, Cypress does this already, so most of the patterns exist.

The App (Vue) and Reporter (React) generally communicate via the EventManager, so that will be how you implement this (it'll be similar to the workflow Studio uses).
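
To make the event-driven idea concrete, here is a sketch of the pattern using Node's EventEmitter as a stand-in; the real EventManager in the Cypress codebase has its own API, and the 'set:test:filter' event name and payload are made up for illustration:

```ts
import { EventEmitter } from 'events'

// Stand-in for the real EventManager -- this only illustrates the pattern.
const eventManager = new EventEmitter()

// Reporter side: announce which tests should run for the current spec.
function applyTestFilter (failedTestTitlePaths: string[][]): void {
  eventManager.emit('set:test:filter', failedTestTitlePaths)
}

// Runner/driver side: react to the filter before Mocha starts running tests.
eventManager.on('set:test:filter', (titlePaths: string[][]) => {
  console.log(`restricting run to ${titlePaths.length} failed tests`)
  // ...prune the parsed suite tree to these title paths (see the later sketch)
})
```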

It might be worth doing a small technical brief/prototype, or at least having a good think about how to implement this before you start coding. I think this is quite a tricky task, but a good opportunity to learn how the core of Cypress (driver, reporter, runner) work. I'd really recommend a quick prototype and then coming up with some ideas and running it by someone who knows the Cypress runner code well before writing a lot of code.


Here are some useful things that you might want to look at and consider when implementing this.

  • You can hook/hack into the runner (Mocha). We already do this here. Poke around there and add some console.log calls to get a feel for how it works. For this task, we basically want to do the programmatic equivalent of adding it.only to each of the tests we'd like to run (see the sketch after this list).
  • We need to persist the state between page reloads. For example, if you've got two tests on different domains, Cypress will do that thing where it refreshes the App to change the domain. We already do this - the reporter needs to re-populate when this happens. The "re-populate" part happens here. The actual preserving happens here when we navigate, but it's unlikely you need to do anything here. If you wire it up right, it should just work.
  • There are many hooks you can use to hook into the runner lifecycle. This file has many examples.
  • App -> Driver communication means crossing the Vue -> React seam. Reactivity is kind of weird here. It's possible to do so via the MobX store, but I think this is an antipattern. What you want to do is use EventManager to do this - it's event driven. Once you hook into the runner and set which tests need to execute, that should be it - Cypress will know what to do.
  • You should be able to implement the required communication entirely using the Event Manager. Reactivity is good for UI development, but this isn't really a UI feature - it's much closer to business logic. Whatever you do here, it should be able to survive a major UI refactor.
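
As a follow-on to the it.only point above, here is a sketch of pruning the parsed Mocha suite tree; the function is illustrative and assumes access to the root suite at the point where Cypress already hooks into Mocha:

```ts
import type { Suite } from 'mocha'

// Prune the suite tree so only tests whose full title is in `keep` remain --
// the programmatic equivalent of it.only on each test we want to run.
function pruneSuite (suite: Suite, keep: Set<string>): void {
  suite.tests = suite.tests.filter((test) => keep.has(test.fullTitle()))
  suite.suites.forEach((child) => pruneSuite(child, keep))
}
```

Matching on fullTitle() mirrors the "full suite path including the test name" matching from the issue description; in practice `keep` would be built from the Cloud results that came back FAILED.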

emilyrohrbough (Member) commented

How does this UI wrap when the Command Log is smaller or there are over 99 tests?

mike-plummer self-assigned this Jan 10, 2023
warrensplayer pushed a commit that referenced this issue Jan 17, 2023
Co-authored-by: Emily Rohrbough <emilyrohrbough@users.noreply.github.com>
Co-authored-by: Mark Noonan <mark@cypress.io>
Co-authored-by: Mike Plummer <mikep@cypress.io>
Closes #24855
warrensplayer added the stage: done label and removed the stage: needs review label Jan 17, 2023