Add code coverage action #10340
Conversation
I had a look at this exact problem recently. I think there are a few points I would highlight here.

First one is minor: I think we should run this on a schedule. We don't currently do a full service test run on every commit to master, and I think this is the right call. Although we don't merge super frequently, when we do dependabot updates we often merge 10 or 20 PRs rapidly. A full service test run on every merge is slow, and does get us into the territory where we're possibly hitting rate limits on some of those API keys we use for testing. Previously we were reporting coverage based on a daily test run, and I think once a day is about the right cadence for this. If we're going to have one service test run to make the markdown report and one to report coverage, maybe we do them a few hours apart.

Secondly, when I had a go at this, I was running each individual test suite as a separate step in the workflow. So mine looked like:

```yaml
steps:
  - name: Checkout
    uses: actions/checkout@v4
```

Here's an example of a run: https://github.com/badges/shields/actions/runs/9553559835/job/26332639191

You'll notice that as well as the service tests failing (which we kind of expect), the core and package tests were also failing. The reason for that is this.

The final thing is a question, really. It doesn't look like you're going to explicitly merge the coverage reports, but it does seem to be happening, looking at Coveralls. Any idea how this is working?
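For illustration, here is a minimal sketch of a scheduled workflow with one step per suite. The cron cadence and the npm script names are assumptions for the sake of the example, not taken from this PR:

```yaml
name: Coverage
on:
  schedule:
    - cron: '0 6 * * *' # once a day; exact time is an assumption

jobs:
  coverage:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Set up Node
        uses: actions/setup-node@v4
        with:
          node-version: 20
      - name: Install dependencies
        run: npm ci
      # One step per suite, so a failure in one suite is visible on its own
      - name: Core tests
        run: npm run coverage:test:core # hypothetical script name
      - name: Package tests
        run: npm run coverage:test:package # hypothetical script name
      - name: Service tests
        run: npm run coverage:test:services # hypothetical script name
        continue-on-error: true # service tests are expected to be flaky
```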
Will update accordingly 👍🏻
I guess one middle ground would be to separate it into two steps, one that runs non-service coverage and one that runs service coverage, which aligns with the delineation we've done in package.json.
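As a sketch of that middle ground (step layout only; the script names are hypothetical stand-ins for whatever split package.json actually defines):

```yaml
- name: Non-service coverage
  run: npm run coverage:test # hypothetical: core + package suites
- name: Service coverage
  run: npm run coverage:test:services # hypothetical
  continue-on-error: true # service tests are expected to have some failures
```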
All tests seem to be running as expected, but having generated a report on Node 22 (excluding flaky service tests), the coverage percentage is slightly higher (68.92% vs. 68.7%). I'm guessing bits of coverage are being missed unless something else has changed in Node 22. However, I'm tempted to not agonise over this and maybe just switch them to Node 22? 😄
Looking at how things are being generated locally, there seems to be some clever merging happening that appends new coverage information into a single file.
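If the coverage tool in play is nyc (an assumption here), that behaviour would be consistent with how nyc accumulates raw output, sketched below; the script names are hypothetical:

```yaml
steps:
  # --no-clean keeps earlier results in .nyc_output, so successive
  # suites accumulate coverage rather than overwriting it
  - run: npx nyc --no-clean npm run test:core # hypothetical script
  - run: npx nyc --no-clean npm run test:services # hypothetical script
  # A single report step then merges everything found in .nyc_output
  - run: npx nyc report --reporter=lcov
```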
I think it would be useful to separate the runs where we expect all tests to pass from the ones where we expect some to fail.
I think I would also be in favour of just measuring test coverage on Node 22 as the least-worst solution. You'll need
but let's also leave a comment in the workflow yaml explaining why we're deploying on Node 20 and measuring coverage on 22.
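A minimal sketch of that step with the suggested comment (the setup-node usage is standard; treating 22 as the coverage version follows the discussion above):

```yaml
- uses: actions/setup-node@v4
  with:
    # We deploy on Node 20, but measure coverage on Node 22 because the
    # coverage numbers differ slightly between the two (see PR discussion)
    node-version: 22
```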
I'll just add that part of the discussion here relates to a broader topic I was alluding to in #10341 (comment)
on board with the latest changes 👍
It's been exactly three years ago today since we last submitted a coverage report to Coveralls. This PR introduces a GitHub workflow that runs whenever a commit is pushed to master. To make sure everything was running as intended, I temporarily removed the master branch constraint before opening this PR; here's an example run: https://coveralls.io/builds/68519008
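For context, a minimal sketch of the overall shape such a workflow could take, assuming a Coveralls upload via coverallsapp/github-action (the script name and report path are assumptions, not copied from the PR):

```yaml
name: Coverage
on:
  push:
    branches: [master]

jobs:
  coverage:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 22
      - run: npm ci
      - run: npm run coverage:test # hypothetical script that writes lcov output
      - name: Upload to Coveralls
        uses: coverallsapp/github-action@v2
        with:
          file: coverage/lcov.info # assumed report location
```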