doc: require CI status indicator in PRs #17151
Conversation
Commits are often landed despite failing on one or more CI platforms. Having a CI status indicator in the PR should make this less likely to happen.
what is a status indicator? |
The "All checks have passed" thing here, for example: #14998 |
I don't know... this is tricky. It wasn't working for a long time so it's not like it would've helped with the commits listed above. Also, sometimes people forget to check the box when running a CI. The language already in there pretty much mandates that a CI should be run before landing a pull request. People just overlook it from time to time. |
What about checking POST_STATUS_TO_PR by default in the CI job? The risk of having no status would be much lower. |
True, but it works now.
Maybe, but I think there are people who don't check it because they don't see the need.
Yes, and I think it's harder to overlook a big red cross than a URL. Agree, but it won't hurt to update the language too, so that people don't uncheck it. |
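For background on what the indicator actually is: the green check or red cross in a PR comes from GitHub's commit status API, so whatever posts the status (the Jenkins job when POST_STATUS_TO_PR is checked, or the bot) just sends a state for the head commit. Here is a minimal sketch of that call using only Node's built-in https module; the repo, commit SHA, CI URL, and token variable are placeholders, not the project's actual tooling:

```js
// Minimal sketch of posting a commit status via GitHub's statuses API.
// The owner/repo/sha, target_url and GITHUB_TOKEN below are placeholders;
// a real CI job would derive them from its build environment.
const https = require('https');

const body = JSON.stringify({
  state: 'success',        // one of: pending, success, failure, error
  target_url: 'https://ci.nodejs.org/job/node-test-pull-request/12345/',
  description: 'All tests passed on the project CI',
  context: 'continuous-integration/jenkins',
});

const req = https.request({
  method: 'POST',
  hostname: 'api.github.com',
  path: '/repos/nodejs/node/statuses/<commit-sha>',
  headers: {
    'Authorization': `token ${process.env.GITHUB_TOKEN}`,
    'User-Agent': 'ci-status-example',
    'Content-Type': 'application/json',
    'Content-Length': Buffer.byteLength(body),
  },
}, (res) => {
  console.log(`GitHub responded with ${res.statusCode}`);
});

req.on('error', console.error);
req.end(body);
```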
COLLABORATOR_GUIDE.md (outdated)
@@ -142,6 +142,7 @@ test should *fail* before the change, and *pass* after the change.
All pull requests that modify executable code should be subjected to
continuous integration tests on the
[project CI server](https://ci.nodejs.org/).
The pull request should have a CI status indicator.
Suggestion: If the CI job has the option, the pull request should have a CI status indicator.
How about just adding "if possible"?
OK
Oh? I'm running #17153 as a test to myself because I kinda don't believe it yet haha. This seems like a good idea though. What about docs PRs? I'm not certain it is widely known how to get |
CI isn't required for docs PRs, is it? Or are you suggesting to require it? |
Now that we lint the docs we probably should be requiring CI for docs PRs. |
Perhaps, but I'd say it should be a separate issue. The idea of this PR is to require a CI status indicator if CI is required. |
Now we build docs in test jobs, don't we? So the full test run is needed for docs, right? |
Hmm, this is also mentioned in the |
Mind that sometimes the bot seems to fail to report certain failures; I remember there was one in #14891 where linux failed with a relevant crash but the status updated by the bot was green. |
For some reason the test/linux link points to https://ci.nodejs.org/job/node-test-commit-linux/nodes=alpine36-container-x64/14195/. cc @nodejs/build |
Yeah, so I figured this out when tinkering with node-test-commit-linux-linked because the same thing was happening. What's going on is that it's submitting the build status for each node in the job, not just for the parent job itself. For node-test-commit-linux we now have 20 nodes running each time, so it's submitting a status 20 times, one for each node, and the final status will come from the last node that finishes, which may or may not reflect the overall failure status. For node-test-linux-linked I fixed it by just submitting a different status for each node, since they're different anyway (openssl 1.1.0, fips and debug for now), by pulling out a custom label before any status is submitted into a
So you can see. For node-test-commit-linux we don't want to do that (I think); we want an overall status. Unfortunately, the Jenkins job is set up to run on each node; the parent node is just there to do the orchestration, and I'm not aware of how you can tie specific execution to that orchestration executor. @maclover7 this is yours, isn't it? Any ideas? |
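To make the per-node workaround concrete, here is a rough sketch of the idea: each node submits under its own status context, so one node's result cannot overwrite another's, at the cost of showing several indicators instead of one. The node labels and job name are illustrative, not the actual Jenkins configuration:

```js
// Sketch of the per-node workaround: one distinct status context per node label.
// Labels and job name are illustrative only.
const nodeLabels = ['openssl110', 'fips', 'debug'];

for (const label of nodeLabels) {
  const status = {
    state: 'pending',
    context: `node-test-linux-linked/${label}`, // distinct context per node
    description: `Running on ${label}`,
  };
  // In a real job this would be posted via the commit status API shown in the
  // earlier sketch; console.log stands in for that call here.
  console.log('would post:', status);
}
```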
I'm not sure there is a way to do that in the existing Jenkins UI. You could do it using pipelines, though: if you trigger a multi-config job through a pipeline, the pipeline receives the aggregated set of results. |
An alternative is to make the bot smarter: it could count the number of "pending" statuses for a given job+label and only report the final status to GitHub once it's received that many statuses from the Jenkins workers. This raises another interesting question, though: isn't the bot supposed to only listen to the two jenkins-workspace nodes and ignore HTTP status posts from other hosts? These should be coming in from all of the various workers we have running. Unless I'm misunderstanding what's happening. Perhaps some logs from the bot might help if we can generate them? @nodejs/automation |
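A rough sketch of that aggregation idea, assuming the bot receives one callback per Jenkins worker and knows how many workers a job fans out to; all names here are hypothetical and not the actual github-bot code:

```js
// Sketch of aggregating per-node Jenkins statuses before reporting to GitHub.
// `expected` would come from counting the initial "pending" posts for a given
// job + commit; everything here is illustrative, not the real bot.
const pendingRuns = new Map(); // key: `${job}#${sha}` -> { expected, results }

function onJenkinsStatus({ job, sha, expected, state }) {
  const key = `${job}#${sha}`;
  if (!pendingRuns.has(key)) {
    pendingRuns.set(key, { expected, results: [] });
  }
  const entry = pendingRuns.get(key);
  entry.results.push(state);

  // Only report to GitHub once every worker has checked in.
  if (entry.results.length >= entry.expected) {
    const failed = entry.results.some((s) => s !== 'success');
    reportToGitHub(sha, failed ? 'failure' : 'success', job);
    pendingRuns.delete(key);
  }
}

function reportToGitHub(sha, state, context) {
  // Would call the commit status API as in the earlier sketch.
  console.log(`final status for ${sha}: ${state} (${context})`);
}
```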
Yes, that's the thought at least. The whitelist hasn't been activated yet, but it's ready to go now that nodejs/build#985 is approved. To keep anyone on the interwebs from sending requests directly to the bot, we've been thinking the bot has to have either a whitelist or a secret. |
@rvagg Let's see if nodejs/build#985 fixes the issue, and if not we can revisit. Pipelines will take care of all of this, but that's obviously not being rolled out for a little while. Should an issue at nodejs/build be opened about this? |
Any progress here? I do not have a strong opinion about this, but nothing new has happened for two weeks. Shall we just land it? Or is there strong opposition, and should we just close it? |
This seems uncontroversial.
Landed in ae2bed9, thanks! |
Commits are often landed despite failing on one or more CI platforms. Having a CI status indicator in the PR should make this less likely to happen.
PR-URL: #17151
Reviewed-By: Michaël Zasso <targos@protonmail.com>
Reviewed-By: Joyee Cheung <joyeec9h3@gmail.com>
Reviewed-By: Gibson Fahnestock <gibfahn@gmail.com>
Reviewed-By: Jon Moss <me@jonathanmoss.me>
Reviewed-By: Tobias Nießen <tniessen@tnie.de>
Examples from the last 6 months:
6645126
f912080
aa011a1
a6973a3
Affected core subsystem(s): doc