Should issues creating churn on 100s of files require backports to land? #16249
Comments
SGTM, although we could lower the number 100 if necessary. Maybe a new label like …
Another commit touching > 100 files. This seems to be a pattern in the last couple of months that we did not have in the past. @nodejs/tsc, how would you suggest going on from here? Should we add this to the agenda? @nodejs/collaborators, I would enjoy some more insight on this.
I love this idea.
I'm -0 on requiring a backport PR in order for the original to land. I'd be more inclined to not backport these types of changes (and yes, I fully realize that not doing so creates its own backporting burden)... but I would argue that these kinds of broad-brush changes do not necessarily need to be backported at all.
@jasnell One problem is that if they touch enough files, then not backporting them will result in more conflicts later, because new PRs will come to depend on those changes, especially ones touching a lot of tests.
Yeah, I've been through that pain first-hand, which is why I'm only -0. I don't think this should be a hard rule, but it would be a useful recommendation. There are just going to be cases where backporting them is not necessary.
I think this is a great idea. It should simplify the work of the release team.
I think this is a good idea.
I like the idea. There might be some difficulties with it as well, but I think we should give it a try.
I like the idea, and I agree that if a PR touches enough files, then backporting it is necessary for the sake of the backporting team. My worry is that this will lead to us backporting things straight away. How would this work if we wanted to leave the PR to bake in current for a release or two?
I recall that, for some of my big-churn PRs, LTS maintainers asked me to promise to do a backport when the time came. Maybe we can just formalize such promises? Or does that not guarantee us anything?
We could also ask that all the reviewers who LGTM a PR promise to help with a backport if the author cannot do it due to circumstances (and to re-review the backport).
Also, maybe it is worth mentioning in the guides that a contributor may need to ask in an issue whether a big-churn PR would be accepted (and would have reviewers and backport guarantees), to prevent disappointment and wasted time.
I think we should formalize both of these things for all PRs, to help set expectations. If your PR gets landed and needs a backport, you should be ready to do said backport. Obviously help is available if you're unsure.
I like all of these ideas! Does anyone feel like taking the action item of formalizing some of this in the Contributor guide?
@MylesBorins I think this actually belongs in the collaborator guide? So far the ideas are:

- Require a backport PR (or at least a backport commitment) before a big-churn PR lands.
- Ask reviewers who sign off to help with the backport, and re-review it, if the author cannot do it.
- Mention in the guides that contributors should ask in an issue before starting a big-churn PR.
This is for all contributors though, not just collaborators, so maybe Contributing makes more sense.
@gibfahn Yeah, but my instinct was that this should be placed where the "semver-major needs two TSC sign-offs" rule is, and it turns out that's in the collaborator guide. I guess that makes sense, because it's the people landing it who should be the most careful.
I'll put something together and submit it to the collaborator guide.
Since I create such churn yearly… yes, I'm willing to take on backports for ICU if needed. Would you like the backports at the same time as the main PR lands for ICU? I ask because ICU 60 will be here in a week or so.
I was just reviewing #14162, which created a new lint rule and touched 294 files with very subtle style churn. I have no problem with this type of change landing, but I think we may want to consider pre-opening backport PRs as part of the sign-off requirements.
The potential for running into conflicts mid-audit becomes quite large when a large percentage of the tests is touched at once like this.
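For reference, a rough sketch of the cherry-pick workflow behind a backport PR of the kind discussed above. The remote name, branch names, and commit SHA below are placeholders rather than values from this thread, and the exact staging branch and test invocation depend on the release line:

```sh
# Illustrative only: "upstream", the branch names, and the SHA are placeholders.
git fetch upstream
git checkout -b backport-xxxxx-to-v8.x upstream/v8.x-staging

# -x appends "(cherry picked from commit ...)" so the original commit stays traceable.
git cherry-pick -x <sha-of-the-commit-on-master>

# Resolve any conflicts, make sure the build and tests still pass,
# then push and open the PR against the staging branch.
make -j4 test
git push origin backport-xxxxx-to-v8.x
```

Pre-opening a PR like this alongside the original change is what the sign-off requirement suggested above would formalize.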