cmd/go: add package version support to Go toolchain #24301
Frequently Asked Questions

This issue comment answers the most frequently asked questions, whether from the discussion below or from other discussions. Other questions from the discussion are in the next issue comment.

Why is the proposal not “use Dep”?

At the start of the journey that led to this proposal, almost two years ago, we all believed the answer would be to follow the package versioning approach exemplified by Ruby's Bundler and then Rust's Cargo: tagged semantic versions, a hand-edited dependency constraint file known as a manifest, a separate machine-generated transitive dependency description known as a lock file, a version solver to compute a lock file satisfying the manifest, and repositories as the unit of versioning. Dep follows this rough plan almost exactly and was originally intended to serve as the model for go command integration. However, the more I understood the details of the Bundler/Cargo/Dep approach and what they would mean for Go, especially built into the go command, and the more I discussed those details with others on the Go team, the less those details seemed a good fit for Go. The proposal adjusts those details in the hope of shipping a system that is easier for developers to understand and to use. See the proposal's rationale section for more about the specific details we wanted to change, and also the blog post announcing the proposal.

Why must major version numbers appear in import paths?

To follow the import compatibility rule, which dramatically simplifies the rest of the system. See also the blog post announcing the proposal, which talks more about the motivation and justification for the import compatibility rule.

Why are major versions v0, v1 omitted from import paths?

v1 is omitted from import paths for two reasons. First, many developers will create packages that never make a breaking change once they reach v1, which is something we've encouraged from the start. We don't believe all those developers should be forced to have an explicit v1 when they may have no intention of ever releasing v2. The v1 becomes just noise. If those developers do eventually create a v2, the extra precision kicks in then, to distinguish it from the default, v1. There are good arguments about visible stability for putting the v1 everywhere, and if we were designing a system from scratch, maybe that would make it a close call. But the weight of existing code tips the balance strongly in favor of omitting v1.

v0 is omitted from import paths because, according to semver, there are no compatibility guarantees at all for those versions. Requiring an explicit v0 element would do little to ensure compatibility; you'd have to say v0.1.2 to be completely precise, updating all import paths on every update of the library. That seems like overkill. Instead we hope that developers will simply look at the list of modules they depend on and be appropriately wary of any v0.x.y versions they find. This has the effect of not distinguishing v0 from v1 in import paths, but usually v0 is a sequence of breaking changes leading to v1, so it makes sense to treat v1 as the final step in that breaking sequence, not something that needs distinguishing from v0. As @Merovius put it (#24301 (comment)):

Finally, omitting the major versions v0 and v1 is mandatory - not optional - so that there is a single canonical import path for each package.

Why must I create a new branch for v2 instead of continuing to work on master?

You don't have to create a new branch. The vgo modules post unfortunately gives that impression in its discussion of the "major branch" repository layout. But vgo doesn't care about branches. It only looks up tags and resolves which specific commits they point at. If you develop v1 on master, decide you are completely done with v1, and want to start making v2 commits on master, that's fine: start tagging master with v2.x.y tags. But note that some of your users will keep using v1, and you may occasionally want to issue a minor v1 bug fix. You might at least want to fork a new v1 branch for that work at the point where you start using master for v2.

Won't minimal version selection keep developers from getting important updates?

This is a common fear, but I really think if anything the opposite will happen. Quoting the "Upgrade Speed" section of https://research.swtch.com/vgo-mvs:

By "right amount slower" I was referring to the key property that upgrades happen only when you ask for them, not when you haven't. That means that code only changes (in potentially unexpected and breaking ways) when you are expecting that to happen and are ready to test it, debug it, and so on. See also the response #24301 (comment) by @Merovius.

If $GOPATH is deprecated, where does downloaded code live?

Code you check out and work on and modify can be stored anywhere in your file system, just like with essentially every other developer tool. Vgo does need some space to hold downloaded source code and installed binaries, and for that it still uses $GOPATH, which as of Go 1.9 defaults to $HOME/go. So developers will never need to set $GOPATH unless they want these files to be in a different directory. To change just the binary install location, they can set $GOBIN (as always).

Why are you introducing the
|
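To ground the FAQ's import-path rules in an example: with a hypothetical module example.com/mymod, major version 2 lives at a distinct module path ending in /v2, while v0 and v1 use the bare path. A minimal sketch:

```
// go.mod of the v2 release (hypothetical module path):
module example.com/mymod/v2
```

```go
// In a consumer, each major version has its own canonical import path,
// so both can coexist in one build and each import's meaning is clear
// without consulting go.mod:
import (
	mymod "example.com/mymod"      // v0 or v1: no version suffix
	mymodv2 "example.com/mymod/v2" // v2 and beyond: major version in the path
)
```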
Discussion Summary (last updated 2018-04-25)

This issue comment holds a summary of the discussion below.

How can we handle migration? [#24301 (comment) by @ChrisHines.] Response #24301 (comment) by @rsc. The original proposal assumes the migration is handled by authors moving to subdirectories when compatibility is important to them, but of course that motivation is wrong. Compatibility is most important to users, who have little influence on authors moving. And it doesn't help older versions. The linked comment, now also #25069, proposes a minimal change to the old "go build" so that it can consume and build module-aware code.

How can we deal with singleton registrations? [#24301 (comment) by @jimmyfrasche.] Response #24301 (comment) by @rsc. Singleton registration collisions (such as http.Handle of the same path) between completely different modules are unaffected by the proposal. For collisions between different major versions of a single module, authors can write the different major versions to coordinate, usually by making v1 call into v2, and then use a requirement cycle to make sure v2 is not used with older v1 releases that don't know about the coordination. (A sketch of this pattern appears after this comment.)

How should we install a versioned command? [#24301 (comment) by @leonklingele.] Response #24301 (comment) by @rsc. In short, use go get. We still use $GOPATH/bin for the install location. Remember that $GOPATH now defaults to $HOME/go, so commands will end up in $HOME/go/bin, and $GOBIN can override that.

Why are v0, v1 omitted in the import paths? Why must the others appear? Why must v0, v1 never appear? [#24301 (comment) by @justinian.] Added to FAQ above.

Why are zip files mentioned in the proposal? [#24301 (comment) by @nightlyone.] The ecosystem will benefit from defining a concrete interchange format. That will enable proxies and other tooling. At the same time, we're abandoning direct use of version control (see the rationale at the top of this post). Both of these motivate describing the specific format. Most developers will not need to think about zip files at all; no developers will need to look inside them, unless they're building something like godoc.org. See also #24057 about zip vs tar.

Doesn't putting major versions in import paths violate DRY? [#24301 (comment) by @jayschwa.] No, because an import's semantics should be understandable without reference to the go.mod file. The go.mod file only specifies finer detail. See the second half of the semantic import versions section of the proposal, starting at the block quote. Also, if you DRY too much you end up with fragile systems; redundancy can be a good thing. So "violat[ing] DRY" - that is to say, limited repetition - is not always bad. For example, we put the package clause in every .go file in the directory, not just one. That caught honest mistakes early on and later turned into an easy way to distinguish external test packages (package x vs package x_test). There's a balance to be struck.

Which timezone is used for the timestamp in pseudo-versions? [#24301 (comment) by @tpng.] UTC. Note also that you never have to type a pseudo-version yourself. You can type a git commit hash (or hash prefix) and vgo will compute and substitute the appropriate pseudo-version.

Will vgo address non-Go dependencies, like C or protocol buffers? Generated code? [#24301 (comment) by @AlexRouSg.] Non-Go development continues to be a non-goal of the go command. That said, we certainly do understand that using protocol buffers with Go is too difficult, and we'd like to see that addressed separately.

As for generated code more generally, a real cross-language build system is the answer, specifically because we don't want every user to need to have the right generators installed. Better for the author to run the generators and check in the result.

Won't minimal version selection keep developers from getting important updates? [#24301 (comment) by @TocarIP.] Added to FAQ.

Can I use master to develop v1 and then reuse it to develop v2? [#24301 (comment) by @mrkanister.] Yes. Added to FAQ.

What is the timeline for this? [#24301 (comment) by @flibustenet.] Response in #24301 (comment) by @rsc. In short, the goal is to land a "technology preview" in Go 1.11; work may continue a few weeks into the freeze but not further. Probably don't send PRs adding go.mod to every library you can find until the proposal is marked accepted and the development copy of cmd/go has been updated.

How can I make a backwards-incompatible security change? [#24301 (comment) by @buro9.] Response in #24301 (comment) by @rsc. In short, the Go 1 compatibility guidelines do allow breaking changes for security reasons, to avoid bumping the major version, but it's always best to do so in a way that keeps existing code working as much as possible. For example, don't remove a function. Instead, make the function panic or log.Fatal only if called improperly.

If one repo holds different modules in subdirectories (say, v2, v3, v4), can vgo mix and match from different commits? [#24301 (comment) by @jimmyfrasche.] Yes. It treats each version tag as corresponding only to one subtree of the overall repository, and it can use a different tag (and therefore a different commit) for each decision.

What if projects misuse semver? Should we allow minor versions in import paths? [#24301 (comment) by @pbx0.] As @powerman notes, we definitely need to provide an API consistency checker, so that projects at least can be told when they are about to release an obviously breaking change.

Can you determine whether you have more than one version of a package in a build? [#24301 (comment) by @pbx0.] The easiest thing to do would be to use goversion -m on the resulting binary. We should add a go command option to show the same thing without building the binary.

Concerns about vgo's reliance on proxy vs vendor, especially open source vs enterprise. [#24301 (comment) by @joeshaw.] Response #24301 (comment) by @rsc. Proxy and vendor will both be supported. Proxy is very important to enterprise, and vendor is very important to open source. We also want to build a reliable mirror network, but only once vgo becomes go.

Concerns about protobuild depending on GOPATH semantics. [#24301 (comment) by @stevvooe.] Response #24301 (comment) by @rsc asked for more details in a new issue, but that issue does not seem to have been filed.

Suggestion to add special
|
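A sketch of the singleton-coordination pattern from the summary above, with an entirely hypothetical module example.com/mydb and API: v1 delegates its registry to v2, and the two go.mod files require each other's coordinating releases, forming the requirement cycle.

```go
// Package mydb is major version 1 of the hypothetical module example.com/mydb.
package mydb // import "example.com/mydb"

import v2 "example.com/mydb/v2"

// Driver is an alias for the v2 type, so driver values flow freely
// between code written against either major version.
type Driver = v2.Driver

// Register delegates to v2, so a program linking both major versions
// still ends up with a single shared driver table.
func Register(name string, d Driver) { v2.Register(name, d) }
```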
Change https://golang.org/cl/101678 mentions this issue: |
As a reminder, it's fine to make comments about grammar, wording, and the like on Gerrit, but comments about the substance of the proposal should be made on GitHub: golang.org/issue/24301.

For golang/go#24301.

Change-Id: I5dcf204da3ebe947eecad7d020002dd52f64faa2
Reviewed-on: https://go-review.googlesource.com/101678
Run-TryBot: Russ Cox <rsc@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
This proposal is impressive and I like most everything about it. However, I posted the following concern on the mailing list, but never received any replies. In the meantime I've seen this issue raised by others in the Gophers slack channel for vgo and haven't seen a satisfactory answer there either. From: https://groups.google.com/d/msg/golang-dev/Plc42fslQEk/rlfeNlazAgAJ
The proposal states:
But I don't see a way for non-module-aware packages to import module-aware packages with transitive dependencies >= v2. That seems to cause ecosystem fragmentation in a way not yet addressed. Once you have a module-aware dependency with a package >= v2 somewhere in its transitive dependencies, all of its dependents seem forced to adopt vgo to keep the build working. Update: see also #24454 |
It is unclear to me what this means and how it differs from the current situation. It would seem to me that this describes the current situation as well: if I break this rule, upgrades and go get will fail. AIUI nothing really changes, and I'd suggest removing at least the mention of "more teeth". Unless, of course, this paragraph is meant to imply that there are additional mechanisms in place to penalize or prevent breakages? |
This would also affect things like database drivers and image formats that register themselves with another package during init, since multiple major versions of the same package can end up doing this. It's unclear to me what all the repercussions of that would be. |
Why is this? In the linked post, I only see the rationale that this is what developers currently do to create alternate paths when they make breaking changes - but that is a workaround for the fact that the tooling doesn't handle versions for them. If we're switching to a new practice, why not allow and encourage (or even mandate) that new vgo-enabled packages include |
I generally like the proposal, but am hung up on requiring major versions in import paths:
I understand that scenarios like the |
First of all: impressive work! One thing that is totally unclear to me and seems a bit underspecified: why are there zip files in this proposal? The layout, the constraints, and the use cases - when an archive is created, how its life cycle is managed, which tools need to support it, how tools like linters should interact with it - are also unclear, because they are not covered in the proposal. So I would suggest either referring here to a later, still unwritten proposal and removing the word zip, or removing that part from the proposal text entirely if you do not plan to discuss it within the scope of this proposal. Discussing this later would also enable a different audience to contribute here. |
Which timezone is used for the timestamp in the pseudo-version (v0.0.0-yyyymmddhhmmss-commit)? Edit: |
@rsc Will you be addressing C dependencies? |
It looks like minimal version selection makes propagation of non-breaking changes very slow. Suppose we have a popular library Foo, which is used by projects A, B, and C. Someone improves Foo's performance without changing its API. Currently, receiving updates is an opt-out process: if project A vendored Foo but B and C didn't, the author only needs to send A a PR updating the vendored dependency. Under minimal version selection, by contrast, B and C keep building with the old Foo until their authors explicitly update, so non-API-breaking contributions won't have as much effect on the community and are somewhat discouraged compared to the current situation. This is even more problematic for security updates. If some abandoned/small/not very active project (not a library) declares a direct dependency on an old version of e.g. x/crypto, all users of that project will be vulnerable to a flaw in x/crypto until the project is updated, potentially forever. Currently, users of such projects receive the latest fixed version, so this makes the security situation worse. IIRC there were some suggestions for how to fix this in the mailing list discussion, but as far as I can tell the proposal doesn't mention them. |
See the mention of |
I've seen it, but this is still an opt-in mechanism. |
If support for |
@leonklingele: from my understanding, |
I wonder how this will affect the flow of working with a Git repository in general, also building on this sentence from the proposal:
At the moment, it seems common to work on master (for me this includes short-lived feature branches) and to tag a commit with a new version every now and then. I feel this workflow is made more confusing with Go modules as soon as I release v2 of my library, because now I have a I know that the default branch can be changed from |
New major releases cause churn. If you have to change one setting in your Git repository (the default branch) whenever you make a new release, that's a very minor cost compared to your library's users switching to the new version. I think this aspect of the proposal sets the right incentive: it encourages upstream authors to think about how they can make changes in a backwards-compatible way, reducing overall ecosystem churn. |
You can instead create a |
According to my understanding of https://research.swtch.com/vgo-module, vgo uses tags, not branches, to identify versions. So you can keep development on master and branch off v1, as long as the tags point to the correct branch and commit. |
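A sketch of that tag-driven flow, assuming (per the vgo-module post) that only tags matter to vgo:

```
git tag v1.2.3            # final v1 release, tagged on master
git branch v1             # optional: fork a branch for future v1 bug fixes
# ...continue committing v2 work on master...
git tag v2.0.0            # first v2 release, also tagged on master
```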
This is a problematic style of thinking that I think has bitten Go hard in the past. For one person on one project, switching what branch is default is simple in the moment, yes. But going against workflow conventions will mean people forget, especially when they work in several languages. And it will be one more quirky example of how Go does things totally differently that newcomers have to learn. Going against common programmer workflow conventions is not at all a minor cost. |
Not following the conventional path is sometimes the necessary condition for innovation. |
@willfaught To us, dep breaking the build when a maximum version limitation is used to avoid a bad version is considered a success! This is what we want to happen. Russ correctly notes that this problem is not resolved automatically. A constraint on To repeat the point because it continues not to be understood: neither
I would add some color to this summary: these constraints are needed to deal with the unhappy path, when compatibility rules are violated. I think @Merovius is on to a workable solution. MVS would proceed as specified; then, after resolution is complete, vgo would check each resolved package for exclusions. If any are found, these are reported to the end user, who can choose either to alter their dependencies to meet these constraints, or to override and ignore them. I've been on the flip side of this too, and sometimes you know better than the maintainer; it works for your application and that's all you care about. This restores a path for intermediary libraries to communicate an incompatibility to end users. To repeat the helm example yet again:

User: Helm@2.1.2 ==> User upgrades to Helm@2.1.3 intentionally.
User: Helm@2.1.3, GRPC@1.4.0 ("Helm updated, so I can finally use grpc 1.4!") ==> Bug detected! User is seeing some problems with Helm, so checks for a new version.
User: Helm@2.1.4, GRPC@1.4.0 ==> User sees a warning that GRPC 1.4.0 is rejected by the install of Helm@2.1.4.

Since Helm is broken for me and that's what I'm trying to fix, I remove my dependency on GRPC 1.4.0 (sadly losing some useful functionality) and rerun MVS. This time, MVS resolves GRPC to 1.3.5 and Helm to 2.1.4, rechecks all the weak constraints, finds they hold, and I'm done. I don't expect any tool to resolve these problems magically, but I do expect some recourse as a middleware library. So far as I can tell, the only option in |
The point made in the talk is that vgo is no worse than dep in this scenario. In his example, the build isn't broken in the sense that dep can't find a solution; it's broken in the sense that dep does find a solution, and that solution includes the bad version, resulting in the same bad situation we wanted to avoid. You really should see the video, which walks through an excellent example, but here's the gist as I understood/remember it:
|
I mostly agree with the proposal to add maximum version exclusions, but I do have this worry: Suppose I put in my library "use gRPC >1.4, <1.8" then in gRPC 1.9, the authors decide, "you know what, Helm was right, we made a breaking change in 1.8, we're reverting to our prior behavior in 1.9.0." Now people trying to import Helm+gRPC won't be able to use 1.9 until Helm releases a version that says "use gRPC >1.4, except 1.8.0, 1.8.1, 1.8.2, but 1.9+ is cool". In other words, maybe |
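For what it's worth, the proposal's go.mod does include an exclude directive for ruling out individual versions (not ranges). A sketch of the scenario above, with hypothetical module paths and the unquoted go.mod syntax, shows the churn being described - each bad release has to be listed one at a time:

```
module example.com/helm/v2

require google.golang.org/grpc v1.4.0

// No range form exists: when 1.9.0 reverts the breaking change,
// these lines are simply left behind rather than rewritten.
exclude (
	google.golang.org/grpc v1.8.0
	google.golang.org/grpc v1.8.1
	google.golang.org/grpc v1.8.2
)
```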
I know essentially nothing about this space. But from reading the discussion here, it sounds like the biggest disagreement boils down to how to detect erroneous cases. That is, both MVS (vgo) and SAT (dep) handle normal situations more or less well, perhaps not identically, but well enough. SAT provides an ability that MVS does not: using SAT, the author of the library package P1 can declare "P1 requires P2, and works with P2Version > 1.1 and P2Version < 1.4". MVS can only declare "P1 requires P2, and works with P2Version > 1.1", and can not express the restriction "P2Version < 1.4". In the normal case, this doesn't matter. It only matters if some operation tries to upgrade P2 to version 1.4. In that case, SAT will report an error, while MVS will not. When using MVS, if the incompatibility is not a compilation error, it may cause a failure long after the fact. No doubt the SAT supporters see other major problems with MVS, but so far this is the one that I understand. I think it's worth noting that if the restriction expressions are themselves versioned--if they are part of a specific release of P1--then in the normal course of events, before P2 version 1.4 is released, P1 version 2.2 will happily say "P2Version > 1.1". It is only when P2 version 1.4 is released that the P1 authors will notice the incompatibility, and release P1 version 2.3 with "P2Version > 1.1 and P2Version < 1.4". So if you are using P1 version 2.2, neither SAT nor MVS will report any problem with upgrading P2 to version 1.4, although it will fail in some possibly subtle way. In other words, while it makes perfect sense for a release of P1 to list minimum compatible versions of P2, if the release does work correctly with the most recent version of P2, then it does not make sense for a release to list maximum compatible versions. The maximum compatible version will be either conservative, and therefore increasingly wrong as newer and better versions of P2 appear, or if P2 changes in some incompatible way in the future, will fail to specify that requirement since at the time of the release it doesn't exist. So if we want to have a system that defines anything other than minimum version requirements, then those requirements must not be part of a specific release, but must instead be part of some sort of metadata associated with the package, metadata that can be fetched at any time without updating the package itself. And that means that the operation "update this package" must be separate from the operation "check whether my current package versions are compatible." I would claim further--and this is definitely more tenuous than the above--that if "check whether my current package versions are compatible" fails, that it is in general unwise to trust any tool to resolve the problem. If the compatibility problem can not be solved by the simple operation "upgrade every relevant package to the current version", then it requires thought. A tool can guide in that thought, but in general it can't replace it. In particular it seems very unwise for a tool to start downgrading packages automatically. So if we think in terms of
then perhaps some of the major differences between MVS and SAT become less important. |
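To restate the asymmetry in go.mod terms (hypothetical module paths): a require line is only a lower bound, so a release has nowhere to record the "< 1.4" half even if its author wanted to.

```
module example.com/p1/v2

// "At least v1.1.0 of p2" is expressible; "and below v1.4.0" is not.
// Per the argument above, that knowledge arrives only after release,
// so it would have to live in metadata outside the module anyway.
require example.com/p2 v1.1.0
```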
Thanks for saying that so well Ian. To follow up, once we have established versions and vgo, we absolutely want to have a new godoc.org (maybe a different name) that records additional information about packages, information that the go command can consult. And some of that information would be pair-wise incompatibility that the go command could report as warnings or errors in any particular build (that is, reporting the damage, not trying to hide it by working around it). But having versions at all in the core toolchain is the first step, and that, along with just minimal version requirements and semantic import versioning, is what has been accepted in this issue. We are committed to landing this as smoothly as possible. That will require additional tooling, more educational outreach, and PRs to fix issues in existing packages. All that was blocked on accepting this proposal, since it seemed presumptuous to move forward without the overall approach being accepted. But the proposal is accepted, and work will start landing more aggressively now that the uncertainty is over. |
I had the same thought about external info for version compatibility... since version compatibility must be constant across , it doesn't need to be in source control (and in fact being in source control is a definite disadvantage as stated above). It would be nice if there were a proposed solution for this, since it definitely seems to be the one major problem with MVS as proposed. |
It's awesome to see the discussion moving organically in this direction. It has been one central thrust of my concerns, and it makes it so much easier to explain foundational issues when folks are already most of the way to it. @ianlancetaylor, i think you're spot on with this observation about needing to be able to make changes to constraint information on already-released versions. As @rsc indicated, such a service is something we've discussed/i suggested in our meetings. We could do it with godoc.org, or something else, sure. But i actually don't think it entails a separate service, and it would be better without one. I made a quick reference to this in the piece i published on Friday (just up from that anchor). If nothing else, in a service, there are questions that then have to be answered about whose declaration of incompatibilities should show up in warnings, which means handling identity, and how we scope declarations to particular situations in the depgraph. Keeping the declarations inside metadata files means we don't have to worry about any of that. But more on that in a sec. What's really important here is this point, though maybe not the way you intended it:
The suggestion of a meta-tool that does this search - yes, that's a SAT search - as a solution to the problems folks are identifying is telling. It's pretty much exactly what we'll have to turn dep into, if MVS goes ahead as described. And the first thing to note there is that, if we're so concerned about these incompatibilities that we're talking about a search tool, then what we're actually saying is that MVS becomes just a step in a larger algorithm, and the grokkability benefits go right out the window. Except it's worse than that, because no amount of meta tooling can get around the baked-in problem of information loss that arises from compacting minimum and current versions together. The big result of that is cascading rollbacks, which means that actually trying to remediate any of the incompatibilities in this list will very likely end up tossing back other parts of the dependency graph not necessarily related to your problem. And developers won't be able to follow an update strategy that isn't harmful to others. (Oh, and phantom rules, but that's just an MVS side effect in general.) This is why i've asserted that MVS is an unsuitable intermediate layer on which to build a higher-order tool like this - "not fit for purpose." It's clear that folks believe these incompatibilities will occur, so MVS is just taking a hard problem and making it harder. If instead we unify the problem of an "incompatibility service" back into a metadata file, then i believe it's possible, using only a simple set of pairwise declarations, to achieve the same effect. (This is a draft of the concept, but it increasingly seems to hang together.) It would entail that parts of MVS change, but MVS could still run atop the information encoded there. That'd mostly be useful if incompatibilities truly go nuts, and you want to just avoid all of them. But the primary algorithm would start from a baseline that looks like MVS, then switch to a broader search (to be clear, MVS itself should still be considered search), without the possibility of moving into absurdly old versions. (note, i'll be on vacation this week, so won't be responding till next weekend) |
Can you be more specific? The sentence you quoted is right after Ian suggesting a tool to report whether the selected versions are compatible - and to the best of my knowledge, that is the main alternative suggested here (it certainly is what I intended above). That problem most definitely is not a search and it's trivial and doesn't require solving SAT (it is just evaluating a boolean formula for a given set of values, not trying to find values that satisfy it). |
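To make the distinction concrete, here is a minimal sketch (all module paths and versions hypothetical) of the check being described: given the versions a resolver already selected, flagging known-bad pairs is one linear pass, not a search for a satisfying assignment.

```go
package main

import "fmt"

// badPair records that modA@verA is known to be incompatible with modB@verB.
type badPair struct{ modA, verA, modB, verB string }

func main() {
	// Versions already chosen by the resolver (e.g. MVS).
	selected := map[string]string{
		"example.com/helm": "v2.1.4",
		"example.com/grpc": "v1.4.0",
	}
	// Externally published incompatibility facts.
	knownBad := []badPair{
		{"example.com/helm", "v2.1.4", "example.com/grpc", "v1.4.0"},
	}
	// Evaluation, not search: report violations; do not pick new versions.
	for _, p := range knownBad {
		if selected[p.modA] == p.verA && selected[p.modB] == p.verB {
			fmt.Printf("warning: %s@%s is known-incompatible with %s@%s\n",
				p.modA, p.verA, p.modB, p.verB)
		}
	}
}
```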
Right, simply reporting that there are some known-incompatible values in the formula does not require solving SAT. Taking any action on that basis - such as a tool that assists in the process of finding a result with no such values in it - does.
I quoted that sentence not because i think it is indicative of people having accepted search as always necessary, but because, if we believe that reporting such conditions is important, then it is because we believe it is likely we will encounter such scenarios.
the problem is, once the plausibility and the importance of addressing those cases gets established, it looks like folks then make the erroneous jump that "we can just do all the search things on top of MVS, and it'll be fine." we can, but such attempts become much trickier to deal with because of the useful possible paths that MVS cuts off, by design.
|
To be clear: the suggestion of retrofitting upper bounds in this way is purely a reaction to the concerns brought up, and meant to show that it can be done (to critically question the claim that MVS is fundamentally unfit for purpose). It seems a bit unfair to take that concession and willingness to compromise as proof that we think you were right all along. To me, that claim (that MVS is unfit and an essentially irreversible step in the wrong direction) is what I am personally challenging and the lens I am reading your arguments through. One of those arguments was that it's a feature if we can declare incompatibilities and have the version selection algorithm fail when they are encountered. Another fair argument is that, if they occur, it would be nice to have the algorithm solve them for us (which would indeed require a SAT solver). However, while I think those are valid and fair concerns, I don't believe they pass the bar of proving to me that MVS is fundamentally unfit. I still believe MVS as a starting point brings good and important features to the table. And if those concerns turn out to cause significant pain in practice, there are still lots of ways we can iterate on that - from adding upper bounds (whether as part of |
It occurs to me that something outside of source control determining compatibility would change the determinism of an MVS system. If you have foo >= 1.5.0 as a requirement in one lib and another lib has foo >= 1.6.0, then a binary using those two will choose 1.6.0. In MVS this is all you need for a repeatable build: it'll always choose 1.6.0. But if you add external compatibility information to the mix, then you could update that first library to say it's not compatible with 1.6, and then the algorithm would choose 1.7, even though the code hasn't changed... which means you'd need a lock file again. For reference, I don't think a lock file is a bad thing. It's nice to have an explicit list of exactly what you need to build, and that should make it fast. No magic logic needed. |
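A minimal sketch of the deterministic step described above: MVS selects, per module, the maximum of the declared minimums, with no search involved. Module paths and versions are hypothetical; golang.org/x/mod/semver (a later package, used here only for version comparison) stands in for vgo's internal semver code.

```go
package main

import (
	"fmt"

	"golang.org/x/mod/semver"
)

func main() {
	// Minimum versions declared by the various modules in the build.
	minimums := map[string][]string{
		"example.com/foo": {"v1.5.0", "v1.6.0"}, // lib A wants >= v1.5.0, lib B wants >= v1.6.0
	}
	for mod, mins := range minimums {
		selected := mins[0]
		for _, v := range mins[1:] {
			if semver.Compare(v, selected) > 0 {
				selected = v
			}
		}
		// Always v1.6.0 for foo: repeatable with no lock file,
		// as long as nothing outside these declarations has a say.
		fmt.Printf("%s -> %s\n", mod, selected)
	}
}
```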
@natefinch If the application's go.mod file was updated to require v1.7.0 because the external compatibility tool indicated v1.6.0 was incompatible, you wouldn't need a lock file. Because the v1.7.0 requirement lives in the go.mod file, the author could also add a comment saying why v1.7.0 is being used, and that information would be useful to readers. |
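For instance, a go.mod along these lines (hypothetical paths and versions) records both the decision and the reason in one place:

```
module example.com/app

require (
	// v1.6.0 was flagged as incompatible by the compatibility tool;
	// require the first known-good release after it.
	example.com/foo v1.7.0
)
```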
@leighmcculloch If any files in the app are updated, then it is a different build, and it falls entirely outside the scope of the "reproducible build without a lockfile" problem. Out-of-band compatibility information is proposed to reflect how knowledge develops: no incompatibilities were known at release time, but later they become apparent, and extra information is published about already-released versions. IMHO, by definition this approach leads to a change in how dependencies are pulled; otherwise why have this extra incompatibility information at all? |
The point of the incompatibility information is for authors of libraries to communicate incompatibility information to their users. Whether that information is used at build time or at release time is a different question. In vgo's case, the current idea is to only display warnings (or potentially croak), but notably not to let it influence the choice of versions used (as that would require solving SAT). So it actually doesn't matter: you can use it at either or both, and it will fulfill its duty just fine while retaining the property of repeatability¹. In dep, this information is only used at release time and then recorded to a lock file, which is used at build time. So it seems that we are considering a release-time use "good enough" anyway, at least when it comes to concerns of vgo vs. dep. I still don't think we have to actually answer those questions right now, though. [1] Personally, I'd argue that using it at release time and only if |
@rsc wrote:
I'm wondering if it is necessary to record pair-wise incompatibility. The way I see it currently is that any incompatibility between module A@vN and module B@vM is really because B made an incompatible change from some version vL where L < M. If module B did not make an incompatible change, then module A just has a bug. If it did, then the issue is about B itself, not about the pairing of A and B. So ISTM that any public repository of module metadata can record only incompatibilities of any module with previous versions of itself, which may make the problem more tractable. These incompatibility reports are quite similar to bug reports, although they're not resolvable because once a version is published, it cannot be changed. When you upgrade your module versions, the go tool could consider the metadata and refuse to consider a version that's incompatible with any currently chosen version. I think this avoids the need to solve SAT. It could also decide that a given module has too many incompatibility reports and refuse to add it as a dependency. A set of tuples of the form (module, oldVersion, newVersion, description) might be sufficient. |
Of course, this doesn't work when you're adding several dependencies which between them end up requiring mutually incompatible versions, because the new versions aren't part of the existing module, but there might be a reasonable heuristic available. It's not crucial AFAICS, because dependencies should be added relatively rarely. |
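A sketch of what one of those self-incompatibility records could look like as data served by a metadata repository - the shape and field names are purely illustrative, not part of the proposal:

```go
package main

import "fmt"

// Report says: moving Module from OldVersion to NewVersion is known to break.
type Report struct {
	Module      string
	OldVersion  string
	NewVersion  string
	Description string
}

// veto returns the reports that rule out candidate as an upgrade from current.
func veto(reports []Report, module, current, candidate string) []Report {
	var out []Report
	for _, r := range reports {
		if r.Module == module && r.OldVersion == current && r.NewVersion == candidate {
			out = append(out, r)
		}
	}
	return out
}

func main() {
	reports := []Report{{
		Module:      "example.com/grpc",
		OldVersion:  "v1.7.0",
		NewVersion:  "v1.8.0",
		Description: "balancer API changed incompatibly",
	}}
	// The go tool could refuse (or warn) when an upgrade is vetoed.
	fmt.Println(veto(reports, "example.com/grpc", "v1.7.0", "v1.8.0"))
}
```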
I worry that |
Thanks for the energy so many of you have brought to Go and this proposal in particular. The first goal of the proposal process is to "[m]ake sure that proposals get a proper, fair, timely, recorded evaluation with a clear answer." This proposal was discussed at length and we published a summary of the discussion. After six weeks and much discussion, the proposal review committee - stepping in as arbiter because I wrote the proposal - accepted the proposal. A single GitHub issue is a difficult place to have a wide-ranging discussion, because GitHub has no threading for different strands of the conversation and doesn't even display all the comments anymore. The only way a discussion like this works at all is by active curation of the discussion summary. Even the summary had gotten unwieldy by the time the proposal was accepted. Now that the proposal is accepted, this issue is no longer the right place for discussion, and we're no longer updating the summary. Instead, please file new, targeted issues about problems you are having or concrete suggestions for changes, so that we can have focused discussions about each specific topic. Please prefix these new issues with “x/vgo:”. If you mention #24301 in the text of the new issue, then it will be cross-referenced here for others to find. One last point is that accepting the proposal means accepting the idea, not the prototype implementation bugs and all. There are still details to work out and bugs to fix, and we'll continue to do that together. Thanks again for all your help. |
There's more work to be done (see the modules label) but the initial module implementation as proposed in this issue has been committed to the main tree, so I am closing this issue. |
proposal: add package version support to Go toolchain
It is long past time to add versions to the working vocabulary of both Go developers and our tools.
The linked proposal describes a way to do that. See especially the Rationale section for a discussion of alternatives.
This GitHub issue is for discussion about the substance of the proposal.
Other references:
go get golang.org/x/vgo, the prototype implementation