Overrides that can be checked in to version control #460
What exactly are you proposing? I can see some benefit to being able to pin the rustc version, although I'd hope that the backwards compatibility guarantees would make that unnecessary. However, if you plan to pin the toolchain, that seems very unhelpful for anyone trying to build the project on a different host.
I'm proposing to move the configuration of "overrides" into source control. The only difference between the current overrides and this would be that every developer working on the library or package is forced to conform and build consistently.
Different hosts can have conforming build environments; hence Docker et al. Meanwhile, you're right:
If you're going to use some form of virtualization/containerization (eg. docker) to get a consistent build environment regardless of the host then it seems like it would make sense to include the rustup/rust installation in that container?
Pretty much any open source project will have people contributing from a variety of build environments (with the exception of virtualized/containerized builds), and in the remaining cases you can always have a setup script which installs the appropriate override. Basically I don't see this being widely applicable, and I think it's more likely to be misunderstood as a feature, with people inadvertently checking in their overrides.
Totally agreed! Is rustup only meant for open source projects?
No, hence the second part of my comment: there are a select few cases where this would be useful (closed source projects which support only a single build target but don't use a containerized/virtualized build system), and in those cases there are already readily applicable solutions that aren't really any more effort (for example, checking in a setup script or post-checkout script that adds the relevant rustup override).
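The kind of checked-in setup script being described here could be as small as the sketch below. It is written as a dry run that only prints the command it would execute, so it is runnable without rustup installed; the pinned toolchain name is purely illustrative, while `rustup override set` itself is a real rustup subcommand.

```shell
#!/bin/sh
# Illustrative pin; a team would choose its own channel or dated nightly.
PINNED="nightly-2017-03-01"

# Build the rustup invocation; echo instead of executing, as a dry run.
CMD="rustup override set $PINNED"
echo "$CMD"

# A real post-checkout hook would run the command instead of printing it.
```

A team would typically invoke such a script from its existing bootstrap tooling, which is exactly the per-team maintenance burden the rest of this thread argues about.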
Sure, this is very possible. However, why should this setup script not be solved by the tool meant to do Rust toolchain management? Why should this relatively common use case require each team to create a custom script? Would you also be in favor of a setup script that managed your crate dependencies (it would be extremely simple to implement: curl into the appropriate dir), rather than Cargo.toml? I very much agree that a script to set up rustup overrides is a lot easier to maintain than a script to set up crate dependencies. However, it is still one more thing each team has to maintain. At a fundamental level, the same reasons to have a consistent way to manage crate dependencies across a team should apply to a consistent way to manage toolchain dependencies across a team. Does that make sense? If not, why do we have a disconnect? I'm assuming you prefer Cargo.toml to a custom setup script. If that's true, I'm very curious: why do you prefer that rustup users build a custom script, when Cargo users shouldn't have to?
Setting the override declaratively is something supported by rbenv by writing to |
Because the Cargo.toml is completely platform independent, whereas a rustup.toml would be extremely platform specific. I feel there's a strong likelihood of it being used improperly. Additionally, it would result in some new Rust version being silently installed on your system, which is not something I'd want to happen automatically. If we do this, I'd at least like to see it broken down by host architecture, so that for each host target, you specify a corresponding toolchain to use: if your host architecture is not present then it would not try to override your default toolchain. The host architecture it would match against would be that configured by the "default host" setting.
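If the per-host breakdown suggested here were adopted, the checked-in file might look something like the following. This is an entirely hypothetical file name and schema, sketched only to illustrate the proposal; none of these keys are real rustup configuration.

```toml
# Hypothetical rustup.toml schema: map each host triple to a toolchain.
# Hosts that are not listed would keep their default toolchain.
[toolchain.by-host]
"x86_64-unknown-linux-gnu" = "nightly-2017-01-20"
"x86_64-apple-darwin" = "nightly-2017-01-20"
```

Under this scheme a contributor on an unlisted host would simply see no override, addressing the "silently installing a toolchain" concern at the cost of the file no longer being platform independent.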
The std-aware cargo RFC looks like it might incidentally add a type of version pinning: if you write |
I find myself needing to re-invent this particular wheel for emk/heroku-buildpack-rust, as discussed in emk/heroku-buildpack-rust#11, which needs to know what Rustup channel to use for deploying a given git repo as a Heroku application. I was considering using
Obviously, if multiple tools are going to need something like this, it would be better to standardize it in one place rather than re-inventing it everywhere.
My big concern is making sure that different developer machines are running consistent versions. Suppose a bug is fixed in a version of Rust and I update my local copy of Rust to avoid it. Then I push out some code changes and another developer on the team accidentally forgets to update it. Then they make a build with the buggy version of Rust. I'd like to be able to at least enforce a minimum Rust version.
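For later readers: the "enforce a minimum Rust version" part of this wish was eventually addressed separately from rustup, by Cargo's `rust-version` manifest field (stabilized in Rust 1.56, long after this thread). A sufficiently new Cargo refuses to build a crate whose `rust-version` exceeds the active toolchain:

```toml
[package]
name = "example"
version = "0.1.0"
edition = "2021"
# Minimum supported Rust version (MSRV); Cargo 1.56+ enforces this
# and errors out when the active toolchain is older.
rust-version = "1.56"
```

Note this only sets a floor; pinning an exact toolchain remained the job of the rustup override mechanisms discussed in this thread.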
@brson Kind of off topic: I know we have had this conversation before, but given the increasing interest in version pinning in general, and version pinning through cargo, are you reconsidering having rustup and cargo be separate tools[0]? [0] I know, rustup has a lot to do with toolchains and not just std (or compiler) versions. Meanwhile, the linked RFC doesn't currently discuss pinning toolchains, nor toolchains at all. However, for me as a user, toolchain version, compiler version, and stdlib version (or opting out of std) feel like very similar concepts that I'd prefer to manage in one place.
Overrides with a local file would solve many issues with Servo and related projects. |
My comment from the duplicate issue:
In short:
I'm inclined to do this in the same way rbenv seems to:
- @Diggsey's point about not checking in the target triple makes sense. I'm not sure if it's important to stop people from doing that, but if the file only contains a channel name, not a full toolchain name, it should be correct by construction. Channels include
- We need to understand how
- To start with, we don't add any explicit support for this to the rustup CLI. Just get it out there for the cases where people really need it; the CLI could be expanded in the future to work with this file.
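A minimal sketch of the file described above: a `rust-toolchain` file at the project root containing nothing but a channel name on a single line (the file name matches what rustup later shipped; treat the exact contents as illustrative).

```
nightly-2017-03-01
```

Because the file names only a channel, rustup resolves it against the host's default target triple, which is what keeps the checked-in file platform independent and "correct by construction".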
Yes, right now I'm considering implementing this limited feature entirely in rustup to get it out there. The various proposals to put Rust version numbers in Cargo.toml are intertwined with lots of difficult design decisions, and don't seem close to realization.
Rather, I think that
As a side effect, even if
What about |
Ah, that's a good point.
Ah that's a good idea. It has parallels to the decision to put rustup's own config in ~/.rustup instead of ~/.cargo (which I would like to revisit). In that light it seems like it might be more consistent to keep rustup configs separate. I wonder if other tools are using the .cargo folder. |
@SimonSapin which toolchains does Servo want to use this for? It strikes me that Servo is using "alternate" Rust builds that are not named release channels, and I'm not sure how, or whether, to represent such a thing here.
@brson I think dealing with alt toolchains at all in rustup is a separate issue, namely #1099. I’ve also opened https://internals.rust-lang.org/t/disabling-llvm-assertions-in-nightly-builds/5388 (which, for everyone else, has background on what "alt" builds are). To fully answer your question: Servo’s bootstrapping scripts (which I understand we’d like to replace with rustup eventually) download and manage two Rust compilers, both pinned to specific versions (this is the part relevant to this issue) that we can upgrade when we feel like it. One from the Stable channel (e.g. For the latter, we used to pick a date from the Nightly channel. We switched to "alt" builds when they were made available. Since they are built for every pull request, they’re effectively following the
When working with a team of developers, it is most effective to be building/testing an application consistently.
Hardcoding dependency versions "in code" (managed/shared through version control) has worked out well for all other dependency management. Primarily for reproducibility locally, and across a team.
I can't think of a reason why this same strategy wouldn't work well for toolchain dependency management.