
Tracking issue for RFC 2196, "metabuild: semantic build scripts for Cargo" #14903

Open
1 of 3 tasks
Centril opened this issue Apr 9, 2018 · 21 comments
Labels
A-build-scripts Area: build.rs scripts C-tracking-issue Category: A tracking issue for something unstable. Z-bindeps Nightly: binary artifact dependencies Z-metabuild Nightly: metabuild

Comments

@Centril

Centril commented Apr 9, 2018

This is a tracking issue for the RFC "metabuild: semantic build scripts for Cargo" (rust-lang/rfcs#2196).

Steps:

Unresolved questions:

None

@Centril Centril added T-cargo Team: Cargo C-tracking-issue Category: A tracking issue for something unstable. labels Apr 9, 2018
@withoutboats
Contributor

I wanted to register a concern on this RFC, but I didn't realize how quickly it would be approved.

I know that the goal of moving forward here is to make the manner in which cargo processes native dependencies more declarative and easier for other build systems to process. I 100% approve of that goal. I'd love to see a future where the difference between a dependency implemented in C and one implemented in Rust is essentially insignificant to the end user.

The RFC states:

At the same time, we don't yet have sufficiently precise information about the needs of such systems to design an ideal set of Cargo metadata on the first try. Rather than attempt to architect the perfect solution from the start, and potentially create an intermediate state that will require long-term support, we propose to allow experimentation with declarative build systems within the crates.io ecosystem, in crates supplying modular components similar to build.rs scripts.

As a nightly-only means of experimenting toward finding a long term solution to native dependencies, I am totally behind this RFC. In contrast, I feel a lot of concern about providing "metabuild" in this form as a stable feature because of the other ways this feature can be used.

I find the idea of declaratively listing crates in Cargo.toml and calling them when you cargo build very opaque. For use cases that are not literally doing what cargo build says on the tin (i.e. building a dependency), I am worried about this being a confusing feature that obscures how the build pipeline for a project works. Other kinds of build processing that happen during building ought, in my opinion, to appear as code somewhere in the project - that is, in the build.rs. For use cases other than building dependencies, I'm more in favor of code-gen solutions, which keep that step very discoverable to end users, than of a declarative system like this.


Without making any sort of "slippery slope" analogy, I want to share a frustrating experience I had with a Ruby on Rails project because of the multiple layers of opaque "declarative" build/exec processing that have developed in that ecosystem.

For me, the rspec command took around 10 seconds to initialize all of our app's dependencies before running tests, and I wanted to reduce that time. My problem was eventually solved by realizing my shell wasn't properly invoking the rails spring binaries, but I was trying to find a cleaner solution in which we just didn't initialize the entire application before running every test.

It took me quite a while to figure out how rspec even figured out it was supposed to load the application; eventually I discovered that the root directory had a .rspec file in it, which listed options that rspec appended every time it was run. Once I deleted that file, I got the boot time down to about 2 seconds, which was still far too long for not doing any work at all.

Eventually, I discovered that the 2 seconds were because I was using rvm to manage my Ruby versions: rvm dynamically injects some code into your rspec, ruby, and so on, so that calling rspec actually behaves like calling bundle exec rspec, and the 2 seconds came from processing our project's Gemfile.

In other words, when I ran rspec, at two different layers (in a dotfile in the project and dotfiles in my home directory), different programs were declaratively injecting behavior into my command, neither of which were designed to be discoverable and obvious.


To recap, I want to draw a clear distinction between building native dependencies and arbitrary build-time processing. I think it's completely correct for the first to be handled declaratively, even implicitly. But when it comes to executing arbitrary code at build time to do anything at all, I think it is important that it be obvious and discoverable what additional behavior is being run at build time. The build script solves this by having literal source code you can read. But having to spelunk into other repositories (if there are even repositories linked from crates.io for your metabuild dependencies) is a real step back in this regard.

@joshtriplett
Member

joshtriplett commented Apr 9, 2018

@withoutboats First, I do want to emphasize that the goal is to experiment in the Cargo ecosystem, not to immediately stabilize it. That was what ultimately led to moving forward with the RFC: the desire to enable that experimentation and development.

I do understand the concern about builds becoming more opaque. On the other hand, if you see a metabuild key pulling in lalrpop-build or similar, you can feel confident that a crate uses the standard lalrpop-build mechanism to build a parser, not something custom or non-standard. And if there's a bug or performance issue, it can be fixed in that one place, not in numerous copy/pasted/tweaked build.rs scripts.

So I do want to see every piece of build.rs functionality handled by metabuild crates, not just native dependencies. Those are just the most important target to standardize.

I don't think this obscures the build pipeline or makes it less discoverable, any more than having other functionality factored out into crates obscures the code using those crates.
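Concretely, the manifest side of that would look something like this (package name and version are placeholders, and lalrpop-build is the hypothetical crate from the example above; the metabuild syntax is from the RFC):

cargo-features = ["metabuild"]

[package]
name = "my-parser"
version = "0.1.0"
metabuild = ["lalrpop-build"]

[build-dependencies]
lalrpop-build = "0.1"  # hypothetical crate exposing a public metabuild() fn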

@raphaelcohn

To make builds truly reproducible and remove the sorts of issues @withoutboats experienced, one needs a reproducible build chain that is completely independent of the binaries on the host. That requires versioning even the smallest build dependency - the version of an autoconf m4 macro (not the generated configure), or a hardcoded reference to /bin/sh - and can be extremely challenging. It can be done - as my experimental Libertine Linux shows - but it's very hard indeed. The most important principle in getting it right is to always cross-compile - even the build toolchain.

For a more general build system, features make it even harder. Take the DPDK project or the rdma-core library - they have so many ways of being built that there's no sane way to abstract over them in a way that supports more than a very narrow subset of uses.

@withoutboats
Contributor

@raphaelcohn I don't see the connection between reproducibility and the issue I was talking about - discoverability.

@raphaelcohn

raphaelcohn commented Apr 11, 2018

"dotfiles in my home directory". Something which is not reproducible is not easily discoverable.

@ehuss
Contributor

ehuss commented Jun 4, 2018

Is anyone working on this? I have some free time and am willing to help.

@aturon
Member

aturon commented Jun 6, 2018

@ehuss not that I know of; it'd be great to see some action here! cc @joshtriplett

@ehuss
Contributor

ehuss commented Jun 11, 2018

Thanks @aturon. I have posted a preliminary PR at #5628.

@ehuss
Contributor

ehuss commented Sep 10, 2018

This is now available on nightly (documentation here). Some things that probably should be decided before stabilizing:

  • Should there be a structured way to pass metadata directly to the script so it doesn't need to parse the manifest?
  • Should there be a more explicit way to indicate errors?
  • How should cargo metadata behave? Currently it includes a metabuild key in the package, but the metabuild target is hidden. This is my preference, but I'm not sure if that will confuse any tools to have a hidden target.
  • How should "build plan" behave? Currently it generates the metabuild script if necessary and includes instructions on how to build it. I think this is the best approach, since without that information I think it would be almost impossible to use crates with this feature with external build systems. However, it is a little strange to have a mostly internal implementation detail exposed like this.
  • How should JSON artifact messages behave? Currently the metabuild target is included in the JSON artifact message, along with the path to the internal file (target/.metabuild/metabuild-$PKG-$HASH.rs). I'm not sure what the use cases are for the JSON artifact messages, so I'm not sure if it should be hidden or not.
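For reference, the contract on the metabuild crate side is small: each crate listed in package.metabuild must expose a public metabuild() function, which Cargo's generated wrapper script calls in order. A minimal sketch:

// In the metabuild crate: behaves like a build script body,
// emitting ordinary cargo: directives on stdout.
pub fn metabuild() {
    println!("cargo:rerun-if-changed=build-input.txt");
}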

@nrc
Member

nrc commented Jan 14, 2019

Has anybody tried using metabuild? Does anybody have any feedback for us?

@MaulingMonkey

MaulingMonkey commented Sep 11, 2019

One result for cargo search metabuild so far: https://crates.io/crates/cdylib-link-lines . On the RFC subject of parsing Cargo.toml, https://crates.io/crates/cargo_metadata has treated me well so far.

I really want something like metabuild, but build.rs scripts aren't viable for a lot of my needs - and by extension neither is metabuild. What I need is more along the lines of a build workspace project, to be invoked a la cargo run build -- [...] instead of cargo [...]. I've been mulling over writing a custom cargo subcommand (bonus points: could work on stable) that would effectively generate this project in a similarly compositional manner to metabuild.

I'm not sure if this feedback is in-scope for metabuild itself, but I'll lay out some example scenarios, none of which seem well covered. I've been handling them with non-portable windows .cmd scripts up until now, which is terrible. First, the TL;DR version:

  • No clean way to suppress the default behavior of invoking rustc
  • No clean way to invoke stuff after rustc
  • Not super clean/obvious how to hook into cargo test or other subcommands properly. Since that uses separate profiles, maybe those could be detected/used...?

And then some of the concrete examples:

  1. I want to run a single consistent build command, instead of running / installing:

    • cargo web build for stdweb projects
    • wasm-pack for web-sys projects
    • cargo dinghy build for android projects
    • cargo build for regular desktop projects
    • Most of the above, possibly in different subdirs, for a single cross-platform project
    • Various inconsistent and incoherent flags to all of the above to specify debug vs release profiles
    • Various inconsistent and incoherent subcommands of the above for build vs test vs install vs ...
    • Setting up travis/appveyor/vscode to properly handle all of the above on a per-project basis is lame
  2. I want to run a regular cargo build... followed by embedding windows manifests, icons, code signing, packaging into zips, etc.

  3. I want to run multi-stage builds

    • E.g. in https://github.com/MaulingMonkey/jni-bindgen , I want to:
      • build+run jni-android-sys-gen, which generates/modifies the Cargo.toml for...
      • build jni-android-sys, now that its Cargo.toml is updated.
    • In a game, I want to:
      • build/run asset generation tools (texture/shader compilers, etc.)
      • build the game, possibly embedding generated assets.
        Partially addressable with procedural macros...? Not super cleanly though.

@matklad
Member

matklad commented Sep 11, 2019

@MaulingMonkey I think this indeed goes well beyond the scope of the metabuild. I think what you want here are cargo workflows, or cargo tasks. As far as I know, they didn't get past the vague ideas stage.

@Kixunil

Kixunil commented Aug 21, 2021

I'd like to give some initial feedback on this:

I wasn't aware of this feature until now, when I was about to propose almost the same thing. I think it may lack visibility in the community, which is also the reason there are so few experiments. Now that I know, I intend to implement metabuild() in configure_me_codegen ASAP. Edit: done. :)

It will be trivial for me because I already use a pattern similar to what is proposed here - basically, just calling configure_me_codegen::build_script_auto() from build.rs - and I do read metadata from Cargo.toml.
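For reference, that pattern makes build.rs a one-liner (a sketch; the exact error-handling API of configure_me_codegen may differ):

// build.rs - delegate everything to the codegen crate, which itself
// reads [package.metadata] configuration from Cargo.toml.
fn main() {
    configure_me_codegen::build_script_auto()
        .expect("configure_me code generation failed");
}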

Considering that I already do something that's almost identical to what's proposed here (that being the very reason I wanted to propose the same thing), I believe my feedback is worth taking into account even though I haven't actually used metabuild yet.

There was a concern in the RFC PR that parsing the cargo manifest is too complicated. In my experience it was completely fine. I do agree that serde increases compile time, but I needed it anyway, and I believe this would be better solved by being able to cache built metabuild binaries. It should even be possible to cache them across Rust versions (but there are crates that detect versions; more on that later). If there's a desire to decrease the friction here, then maybe instead of calling metabuild() with no arguments, Cargo could pass in an impl serde::Deserializer which the author can use to deserialize an arbitrary struct. This does obviously introduce a dependency on serde, but I think that would be fine, as serde is one of the most used and best crates in the ecosystem. I personally consider it my favorite.

Regarding discoverability, I don't see a difference between declaring a dependency and having a build.rs file which calls dependencies (most crates use cc instead of reimplementing compilation themselves). I even have a case where a file related to the configuration of my crate was too discoverable - in one project we added a configuration specification file and multiple people thought it was a configuration file and messed things up. Eventually we had to move this file to an internal directory with a README saying "This is for developers only, not users" and add an explicit configuration example. :)

A quick experiment just now showed that this feature, as-is, is not backwards-compatible - older versions of Cargo will reject the metabuild key. I would very much like it to be compatible - that means older Cargo versions would not know the key and would emit a warning that it's unknown, but build.rs could still be present to reimplement the same thing. The reason someone may want to support both build.rs and the Cargo.toml key is that future iterations of this feature could provide some additional options for people using them - see below.

There's one more reason I wanted to propose the same feature: separating library-build scripts from codegen scripts. A real-world example is the rocksdb-sys crate, which uses both bindgen and cc. Today Cargo supports overriding build scripts. The documentation says:

With this configuration, if a package declares that it links to foo then the build script will not be compiled or run, and the metadata specified will be used instead.

However, this is broken in practice, because if you override the build script for rocksdb-sys, the bindings will not be generated and the code fails to compile. rocksdb-sys thus has to rely on environment variables to use system libraries. This works, but is non-standard. This feature could resolve it if there were some way of marking a dependency as either codegen or builder.
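For reference, the override mechanism being discussed lives in .cargo/config.toml and looks roughly like this (target triple and paths are illustrative):

# Overrides the build script of any package declaring `links = "rocksdb"`.
# The script never runs at all - so bindgen never generates the bindings.
[target.x86_64-unknown-linux-gnu.rocksdb]
rustc-link-lib = ["rocksdb"]
rustc-link-search = ["/usr/lib"]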

I also guess that adding a dependency to both metabuild and build-dependencies will be annoying. I don't know right now.

A way to kill three (!) birds with one stone: instead of adding the key to package, allow a metabuild key in build-dependencies. It could be set to "codegen" or "compile-lib" (maybe more in the future). codegen scripts would be guaranteed to run before compile-lib scripts, and compile-lib scripts would be overridden by rustc-link-lib settings, while codegen would not be overridden.
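A sketch of that manifest shape (entirely hypothetical syntax and crate names, not implemented anywhere):

[build-dependencies]
bindgen-helper = { version = "1", metabuild = "codegen" }     # always runs, never overridden
cc-helper = { version = "1", metabuild = "compile-lib" }      # replaced by rustc-link-lib overrides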

There was an argument that build scripts cannot be made declarative because of their many quirks. Interestingly, it may be an argument for this feature if categorization is implemented: one would get the library from system packages and only run codegen, which can be declarative. Maybe cargo could support some kind of shim to find the OS library automatically.

In the future I could imagine being able to pre-install binaries from trusted sources (or compile them once myself) and then instruct cargo to only use them and not whatever a crate pulls in. Combined with some proc macro sandboxing this would fix the security issues around executing evil build scripts.

Finally, and this is likely orthogonal, I'd like to have a way of specifying that a certain codegen script generates code that uses a specific library as a dependency - this is the case in configure_me, which is a convenience library that re-exports a bunch of stuff and is called from the code generated by configure_me_codegen. I mention this mainly for completeness, in case there are some important interactions with this feature.

@Kixunil

Kixunil commented Aug 21, 2021

Report from testing:

  • I realized I return Result from my function - maybe we should make the metabuild function return Result<(), Box<dyn std::error::Error>> (or some other type) too? See the sketch after this list. Edit: I no longer return Result because I report rich errors using codespan_reporting.
  • I noticed I use a simple internal script for test cases. It could be kept as build.rs, but maybe others will want to mix such scripts with external crates?
  • I found out that if build.rs is present then compilation fails, while the RFC says it should be ignored. I would like to register this as a bug.
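A sketch of the fallible signature suggested in the first bullet (the current nightly contract is a plain fn metabuild()):

// Hypothetical: let metabuild implementations propagate errors
// instead of panicking or hand-rolling their own reporting.
pub fn metabuild() -> Result<(), Box<dyn std::error::Error>> {
    // ... generate code, emit cargo: directives ...
    Ok(())
}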

Other than that it seems to work fine. :)

@boozook

boozook commented Apr 27, 2023

I've found one problem with dependency custom names.

cargo-features = ["metabuild"]

[package]
metabuild = ["foo-bar"]

[build-dependencies.foo-bar]
path = "../build"
package = "my-build"

This way we get no error or warning, and foo-bar::metabuild() does not run.
Instead, this example package that uses metabuild is built as if it had no build script or metabuild at all.

@matklad
Member

matklad commented Aug 20, 2023

A relevant PEP: https://peps.python.org/pep-0725/

@Kixunil

Kixunil commented Oct 14, 2023

@matklad I don't see how, it talks about specifying dependencies, not running them.

@epage epage transferred this issue from rust-lang/rust Dec 6, 2024
@epage epage added C-tracking-issue Category: A tracking issue for something unstable. A-build-scripts Area: build.rs scripts Z-metabuild Nightly: metabuild and removed T-cargo Team: Cargo C-tracking-issue Category: A tracking issue for something unstable. labels Dec 6, 2024
@epage epage added the Z-bindeps Nightly: binary artifact dependencies label Dec 6, 2024
@epage
Contributor

epage commented Dec 6, 2024

Since this RFC was created...

We now have bindeps / artifact deps, and I feel that pulling in metabuild dependencies as artifact deps, rather than rlibs, would be better, as it allows a shared build script to be compiled once. Yes, the rlib gets compiled once, but we then still have to have the wrapper pull it in, and that can be significant, depending on the profile settings.

You can now use dep::main; to define a main function.

We have a request for being able to run unit tests (#9942). If build scripts were dedicated packages in a workspace, then they would get that "for free". Granted, this then requires publishing the build script package.

There is more scrutiny of supply chains and auditing of code that gets run during builds (e.g. #5720 is talked about frequently in the community). Allowing us to audit only a common set of shared build scripts, and not even the wrapper scripts in user programs, would be a big help.

There is a continued emphasis on build times, and I would be concerned about this needing a toml parser, serde, and enough of the manifest schema to make this work. That could very much impede this proposal being adopted in core crates. Yes, we could do intermediate solutions like pulling out package.metadata, serializing it to JSON, and passing that into the build script via an env variable, but that still requires serde and serde_json.
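The intermediate workaround mentioned above would look roughly like this on the build-script side (CARGO_PKG_METADATA_JSON is a hypothetical variable name, and the point stands: the script still needs serde_json):

// Even with Cargo doing the TOML-to-JSON conversion, the build script
// still pulls in a JSON parser to consume the result.
fn main() {
    let raw = std::env::var("CARGO_PKG_METADATA_JSON")
        .unwrap_or_else(|_| String::from("{}"));
    let metadata: serde_json::Value =
        serde_json::from_str(&raw).expect("metadata env var should be valid JSON");
    println!("cargo:warning=package.metadata = {metadata}");
}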

@epage
Contributor

epage commented Dec 6, 2024

A counter proposal to the current metabuild design, broken out into milestones

Step 1: metabuild polyfill

Extend package.build to accept a list of build scripts

  • The user can define build scripts that are just use foo::main;
  • This also allows more purpose-built build scripts to audit, limit re-runs, etc
[package]
build = ["build-foo.rs", "build-bar.rs"]
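Each listed script could then be a thin wrapper over a shared routine, as mentioned above (foo is a placeholder build-dependency exporting a main function):

// build-foo.rs - re-export the shared routine as this script's entry point
use foo::main;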

This would allow some experimentation with this approach without fully defining the interface

Note: no auto-discovery of multiple build scripts is being considered because the end-goal is to shift the focus to metabuild

Step 2: metabuild polyfill arguments

Extend package.build to define arguments and/or env variables to be passed to the build script

[[package.build]]
path = "build-foo.rs"
args = ["--foo=bar"]
env = { FOO = "bar" }

The focus is on providing a low-level mechanism that users can do what they want with without much compile-time overhead.

Ideally, args would be parsed with a CLI parser, which can pull in some bulk. Small parsers like lexopt help. Custom test harnesses (rust-lang/testing-devex-team#2) will also need a parser, so if we encourage reuse between them, then that will reduce the pain.
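A minimal sketch of a build script parsing the proposed args with lexopt (--foo is the hypothetical flag from the Step 2 example above):

// build-foo.rs
use lexopt::prelude::*;

fn main() -> Result<(), lexopt::Error> {
    let mut foo = None;
    let mut parser = lexopt::Parser::from_env();
    while let Some(arg) = parser.next()? {
        match arg {
            Long("foo") => foo = Some(parser.value()?.string()?),
            _ => return Err(arg.unexpected()),
        }
    }
    // forward the setting to the crate being built
    println!("cargo:rustc-env=BUILD_FOO={}", foo.unwrap_or_default());
    Ok(())
}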

This would allow more build scripts to experiment with this approach

Step 3: metabuild

Blocked on artifact dependencies

Add a dependency key to the [[package.build]] table

  • <dep> must be a build-dependency that includes artifact type bin
  • <dep> is skipped if the dependency is optional and not activated
[[package.build]]
dependency = "foo"
args = ["--foo=bar"]
env = { FOO = "bar" }

If the full table is not needed, maybe have a package.build = "dep:foo" shorthand

Evaluation

Benefits

  • We can gradually stabilize parts of this approach to allow experimenting with future parts
  • Input to build scripts is tied to the build script definition
    • Organizationally nicer
    • Makes it clear that configuration is for a build script and not a random tool
    • Better draws the eye to something weird happening here during builds, making this less obfuscated
  • Low compile-time overhead for a build script to meet the contractual obligations of this design
  • Lower compile-time overhead by removing the need for wrapper build scripts

Downsides

  • Does not allow arbitrary configuration unless the user encodes it somehow in args or env variables
    • We should probably define the use cases for arbitrary configuration before optimizing for it

@joshtriplett
Member

I love the concept of this. Steps 1 and 3 together seem like a great design.

For step 2, I'm wondering if we need both env and args, particularly if cargo already passes all the environment variables it normally does to a build script.

Could we drop args and the fully general env support, and instead define a narrower config mechanism that we pass through via either the command line or the environment (one or the other, not both)?

For instance:

[[package.build]]
dependency = "foo"
config = { foo = "bar" }

We could either pass this through the environment as CARGO_BUILD_CONFIG_foo="bar", or pass it on the command line as foo=bar. Either way, constraining this somewhat seems like it'd give a simpler result and clearer guidance, rather than a completely general channel for arguments and environment variables.
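On the build-script side, the environment route would read about like this (CARGO_BUILD_CONFIG_foo is the proposed, not implemented, naming):

fn main() {
    // Proposed mechanism: Cargo would export each `config` key as an env var.
    if let Ok(foo) = std::env::var("CARGO_BUILD_CONFIG_foo") {
        println!("cargo:rustc-env=FOO={foo}");
    }
}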

We could, alternatively, do the same thing for passing in the contents of (some subset of) the metadata table. The downside of doing that would be if people had substantial content in their metadata table already and didn't want or expect it to be passed to their build scripts. The advantage and disadvantage of using metadata would be sharing it amongst build scripts; that would avoid duplication but require scripts to coordinate their interpretations of common elements.

On balance I think we should include it under package.build.

@epage
Contributor

epage commented Dec 9, 2024

I had both env and args as I was intentionally going very low level and decided to give the user all the knobs (well, except stdin). I'm fine dropping args.

As for env, I had considered going fancier. My questions are

  • Is it worth the design trade off?
  • How much do we need to design before moving forward?

I was hoping by going with env we could keep the design costs down and get this moving forward.

For benefits to a higher-level config, I saw little. I didn't see basic validation getting users much, since the build script will validate anyway. However, in thinking more on it, what we can get is

  • Making the config a closed set, catching typos
  • Potentially auto-generating documentation
  • Some basic semver checking across releases

The main cost I thought of was users dealing with the translation of the config to env variables (prefixes, case conversion).

However, if we are to get those benefits I named, we also need to define a set of parameters for a build script. We'd need a name for the table that has a clear role separate from features and whatever we call mutually exclusive features. It might even be per-[[bin]]? Maybe we can cheat and instead make the config be a field that uses the same syntax as check-cfg? If so, we might want it to be an array of strings, because merging all of those into one string could get messy. Either way, would this format complexity be worth it for a build-script interface, an already low-level mechanism that is a bit off in the weeds?
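If the check-cfg-style route were taken, the declaration might read like this (purely hypothetical syntax, reusing rustc's cfg(name, values(...)) grammar):

[[package.build]]
dependency = "foo"
config = ['cfg(debug_logging)', 'cfg(backend, values("openssl", "rustls"))']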
