Add experimental api `crate_deps` and `all_crate_deps` macro for gathering package dependencies #319
Conversation
I haven't conducted a really serious review yet. A couple of questions/suggestions about the generation of the bzl file.
@@ -0,0 +1,183 @@
# A mapping of package names to a set of normal dependencies for the Rust targets of that package.
What kind of stability guarantee are we making for these maps? What about the functions? Their return types? Their argument sets?
I don't have strong opinions about any of these (except a general bias toward "stable" and "thought-out").
I don't think I quite understand. The functions have a full docstring explaining what they're expected to produce, with explicit failure modes (one or two of which might be too aggressive). These maps should always be present in this file, and I think bucketing the cargo dependencies into bazel packages works quite nicely, since the definition of both a cargo and a bazel package is directory based (more or less so in the case of cargo, but it definitely holds for this case).
You're asserting here that we will have these values in this file indefinitely, but that's clearly not true. In the other comment you weren't sure whether or not users would even need the map you're providing.
Values that are being provided here are essentially part of our public api and will basically need to be preserved indefinitely (or, a migration approach provided if we find the need to change this).
The question remains -- what type of stability guarantee are we providing about the build macros here, about their arguments, and about their return types. If you're not sure, consider annotating these "experimental", so that we can break them later with impunity.
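For context, a minimal sketch of the kind of map being discussed, written in plain Python standing in for Starlark. All package paths, dependency names, and labels here are hypothetical, as is the `crate_deps` lookup itself; this is an illustration of the shape, not the generated output:

```python
# Hypothetical sketch of the map cargo-raze might render into the bzl file.
# Keys are Cargo package paths relative to the workspace root (the empty
# string is the root package); values map dependency names to Bazel labels.
_DEPENDENCIES = {
    "": {
        "log": "@raze__log__0_4_0//:log",
        "serde": "@raze__serde__1_0_0//:serde",
    },
    "tools/gen": {
        "clap": "@raze__clap__2_33_0//:clap",
    },
}

def crate_deps(deps, package_name=""):
    """Look up Bazel labels for the named Cargo dependencies of one package.

    Fails loudly on unknown packages or dependency names, mirroring the
    explicit failure modes the docstrings in the PR describe. (Starlark
    would use fail() rather than raising an exception.)
    """
    if package_name not in _DEPENDENCIES:
        raise KeyError("no such package: %r" % package_name)
    table = _DEPENDENCIES[package_name]
    missing = [d for d in deps if d not in table]
    if missing:
        raise KeyError("unknown dependencies: %s" % ", ".join(missing))
    return [table[d] for d in deps]
```

The directory-based bucketing means each Cargo package's dependencies can be looked up by the package's path within the workspace.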
impl/src/context.rs (outdated)
@@ -136,6 +137,7 @@ pub struct CrateContext {
pub source_details: SourceDetails,
pub sha256: Option<String>,
pub registry_url: String,
pub is_proc_macro: bool,
Is this the right place for this? IIUC, "proc_macro"-ness is a property of a target within the crate, not the crate itself. I expect that a proc macro should contain a target with the corresponding kind.
This is meant to go along with `pub lib_target_name: Option<String>`, which is currently set by both `lib` and `proc-macro` targets, but the distinction is important to be able to identify later. The placement of that variable doesn't quite convey that, though, so I can update it if you think this is something that could stay.
I've updated the placement of this variable and made a comment. Hopefully this makes it more clear.
@acmcarther some questions for you.
Perhaps I shouldn't say this, but I feel based on this and prior discussions, that you take the concept of maintaining API compatibility too lightly. On prior occasions I've had to make the point against making sudden breaking changes, and I think here you're underestimating the new type of commitment-to-stability being made by generating this bzl file.
Perhaps I am too paranoid about this type of thing (or perhaps I've been traumatized too often by maintainers of upstream tools with this same attitude).
Part of what might be accomplished by splitting the map generation out from the functions is that it makes clear that there's another option here -- move the functions into a bzl library which can be included and versioned separately. Despite my pushback about the map public/private-ness, I think we can much more easily make guarantees about the stability of the map than we can about the stability of the generated functions. Moving those functions into a separately versioned bzl library allows end users to deal with breakages there independently of the raze updates.
@acmcarther I think you've lost me here. I don't understand the motivation for splitting the maps into their own files, and I don't understand what you're referring to with the "experimental" annotation. To me, the giant banner in the README stating this is not a Google product communicates that there's some volatility here. I don't believe the maps are significant to users, and I think it's better to make them private (I just wasn't originally confident enough to assert that). I think my confusion primarily stems from my assumption that the information being put here is exactly the same information that's being rendered into the neighboring BUILD file in the form of aliases. Perhaps you could elaborate on your concerns about stability?
The use case here is users might want to stub out all the dependency sections of their crates such that subsequent runs of `cargo raze` will automatically add new dependencies.
There are two questions here:

1. What is the value in splitting the generation of the crate maps out from the rest of the bzl file? Described above.
2. What is the concern about "api stability" that I keep harping on about? This PR introduces new API surface area. As a general practice, when introducing API surface area, you have to be careful because users may begin depending on it immediately, and it can limit future evolution of the software. A guideline I follow is to add the minimal api surface area that works for the user requirements, only when users actually need the new behavior. Though this contains the same "data" as what is in the generated build files, it represents a new mechanism (by more than one definition of mechanism, I think) by which end users can access and depend upon this data.

I would prefer if you distinguish in your reply between the arguments above not being convincing, versus them just not being clear. I think repeatedly in this PR you've meant the former, but you've expressed it as the latter.
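To make the split being proposed concrete, here is a hedged sketch of `all_crate_deps` in plain Python standing in for Starlark. The map, labels, and function body are all illustrative assumptions, not the generated code:

```python
# Hypothetical: in the proposed split, this map would be the generated,
# stable artifact, while the function below could live in a separately
# versioned bzl library that merely consumes it.
_DEPENDENCIES = {
    "": {
        "log": "@raze__log__0_4_0//:log",
        "serde": "@raze__serde__1_0_0//:serde",
    },
}

def all_crate_deps(package_name=""):
    """Return every dependency label recorded for a package, sorted."""
    if package_name not in _DEPENDENCIES:
        raise KeyError("no such package: %r" % package_name)
    return sorted(_DEPENDENCIES[package_name].values())
```

Under this arrangement, a breaking change to the function's signature or return type would be a version bump of the library, decoupled from regenerating the map.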
@acmcarther Heads up, just rebased. That will be the only time I do this for this PR.
re: #319 (comment) Everything up until that comment was not clear to me. I replied based on my understanding of what you wrote but wasn't sure if we were on the same page. Related to both questions, I find it incredibly difficult to understand the impact of any change to this tool. From what I've gathered, other contributors are the only other users who use the latest releases. Releasing a new version with some change in behavior shouldn't break companies using this, since they can easily install the tool with a pinned version.

My question from here is: how can I frame this API as experimental such that there's still flexibility to change it, and I don't have to add new functionality to copy static files vs rendering semi-static templated ones, until there's more confidence in this API?
Ah, I think I see what you're saying, and why we differ in our assumptions about impact.

**On the philosophy**

I think generally I have begun (since the ancient times of yore, whence the first third party contributors appeared) to assume that some (ill advised) organizations have built nightmare scaffolding[1] upon what this tool generates. In practice, I've heard mixed signals from folks (some, such as Marco, expressing that they've been able to deal with incompatibilities easily; some from internal teams expressing appreciation that I keep an eye out for breaking changes).

**On unblocking this change**

I don't know if this is a standard practice, but my suggestion above (re "marking this experimental") can mean one of two things. Either extend the bzl macro documentation to say "EXPERIMENTAL -- MAY CHANGE AT ANY TIME", or prefix the function with "experimental". There may be a third option which is more conventional... I'm not experienced enough with bzl library maintenance to know for sure.

How do you feel about going with the "mark this experimental for now" approach? I like this because I feel less bad if users depend upon it and then we break them.

[1] Meaning incredibly complex bzl functions, code generators, and other shenanigans, likely not maintained by the original author. My day job is feature development for a C++ product, but I actually spend a lot of time dealing with 10+ year old "nightmare scaffolding" -- where we're just the middle layer between our dependencies and internal users. It's terrible!
re: #319 (comment) I'm happy to update the documentation. I don't like changing the names of the macros, only because I think
re: #319 (comment) @acmcarther Updated the API and added some comments. I hope that's what you wanted; if not, looking forward to your reply 😄

@acmcarther Some updates and some questions for you.

@acmcarther Just a heads up, I gated this behind a new raze setting
Changed the title: "`crates` and `all_crates` macro for gathering package dependencies" → "`crate_deps` and `all_crate_deps` macro for gathering package dependencies"
I think the examples need to be regenerated (I see an instance of "all_crates"). Otherwise LGTM.
@acmcarther I thought I updated the examples already. I'm not able to find an outdated name.
Looks updated now.
A new experimental API has been added for accessing crates. If `experimental_api = true` is set in `RazeSettings`, then the following macros will be rendered into the `crates.bzl` file for both Remote and Vendored genmodes. Examples of the macros can be seen below.

This should make it easier for users to manage their dependencies in situations where they have multiple `Cargo.toml` files in their workspace (i.e. they're using cargo workspaces).
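As a hedged illustration of the intended usage, the sketch below (plain Python standing in for Starlark; the load path, rule, labels, and map contents are all hypothetical) shows the "stub out the deps list" pattern the PR describes, where new dependencies appear automatically after the next `cargo raze` run:

```python
# In a BUILD file, one would load the macros from the generated file,
# e.g. (hypothetical path):
#
#   load("//cargo:crates.bzl", "all_crate_deps", "crate_deps")
#
#   rust_library(
#       name = "my_lib",
#       srcs = glob(["src/**/*.rs"]),
#       # Stubbed-out deps: crates added to Cargo.toml show up here
#       # automatically after the next `cargo raze` run.
#       deps = all_crate_deps(),
#   )
#
# The runnable fake below exercises the same call pattern outside Bazel.
_DEPENDENCIES = {"": {"serde": "@raze__serde__1_0_0//:serde"}}

def all_crate_deps(package_name=""):
    """Return all dependency labels for a package (illustrative stub)."""
    return sorted(_DEPENDENCIES.get(package_name, {}).values())
```

With one call per `Cargo.toml`-backed package, a cargo workspace maps onto a set of Bazel targets without hand-maintained dependency lists.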