
Package management rfc #1983

Merged
merged 11 commits into from
Oct 24, 2024
Conversation

@jneem (Member) commented Jul 2, 2024

@jneem jneem requested review from yannham and vkleen July 2, 2024 18:14
@yannham (Member) left a comment

First, thanks a lot for putting that together. I think it's clear and well-articulated. Something that might be improved a bit (I left a related comment) is to clearly state the goal and non-goal of what we're trying to achieve here, to delimit the scope more explicitly.

Co-authored-by: Yann Hamdaoui <yann.hamdaoui@tweag.io>
@yannham yannham requested a review from aspiwack July 16, 2024 13:27
@jneem (Member Author) commented Jul 23, 2024

I've updated the RFC based on feedback here and discussion at the weekly.

{
name | String,
version | Semver,
nickel-version | Semver,
Contributor

Why is it needed?
Should this be a version constraint? e.g. >=1, ^1, >1.1.3,<2 etc?

Member Author

Ah, right. I guess version should be SemverConstraint (or VersionConstraint). nickel-version is the minimum nickel interpreter version required by the package. Maybe min-nickel-version is clearer?
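To make the rename concrete, here is a sketch of how the quoted manifest fragment might read after this exchange. The contract names (`SemverConstraint`, `min-nickel-version`) are only the tentative suggestions from this thread, and the `dependencies` field is added purely for illustration; none of this is a settled design:

```nickel
{
  name | String,
  # the package's own version stays an exact version
  version | Semver,
  # minimum nickel interpreter version required by the package,
  # renamed from `nickel-version` for clarity
  min-nickel-version | Semver,
  # dependency requirements would use constraints, e.g. ">=1", "^1", ">1.1.3,<2"
  dependencies | { _ : SemverConstraint },
}
```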


## CLI interface

We will add a new CLI tool (called `plate`) that wraps the nickel CLI and
Contributor

Funnily enough, Nickel Package Manager would be... npm ☠️

## CLI interface

We will add a new CLI tool (called `plate`) that wraps the nickel CLI and
adds package management. It will offer a superset of the nickel CLI's commands
@GuillaumeDesforges (Contributor) commented Jul 23, 2024

This seems to be a huge argument in favor of embedding package management in nickel's CLI.

At some point, I can hardly imagine anybody not using package management. That would mean everybody would use plate eval instead of nickel eval anyway.

jneem and others added 2 commits July 23, 2024 16:44
Co-authored-by: Guillaume Desforges <guillaume.desforges.pro@gmail.com>
Co-authored-by: Guillaume Desforges <guillaume.desforges.pro@gmail.com>
@aspiwack (Member) left a comment

My initial comments (I'm coming at this completely fresh, I haven't read the discussion above)

Comment on lines +85 to +87
We discussed this point in office hours, and the general sentiment was that
it's ok to allow the manifest to be interpreted. If someone wants to use
that power to create a ridiculously complicated manifest, that's their problem.
Member

It should be said that it's only “their” problem until the package manifest ends up in the package registry. Then, a manifest that takes time to compute may become everybody's problem if we're not careful. And will certainly become any of their dependents' problem.

Member Author

Oh, this is one important thing I left out. The index entry in the registry (which is in json, not nickel) contains the dependency information from the manifest. So resolving dependencies for registry packages doesn't require evaluating their manifests. The slow evaluation only matters when they create their PR for uploading their package to the registry.
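As context for this reply, a hypothetical index entry as it might appear in the registry's JSON. All field names here are invented for illustration; the thread only establishes that an entry carries the package's location, its git tree hash, and its dependency information:

```json
{
  "name": "foo",
  "version": "1.2.0",
  "location": "github:example-org/foo",
  "tree-hash": "<git tree hash of the package contents>",
  "dependencies": { "bar": "^0.3" }
}
```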

Comment on lines 136 to 140
### Alternative: entry points are top-level files

Instead of hardcoding `main.ncl`, we could say that every file in the package's
top-level directory is publicly accessible. Package authors could keep implementation
details private by putting code in subdirectories.
Member

This certainly raises the question of what the semantics would be. Presumably you couldn't import foo anymore; instead you'd import foo/"bar" or something.

Comment on lines +142 to +148
### Alternative: namespace the import tool

Our initial intention for packaging was to allow for the usage of multiple
different package management tools. This RFC only proposes one such tool, but
maybe the import syntax could be designed with other tools in mind. For example,
it could be `import foo from electroplate` with the idea that future nickel
versions might add, say, `import foo from nix-flake`.
Member

This doesn't sound particularly meaningful, anyway. If we have a different tool to manage packages (e.g. apt), then we don't want electroplate packages to break. Instead, we want apt to pass the same packages that plate usually does.

Member

I think the idea isn't that we switch package managers entirely for a fixed set of packages, but rather that some packages are managed by package manager A and we want to add other packages managed by package manager B, typically provided as a bare flake output. More of a multi-package-provider scenario. At least, that was the idea when we looked at not having a proper package manager just for Nickel but relying on existing ones. Maybe it's less relevant now.

Member

But you might have to switch package managers anyway. At any rate, I don't think the tool used to provide packages is relevant for namespacing.

The `import foo` expression evaluates to the contents of `main.ncl` in `foo`'s
root directory.

### Alternative: other entry points
Member

A missing alternative which I find tantalising, building on Nickel's expressiveness, is to not have an entry point file at all. Instead, the manifest has a field entrypoint which contains a Nickel value. And that value is what the package exports. You can always have entrypoint = import "main.ncl" to restore the shape that you propose.

In fact, we could even have

  entrypoint | default = import "main.ncl"

(as long as imports aren't evaluated at all unless forced, because main.ncl may not exist)

As you point out, it's isomorphic to your solution. But it does have one fewer baked-in name.
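Spelled out as a manifest fragment, the suggestion might look like this. The `entrypoint` field is this comment's proposal, not part of the RFC as written:

```nickel
{
  name | String,
  # the value the package exports; defaults to the conventional file,
  # but a package author can set it to any Nickel value
  entrypoint | default = import "main.ncl",
}
```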

Member Author

as long as imports aren't evaluated at all unless forced, because main.ncl may not exist

This might be an issue, because currently we do eagerly parse and typecheck imports.

Comment on lines 160 to 163
We could build package management straight into the nickel CLI. This might be
more convenient and more discoverable, but it comes with stability hazards:
we might want to evolve the `plate` interface more rapidly than the `nickel` one.
Building in package management would also bloat the nickel CLI.
Member

This really doesn't seem like a very good argument at all. You can always release a version of Nickel which only updates the package manager interface. It's not as if Nickel were very ossified while you're rolling out a new tool. The bloat argument sounds even less convincing somehow.

An argument in favour of a separate cli may be that it guarantees that the Nickel cli can support integrating with external tools. The risk of baking the package management in is that it makes it possible to have the built-in package manager use capabilities of Nickel which aren't available to other tools, and it'd be bad for Bazel and stuff (but, of course, a just as robust way to ensure this is to maintain rules_nickel carefully).


I'm honestly not convinced about this separate package manager. Here's my reasoning: it works fine for Node, Rust, OCaml, … because the vast majority of projects that use Node, Rust, OCaml, … are Node, Rust, OCaml, … projects. I expect that actual Nickel projects will, in contrast, be a tiny minority: Nickel will usually be used inside a something-else project. Therefore it doesn't really make a ton of sense to pull in an additional project-management tool besides npm, Cargo, opam, … to manage just the Nickel bits.

On the other hand, we must make sure that we can call Nickel without the package management. But maybe we can flip the intuitive behaviour around: nickel eval could be what you call plate eval, while nickel simple eval could be the current nickel eval (since it'll usually be called by other tools). Or something. I don't know; the design space is huge.

Member Author

Ok, I'm redoing that section with the built-in package management as the proposed solution, and a separate command as an alternative.

Comment on lines +224 to +231
What happens if there's a lock-file, but someone modifies the manifest? The
lock-file might need to be regenerated: certainly it might need some new
entries, but also there might be new version conflicts that require a different
resolution. In this case, we treat the lock-file as a suggestion instead of a
hard constraint: during resolving, when choosing the next package version to
try, we try to pick the locked version first. But if we run into a resolution
conflict, we allow a different version to be chosen (and notify the user
that a package was changed).
Member

An alternative is to never do any dependency resolution at all: target a particular commit of the registry, and just use the latest version of each package according to said commit. This is the Nixpkgs way (and also Stackage). It has pros and cons, but it has a fairly clear story for lockfile updates (i.e. just create lockfile entries for the new packages, never touch the existing package (unless asked to by the user, of course)).

Member

The lack of dependency resolution in flakes leads people to implement their own layer (https://discourse.nixos.org/t/introducing-flakehub/32044), with the usual lot of community split, bikeshedding and drama. I don't know about the Stack situation, though.

Member

I purposely didn't speak about flakes. Maybe inviting this comment 🙂 . I would argue that the lack of version resolution is a feature of flakes: you only depend on a couple of flakes; the rest, you get from Nixpkgs. This does mean that anything not in Nixpkgs is a second-class citizen (and I very much believe it's the right thing for Nix). Same with Stackage: you depend on a version of Stackage and possibly a couple of other packages (including packages which are in Stackage, but you want another version), whose versions you precisely specify.

I don't know which of resolution or no resolution is best for Nickel. But I'd like to hear some arguments.

One of my reasons to like no-resolution is that bound management is a lot of busy work. And bounds are always wrong. But one of the things that makes Nixpkgs and Stackage work is that it's possible to detect when things don't build together, notify their maintainers, and fix it. Maybe it's harder with Nickel, I don't know.

Member

I think that the flakes and Stack models rely on the existence of a globally coherent versioned set of packages (is that what Stackage is?). In Nixpkgs those are NixOS releases. I agree this is usually great from a user-experience point of view, but it's also a non-negligible burden on the people in charge of curating, updating and releasing this set.

Member

Yes, Stackage is a collection of so-called snapshots. Each snapshot has at most one version of each package, which are tested to be able to build together.

I think one of the biggest obstacles you'd have for Nickel is probably the testing bit, since you don't necessarily have types to guarantee that everything works together well (and evaluating everything like Nixpkgs does is pretty prohibitive).

Possibly the approach of allowing several copies of the same package alleviates a large chunk of the issues with bounds and resolution. But I think this alternative should at least be addressed in the document.

PS: if we go with version bounds, we need to have an answer to the question: what is the process when we discover after the fact that a bound is too permissive?

Member Author

I added an "Alternative" section about global snapshots as a placeholder, but it's probably worth discussing it in more detail in the weekly.

Anecdotally, I find that the rust defaults (dependencies use the "^" mode by default, and you can have multiple copies of a package that don't share the same semver bin) work pretty well. The point is that the default dependency bounds align with the no-duplication bounds, and so if those bounds are too restrictive then you just end up with multiple copies of a package.

The bounds are usually not too permissive; when they are, it's because someone has done semver wrong. But if it does happen, you can often work around it by pinning a dependency. For example, suppose I have a dependency on dep, which in turn depends on dep2 ^0.1.0, but then dep2 releases a broken 0.1.1. I can force version resolution to fall back to the old version of dep2 by adding a dependency on dep2 =0.1.0.
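The pinning workaround described here could look like the following manifest sketch. The dependency syntax is purely illustrative; this thread doesn't fix the exact manifest shape:

```nickel
{
  dependencies = {
    dep = "^1.0",
    # pin dep2 exactly, forcing resolution away from the broken 0.1.1
    dep2 = "=0.1.0",
  },
}
```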

Member

I'm sorry, my question was ambiguous. Here's the scenario that I have in mind: we have packages A<-B uploaded in the registry. And we discover that the latest version of B says it's compatible with versions of A which it actually isn't compatible with (whatever the reason). We now need to update the registry somehow. It could be by altering how the current version of B depends on A. Or it could be by marking this version of B as incorrect somehow, so that it's never picked up by the solver (and uploading a new version).

But, one way or another, we need to make sure that the bad combination is forbidden, otherwise, it'll be possible in the future for the solver to pick that combination, even if there are newer versions of B.

We need policy and workflows to handle this case.

Member Author

Ah, right. I'll add a section on that. I like the "mark B as incorrect" option, because it can also be used to work around other kinds of breakage.

How should we manage the global registry? There's a potential for incurring
substantial maintenance costs here, so we should be careful.

We will provide a git repo, hard-coded to live at `github.com/tweag/nickel-mine`,
Member

Or github.com/nickel-lang/nickel-mine.

Comment on lines +290 to +295
This repo will contain the "index", but not the actual package contents. It
will contain one file per package, each of which contains a line per version.
Each entry specifies the location of the package (currently required to be on
github) and its git tree hash. This ensures that packages are immutable, but it
doesn't stop them from disappearing: we don't keep a copy of the actual package
contents.
Member

This representation strategy means that the manifest files (hence the packages' metadata) aren't kept in the registry. It means that querying the registry can be quite expensive, since it requires many calls to many Git repositories. Is this reasonable? (Genuine question, I don't really have an opinion on this.)

Member Author

This is also related to the thing I left out, that the registry index has enough information for dependency resolution. We don't store the whole manifest in the registry, but enough that we can do version resolution just by querying the index (and manifests from git/path dependencies, of course).

Member

But maybe we don't just want dependency resolution? What if we want to query package descriptions, for instance? What metadata should be in the repo is something we definitely ought to answer, oughtn't we?

PS: I'm not even sure “oughtn't we” is the right way to end this sentence. But it sounds super cool, so I went with it anyway!

Member Author

Fair enough. I guess the index ought to include whatever metadata we want to be easily queryable.

Comment on lines 311 to 312
If package volume is low enough (which it probably is, at first), index updates
could be done manually via pull requests.
Member

Opam manages very well by pull requests. The cli can automate writing a pull request for you. If the volume becomes too high, we can also automate merging the pull request. This sounds way easier than having to maintain a backend service. So I'm heavily biased toward this alternative.

Member

Yes, I think this was the general feeling when discussed orally in the weekly: this is the simplest and most reasonable way to start.

Comment on lines 328 to 335
For (2), we divide the package management implementation into two parts: the `plate`
command has all the logic for consuming the manifest, fetching packages, and so on.
The other part is to teach the nickel interpreter about "package maps," which is
just a map associating a filesystem path to each pair
`(package filesystem path, package-local name)`. When the nickel interpreter
is running a file that came from the package at path `/home/jneem/foo`, and that
file contains `import bar`, the nickel interpreter looks up `(/home/jneem/foo, bar)`
in the package map to figure out where to import from.
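To illustrate the package map described in this excerpt, here is one possible serialization; the paths, names, and the JSON encoding itself are all hypothetical:

```json
{
  "/home/jneem/foo": {
    "bar": "/home/jneem/.cache/nickel-lang/bar-1.0.0"
  }
}
```

Here the outer key is the importing package's filesystem path, the inner key is the package-local name used in `import bar`, and the value is the path the interpreter should actually load from.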
Member

There are two big technical missing bits in your RFC:

  • How does the nickel command line understand packages? This is your package-map bit, but extended beyond the interpreter: how do we provide said package map to the interpreter to begin with?
  • Where/how are the packages stored on disk? (I think this is easier than for most programming languages, since we store the packages verbatim, so we don't have this thing where the same version of the same package is actually two different artefacts because they were built with two different compilers.)

Member Author

  • I've added some more details to the section on the CLI interface
  • I added an extra section at the end on how we store packages

github-actions bot commented Aug 5, 2024

🐰 Bencher Report (branch: 1983/merge, testbed: ubuntu-latest)

Benchmark latencies (ns):
  fibonacci 10:    503,920.00
  pidigits 100:  3,229,400.00
  product 30:      809,160.00
  scalar 10:     1,495,200.00
  sum 30:          799,940.00

@aspiwack (Member) left a comment

Thanks for the updates. A few extra comments below.

Comment on lines +106 to +107
Since we expect registry imports to be the common case, maybe it's worth having
a shorthand?
Member

It probably is, to be honest. But on the other hand, you'll have to be careful to avoid making it “weird”. The shortcut and the full method should look decently similar. Or something.

Comment on lines +357 to +362
### Question: should we store a content hash too?

We're storing a git tree hash in the index, but if we ever want to store package
contents in the future, maybe we should also store a hash of the git tree
contents? This would allow verification of package tarballs, without needing the
whole git repo.
Member

Storing a content hash is a good idea. But there's more than one way to do so, so you may end up not getting much more compatibility than you planned.

That being said, there is one content-hash standard, namely SWHID from our good friends at Software Heritage. It's specified here https://www.swhid.org/specification/v1.1/ . Maybe worth looking into it?

Member Author

It looks like SWHID is very easy to support. As far as file and tree identifiers go (which I think are the ones we care about; the registry only needs to identify the tree, not the whole history), it's 100% git-compatible. That is, if the tree id is d198bc9d7a6bcf6db04f476d29314f157507d505 then the SWHID is
swh:1:dir:d198bc9d7a6bcf6db04f476d29314f157507d505

Comment on lines +407 to +411
Index and git dependencies need to be downloaded before they are used. We will
use a single global cache directory for all dependencies. It will use the
[`directories`](https://docs.rs/directories/latest/directories/struct.ProjectDirs.html#method.cache_dir)
crate to deduce a platform-appropriate location (`~/.cache/nickel-lang/` on
linux, unless `$XDG_CACHE_HOME` is set). Within this directory, git checkouts
Member

Do we need some garbage-collection/cache-eviction policy?

Member Author

Maybe, but it might also be ok to punt on it. The local cache doesn't have an index, so it's safe to just delete things manually.


@aherrmann (Member) left a comment

Very interesting proposal!
I've added some comments from the Bazel perspective.


### Alternative: toml manifest

Maybe the manifest should be in some plain-data format like toml. This would
Member

I didn't see it mentioned explicitly, so I wanted to raise it. Will import be forbidden in the manifest itself? I would imagine that that would be a good idea. But I haven't thought about it very deeply.

Member Author

Good question, I had originally thought that the manifest wouldn't support package management, but it would be allowed to import normal paths. But maybe let's start by forbidding imports altogether.

Comment on lines +182 to +186
There will be command line flags to fine-tune this behavior. For example, there
could be a `--locked` flag that triggers a failure if the lock-file is not
present and up-to-date, or an `--offline` flag that triggers a failure if the
dependencies aren't already available. There could also be a flag (`--no-electroplate`?)
to disable package-management altogether.
Member

Such features would be great for integration with other package managers or build systems.

Comment on lines +331 to +332
How should we manage the global registry? There's a potential for incurring
substantial maintenance costs here, so we should be careful.
Member

Are you considering support for custom registries?

When Bazel introduced its new dependency manager bzlmod they also allowed users to define custom registries and even use multiple registries. Commercial users often like this because they can place proprietary code into their own private registry.

Member Author

Good point, I'll look into that.

Comment on lines +294 to +301
I think we want to allow multiple versions of a package; the alternative can be
fragile and annoying. But then we need to figure out how many different versions
to allow. There's a trade-off: if we allow pulling in a different version
every time a package gets imported, solving the dependency graph is easy.
But it increases the chance of getting incompatibilities at runtime: we might
accidentally get a value from `util@1.1` and try to pass it to an incompatible
function defined in `util@1.2`. Pulling in too many different versions also
increases the total number of packages in the dependency graph.
Member

Just for reference, the Bazel bzlmod solution is to use Go's MVS and fail if they encounter incompatible version requests for the same package. But, the root package can declare single or multi version overrides.

Member Author

That's a very interesting read, thanks! This approach and the one of allowing duplicate semver-compatible versions seem to be pretty much mutually exclusive, so I guess we need to pick one or the other?

Member:

I guess the user controlled override is compatible with either approach. But, otherwise, yes, I think so.

Comment on lines 371 to 372
For (2), we divide the package management implementation into two parts: the `plate`
command has all the logic for consuming the manifest, fetching packages, and so on.
Member:

Build systems like Bazel prefer to do the fetching themselves. Is the intention that `plate` can just produce the URLs for dependencies but leave the fetching to an external tool?

Member Author:

Yes, I think we can have a `nickel package list-all-urls` subcommand (or something like it) that outputs the information needed to fetch dependencies. Would that meet the needs of build systems?

one version from each bin. That is, we can have a `util@2.2` and a `util@1.2` in
the same dependency tree, but not a `util@1.2` and a `util@1.1`.
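The one-version-per-bin rule can be made concrete with a small sketch. `bin_of` and `tree_is_valid` are hypothetical helpers, assuming the usual semver convention that each major version is a compatibility bin and each `0.x` minor version forms its own bin:

```python
# A "bin" groups semver-compatible versions: one bin per major version,
# except that each 0.x minor version is its own bin (0.1 and 0.2 are
# incompatible under semver conventions). Versions are (major, minor, patch).

def bin_of(version):
    major, minor = version[0], version[1]
    return (0, minor) if major == 0 else (major,)

def tree_is_valid(versions):
    # At most one version from each bin may appear in a dependency tree.
    bins = [bin_of(v) for v in versions]
    return len(bins) == len(set(bins))
```

Under this rule `util@2.2` and `util@1.2` may coexist (different bins), while `util@1.2` and `util@1.1` may not (both in bin `1`).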

### Alternative: global snapshots à la Stackage
Member:

Reading this section again, I wonder if version resolution and snapshots are necessarily incompatible. We could start with version resolution and at some point provide a snapshot to take packages from, I suppose. A bit like `foo.workspace = true` in cargo, but with `snapshot-foo` instead of `workspace`.

aren't in the lock-file, but if resolution fails then go back and try allowing
them. This would allow resolution to succeed, for example, if your manifest
asks for the latest version of a package *and* that package was yanked *and*
you don't have a lock-file. But this seems like maybe adding too many corner-cases.
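The fallback described in this paragraph could be implemented as a two-pass resolver: first hide yanked versions that aren't already locked, then retry with the full candidate set only if resolution fails. A minimal Python sketch, with `solve` and the data shapes assumed purely for illustration:

```python
# Two-pass resolution around yanked versions: pass 1 hides yanked
# versions that aren't already in the lock-file; pass 2 (the fallback
# discussed above) retries with yanked versions allowed.

def resolve_with_yank_fallback(solve, candidates, yanked, locked):
    visible = {
        pkg: [v for v in vs
              if v not in yanked.get(pkg, ()) or v in locked.get(pkg, ())]
        for pkg, vs in candidates.items()
    }
    solution = solve(visible)
    if solution is not None:
        return solution
    return solve(candidates)  # fall back: allow yanked versions
```

The corner-case worry in the text is visible in the structure: the second `solve` call exists only for the rare case where every viable candidate is yanked and unlocked.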
@yannham (Member), Oct 18, 2024:

I wonder how useful this is. That is, I don't expect it to happen often that the only version that can be picked is a yanked one. Usually either the yanked version is immediately followed by a backward-compatible update (minor or patch), or the yanked version wasn't an X.0.0 version and there are earlier versions of the same major version.

I think a yanked version would be the only choice if people started to depend on this specific version too fast (otherwise a previous version of the same bin could still be picked) AND the correction is a breaking change (otherwise the more up-to-date correction could still be picked). Which doesn't sound common enough to warrant an ad-hoc solution, IMHO.


These two use-cases should be configured differently, because the first is
a global setting and the second is per-package. For the global setting, we
add a `source-replacement` field to the manifest: `std.package.Manifest` becomes
Member:

I think I don't understand: you say it's a global setting, but we still configure it at the level of a package's manifest? Or by "global" do you mean it applies to all the dependencies of a package, instead of potentially to just one of them?


```nickel
source-replacement.registry."https://github.com/nickel-lang/nickel-mine.git"
  = "https://my-site/nickel-mine"
```
Member:

Should we have any form of security here? Adding a source-replacement has the potential to substitute an existing package with pretty much anything. The security model is not as bad in Nickel as in, say, Nix, which can install arbitrary executables or inject hidden malware into your glibc; but in Nix you do need to add a public key, and there is a mechanism to make sure the user really intended to use a substituter (you need to be a trusted user, and the CLI will ask whether you want to use the substituters configured at the flake level).

The answer might be that we don't care in our case; I'm thinking out loud here.

Member Author:

I think it isn't a security risk because source replacement is only configurable at the top-level: it can't be triggered by some package deep in your dependency tree. If the top-level package wants to be malicious, it probably has sneakier ways of being so.

actual evaluation. They will start by searching for an `electroplate.ncl` file.
If one is found, we will evaluate it. We will then search for a lock-file.
If one is found, it will be used to guide dependency resolution; if not, we will
do dependency resolution from scratch and write out the generated lock-file.
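The flow described above amounts to a small decision procedure. A Python sketch, where the lock-file name (`electroplate.lock`), its JSON format, and the helpers `evaluate_manifest` and `resolve` are assumptions made only for illustration:

```python
import json
from pathlib import Path

# Sketch of the flow above: evaluate the manifest if present, then use
# the lock-file to guide resolution, writing one out if it's missing.

def prepare_packages(project_dir, evaluate_manifest, resolve):
    manifest_path = Path(project_dir) / "electroplate.ncl"
    if not manifest_path.exists():
        return None                       # no package management in play
    manifest = evaluate_manifest(manifest_path)

    lock_path = Path(project_dir) / "electroplate.lock"
    if lock_path.exists():
        locked = json.loads(lock_path.read_text())
        return resolve(manifest, locked)  # lock-file guides resolution
    resolution = resolve(manifest, None)  # resolve from scratch...
    lock_path.write_text(json.dumps(resolution))  # ...and write the lock-file
    return resolution
```

Whether the last two lines (automatic lock-file writing) should happen on every evaluation is exactly the cost question raised in the comment below them.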
@yannham (Member), Oct 18, 2024:

I wonder what the cost of doing that on each evaluation is. As opposed to compiled languages, we might have to pay for the dependency resolution step on each run, including for small-ish configurations (unlike, say, cargo, where you pay the price at build time but not at run time). Do you think that would be an argument in favor of more clearly separated package management, where `nickel eval` would just pick up the existing lock-file or fail, but not perform the update automatically? Or at least an argument for not updating the lock-file automatically? Or do you expect those costs to be negligible?

Member Author:

Good question, I guess we'd have to measure it to be sure. I expect version resolution to be quick, especially if there's an up-to-date lockfile, but I don't actually have data.

Having to manually trigger a re-resolution is definitely an option; I think poetry and npm do this. I personally find it a bit annoying, as it's an extra step that's easy to forget.

@yannham (Member) left a comment:

After the last review, and discussion in the team meeting, we decided that the RFC is in a good state and answers most of the big-direction questions. We shall go forward with this version (although @jneem still has minor updates to do before merging).

@jneem jneem enabled auto-merge October 24, 2024 09:24
@jneem jneem added this pull request to the merge queue Oct 24, 2024
Merged via the queue into master with commit 0798460 Oct 24, 2024
5 checks passed
@jneem jneem deleted the package-management-rfc branch October 24, 2024 09:37