Require all packages to solve / compile and include all valid compilers in their metadata #669
base: master
I've included a few review comments that describe how various parts of the code work. But I'm also happy to jump on a call in the PureScript chat to walk through this and answer any questions.
publish :: forall r. PackageSource -> PublishData -> Run (PublishEffects + r) Unit
publish source payload = do
We no longer need the `PackageSource` type because we no longer have exemptions for "legacy" vs. "current" packages. All packages must solve and compile. We had exemptions before because we weren't sure which compiler version to use to publish legacy packages, but we now manually verify one that works before we ever run publish.
app/src/App/API.purs
Operation.Validation.validatePursModules files >>= case _ of
  Left formattedError | payload.compiler < unsafeFromRight (Version.parse "0.15.0") -> do
As in the comment above, this code will fail packages that have syntax that our version of `language-cst-parser` doesn't support. In our case that's 0.15.0+. I've therefore relaxed this requirement for packages before 0.15.0, or else they would be spuriously rejected.
We could potentially sub in a regex check for "module where", even though it's fragile, for pre-0.15.0 packages.
app/src/App/API.purs
@@ -504,20 +515,30 @@ publish source payload = do
  Right versions -> pure versions

  case Map.lookup manifest.version published of
    Nothing | payload.compiler < unsafeFromRight (Version.parse "0.14.7") -> do
`purs publish` will fail for packages prior to 0.14.7 because that's when we added support for the purs.json file format. Before that, the compiler looks for specific Bowerfile fields that aren't present in the purs.json file. Since all these packages ought to already be published to Pursuit, I think this is fine. We can't do anything about it anyway until #525.
case compilationResult of
  Left error
    -- We allow legacy packages to fail compilation because we do not
    -- necessarily know what compiler to use with them.
    | source == LegacyPackage -> do
        Log.debug error
        Log.warn "Failed to compile, but continuing because this package is a legacy package."
    | otherwise ->
        Except.throw error
  Right _ ->
    pure unit
This code is no longer needed because packages must compile.
@@ -168,7 +168,6 @@ handleMemoryFs env = case _ of
  case inFs of
    Nothing -> pure $ reply Nothing
    Just entry -> do
      Log.debug $ "Fell back to on-disk entry for " <> memory
These are just so noisy. Maybe we can introduce a `Log.superDebug`.
Is this log useful at all? I think it's ok to just remove it
Yea, I think they're not really useful now that we're confident the cache works correctly. I had them in there from when I first developed it, when I would either sometimes see things I thought should be cached not get cached, or I wanted to make sure something I removed from the cache really was removed.
scripts/src/LegacyImporter.purs
publishLegacyPackage :: Manifest -> Run _ Unit
publishLegacyPackage (Manifest manifest) = do
This is where we solve, compile, and then publish each package in turn. Publish failures are saved to cache and, at the end of the process, written to a `publish-failures.json` file that records every version that failed and its reason so we can hand-review it. I've run this on a few hundred packages and it's looking correct.

An example from the publish-failures.json file:

{
  "string-parsers": {
    "3.0.1": {
      "reason": "No versions found in the registry for lists in range\n  >=4.0.0 (declared dependency)\n  <5.0.0 (declared dependency)",
      "tag": "SolveFailed"
    },
    "3.1.0": {
      "reason": "No versions found in the registry for lists in range\n  >=4.0.0 (declared dependency)\n  <5.0.0 (declared dependency)",
      "tag": "SolveFailed"
    }
  }
}
I've uncovered two rare but significant issues affecting the legacy importer (unrelated to this PR).

**Topological sorting**

This run fails to produce a valid index because functors@4.1.1 is unsatisfied in its dependency on contravariant, but of course we see contravariant get inserted a little later on. The solution is to always consider version bounds when reading an index from disk where we expect bounds to be at least reasonably correct. I've implemented and tested that, and situations like this no longer happen.

The reason we ignored ranges at first is that we had far fewer checks around correct bounds, and because in the package sets we want to explicitly ignore ranges when working with an index (when doing, for example, the 'self-contained' check). I've preserved that behavior — you can always opt in to either considering or ignoring ranges when working with an index.

**Incorrect dependencies detected in legacy manifests derived from package sets**

Second, in some cases a package like strings@3.5.0 will have its dependency list pruned to only those listed in its package sets entry, and those turn out to be overly restrictive. In this case, specifically, the dependency on `integers` was lost. This shouldn't happen: strings@3.5.0 has a Bowerfile that explicitly lists a dependency on integers, which we trimmed out by deferring to the package sets entry. We did this assuming package sets entries are correct, and because we didn't want overly-constrained dependency lists. The second concern is no longer valid because with #667 we will remove unused dependencies. The first assumption is no longer reasonable because we have at least one example of a package sets dependency list being incorrect.

The solution is simple: instead of preferring package sets entries over other manifests, just union them all and defer to the 'unused dependencies' pruning in the publishing pipeline to trim out ones that aren't actually needed.
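The opt-in behavior can be modeled roughly like this (a hypothetical Python sketch with invented names, not the registry's actual `ManifestIndex` code): manifests are only inserted once all their dependencies are satisfied, where "satisfied" means either that any version of the dependency has been inserted (ignoring ranges) or that some in-range version has (considering ranges).

```python
def build_index(manifests, satisfies, consider_ranges=True):
    """manifests: list of (name, version, {dep: range}).
    satisfies(version, range) decides whether a version is in range."""
    index, pending, progress = {}, list(manifests), True
    while progress:
        progress, still = False, []
        for name, version, deps in pending:
            def ok(dep, rng):
                versions = index.get(dep, {})
                if not consider_ranges:
                    return bool(versions)  # IgnoreRanges: any inserted version will do
                return any(satisfies(v, rng) for v in versions)  # ConsiderRanges
            if all(ok(d, r) for d, r in deps.items()):
                index.setdefault(name, {})[version] = deps
                progress = True
            else:
                still.append((name, version, deps))
        pending = still
    return index, pending  # the leftovers are the unsatisfied entries

# Toy data: a version is a (major, minor) pair, a range is (inclusive lo, exclusive hi).
def satisfies(v, rng):
    lo, hi = rng
    return lo <= v < hi

manifests = [
    ("prelude", (1, 0), {}),
    ("lists", (1, 0), {"prelude": ((1, 0), (2, 0))}),
    ("functors", (4, 1), {"contravariant": ((1, 0), (2, 0))}),  # dep never published
    ("broken", (1, 0), {"prelude": ((3, 0), (4, 0))}),          # dep exists, out of range
]
index, unsatisfied = build_index(manifests, satisfies, consider_ranges=True)
```

With ranges considered, both `functors` and `broken` come back unsatisfied; ignoring ranges, `broken` sneaks in because some version of prelude exists — which is exactly the failure mode described above.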
@@ -428,7 +428,7 @@ validatePackageSet (PackageSet set) = do
  -- We can now attempt to produce a self-contained manifest index from the
  -- collected manifests. If this fails then the package set is not
  -- self-contained.
- Tuple unsatisfied _ = ManifestIndex.maximalIndex (Set.fromFoldable success)
+ Tuple unsatisfied _ = ManifestIndex.maximalIndex ManifestIndex.IgnoreRanges (Set.fromFoldable success)
We always ignore ranges in package sets, but we should rely on them otherwise, especially now that we're actually solving packages as part of publishing and can be more trusting that they aren't bogus.
scripts/src/LegacyImporter.purs
let metadataPackage = unsafeFromRight (PackageName.parse "metadata")
Registry.readMetadata metadataPackage >>= case _ of
  Nothing -> do
    Log.info "Writing empty metadata file for the 'metadata' package"
    let location = GitHub { owner: "purescript", repo: "purescript-metadata", subdir: Nothing }
    let entry = Metadata { location, owners: Nothing, published: Map.empty, unpublished: Map.empty }
    Registry.writeMetadata metadataPackage entry
  Just _ -> pure unit
We agreed to not reserve package names pre-0.13.0, so this reserves only metadata for the "metadata" package used by legacy package sets.
Can confirm that the fix is working with regard to generating manifests with full dependency lists to be pruned later. This is the generated manifest for strings@3.5.0:

{
  "name": "strings",
  "version": "3.5.0",
  "license": "MIT",
  "description": "String and char utility functions, regular expressions.",
  "location": {
    "githubOwner": "purescript",
    "githubRepo": "purescript-strings"
  },
  "dependencies": {
    "arrays": ">=4.0.1 <5.0.0",
    "either": ">=3.0.0 <4.0.0",
    "gen": ">=1.1.0 <2.0.0",
    "integers": ">=3.2.0 <4.0.0",
    "maybe": ">=3.0.0 <4.0.0",
    "partial": ">=1.2.0 <2.0.0",
    "unfoldable": ">=3.0.0 <4.0.0"
  }
}

...and the manifest index sorting is working as far as I can tell as well.
We also need to support spago.yaml files in the legacy importer, as some packages now use that format; otherwise they will be excluded.
Here's another fun one: some packages directly import modules from packages that only reach them transitively through an otherwise-unused dependency. That means we can't get away with simply removing unused dependencies, because we may also remove direct imports that were pulled in transitively by the unused dependency. Either we give up on removing unused dependencies, or we both remove unused dependencies and insert direct imports that weren't mentioned. For dependencies we insert into a manifest we have their exact version via the solver, so we could potentially derive a range from that. I think that's preferable to giving up on the unused dependencies check, but I'm curious if you disagree @f-f or @colinwahl.
As of the latest commit: we no longer simply remove unused dependencies. Instead, we loop: we remove unused dependencies, then bring in any transitive dependencies they would have brought in, and then check the new dependency list for unused dependencies, and so on. The result is that we remove unused dependencies while preserving any transitive dependencies they brought in which are used in the source code. Note that we don't go through and add all packages your code directly imports; we only do this for dependencies that are being removed.
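The loop can be modeled roughly as the following Python sketch (invented names; the real implementation is in the importer's PureScript code). It terminates on acyclic package graphs because each pass only moves further down the dependency tree.

```python
def prune_unused(declared, direct_imports, deps_of):
    """declared: the manifest's dependencies; direct_imports: packages whose
    modules the source code actually imports (e.g. derived from 'purs graph');
    deps_of: each package's own declared dependencies."""
    current = set(declared)
    while True:
        unused = current - direct_imports
        if not unused:
            return current
        # Drop the unused deps, but bring in their immediate dependencies:
        # they may contain packages the source imports directly (supplied
        # transitively until now). Any that are still unused get dropped on
        # the next pass.
        brought_in = set()
        for pkg in unused:
            brought_in |= deps_of.get(pkg, set())
        current = (current - unused) | (brought_in - unused)

# Toy example: 'app' declares only 'framework', which it never imports
# directly, but 'framework' transitively supplies 'prelude', which the
# source code does import.
deps_of = {"framework": {"prelude", "styles"}, "prelude": set(), "styles": set()}
result = prune_unused({"framework"}, {"prelude"}, deps_of)
```

Here `framework` is removed as unused, `prelude` and `styles` are brought in, and then `styles` is removed on the next pass, leaving only the directly-imported `prelude`.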
With the patches in for rito / deku / bolson:
As usual:
As discussed in the PureScript chat, the latest commit makes a small tweak to preserve packages that are from the core org or its derivatives, or which have had a tag since the 0.13 release date (May 29, 2019).

These 49 packages are now reserved with empty metadata:

Some notable newly-reserved names include

454 package names will be freed:

This is the full list of packages that made it to the 0.13 or organization cutoff, along with their latest tag dates:
I had a look at the above files, and they look good to me - the list of packages that we'll reserve is minimal and meaningful, and the list of ~450 freed packages seems sensible (I have checked quite a few manually, and they all seem to be old enough to not be worth supporting, as people won't be able to build with them anyways). I started looking at the code, but it's a sizeable patch, so it will take a few days.
Let me know if you encounter tricky bits and would like an explanation. Also happy to jump on a quick call and walk through sections of the code.
Thanks! I think the logistics of that will be tricky over the Christmas days, but let's see if that works for next week.
@@ -234,11 +234,12 @@ For example:

All packages in the registry have an associated metadata file, which is located in the `metadata` directory of the `registry` repository under the package name. For example, the metadata for the `aff` package is located at: https://github.com/purescript/registry/blob/main/metadata/aff.json. Metadata files are the source of truth on all published and unpublished versions for a particular package: what their content is and where the package is located. Metadata files are produced by the registry, not by package authors, though they take some information from package manifests.

- Each published version of a package records three fields:
+ Each published version of a package records four fields:
Surely we can make this more future proof 😄
- Each published version of a package records four fields:
+ Each published version of a package records the following fields:
Yea, I think you're right 😆
- `hash`: a [`Sha256`](#Sha256) of the compressed archive fetched by the registry for the given version
- `bytes`: the size of the tarball in bytes
- `publishedTime`: the time the package was published as an `ISO8601` string
- `compilers`: compiler versions this package is known to work with. This field can be in one of two states: a single version indicates that the package worked with a specific compiler on upload but has not yet been tested with all compilers, whereas a non-empty array of versions indicates the package has been tested with all compilers the registry supports.
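For illustration, the two states could look like this inside a package's metadata file (hypothetical values; the exact layout is defined by the registry's Dhall types):

```
"published": {
  "6.0.0": {
    "hash": "sha256-AAAA...",
    "bytes": 12000,
    "publishedTime": "2022-04-27T12:00:00.000Z",
    "compilers": ["0.15.0", "0.15.2", "0.15.4"]
  },
  "7.0.0": {
    "hash": "sha256-BBBB...",
    "bytes": 13456,
    "publishedTime": "2023-01-10T12:00:00.000Z",
    "compilers": "0.15.4"
  }
}
```

In this sketch, 7.0.0 was just uploaded with 0.15.4 and hasn't been through the full compiler matrix yet, while 6.0.0 has been tested against every supported compiler.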
Wouldn't it be tidier to only allow a non-empty array instead of several possible types? After all, the state with multiple compilers listed is going to be a superset of the first state.
The issue with the non-empty array is that it isn't clear whether an array of a single element represents one of:
- a package that has been published with the given compiler, but which hasn't been tested against the full set of compilers
- a package that has been tested against the full set of compilers and only works with one
When are we going to end up in a situation where we don't test the package against the whole set of compilers? My reading of the PR is that we always do?
In any case, we'll always have packages that are not "tested against the full set of compilers": when a new compiler version comes out, then all packages will need a retest, and if a package doesn't have the new compiler in the array then we don't know if it's not compatible or if it hasn't been tested yet.
Maybe we need another piece of state somewhere else?
When are we going to end up in a situation where we don't test the package against the whole set of compilers? My reading of the PR is that we always do?
Yes, as implemented here we just go ahead and test everything as soon as we've published. However, I split out the state because in our initial discussions we worried about how long it takes for the compiler builds to run (it takes publishing from N seconds to N minutes in some cases — large libraries or ones that leverage a lot of type machinery). We'd originally talked about the compiler matrix being a cron job that runs later in the day. I just made it part of the publishing pipeline directly because it was simpler to implement.
If we decide that it's OK for publishing to take a long time then we can eliminate this state and just test the compilers immediately. In that case we'd just have a non-empty array.
In any case, we'll always have packages that are not "tested against the full set of compilers": when a new compiler version comes out, then all packages will need a retest, and if a package doesn't have the new compiler in the array then we don't know if it's not compatible or if it hasn't been tested yet.
Yea, that's a good point. You don't know if the metadata you're reading just hasn't been reached yet by an ongoing mass compiler build to check a new compiler.
Maybe we need another piece of state somewhere else?
Off the top of my head I don't know a good place to put some state about possible compiler support; the metadata files are not helpful if a new compiler comes out and we're redoing the build since they're only aware of the one package.
If we decide that it's OK for publishing to take a long time then we can eliminate this state and just test the compilers immediately. In that case we'd just have a non-empty array.
I'm cool with this if you are.
We'll always have packages that are not "tested against the full set of compilers" [...] maybe we need another piece of state somewhere else?
We could either a) say that the supported list of compilers for a package can potentially be missing the current compiler if the matrix is currently running and not bother with state or b) put a JSON file or something in the metadata directory that indicates whether the compiler matrix is running. Then consumers can look at that.
Personally the matrix runs infrequently enough (just new compiler releases!) that I would rather opt for (a).
I pondered this for a few days and I think it's complicated?
Since we're going towards a model where we'd only run one registry job at a time and queue the rest (to prevent concurrent pushes to the repo), I'm afraid that running the whole matrix at once would make publishing very slow.
Something that we could do to counteract this could be to split the "publish" and the "matrix runs": on publishing we'd just add the package metadata with one compiler, and at the end of the publishing job we'd queue a series of "compiler matrix" jobs, each testing one compiler. These jobs would be of low priority, so new publishes would get in front of the queue, and things can stay snappy.
Personally the matrix runs infrequently enough (just new compiler releases!) that I would rather opt for (a).
The approach detailed above implies that we're in a world where we do (a), i.e. the list of compilers is always potentially out of date, and that's fine.
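The low-priority queue idea can be sketched with a plain priority queue (hypothetical names and job strings; the registry's actual job runner may look quite different):

```python
import heapq
import itertools

PUBLISH, MATRIX = 0, 1        # smaller number = higher priority
_counter = itertools.count()  # tie-breaker: FIFO order within one priority

queue = []

def enqueue(priority, job):
    heapq.heappush(queue, (priority, next(_counter), job))

def next_job():
    return heapq.heappop(queue)[2] if queue else None

# A matrix job is queued first, but a new publish still jumps ahead of it.
enqueue(MATRIX, "matrix-build packageA with 0.15.9")
enqueue(PUBLISH, "publish packageB")
```

Because publishes always sort before matrix builds, the matrix can churn through compiler retests in the background while new publishes stay snappy.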
Additional note about the above: since the above would introduce an "asynchronous matrix builder", we need to consider the dependency tree in our rebuilding: if a package A is published with compiler X, and then a package B depending on it is immediately published after it (a very common use case, since folks seem to publish their packages in batches), then we'd need to either make sure that matrix-build jobs for B always run after matrix-build jobs for A, or retry them somehow.
@@ -14,6 +14,7 @@ let PublishedMetadata =
  { hash : Sha256
  , bytes : Natural
  , publishedTime : ISO8601String
+ , compilers : < Single : Version | Many : List Version >
Dhall supports `NonEmpty`.
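A sketch of the suggested change, assuming the registry pulls `NonEmpty` from the standard Dhall Prelude (where it is a record of a `head` element and a `tail` list, so a `Many` value can never be empty by construction):

```
-- Hypothetical: rule out the empty list in the type itself.
, compilers : < Single : Version | Many : NonEmpty Version >
```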
, hash :: Sha256
, publishedTime :: DateTime

-- UNSPECIFIED: Will be removed in the future.
I once again forgot why we are removing this 😄
I think it's because we only need it when we have the importer running off of Git tags but I don't fully remember either. I'm still in favor of recording the full location of a package in each published version so we can always reconstruct where it came from.
@@ -171,6 +163,44 @@ fetchLegacyManifest name address ref = Run.Except.runExceptAt _legacyManifestErr

  pure { license, dependencies, description }

  -- | Some legacy manifests must be patched to be usable.
  patchLegacyManifest :: PackageName -> Version -> LegacyManifest -> LegacyManifest
This piece of code is quite nice and tidy for all the work it's doing!
-- | Verifies that the manifest lists dependencies imported in the source code,
-- | no more (ie. unused) and no less (ie. transitive). The graph passed to this
-- | function should be the output of 'purs graph' executed on the 'output'
-- | directory of the package compiled with its dependencies.
noTransitiveOrMissingDeps :: Manifest -> PursGraph -> (FilePath -> Either String PackageName) -> Either (Either (NonEmptyArray AssociatedError) ValidateDepsError) Unit
noTransitiveOrMissingDeps (Manifest manifest) graph parser = do
Ah that's cool - once we merge this I'll try to fit it in Spago, hopefully we can reuse some of the graph code
-- then we don't add it to the dependencies to avoid over-
-- constraining the solver.
compilers <- Either.hush eitherCompilers
-- Otherwise, we construct a maximal range for the compilers the
If we want to make it 100% correct then we can choose one of the subsets of the range (presumably the most recent). E.g. in the case of a package supporting `0.15.0` and `0.15.2` but not `0.15.1`, we'd pick `0.15.2` only.

But I agree that it's unlikely that this would happen in the wild, so I'd not worry about that until we stumble on this issue.
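A rough model of picking "the most recent subset" (hypothetical Python; version handling is simplified to an ordered list of all registry compilers):

```python
def most_recent_contiguous(supported, all_compilers):
    """Return the latest run of consecutive registry compilers the package
    supports, so a range derived from it contains no unsupported versions."""
    supported_set = set(supported)
    run, best = [], []
    for v in all_compilers:  # all_compilers is in ascending order
        if v in supported_set:
            run.append(v)
            best = run  # the run we're extending is the most recent so far
        else:
            run = []    # a gap (an unsupported version) ends the current run
    return best

# The example above: supporting 0.15.0 and 0.15.2 but not 0.15.1
picked = most_recent_contiguous(
    ["0.15.0", "0.15.2"], ["0.15.0", "0.15.1", "0.15.2"]
)
```

With no gap in the supported versions, the whole set comes back unchanged, so this only differs from the maximal range in the rare case the reviewer describes.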
@@ -1038,7 +1045,7 @@ publishToPursuit { packageSourceDir, dependenciesDir, compiler, resolutions } =
  Left error ->
    Except.throw $ "Could not publish your package to Pursuit because an error was encountered (cc: @purescript/packaging): " <> error
  Right _ ->
-   Comment.comment "Successfully uploaded package docs to Pursuit! 🎉 🚀"
+   FS.Extra.remove tmp
We don't notify anymore for successful docs publishing?
We do, the comment is just done outside of this function now. See e.g. line 581.
let metadataPackage = unsafeFromRight (PackageName.parse "metadata")
let pursPackage = unsafeFromRight (PackageName.parse "purs")
We need these to be considered separately, right? I.e. they can't be in the section above with the filterPackages_0_13 - we have versions for `metadata` but we want none, and `purs` hasn't been published at all.

We should probably reserve `purescript` too?
@f-f I think I've responded to your comments above, but GitHub isn't showing all of them in this main PR view, so I'm not completely sure. Please let me know if I missed one!
@f-f Did you have any other questions or comments about this work?
@thomashoneyman I am through with the review - the overall shape of the code is good, and the only thing left to really resolve is the thing about "testing against the full set of compilers".
This reverts commit ab184f2.
Resolved merge conflicts. Unfortunately, the integration test fails on my home system (which is pop_os rather than NixOS), so that's going to be annoying to debug in the future; it fails to git clone the effect repository because the tmp directory already includes it. We're at least in a good position to resume work to get this over the line.
Integration test was already passing in CI, but it was failing for me locally. Diagnosed the issue as arising from the
Now that I finally got around to running the importer all the way through and have a local cache I can get back to splitting out the compiler jobs to be separate from the normal publish pipeline as discussed.
package-failures.json
Fixes #577. Fixes #255. Merging blocked by #696.
The core problem solved here is identifying what compilers are compatible with a specific package version, such as `aff@7.0.0`. We need this to support an oft-requested feature for Pursuit: filtering search results by a compiler version (or range). It's also useful for other things; for one, it allows us to add the compiler version as a constraint to the solver to produce a build plan that works with the specific compiler given.

Metadata files now include a `compilers` key in published metadata that lists either a bare version (the version used to publish the package) or an array of versions (the full set of compilers known to work with the package). The reason for two representations is that computing the full set of compilers can take a long time; this approach lets us upload the package right away and compute the rest of the valid compilers in a fixup pass. A bare version means the full set has not been computed yet.

All packages must now be solvable. We can't compile a package version if we can't fetch its dependencies, so this becomes a requirement for all packages.

There are only 2 scenarios in which we need to compute the available compilers for a package version:

This PR is focused on the first case, and we should do a followup for the second case. (The second case is straightforward, and @colinwahl's compiler versions script already essentially implements it. It's been omitted from this PR for brevity.)

A new package version can be published via the legacy importer or via a user submitting an API call, but the result is the same: eventually the `publish` pipeline is run. For that reason I've decided to compute the compiler versions for a package version as part of the publish pipeline, where we're already determining resolutions and building with a specific compiler. That centralizes the logic in a single place.

Therefore this PR centers on two things: trying compilers to find all that work for a package version at the end of publishing, and updating the legacy importer to determine a valid compiler version and resolutions before calling `publish`.

I've added some tests and I've run the legacy importer locally; it's about 500 packages in so far and every failure appears to be correct. More comments in the PR.