Require all packages to solve / compile and include all valid compilers in their metadata #669
base: master
@@ -234,11 +234,12 @@ For example:
 All packages in the registry have an associated metadata file, which is located in the `metadata` directory of the `registry` repository under the package name. For example, the metadata for the `aff` package is located at: https://github.com/purescript/registry/blob/main/metadata/aff.json. Metadata files are the source of truth for all published and unpublished versions of a particular package, recording what their content is and where the package is located. Metadata files are produced by the registry, not by package authors, though they take some information from package manifests.
-Each published version of a package records three fields:
+Each published version of a package records four fields:
 - `hash`: a [`Sha256`](#Sha256) of the compressed archive fetched by the registry for the given version
 - `bytes`: the size of the tarball in bytes
 - `publishedTime`: the time the package was published as an `ISO8601` string
+- `compilers`: compiler versions this package is known to work with. This field can be in one of two states: a single version indicates that the package worked with a specific compiler on upload but has not yet been tested with all compilers, whereas a non-empty array of versions indicates the package has been tested with all compilers the registry supports.
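
To make the two states concrete, here is a hypothetical excerpt of the `published` section of a metadata file. The versions, hash values, and sizes are illustrative only, not taken from the PR; the field layout follows the description above:

```json
{
  "published": {
    "1.0.0": {
      "hash": "sha256-AAAA...",
      "bytes": 40000,
      "publishedTime": "2023-01-01T12:00:00.000Z",
      "compilers": "0.15.10"
    },
    "2.0.0": {
      "hash": "sha256-BBBB...",
      "bytes": 41000,
      "publishedTime": "2023-06-01T12:00:00.000Z",
      "compilers": ["0.15.8", "0.15.9", "0.15.10"]
    }
  }
}
```

In the first entry the package compiled with `0.15.10` at publish time but the full compiler matrix has not yet run; in the second, the registry has tested every supported compiler and recorded the ones that work.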
Wouldn't it be tidier to only allow a non-empty array instead of several possible types? After all, the state with multiple compilers listed is going to be a superset of the first state.

The issue with the non-empty array is that it isn't clear whether an array of a single element represents one of:

- a package that compiled with that one compiler on upload but has not yet been tested against the full set of compilers, or
- a package that has been tested against the full set of compilers and is only compatible with that one.
When are we going to end up in a situation where we don't test the package against the whole set of compilers? My reading of the PR is that we always do. In any case, we'll always have packages that are not "tested against the full set of compilers": when a new compiler version comes out, all packages will need a retest, and if a package doesn't have the new compiler in the array then we don't know whether it's incompatible or simply hasn't been tested yet. Maybe we need another piece of state somewhere else?
Yes, as implemented here we just go ahead and test everything as soon as we've published. However, I split out the state because in our initial discussions we worried about how long the compiler builds take to run (they take publishing from N seconds to N minutes in some cases, namely large libraries or ones that leverage a lot of type machinery). We'd originally talked about the compiler matrix being a cron job that runs later in the day; I made it part of the publishing pipeline directly because it was simpler to implement. If we decide that it's OK for publishing to take a long time then we can eliminate this state and just test the compilers immediately. In that case we'd have a plain non-empty array.
Yea, that's a good point. You don't know if the metadata you're reading just hasn't been reached yet by an ongoing mass compiler build to check a new compiler.
Off the top of my head I don't know a good place to put some state about possible compiler support; the metadata files are not helpful if a new compiler comes out and we're redoing the build, since they're only aware of the one package.
I'm cool with this if you are.
We could either (a) say that the supported list of compilers for a package can potentially be missing the current compiler if the matrix is currently running, and not bother with extra state, or (b) put a JSON file or something in the metadata directory that indicates whether the compiler matrix is running, which consumers can then look at. Personally, the matrix runs infrequently enough (just new compiler releases!) that I would rather opt for (a).

I pondered this for a few days and I think it's complicated. Since we're going towards a model where we'd only run one registry job at a time and queue the rest (to prevent concurrent pushes to the repo), I'm afraid that running the whole matrix at once would make publishing very slow.
The approach detailed above implies that we're in a world where we do (a), i.e. the list of compilers is always potentially out of date, and that's fine.

Additional note about the above: since this would be introducing an "asynchronous matrix builder", we need to consider the dependency tree in our rebuilding. If a package A is published with compiler X, and a package B depending on it is published immediately after (a very common use case, since folks seem to publish their packages in batches), then we'd need to either make sure that matrix-build jobs for B always run after matrix-build jobs for A, or retry them somehow.
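
As a reference point for this thread, the two states could be modeled as a sum type on the registry side. This is a hypothetical sketch; the type and constructor names are invented here, not taken from the PR:

```purescript
module Sketch.CompilersStatus where

import Data.Array.NonEmpty (NonEmptyArray)
import Registry.Version (Version)

-- Hypothetical model of the two states of the `compilers` field:
-- a lone version means the package compiled with that compiler at
-- publish time but the full matrix has not yet run, while a
-- non-empty array means the package has been tested against every
-- compiler the registry supports.
data CompilersStatus
  = SingleCompiler Version
  | AllCompilers (NonEmptyArray Version)
```

Collapsing this to a plain non-empty array would lose exactly the distinction raised above: a one-element array could mean either state.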
 Each unpublished version of a package records three fields:
@@ -168,7 +168,6 @@ handleMemoryFs env = case _ of
       case inFs of
         Nothing -> pure $ reply Nothing
         Just entry -> do
-          Log.debug $ "Fell back to on-disk entry for " <> memory
These are just so noisy. Maybe we can introduce a

Is this log useful at all? I think it's ok to just remove it.

Yea, I think they're not really useful now that we're confident the cache works correctly. I had them in there from when I first developed it, when I would either sometimes see things I thought should be cached not get cached, or I wanted to make sure something I removed from the cache really was.
           putMemoryImpl env.ref unit (Key memory (Const entry))
           pure $ reply $ Just $ unCache entry
       Just cached ->
@@ -226,8 +225,7 @@ getMemoryImpl ref (Key id (Reply reply)) = do
   let (unCache :: CacheValue -> b) = unsafeCoerce
   cache <- Run.liftEffect $ Ref.read ref
   case Map.lookup id cache of
-    Nothing -> do
-      Log.debug $ "No cache entry found for " <> id <> " in memory."
+    Nothing ->
       pure $ reply Nothing
     Just cached -> do
       pure $ reply $ Just $ unCache cached
@@ -236,7 +234,6 @@ putMemoryImpl :: forall x r a. CacheRef -> a -> MemoryEncoding Const a x -> Run
 putMemoryImpl ref next (Key id (Const value)) = do
   let (toCache :: x -> CacheValue) = unsafeCoerce
   Run.liftEffect $ Ref.modify_ (Map.insert id (toCache value)) ref
-  Log.debug $ "Wrote cache entry for " <> id <> " in memory."
   pure next
 deleteMemoryImpl :: forall x r a. CacheRef -> MemoryEncoding Ignore a x -> Run (LOG + EFFECT + r) a
@@ -275,7 +272,6 @@ getFsImpl cacheDir = case _ of
     let path = Path.concat [ cacheDir, safePath id ]
     Run.liftAff (Aff.attempt (FS.Aff.readFile path)) >>= case _ of
       Left _ -> do
-        Log.debug $ "No cache found for " <> id <> " at path " <> path
         pure $ reply Nothing
       Right buf -> do
         pure $ reply $ Just buf
@@ -284,7 +280,6 @@ getFsImpl cacheDir = case _ of
     let path = Path.concat [ cacheDir, safePath id ]
     Run.liftAff (Aff.attempt (FS.Aff.readTextFile UTF8 path)) >>= case _ of
       Left _ -> do
-        Log.debug $ "No cache file found for " <> id <> " at path " <> path
         pure $ reply Nothing
       Right content -> case Argonaut.Parser.jsonParser content of
         Left parseError -> do
@@ -307,7 +302,6 @@ putFsImpl cacheDir next = case _ of
         Log.warn $ "Failed to write cache entry for " <> id <> " at path " <> path <> " as a buffer: " <> Aff.message fsError
         pure next
       Right _ -> do
-        Log.debug $ "Wrote cache entry for " <> id <> " as a buffer at path " <> path
         pure next
     AsJson id codec (Const value) -> do
@@ -317,7 +311,6 @@ putFsImpl cacheDir next = case _ of
         Log.warn $ "Failed to write cache entry for " <> id <> " at path " <> path <> " as JSON: " <> Aff.message fsError
         pure next
       Right _ -> do
-        Log.debug $ "Wrote cache entry for " <> id <> " at path " <> path <> " as JSON."
         pure next
 deleteFsImpl :: forall a b r. FilePath -> FsEncoding Ignore a b -> Run (LOG + AFF + r) a
@@ -428,7 +428,7 @@ validatePackageSet (PackageSet set) = do
   -- We can now attempt to produce a self-contained manifest index from the
   -- collected manifests. If this fails then the package set is not
   -- self-contained.
-  Tuple unsatisfied _ = ManifestIndex.maximalIndex (Set.fromFoldable success)
+  Tuple unsatisfied _ = ManifestIndex.maximalIndex ManifestIndex.IgnoreRanges (Set.fromFoldable success)
We always ignore ranges in package sets, but we should rely on them otherwise, especially now that we're actually solving packages as part of publishing and can be more trusting that they aren't bogus.
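
For context, the new first argument to `maximalIndex` selects whether declared dependency ranges are honored when building the index. A minimal sketch of what such a flag could look like; `IgnoreRanges` appears at the call site above, while `ConsiderRanges` is an assumed counterpart (the actual definition lives in `ManifestIndex`):

```purescript
-- Sketch of the flag passed to ManifestIndex.maximalIndex: package
-- set validation ignores declared dependency ranges, while other
-- callers consider them now that packages are solved at publish time.
data IncludeRanges
  = ConsiderRanges
  | IgnoreRanges
```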
   -- Otherwise, we can check if we were able to produce an index from the
   -- package set alone, without errors.
Surely we can make this more future proof 😄
Yea, I think you're right 😆