Support MD5 hashes #1556

Merged
charliermarsh merged 1 commit into main from charlie/md5 on Feb 17, 2024

Conversation

charliermarsh
Member

Summary

We can add other hashes if necessary, but I don't know that they're really used in practice.

Closes #1547.
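
For illustration only, here is a minimal sketch of what accepting `md5:` alongside `sha256:` in a `<algorithm>:<digest>` hash string could look like; the names and structure are hypothetical and not uv's actual implementation:

```rust
// Hypothetical sketch, not uv's code: parse `<algorithm>:<digest>` strings,
// accepting `md5` in addition to `sha256`.
use std::str::FromStr;

#[derive(Debug, PartialEq, Eq)]
enum HashAlgorithm {
    Md5,
    Sha256,
}

impl FromStr for HashAlgorithm {
    type Err = String;

    fn from_str(s: &str) -> Result<Self, Self::Err> {
        match s {
            "md5" => Ok(Self::Md5),
            "sha256" => Ok(Self::Sha256),
            other => Err(format!("unsupported hash algorithm: {other}")),
        }
    }
}

#[derive(Debug)]
struct HashDigest {
    algorithm: HashAlgorithm,
    digest: String,
}

impl FromStr for HashDigest {
    type Err = String;

    fn from_str(s: &str) -> Result<Self, Self::Err> {
        let (algorithm, digest) = s
            .split_once(':')
            .ok_or_else(|| format!("expected `<algorithm>:<digest>`, got `{s}`"))?;
        Ok(Self {
            algorithm: algorithm.parse()?,
            digest: digest.to_string(),
        })
    }
}

fn main() {
    let hash: HashDigest = "md5:d41d8cd98f00b204e9800998ecf8427e".parse().unwrap();
    assert_eq!(hash.algorithm, HashAlgorithm::Md5);
}
```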

@charliermarsh added the compatibility label (Compatibility with a specification or another tool) on Feb 17, 2024
@charliermarsh enabled auto-merge (squash) on February 17, 2024 00:20
@charliermarsh merged commit c1eb613 into main on Feb 17, 2024
7 checks passed
@charliermarsh deleted the charlie/md5 branch on February 17, 2024 00:25
BurntSushi added a commit that referenced this pull request Feb 19, 2024
This PR introduces more robust cache healing when `uv` fails to
deserialize an existing cache entry.

("Cache healing" in this context means that if `uv` fails to
deserialize a cache entry, then it will automatically invalidate that
entry and re-generate the data. Typically by sending an HTTP request.)
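
As a rough illustration of that healing pattern (a simplified sketch, not the actual cache code; `fetch_from_network` is a stand-in for the real HTTP request):

```rust
use std::fs;
use std::io;
use std::path::Path;

// Stand-in for the HTTP request that re-generates a cache entry.
fn fetch_from_network() -> io::Result<Vec<u8>> {
    Ok(b"fresh payload".to_vec())
}

/// Read a cached entry; if it is missing or cannot be "deserialized"
/// (here crudely modeled as being empty), invalidate it and rebuild it.
fn read_with_healing(path: &Path) -> io::Result<Vec<u8>> {
    match fs::read(path) {
        Ok(bytes) if !bytes.is_empty() => Ok(bytes),
        _ => {
            // Healing: drop the unusable entry and re-generate it.
            let _ = fs::remove_file(path);
            let fresh = fetch_from_network()?;
            fs::write(path, &fresh)?;
            Ok(fresh)
        }
    }
}

fn main() -> io::Result<()> {
    let payload = read_with_healing(Path::new("uv-cache-entry.example"))?;
    println!("read {} bytes", payload.len());
    Ok(())
}
```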

Prior to some optimizations I made around deserialization, we were
already doing this. After those optimizations, deserializing the cache
policy and deserializing the payload were split into two steps. While
deserializing the cache policy retained its cache healing behavior,
deserializing the payload did not. This became an issue when #1556
landed, which changed one of our `rkyv` data types and in turn made our
internal types incompatible with existing cache entries. One could work
around this by clearing `uv`'s cache with `uv clean`, but we should
just do it automatically, on an entry-by-entry basis.

This does technically introduce a new cost: we pessimistically clone
the HTTP request so that we can re-send it if necessary (see the commit
messages for the knot that pushed me toward this approach). So I re-ran
my favorite ad-hoc benchmark:

```
$ hyperfine -w10 --runs 50 "uv-main pip compile --cache-dir ~/astral/tmp/cache-main ~/astral/tmp/reqs/home-assistant-reduced.in -o /dev/null" "uv-test pip compile --cache-dir ~/astral/tmp/cache-test ~/astral/tmp/reqs/home-assistant-reduced.in -o /dev/null"
Benchmark 1: uv-main pip compile --cache-dir ~/astral/tmp/cache-main ~/astral/tmp/reqs/home-assistant-reduced.in -o /dev/null
  Time (mean ± σ):     114.4 ms ±   3.2 ms    [User: 149.4 ms, System: 221.5 ms]
  Range (min … max):   106.7 ms … 122.0 ms    50 runs

Benchmark 2: uv-test pip compile --cache-dir ~/astral/tmp/cache-test ~/astral/tmp/reqs/home-assistant-reduced.in -o /dev/null
  Time (mean ± σ):     114.0 ms ±   3.0 ms    [User: 146.0 ms, System: 223.3 ms]
  Range (min … max):   105.3 ms … 121.4 ms    50 runs

Summary
  uv-test pip compile --cache-dir ~/astral/tmp/cache-test ~/astral/tmp/reqs/home-assistant-reduced.in -o /dev/null ran
    1.00 ± 0.04 times faster than uv-main pip compile --cache-dir ~/astral/tmp/cache-main ~/astral/tmp/reqs/home-assistant-reduced.in -o /dev/null
```

Which is about what I expected.
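
To make the cloning cost concrete, here is roughly the shape of the pattern, sketched with reqwest's blocking client (assumed here for brevity; the real code is async, and the retry trigger is a failed cache deserialization rather than the send error shown below):

```rust
use reqwest::blocking::{Client, Request, Response};

fn send_with_fallback(client: &Client, request: Request) -> reqwest::Result<Response> {
    // The pessimistic clone: paid up front so the request can be re-sent
    // later if the cached payload turns out to be undeserializable.
    let retry = request
        .try_clone()
        .expect("request bodies used here are clonable");

    match client.execute(request) {
        Ok(response) => Ok(response),
        // Stand-in for "the cache entry could not be deserialized":
        // fall back to re-sending the cloned request.
        Err(_) => client.execute(retry),
    }
}

fn main() -> reqwest::Result<()> {
    let client = Client::new();
    let request = client.get("https://pypi.org/simple/").build()?;
    let response = send_with_fallback(&client, request)?;
    println!("{}", response.status());
    Ok(())
}
```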

We should endeavor to have a better testing strategy for these kinds of
bugs, but I think it might be a little tricky to do. I created
#1699 to track that.

Fixes #1571
Labels
compatibility Compatibility with a specification or another tool

Development
Successfully merging this pull request may close these issues:
uv pip install fails when using Nexus due to md5 hash