fix: compute sha1 on .jar files of size up to buffer.constants.MAX_LENGTH #600
Because we currently decompress .jar files using adm-zip, which does not support streaming, we load the whole .jar file into memory using the Buffer class. Due to this design, we can only read .jar files of up to `buffer.constants.MAX_LENGTH`. On 32-bit architectures, this value is currently 2^30 - 1 (about 1 GiB); on 64-bit architectures, it is currently 2^32 (about 4 GiB).
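For context, a minimal sketch of how the archive ends up in memory, assuming the project's adm-zip usage (the file path and logging are illustrative):

```js
const AdmZip = require('adm-zip');
const { constants } = require('buffer');

// adm-zip has no streaming API: the constructor reads the whole archive into a
// Buffer, so the file size is capped at buffer.constants.MAX_LENGTH.
const zip = new AdmZip('example.jar'); // path is illustrative

for (const entry of zip.getEntries()) {
  const data = entry.getData(); // Buffer holding the full decompressed entry
  console.log(entry.entryName, data.length, 'of max', constants.MAX_LENGTH);
}
```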
What does this PR do?

Refactors the `bufferToSha1` function to support hashing of large buffers by reading the buffer in chunks (see the sketch below). This fixes the case where the crypto module throws `RangeError: data is too long` because it cannot handle the full data size in a single call. However, due to the current design, we still only support hashing .jar files of up to `buffer.constants.MAX_LENGTH`.
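A minimal sketch of the chunked approach, assuming Node's built-in `crypto` module; the function name `bufferToSha1` follows the PR, while the chunk size is illustrative:

```js
const crypto = require('crypto');

// Hash a potentially multi-GiB Buffer without passing it to hash.update()
// in one call, which would throw RangeError: data is too long.
function bufferToSha1(buffer) {
  const hash = crypto.createHash('sha1');
  const CHUNK_SIZE = 64 * 1024 * 1024; // 64 MiB per update() call (illustrative)
  for (let offset = 0; offset < buffer.length; offset += CHUNK_SIZE) {
    // subarray() returns a view over the same memory, so no data is copied
    hash.update(buffer.subarray(offset, offset + CHUNK_SIZE));
  }
  return hash.digest('hex');
}
```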
Additional questions