Decryption result is not checked before running decompression #101
Comments
Hi Olivier, What compressor are you using, Snappy or LZ4? Reason I'm asking: this was a known issue with Snappy, and one of the motivations for switching from Snappy to LZ4 as the default Nippy compressor.
Hi Peter, yeah, we are using LZ4 like this for encryption/compression:
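(The original code snippet isn't preserved in this copy of the thread. A minimal sketch of this kind of usage, assuming Nippy's documented `:password` and `:compressor` options and hypothetical example data, might look like the following.)

```clojure
;; A sketch only -- the example data and password below are hypothetical,
;; not the reporter's actual values.
(require '[taoensso.nippy :as nippy])

(def payload {:user-id 42 :roles #{:admin}})

;; Freeze with LZ4 compression + password-based encryption:
(def frozen
  (nippy/freeze payload
    {:password   [:salted "my-password"]
     :compressor nippy/lz4-compressor}))

;; Thawing with a *different* password is where the reported blow-up occurs:
(nippy/thaw frozen {:password [:salted "wrong-password"]})
```

With the then-default AES-CBC encryptor, a wrong password often produces junk plaintext rather than an exception, and that junk is what reaches the decompressor.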
So it seems there is the same issue with LZ4. I think this issue might not be resolved just by changing the compression, but by checking that the decryption was successful.
Any news on this issue? If I understand correctly, we could add a new encryptor-id like …
Hi Olivier, Thanks a lot for bringing this to my attention; this is something that should be fixed. Haven't had an opportunity to think about this yet. Some kind of hash/checksum would certainly be one option. Haven't had much time for open source recently, but will make this my next priority when I have some free time 👍.
Why?
- AES-GCM is faster and can be more secure, Ref. https://goo.gl/Dsc9mL, etc.
- AES-GCM is an authenticated [1] encryption mechanism, providing automatic integrity checks. This is relevant to [#101].

What's the issue with #101?
- We compress then encrypt on freeze; the reverse would make compression useless
- So we decrypt then decompress on thaw

Attempting CBC decryption with the wrong password will often but not *always* throw, meaning it's possible for decompression to be attempted on a junk byte array. This can cause some decompressors to fail in a destructive way, including large allocations (DoS) or even taking down the JVM in extreme cases.

Possible solutions?
- We could add our own HMAC, etc.
- And/or we could use something like AES-GCM, which offers built-in integrity and will throw an AEADBadTagException on failure.

There may indeed be reasons [2,3,4] to consider adding a custom HMAC, and that's still on the cards for later. But in the meantime, the overall balance of pros/cons seems to lean in the direction of choosing AES-GCM as a reasonable default.

Note that the change in this commit is done in a backward-compatible way using Nippy's versioned header: new payloads will be written using AES-GCM by default, but old payloads already written using AES-CBC will continue to be read using that scheme.

References
[1] https://en.wikipedia.org/wiki/Authenticated_encryption
[2] https://www.daemonology.net/blog/2009-06-24-encrypt-then-mac.html
[3] https://blog.cryptographyengineering.com/2011/12/04/matt-green-smackdown-watch-are-aead/
[4] HMAC vs AEAD integrity, https://crypto.stackexchange.com/q/24379
[5] AES-GCM vs HMAC-SHA256 integrity, https://crypto.stackexchange.com/q/30627
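To illustrate the integrity property the commit relies on, here is a small standalone sketch using the JDK's `javax.crypto` classes (this is not Nippy's internal encryptor code): AES-GCM rejects a wrong key at decryption time, so junk bytes never reach a decompressor.

```clojure
(import '[javax.crypto Cipher AEADBadTagException]
        '[javax.crypto.spec SecretKeySpec GCMParameterSpec]
        '[java.security SecureRandom])

(defn rand-bytes ^bytes [n]
  (let [ba (byte-array n)] (.nextBytes (SecureRandom.) ba) ba))

(defn gcm-encrypt [^bytes key-bytes ^bytes plaintext]
  (let [iv     (rand-bytes 12)                        ; 96-bit IV, standard for GCM
        cipher (Cipher/getInstance "AES/GCM/NoPadding")]
    (.init cipher Cipher/ENCRYPT_MODE
           (SecretKeySpec. key-bytes "AES")
           (GCMParameterSpec. 128 iv))                ; 128-bit auth tag
    {:iv iv :ciphertext (.doFinal cipher plaintext)}))

(defn gcm-decrypt [^bytes key-bytes {:keys [iv ciphertext]}]
  (let [cipher (Cipher/getInstance "AES/GCM/NoPadding")]
    (.init cipher Cipher/DECRYPT_MODE
           (SecretKeySpec. key-bytes "AES")
           (GCMParameterSpec. 128 iv))
    (.doFinal cipher ciphertext)))                    ; wrong key => AEADBadTagException

;; Decrypting with the wrong key fails the tag check up front, so no junk
;; bytes are ever handed to a decompressor:
(let [good-key (rand-bytes 16)
      bad-key  (rand-bytes 16)
      enc      (gcm-encrypt good-key (.getBytes "hello"))]
  (try
    (gcm-decrypt bad-key enc)
    (catch AEADBadTagException _ :integrity-check-failed)))
```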
Hi @obohrer, I've just pushed a fix for this. As you'll see in the relevant commit, this should help address the problem you're seeing. I'll note: the integrity/tag check will only apply to newly written/frozen data. Hope that helps?
Hi,
Thank you for creating nippy. It is a great library which works quite well for us, apart from a recent issue we encountered:
We are experiencing an issue with nippy when trying to thaw a byte-array which has been encrypted with a different encryption key.
The 5th byte of the decrypted payload contains the `len-decomp`, but if the wrong encryption key was used this length can be huge, resulting in a `java.lang.OutOfMemoryError: Java heap space`. We expect nippy to throw an exception, not blow up the JVM.

Do you think a new version of nippy could check the decryption result before trying to decompress it? (For example by verifying a magic series of bytes after the `NPY` header?)
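To make the reported failure mode concrete, the sketch below is a hypothetical illustration, not Nippy's actual thaw implementation: a decompression step that trusts a length prefix read from bytes that are effectively random after decryption with the wrong key.

```clojure
(import '[java.nio ByteBuffer])

;; Hypothetical illustration only: a decompression step that trusts a length
;; prefix read from the decrypted bytes. If decryption used the wrong key,
;; those bytes are effectively random, so the prefix can be any value up to
;; Integer/MAX_VALUE and the allocation below can exhaust the heap.
(defn naive-decompress [^bytes decrypted]
  (let [len-decomp (.getInt (ByteBuffer/wrap decrypted))] ; junk bytes => arbitrary length
    ;; Allocate the output buffer based on the untrusted length:
    (byte-array len-decomp)))                             ; may throw OutOfMemoryError

;; Junk payload whose first 4 bytes decode to Integer/MAX_VALUE (~2 GB):
(comment
  (naive-decompress (byte-array [127 -1 -1 -1 0 0 0 0])))
```

An integrity check on the decrypted bytes (a checksum, an HMAC, or an AEAD mode such as AES-GCM) would reject the payload before this length is ever read.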