kvdb-rocksdb uses an old rocksdb, leading to two separate security issues #81
Comments
prometheus-client has been updated. libp2p has yet to update to the new prometheus-client, and I don't care to fork libp2p to do so.
The following patch DOES use the new code.
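The patch referenced above isn't reproduced here. As a hedged illustration only, a Cargo `[patch]` override of roughly this shape is how such a dependency swap is typically wired in; the fork URL, branch, and crate selection are hypothetical placeholders, not the actual patch:

```toml
# Workspace Cargo.toml (sketch). All names below are placeholders.
[patch.crates-io]
# Point libp2p-metrics at a fork that has been bumped to the new prometheus-client.
# Cargo resolves the package from the fork's workspace, provided its version
# matches what the dependency graph expects.
libp2p-metrics = { git = "https://github.com/example/rust-libp2p", branch = "prometheus-client-bump" }
```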
libp2p is up to date and can be patched if we're willing to patch our copy of substrate. It's not an immediate priority, yet good to see.
https://github.com/paritytech/substrate/tree/polkadot-v0.9.29 is available. It doesn't address the above issues, but we should still update to match Parity's releases.
The new kvdb-rocksdb was just tagged 🎉
libp2p has updated to the new prometheus-client, yet they haven't released the version containing it (0.8.0 for libp2p-metrics).
Updating kvdb requires updating all of parity-common's crates. Best to wait a bit longer, I guess.
paritytech/substrate@fc67cbb has been merged into their master.
See paritytech/parity-common#659 for its usage of owning_ref, which is also used by prometheus-client (prometheus/client_rust#77).
paritytech/parity-common#437 was closed six months ago as wontfix because rocksdb would be "deprecated" in favor of paritydb, despite there still being no timeline for that and no further reasoning directly stated (which would've been appreciated, though it may simply be available elsewhere and accordingly fine).
kvdb-rocksdb also uses rocksdb 0.18, which has a vulnerability in a certain API. While that may not be relevant to us, I'd appreciate moving to 0.19 to close out the advisory.
We'd have to fork parity-common to resolve this while staying on rocksdb (see the sketch below for what that override would involve), and since we're reliant on prometheus-client's update anyways, there's no reason to yet. Thankfully, they are handling this properly.
Relevant to #28.
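As a rough sketch only, staying on rocksdb via a fork would mean overriding the affected parity-common crates in the workspace Cargo.toml; the fork URL and branch below are hypothetical placeholders, and no such fork exists:

```toml
# Workspace Cargo.toml (sketch). All names below are placeholders.
[patch.crates-io]
# A hypothetical fork of parity-common bumped to rocksdb 0.19. Since updating
# kvdb-rocksdb drags in the rest of parity-common's crates, kvdb is overridden
# alongside it so the two stay in lockstep.
kvdb-rocksdb = { git = "https://github.com/example/parity-common", branch = "rocksdb-0.19" }
kvdb = { git = "https://github.com/example/parity-common", branch = "rocksdb-0.19" }
```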