Make BEEFY client able to generate either ECDSA or (ECDSA,BLS) Finality proof. #13311
Conversation
- break the 85000-char line in the commitment test into multiple lines.
- remove unused H256, H512 hash types from bls
… skalman-bls-beefy
bls_verify - implement bls wrapper for application-crypto
aggregatable signature. - Make Witness flexible so it can use different Aggregators (either Merkle or BLS).
because it is not possible with bitfield aggregation. Compiles but tests do not compile yet.
using bls aggregator
…an/substrate into skalman-bls-beefy-client
CryptoStore into appropriate key store.
I left a couple of comments. Most of them are about some duplicated code that's a result of previous implementations. But I think it would be better to fix them in a separate PR that would be smaller and easier to review.
Cargo.toml
Outdated
@@ -321,3 +321,15 @@ inherits = "release"
lto = "fat"
# https://doc.rust-lang.org/rustc/codegen-options/index.html#codegen-units
codegen-units = 1

[patch.crates-io]
bls-like = {git = "https://github.com/w3f/bls", branch = "skalman-hash-to-curve-wb"}
Nit: This doesn't seem to be needed, since everywhere it's used like `bls-like = {git = "https://github.com/w3f/bls", branch = "skalman-hash-to-curve-wb"}`. However, it would make sense to use it like `bls-like = {git = "https://github.com/w3f/bls", branch = "master"}` in other places, in which case this patch would be needed.
Also, is this library reviewed and production-ready? Or should we also review it?
I'd probably deprecate that whole crate myself. It's a legacy design that pre-dates our gossip optimizations in https://eprint.iacr.org/2022/1611 and the apk proofs work.
As Syed did the hash-to-curve work, it should work fine and comply with the proposed standard such as it is, but it makes verifying BLS signatures quite slow (like everyone else's BLS). We know how to do this faster.
/// Verify a signature on a message. Returns true if the signature is good.
fn verify<M: AsRef<[u8]>>(sig: &Self::Signature, message: M, pubkey: &Self::Public) -> bool {
	Self::verify_weak(&sig.0[..], message.as_ref(), pubkey)
Nit: `message.as_ref()` doesn't seem to be needed. We can just use `message`.
}

#[cfg(feature = "full_crypto")]
impl TraitPair for Pair {
Some methods seem to be duplicated for `bls`, `ecdsa` and `ed25519`. So maybe we could have a default definition for them on the `TraitPair` trait.
For example:
- `generate_with_phrase` (a rough sketch of a shared default is given after this comment)
- also `from_seed` could be defined on `TraitPair` with some small modifications:

fn from_seed(seed: &Self::Seed) -> Self {
	Self::from_seed_slice(seed.as_ref()).expect("seed has valid length; qed")
}

- `from_phrase` as well, with some small modifications
- `derive()` could be deduplicated as well if we define for example a `BaseDeriveError` that is common to all structs implementing `TraitPair`.

Maybe other methods can be deduplicated as well.
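For reference, here is a rough sketch of what a shared default for `generate_with_phrase` could look like on the trait. This is only a sketch under assumptions: it presumes the bip39 `Mnemonic` helper already used by the concrete implementations, and that `from_phrase` remains a required method each scheme provides.

```rust
use bip39::{Language, Mnemonic, MnemonicType};

// Hypothetical default method on the `TraitPair` trait; `Self::from_phrase` and
// `Self::Seed` are supplied by each concrete signature scheme.
fn generate_with_phrase(password: Option<&str>) -> (Self, String, Self::Seed) {
	// Generate a fresh 12-word mnemonic and derive the pair (and seed) from it.
	let mnemonic = Mnemonic::new(MnemonicType::Words12, Language::English);
	let phrase = mnemonic.phrase();
	let (pair, seed) = Self::from_phrase(phrase, password)
		.expect("the phrase was just generated, so it is valid; qed");
	(pair, phrase.to_owned(), seed)
}
```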
fn try_from(data: &[u8]) -> Result<Self, Self::Error> {
	if data.len() != Self::LEN {
		return Err(())
	}
	let mut r = [0u8; Self::LEN];
	r.copy_from_slice(data);
	Ok(Self::unchecked_from(r))
}
This logic seems the same as the one for `ed25519` and `sr25519`. Maybe we could move it to a shared method or a trait.
fn try_from(data: &[u8]) -> Result<Self, Self::Error> {
	if data.len() == BLS377::SIGNATURE_SERIALIZED_SIZE {
		let mut inner = [0u8; BLS377::SIGNATURE_SERIALIZED_SIZE];
		inner.copy_from_slice(data);
		Ok(Signature(inner))
	} else {
		Err(())
	}
}
This logic seems the same as the one for `ed25519` and `sr25519`. Maybe we could move it to a shared method or a trait. In general I would deduplicate as much as possible.
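To make the deduplication idea concrete, one possible shape for the shared logic is sketched below. This is only an illustration: the `From<[u8; LEN]>` bound stands in for whatever constructor (`unchecked_from` or similar) the concrete key and signature types actually expose.

```rust
/// Hypothetical shared helper: convert a byte slice into a fixed-size key or
/// signature type, failing when the length does not match.
fn from_slice_checked<const LEN: usize, T: From<[u8; LEN]>>(data: &[u8]) -> Result<T, ()> {
	// `try_into` performs the same length check that is done manually above.
	let inner: [u8; LEN] = data.try_into().map_err(|_| ())?;
	Ok(T::from(inner))
}
```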
primitives/beefy/src/witness.rs
Outdated
@@ -57,18 +60,18 @@ impl<TBlockNumber, TMerkleRoot> SignedCommitmentWitness<TBlockNumber, TMerkleRoo
 /// and a merkle root of all signatures.
 ///
 /// Returns the full list of signatures along with the witness.
-pub fn from_signed<TMerkelize, TSignature>(
+pub fn from_signed<TSignatureAggregator, TSignature, TAggregatableSignature>(
Nit: `TAggregatableSignature` doesn't seem to be used.
It was supposed to be populated by the apk prover after it received enough signatures (see the test), because multiple aggregations can not be aggregated.
Nonetheless, I think we might use the commitment here instead.
Hmm, maybe you mean in this function? Isn't the return value of type `TAggregatableSignature`?
> hmm maybe you mean in this function?

Yes, exactly.

> isn't the return value of TAggregatableSignature type?

It doesn't seem to be used for the return type. If I understand correctly, `TAggregatedSignature` is used in the return type. And the code compiles without it.
fn bls_public_keys(&self, key_type: KeyTypeId) -> Vec<bls::Public> {
	self.0
		.read()
		.raw_public_keys(key_type)
		.map(|v| {
			v.into_iter()
				.filter_map(|k| bls::Public::from_slice(k.as_slice()).ok())
				.collect()
		})
		.unwrap_or_default()
}
This code seems similar to the one used for ecdsa, ed25519, sr25519. I would try to deduplicate it. For example I would define a method:
fn public_keys<Public: ByteArray>(&self, key_type: KeyTypeId) -> Vec<Public> {
self.0
.read()
.raw_public_keys(key_type)
.map(|v| {
v.into_iter()
.filter_map(|k| Public::from_slice(k.as_slice()).ok())
.collect()
})
.unwrap_or_default()
}
on `LocalKeystore`, and then I would reimplement:
fn ecdsa_public_keys(&self, key_type: KeyTypeId) -> Vec<ecdsa::Public> {
self.public_keys::<ecdsa::Public>(key_type)
}
...
fn bls_public_keys(&self, key_type: KeyTypeId) -> Vec<bls::Public> {
self.public_keys::<bls::Public>(key_type)
}
fn bls_generate_new(
	&self,
	id: KeyTypeId,
	seed: Option<&str>,
) -> std::result::Result<bls::Public, TraitError> {
	let pair = match seed {
		Some(seed) => self.0.write().insert_ephemeral_from_seed_by_type::<bls::Pair>(seed, id),
		None => self.0.write().generate_by_type::<bls::Pair>(id),
	}
	.map_err(|e| -> TraitError { e.into() })?;

	Ok(pair.public())
}
The same comment as the one for `bls_public_keys`. I would try to deduplicate this code.
let own_ecdsa_key = self.both().0.authority_id(&ecdsa_pubkeys);
let own_bls_key = self.both().1.authority_id(&bls_pubkeys);
I see that for this keystore the key is made of a BLS key and an ECDSA key. Do these 2 keys need to be matching somehow (generated from the same seed or something)? If they need to be matching, does the logic here guarantee that we are retrieving 2 matching keys from the ECDSA keystore and the BLS keystore?
> Do this 2 keys need to be matching somehow (generated from the same seed or something) ?

Afaict, it's immaterial if the seeds/privkeys are distinct, since to the outside what matters is the registered ECDSA & BLS pubkeys. If there's need for more, you could achieve this with a DLEQ proof.
_auth_id: PhantomData<AuthId>,
_signature: PhantomData<TSignature>,
_keystor: PhantomData<BKS>,
I saw a couple of places where we have fields declared like this. Do we plan to use them in the future or is it just a way of declaring phantom data?
If we plan to use them in the future, I think it would be helpful to write in the description of the PR what is the current status, what remains to be done, and maybe also mark the PR as a draft if needed.
// we are going to aggregate the signatures here
let mut aggregatedsigs: SignatureAggregatorAssumingPoP<BLS377> =
	SignatureAggregatorAssumingPoP::new();
sigs.iter().filter_map(|sig| {
Not only did I assume the commitment is the aggregated signature, I thought it should be computed only by the prover, not by each validator. I think @AlistairStewart says that this is the commitment and it should be computed and signed by all validators:
I added the commitment to the set of public keys here: https://github.com/paritytech/substrate/pull/13311/files#diff-90d0377841064e4bcca6f0956bf16bd71e11159f9dd6d5975b91f2b4d901b0a4R200
It is not clear to me where it should actually be computed and updated.
main requested change is using consistent naming conventions; I'm otherwise happy with the direction of the PR :)
/// Check if the keystore contains a private key for one of the public keys
/// contained in `keys`. A public key with a matching private key is known
/// as a local authority id.
///
/// Return the public key for which we also do have a private key. If no
/// matching private key is found, `None` will be returned.
fn authority_id(&self, keys: &[(ECDSAPublic, BLSPublic)]) -> Option<Self::Public> {
This currently checks whether both keys are present though, returning `None` otherwise. So either the expressed intent or the impl should be adjusted.
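A minimal sketch of the behaviour being pointed out (the names below are illustrative, not the PR's exact code): the combined authority id is produced only when both lookups succeed.

```rust
/// Illustrative only: a combined (ECDSA, BLS) authority id is returned only when
/// both private keys are found in the keystore; if either lookup fails, the whole
/// result is `None`.
fn combined_authority_id<E, B>(own_ecdsa_key: Option<E>, own_bls_key: Option<B>) -> Option<(E, B)> {
	match (own_ecdsa_key, own_bls_key) {
		(Some(ecdsa), Some(bls)) => Some((ecdsa, bls)),
		_ => None,
	}
}
```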
	id: KeyTypeId,
	public: &bls::Public,
	msg: &[u8],
) -> Result<Option<bls::Signature>, Error>;
}
nit: in the same spirit as @serban300's deduplication recommendation elsewhere, since the function interfaces look the same here, I'd unify all the `SIGSCHEME_{public_keys, generate_new, sign, ...}` implementations with generic function interfaces and concrete instantiations for every signature scheme.
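For instance, following the same pattern as the `public_keys` helper suggested above, the `*_generate_new` methods could be unified roughly like this. This is only a sketch: `CorePair` stands in for whatever pair trait `generate_by_type` and `insert_ephemeral_from_seed_by_type` actually require.

```rust
// Hypothetical generic helper on LocalKeystore.
fn generate_new<P: CorePair>(
	&self,
	id: KeyTypeId,
	seed: Option<&str>,
) -> std::result::Result<P::Public, TraitError> {
	let pair = match seed {
		Some(seed) => self.0.write().insert_ephemeral_from_seed_by_type::<P>(seed, id),
		None => self.0.write().generate_by_type::<P>(id),
	}
	.map_err(|e| -> TraitError { e.into() })?;
	Ok(pair.public())
}

// The per-scheme trait methods then just delegate:
fn bls_generate_new(
	&self,
	id: KeyTypeId,
	seed: Option<&str>,
) -> std::result::Result<bls::Public, TraitError> {
	self.generate_new::<bls::Pair>(id, seed)
}
```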
@@ -50,6 +50,10 @@ schnorrkel = { version = "0.9.1", features = [
 "preaudit_deprecated",
 "u64_backend",
], default-features = false, optional = true }
bls-like = {git = "https://github.com/w3f/bls", branch = "skalman-hash-to-curve-wb", default-features = false}
#bls-like = {version="*", default-features = false }
nit: remove the commented-out line `#bls-like = {version="*", default-features = false }`, or replace the actual `bls-like` dependency once your branch is merged.
struct BLSAggregatableSignature(BLSSignature);

#[derive(Clone, Debug, PartialEq, codec::Encode, codec::Decode)]
struct ECDSABLSSignaturePair(ecdsa_crypto::Signature, BLSSignature);
impl BeefyBLSnECDSAKeystore {
	fn both(&self) -> (BeefyECDSAKeystore, BeefyBLSKeystore) {
		(BeefyECDSAKeystore(self.0.clone()), BeefyBLSKeystore(self.0.clone()))
You're using both `...BLSnECDSA...` and `...ECDSAnBLS...` for naming. Here, for instance, as elsewhere, the ordering of the tuple is in fact `(...ECDSA..., ...BLS...)`. This seems prone to developer bugs whenever the type system doesn't come to the rescue, so I'd stick with one of the naming conventions. I'd personally prefer alphabetically BLS then ECDSA, but ECDSA then BLS requires less refactoring.
&sig.clone().into(),
&msg,
&Keyring::Alice.public().into(),

/// Auxiliray tairt for ECDSAnBLS
- /// Auxiliray tairt for ECDSAnBLS
+ /// Auxiliary trait for ECDSAnBLS
I've not kept up with this, sorry.
A: Nope & nope. Ideally we'd benchmark this to see if it's really worthwhile, but maybe we'll be lazy and not do this ever. I donno..
A: Nope & nope. This seems essential, as hash-to-G2 is extremely slow.
A: BLS12-377, but Alistair recently changed his mind and wants BLS12-381. Otoh hash-to-G2 maybe makes more sense for Ethereum, since they fucked up themselves and picked hash-to-G2.
I suggest you move non-beefy-specific BLS support to a new PR that we can review and merge to substrate/master without worrying about breaking BEEFY.
I'm referring to the generic BLS (bls12-377 in particular) support. So that would be changes under:
client/keystore/
primitives/application-crypto/
primitives/core/
primitives/io/
primitives/keystore/
Then we can focus on the integration with BEEFY.
@drskalman wdyt?
@@ -25,6 +25,8 @@ pub const ED25519: KeyTypeId = KeyTypeId(*b"ed25");
pub const SR25519: KeyTypeId = KeyTypeId(*b"sr25");
/// Key type for generic ECDSA key.
pub const ECDSA: KeyTypeId = KeyTypeId(*b"ecds");
/// Key type for generic BLS12-377 key.
pub const BLS: KeyTypeId = KeyTypeId(*b"bls7");
We'll likely choose bls12-381 over bls12-377 now.
};
use bls_like::{
	pop::SignatureAggregatorAssumingPoP, EngineBLS, SerializableToBytes, Signed, BLS377,
};
I see `pub type BLS377 = UsualBLS<ark_bls12_377::Bls12_377, ark_bls12_377::Config>;` and `UsualBLS` uses 48 byte keys and hashes-to-G2, but..
We do not want to hash-to-G2. We want to both hash-to-G1 and also run apk proofs on G1.
This requires that public keys be on both G1 and G2 and contain a DLEQ proof between them. This is the point of https://eprint.iacr.org/2022/1611
#[derive(Debug, Default, Clone, PartialEq, Eq)]
pub struct AuthoritySetCommitment(pub KeysetCommitment);

impl Encode for AuthoritySetCommitment {
I did a haphazard job implementing Scale for the APK Commitment so I could compile the code. I'm sure someone else on the team knows better how to implement Scale and serialization-related stuff, so please have a go at it.
Related to this, @swasilyev is reluctant to add Scale to KeysetCommitment as it is substrate specific. Plus we should have Scale for Arkworks' CanonicalSerialize, as we are SCALE-encoding arkworks objects in other places (like the new crypto host functions for pairing etc).

I agree. Still, we need a generic BLS12-377 signer/verifier to be able to run the beefy tests (unless we want to simulate it with BEEFY using, let's say, (ECDSA, ED25519) crypto, but that sounds like a terrible idea and a lot of wasted time and energy). I'll make another PR to improve the BLS crypto and not bother the beefy team with it. We can put a warning on the bls crypto modules that they are not production ready meanwhile.
/// The full content of the commitment.
pub commitment: Commitment<TBlockNumber>,

/// The bit vector of validators who signed the commitment.
pub signed_by: Vec<bool>, // TODO [ToDr] Consider replacing with bitvec crate

/// A merkle root of signatures in the original signed commitment.
pub signatures_merkle_root: TMerkleRoot,
According to @andresilva the Merkle root of the validator set is not part of the beefy message, and so adding multiple commitments to the validator set doesn't affect the BEEFY message size. But I'm confused here, as it seems to me that it is included here. Isn't SignedCommitmentWitness being gossiped?
Alright, we do not need a migration path here because we'll only keep the overall framework for doing two BEEFY signatures, and the actual BLS code here can be entirely replaced before going to kusama, yes? Cool
@drskalman if you didn't start working on the deduplication comments that I left, and if it's ok for you, I would be interested in implementing them myself in a separate PR since I have some ideas there. Please just let me know. Thanks!
- rename TAggregatableSignature to TSignatureAccumulator. - Fix and pass all tests in primitives/beefy. - remove unused TAggregatableSignature type
The CI pipeline was cancelled due to the failure of one of the required jobs.
…3311 applied to master's head
Hey, is anyone still working on this?
Due to the inactivity this issue has been automatically marked as stale. It will be closed if no further activity occurs. Thank you for your contributions.
We'll reopen when we get there.
* Cherry pick all crypto related changes from pull-request #13311 applied to master's head
* Import some stuff just if 'full_crypto' is on
* Remove copyright year
* Cleanup
* First generic BLS draft
* Finalize generic implementation
* Restore tests
* Fix rust docs
* Fix after master merge
* Fix after master merge
* Use double bls with G1 as signature group and verify individual signatures using DLEQ proof.
* Fix inclusions and types used within substrate
* Remove unused cruft
* Restore usage of upstream crates
* Fix test
* Reduce the diff by aligning Cargo.lock to master
* Application-crypto provides bls381
* Implement bls381 for local keystore
* Use new generic keystore features
* import DoublePublickey[Scheme] from the bls-like root to be less confusing.
* fix compilation
* Apply suggestions from code review (Co-authored-by: Robert Hambrock <roberthambrock@gmail.com>)
* Clean leftovers
* update bls test vector after applying spec change recommendation; send message as ref.
* Different hard junction ids for different bls12 types
* update to new bls-like
* bls-like → w3f-bls
* Make clippy happy
* update test vector after replacing hash and crop with hash to field.
* cargo fmt
* account for #13972
* hide BLS behind "bls_non_production" feature flag
* Remove Cargo.lock entries duplicated in merge
* add bls377 to primitives/keystore and client/keystore; add bls377 to primitives/application-crypto/; add bls_non_production to primitives/keystore and client/keystore; bump up w3f-bls version
* rename feature `bls_non_production` to `bls-experimental`
---------
Co-authored-by: Davide Galassi <davxy@datawok.net>
Co-authored-by: André Silva <andrerfosilva@gmail.com>
Co-authored-by: Robert Hambrock <roberthambrock@gmail.com>
To enjoy the benefits and efficiency of APK proofs for light clients, we need BEEFY to generate BLS signatures. We also like ECDSA signatures because they are easier to verify, both for the gossiper and for the Ethereum light client. This PR makes the BEEFY Gadget generic over the signing key and signature, and implements tests both for proofs signed only by ECDSA keys and for proofs signed by ECDSA and BLS keys.
It implements #10469 as a side effect.
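For a rough picture of what "generic over the signing key and signature" means, the worker can be thought of as parameterised over something like the trait below. The name and shape are hypothetical and only illustrate the idea; they are not the PR's actual types.

```rust
/// Hypothetical abstraction: a signature scheme the BEEFY worker is generic over,
/// so the same worker code can produce ECDSA-only proofs or (ECDSA, BLS) proofs.
pub trait AuthorityScheme {
	/// Public key / authority id type.
	type Public: Clone;
	/// Signature type produced for commitments.
	type Signature: Clone;

	/// Verify `signature` over `message` for the authority identified by `public`.
	fn verify(public: &Self::Public, signature: &Self::Signature, message: &[u8]) -> bool;
}
```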