Signature Aggregation #1
Comments
Thank you for this, Brian. I think there is a set of articles by Blockstream and Dan Boneh (I will try to look for them) which discuss aggregation (which is not multisig) to save block size |
All aggregation schemes may assume a PKI-like system. In other words, looking up public keys as well as signing with public keys can be assumed. Links: Forum discussion about BLS signatures |
Note: there is a namespace collision in the literature. Signature aggregation often refers to multi-signatures. As far as we can tell, there is only one signature aggregation (compression) scheme that meets our requirements: BLS signature aggregation.
Viable Methods for Signature Aggregation
BLS Signature aggregation
Given a collection of signatures `s_1, ..., s_n` on messages `m_1, ..., m_n` from signers with public keys `PK_1, ..., PK_n`:
Generate Aggregate: `s_agg = s_1 * s_2 * ... * s_n` (the group operation applied to all individual signatures)
Verify Aggregate: check that `e(s_agg, g) == e(H(m_1), PK_1) * ... * e(H(m_n), PK_n)`, where `e` is the pairing, `g` is the group generator, and `H` hashes a message into the curve
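As a rough illustration of the generate/verify flow above, here is a minimal sketch in Python. It uses a toy model in which every group element is represented by its exponent modulo the BLS12-381 scalar order, so the pairing collapses to integer multiplication; it shows the algebra only, provides no security, and none of the function names come from a real library.

```python
import hashlib
import secrets

# Toy model: each group element is represented by its exponent mod q, so the
# "pairing" e(g^a, h^b) = gt^(a*b) collapses to multiplying integers.
# This mirrors the algebra of BLS aggregation but provides NO security.
q = 0x73EDA753299D7D483339D80809A1D80553BDA402FFFE5BFEFFFFFFFF00000001  # BLS12-381 scalar order

def hash_to_group(msg: bytes) -> int:
    """Stand-in for hashing a message into the curve (the `H` step)."""
    return int.from_bytes(hashlib.sha512(msg).digest(), "big") % q

def keygen():
    sk = secrets.randbelow(q)
    return sk, sk  # real scheme: PK = g^sk; here the exponent stands in for the element

def sign(sk: int, msg: bytes) -> int:
    return (sk * hash_to_group(msg)) % q          # s_i = H(m_i)^(SK_i)

def aggregate(sigs) -> int:
    return sum(sigs) % q                          # s_agg = s_1 * ... * s_n

def verify_aggregate(s_agg: int, pks, msgs) -> bool:
    # e(s_agg, g) == prod_i e(H(m_i), PK_i), written additively in the exponent
    return s_agg == sum(pk * hash_to_group(m) for pk, m in zip(pks, msgs)) % q

keys = [keygen() for _ in range(4)]
msgs = [f"message {i}".encode() for i in range(4)]
sigs = [sign(sk, m) for (sk, _), m in zip(keys, msgs)]
assert verify_aggregate(aggregate(sigs), [pk for _, pk in keys], msgs)
```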
Notes: all
Pros
Cons
Performance of each aggregation scheme
The following results were generated using code found here. The machine used was a 2018 MacBook Pro with an 8th-gen Intel i7 (4 cores, only 1 core used). BLS signatures were computed using Chia's Python library, which binds to their C++ library. ECDSA was computed using python-cryptography, which binds to OpenSSL. ECPY was used to compute EC key generation, ECPY - ECDSA, and ECPY - Schnorr. In an apples-to-apples comparison,
What to glean from the results
These results tell us that, for 1 core on an i7-8xxx:
Generation:
Verification:
Things to consider
Per conversation with @whyrusleeping, we will need to be mindful that in order for a block to finalize, signature verification needs to be very fast. Each miner in the network will need to verify a block and pass it along; this chain of verifications and the resulting delay can add up to long finality times. We might be able to speed things up by taking advantage of parallelization. |
To clarify the consideration at the bottom of the last comment: we can re-propagate blocks after validating just the signature on the block and the ticket (meaning that the miner in question did actually win a block), but we have to fully validate a block before mining on top of it. |
As it turns out the BLS curves used in @dignifiedquire is investigating using our curve implementation to speed up BLS signature aggregation and verification. The reason we believe
|
I made an implementation based on the `pairing` library in Rust, which brings these times down a bit: https://github.com/filecoin-project/bls-signatures
|
Confirming @dignifiedquire's results with the same machine that generated the previous benchmarks.
Compared to the Chia library, @dignifiedquire's implementation is much faster. Total time to verify is 8.8s (hashing + verify) here versus 11.2s in Chia. Hashing represents the bulk of the work at ~6s here. Deserialization is very expensive in Chia at 3s versus 0.0001s here. This makes the total verification time 8.9s here and 14s in Chia. If we can optimize g2_hash (the hashing step here), then we could potentially gain a few more seconds. |
Updates: BLS signature verification can be made very fast by hashing a message into G1 (the smaller group); this has to do with G1 having a smaller cofactor than G2. The trade-off is much larger public keys (2x the size). We will need to figure out how to store those public keys and look them up. We will also need to figure out a key recovery option. In ECDSA one can extract a public key from a signature; in BLS that does not appear to be immediately possible.
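To make the trade-off concrete, here is a rough back-of-the-envelope comparison using BLS12-381 compressed point sizes (48 bytes for G1, 96 bytes for G2); the participant count of 1024 is just an illustrative assumption.

```python
# Storage for an aggregated signature plus the public keys needed to verify it,
# under the two possible placements of signatures / public keys on BLS12-381.
G1_BYTES, G2_BYTES = 48, 96   # compressed point sizes
n = 1024                      # illustrative number of signers

sigs_in_g2 = G2_BYTES + n * G1_BYTES   # hash/sign into G2, public keys in G1
sigs_in_g1 = G1_BYTES + n * G2_BYTES   # hash/sign into G1 (fast verify), public keys in G2

print(f"signatures in G2: {sigs_in_g2 / 1024:.1f} KiB for {n} signers")
print(f"signatures in G1: {sigs_in_g1 / 1024:.1f} KiB for {n} signers")
```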
|
Presentation of current state to be done at the research retreat Dec. 7-11 #65 |
Requires people resources to continue. |
Resources identified. BLS Effort kicking off the week of 7 Jan. 2019. |
Link to workplan notes: https://docs.google.com/document/d/1iATTJY0bd-psc-PaCfXbtMsI0DrYgA1IV097QL4lPWg/edit |
Go-filecoin issue: filecoin-project/venus#1566 |
moving to "in progress", per research week discussion |
hey @whyrusleeping, we believe there are no more open research issues in here, and I am seeing a set of issues on signatures; if specs is covering this elsewhere, please close this issue (and link). |
I am closing this issue, unless the stakeholders will find this relevant again. This is being tracked in the specs repo. |
MASTERPLAN
Storage Limits & Signature Aggregation
As @whyrusleeping has shown in his analysis ([1], [2], [3], [4]), we currently have a limitation on the total storage that FIL can support. In part, this limitation is due to the number of signatures and other associated data posted to the blockchain [2]. Signature aggregation is one proposal to increase this storage limit. For the purpose of this issue, we will only consider aggregation as a mitigation of the storage limit; we discuss other methods here.
What is signature aggregation?
In short, signature aggregation is a method by which we take a collection of signatures `S` and somehow collect them together so that they form a kind of bulk signature for an item.
For example, let's consider a scenario where a group wants all of its members to sign a message `M`. In a classic signature scheme, each member would compute
`s_i = Sign(SK_i, M)`
where `i` represents a particular member and `SK_i` is the secret key of the i-th member. Collecting these signatures together we have
`S = {s_1, s_2, ..., s_n}`
where `n` is the number of members. `S` will have size
`Sz = n * |s_i|`
Thus, the total storage needed for anyone to verify `S` is `Sz` + `PK_all`, where `PK_all` is the collection of all public keys used in `S`. If signatures are of size 64 bytes and each `PK_i` is of size 64 bytes, then the total storage is n*(64+64), or n*128 bytes. In the case of 1024 participants, the size of `S` and associated data becomes 128KB. In the context of FIL, with a block size of ~400KB, this can become problematic quickly. Note: the actual messages `m` are left out of this calculation.
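As a quick sanity check of the numbers above, here is the classical-scheme arithmetic spelled out (64-byte signatures and 64-byte public keys, as assumed in the text):

```python
SIG_BYTES = 64       # size of one signature s_i
PK_BYTES = 64        # size of one public key PK_i
n = 1024             # number of participants

total = n * (SIG_BYTES + PK_BYTES)   # Sz + PK_all = n * 128 bytes
print(f"{total} bytes = {total / 1024:.0f} KB for {n} signers "
      f"({total / (400 * 1024):.0%} of a ~400KB block)")
```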
Now let us consider what this scenario would look like in an aggregation scheme. Suppose we designate an aggregator `A`. `A` then asks for all signatures `s_i`, where the signing operation performs a special function to map `m_i` into a group element that we can operate on and adds a secret `alpha_i` [6]. This function is called hashing into a group, and in our particular case, hashing into an elliptic curve. For an explanation of how this works, see the hashing example in golang. The aggregator then collects these signatures together and performs a special aggregation function; in BLS, aggregation is simply a product of the individual signatures, and verification is a pairing operation over all public keys and associated messages. The result of this operation is a signature `Sagg` whose total size is on the order of a classical signature. Using the size analysis from above, the signature would be 64 bytes. The total size of the signature and other components is `Sagg` + `PK_all`. This is about n*64 + 64, or (n+1)*64, or about ((n+1)/2)*128 w.r.t. classical signatures.
Thus we can see that we have effectively halved the total signature storage cost (recall that we still need to store the public keys). W.r.t. just the signature bytes, we have reduced the storage cost from `n*s_i` to just `s_i`.
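To make the comparison concrete, here is the corresponding arithmetic under the same 64-byte size assumptions, showing where the roughly-halved figure comes from (the participant count is the same illustrative 1024 as above):

```python
SIG_BYTES = 64
PK_BYTES = 64
n = 1024

classical = n * (SIG_BYTES + PK_BYTES)      # n signatures + n public keys
aggregated = SIG_BYTES + n * PK_BYTES       # Sagg + PK_all = (n + 1) * 64 bytes

print(f"classical:  {classical / 1024:.1f} KB")
print(f"aggregated: {aggregated / 1024:.1f} KB "
      f"({aggregated / classical:.0%} of classical)")
```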
Some notes on computation, and these are the gotchas. Notice that while we saved a ton of space, we did so at the cost of computation. In particular, `h0` = hashing into an elliptic curve and `pr` = pairing. I note that there are efficient methods for `h0` (a toy illustration of hashing into a curve is sketched below). An individual `pr` may not be computationally expensive--further analysis will need to be done to determine whether this is true--but `n` computations of `pr` can become burdensome--again, this will require prototyping and performance analysis.
A note on development. Developing BLS signatures into production code can be... tricky. Elliptic curves are finicky, complicated, and prone to subtle but catastrophic security errors. Extra careful attention and testing will be needed to ensure that any aggregation scheme is correct, where correct means that in all cases the actual computation is identical to the math that defines it.
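For concreteness, here is a toy version of the `h0` step, hashing into an elliptic curve via the simple try-and-increment approach over secp256k1. This is only an illustration of the idea; it is not constant-time, it is not the hash-to-curve construction a production BLS library would use, and the function name is just for illustration.

```python
import hashlib

# Toy "hash into an elliptic curve" via try-and-increment, over secp256k1 (y^2 = x^3 + 7).
p = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEFFFFFC2F
A, B = 0, 7

def hash_to_curve(msg: bytes):
    ctr = 0
    while True:
        # Hash (msg, counter) to a candidate x-coordinate.
        x = int.from_bytes(hashlib.sha256(msg + ctr.to_bytes(4, "big")).digest(), "big") % p
        rhs = (pow(x, 3, p) + A * x + B) % p
        y = pow(rhs, (p + 1) // 4, p)   # square root, valid because p % 4 == 3
        if (y * y) % p == rhs:          # roughly half of all candidates succeed
            return (x, y)
        ctr += 1

print(hash_to_curve(b"hello world"))
```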
Methods:
BLS
Schnorr
???
Information theory limit/ MPC
Wouldn't it be great if we could compress all those pesky public keys? It would, and we can, but this requires multi-party computation (MPC). A quick analysis of signatures and aggregations will convince you that without public keys we would break the information theory limit for storing data--which is pretty cool to think about in the cryptographic context. We can, however, perform multi-party computation to generate a single signature with one public key, but MPC has its own considerations.
References:
[1] observable/visual analysis of storage limitations
[2] specs/storage market draft
[3] specs/thoughts on aggregation
[4] aq/issue on size constraints
Signature aggregation:
[5] specs/BLS signature discussion
[6] stanford/new BLS scheme explanation
[7] medium/signature aggregation