Under the VADCOP paradigm, it would be very interesting (from both a theoretical and practical perspective) to be able to generate a VADCOP proof where different subproofs work over different finite fields.
Motivation
The idea is to efficiently handle special-purpose "state machines". For example, say we want a state machine that works in an elliptic-curve setting, either because it verifies MSMs or because it verifies pairing computations. In either case, such a state machine has to work over a large prime field that is sufficiently secure, meaning that we cannot work over our standard Goldilocks field or the upcoming Baby Bear or Mersenne-31 fields.
For this to be secure, we need a subproof working over the large prime field; and for the whole scheme to remain efficient, we need the remaining subproofs to work over the chosen small field (either Goldilocks or the 32-bit variants). In the end, we are led to generate a VADCOP proof in which two (or more) subproofs work over different finite fields.
Problems
The fundamental problem of this enhancement is the cross-field overhead incurred when aggregating two subproofs that work over different finite fields. One needs to measure whether this overhead justifies including the new state machine, compared with possible alternatives to this model.
A second issue I see is that, in the end, the VADCOP proof works over a single, sufficiently secure finite field common to all the subproofs it aggregates. This means that if, for instance, we have one subproof over the Goldilocks field and another over the base field of the BN254 curve, we will need to decide whether the VADCOP proof (and therefore the space from which the verifier samples the randomness used to generate each of the subproofs) is defined over a Goldilocks extension, the base field of BN254, or some other finite field. This restricts the choice of field and can therefore cost efficiency in the overall proving system.
Ideas
A bigger prime
One idea for a "common" prime (the prime that will be used to aggregate the proofs generated over different prime fields) is to take a prime $p_1$ such that:
$$p_1-1 = 2^{k} \cdot p_2 \cdot c$$
where $2^k$ is a "large" power of two (large enough to cover the two-adicity needed by the FFT-friendly prime field, so that $p_1$ is itself FFT-friendly), $p_2$ is the (FFT-unfriendly, but ECC-secure) prime, and $c$ is the cofactor that makes $p_1$ a prime that is ECC-secure and pairing-friendly. The question I have here is: does such a prime even exist? And if so, is it "too" large to be taken seriously in practice?
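To get a first feel for whether such a $p_1$ exists at a reasonable size, one can simply scan cofactors and primality-test the candidates. The sketch below is a rough, illustrative search, not a vetted parameter generator: the choice of $p_2$ as the BN254 base-field prime and the target two-adicity $k = 32$ are assumptions made only for the example, and it checks primality alone, not the ECC-security or pairing-friendliness of $p_1$.

```python
# Minimal sketch (assumptions: p2 = BN254 base-field prime, k = 32) that scans
# cofactors c and returns the first prime p1 with p1 - 1 = 2^k * p2 * c.
# It does NOT check that p1 is ECC-secure or pairing-friendly.

from sympy import isprime

# Assumed stand-in for the FFT-unfriendly, ECC-secure prime p2.
P2 = 21888242871839275222246405745257275088696311157297823662689037894645226208583
K = 32  # desired two-adicity of p1 - 1


def find_p1(p2: int, k: int, max_cofactor: int = 10_000):
    """Scan odd cofactors c and return the first (p1, c) with p1 prime."""
    base = (1 << k) * p2
    for c in range(1, max_cofactor, 2):  # odd c keeps the 2-adic part exactly 2^k
        p1 = base * c + 1
        if isprime(p1):
            return p1, c
    return None


if __name__ == "__main__":
    hit = find_p1(P2, K)
    if hit:
        p1, c = hit
        print(f"cofactor c = {c}, p1 has {p1.bit_length()} bits")
    else:
        print("no prime found in the scanned cofactor range")
```

Such a scan only answers the size question; whether a hit can additionally be made pairing-friendly (e.g. by fitting it into a known pairing-friendly family) is a separate and much harder constraint.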
The ECFFT
The ECFFT¹ algorithm can be used to compute FFTs over prime fields that are not FFT-friendly. The algorithm uses isogenies of an elliptic curve that is defined over the FFT-unfriendly prime field but whose group order is FFT-friendly. It then uses the $x$-projection of the isogeny to obtain a sequence of subsets amenable to a recursive application of the ECFFT algorithm. It runs in time $O(n \log^2{n})$ (slightly worse than the $O(n \log{n})$ running time of the classical FFT).
Equipped with ECFFT, we can choose the common prime to be the (larger) FFT-unfriendly prime $p_2$ and run a STARK proof using the ECFFT instead of the FFT.
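As a toy illustration of the precondition the ECFFT relies on (an elliptic curve over the FFT-unfriendly field whose group order has high two-adicity), the sketch below brute-forces curve orders over a tiny FFT-unfriendly prime. It is purely illustrative and not taken from any ECFFT implementation: the prime, the curve search range, and the naive point count are assumptions that do not scale; real instantiations run proper point counting over cryptographic-size primes.

```python
# Toy search (assumption: tiny prime P = 10007) for a curve y^2 = x^3 + a*x + b
# over an FFT-unfriendly field whose group order has high two-adicity.
# The brute-force point count only works for tiny P.

P = 10007  # FFT-unfriendly toy prime: P - 1 = 2 * 5003, i.e. two-adicity 1

# Precompute, for every residue r, how many y in F_P satisfy y^2 = r.
SQRT_COUNT = [0] * P
for y in range(P):
    SQRT_COUNT[y * y % P] += 1


def curve_order(a: int, b: int) -> int:
    """Naively count the points of y^2 = x^3 + a*x + b over F_P."""
    count = 1  # point at infinity
    for x in range(P):
        count += SQRT_COUNT[(x * x * x + a * x + b) % P]
    return count


def two_adicity(n: int) -> int:
    """Largest k such that 2^k divides n."""
    k = 0
    while n % 2 == 0:
        n //= 2
        k += 1
    return k


if __name__ == "__main__":
    best = max(
        (two_adicity(curve_order(a, b)), a, b)
        for a in range(1, 11)
        for b in range(1, 11)
        if (4 * a**3 + 27 * b**2) % P != 0  # skip singular curves
    )
    k, a, b = best
    print(f"y^2 = x^3 + {a}x + {b} over F_{P} has order with two-adicity {k}")
```

Even in such a small search, some curve order is expected to have noticeably higher two-adicity than $P - 1$ itself, which is exactly the structural fact the ECFFT exploits.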
Use Cases
If we could natively prove elliptic-curve operations over the BLS12-381 curve, then we could efficiently prove data-availability claims. One clear example is a withdrawal: there, the data availability is huge and we cannot verify Merkle proofs very efficiently (at least, not beyond what the Merkle tree structure already allows). So it would be very interesting if, in the VADCOP setting, we could support BLS12-381, replace the Merkle trees with Verkle trees, and therefore prove that claim as cheaply as possible.
Footnotes
1. See this post for an excellent explanation of the algorithm. To understand the underlying math, take a look at the original paper.