feat(core,host): initial aggregation API #375
base: proof-aggregation
Conversation
…er is sent (#377)

* traversal back if no inclusion block is sent
* refine log
* fix lint

Signed-off-by: smtmfft <smtm@taiko.xyz>
host/src/server/api/v3/proof/mod.rs
Outdated
let mut is_registered = false;
let mut is_success = true;

for task in tasks.iter() {
This feels like a burst of resource consumption (network/CPU/memory). Do you think the streaming style would be better than this? i.e.:
start-1 ..general 12s interval.. 3 ..interval.. 5 .. 7-end
Pros: no burst, and the resource consumption of a single proof task stays at the same level as before (assuming aggregation itself does not require a lot of resources).
Cons: more complex status management, with lots of corner cases, I'd guess.
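To make the status-management cost concrete, here is a minimal sketch of the per-block proof states a streaming-style flow would need to track. All names here are hypothetical illustrations, not raiko's actual types:

```rust
// Hypothetical per-block proof lifecycle for a streaming-style
// aggregation flow; raiko's real task states may differ.
#[derive(Debug)]
enum ProofStatus {
    Pending,
    Proving,
    // Individual proof is ready and parked until aggregation time.
    Proved,
    Failed(String),
}

// Aggregation becomes possible only once every tracked block has
// reached `Proved`; any `Pending`/`Proving`/`Failed` entry blocks it.
fn ready_to_aggregate(statuses: &[ProofStatus]) -> bool {
    statuses.iter().all(|s| matches!(s, ProofStatus::Proved))
}

fn main() {
    let in_flight = [ProofStatus::Proved, ProofStatus::Proving];
    assert!(!ready_to_aggregate(&in_flight));

    let done = [ProofStatus::Proved, ProofStatus::Proved];
    assert!(ready_to_aggregate(&done));
}
```

The corner cases mentioned above (disk overflow, restarts) would live in how long `Proved` entries are retained and whether this state is persisted.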
I do believe that the streaming style will be better. How difficult would it be for the client to work with the streaming approach?
I think the complexity comes from both the API and internal status management. The API is a kind of negotiated design between us (raiko) and the client. A general example: the client asks for blocks 1, 2, 3, 4, 5, but 3 fails; we then have to generate the [1, 2, 4, 5] proofs and send back the aggregation proof together with the [1, 2, 4, 5] block numbers. Status-management issues include how long we keep those individual proofs (to avoid disk overflow), whether persistent intermediate proof storage is needed (to avoid losing requests after a raiko restart), etc.
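The partial-success case described above can be sketched as follows; this is illustrative only and the function name is made up, not part of raiko's API:

```rust
// Illustrative only: given the requested block numbers and the subset
// that failed to prove, compute which blocks the aggregated proof
// would actually cover (and report back to the client).
fn aggregated_blocks(requested: &[u64], failed: &[u64]) -> Vec<u64> {
    requested
        .iter()
        .copied()
        .filter(|b| !failed.contains(b))
        .collect()
}

fn main() {
    // The client asked for blocks 1..=5, but block 3 failed to prove.
    let covered = aggregated_blocks(&[1, 2, 3, 4, 5], &[3]);
    assert_eq!(covered, vec![1, 2, 4, 5]);
}
```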
core/src/interfaces.rs
Outdated
/// All the proofs to verify
pub proofs: Vec<Proof>,
/// The block numbers and l1 inclusion block numbers for the blocks to aggregate proofs for.
pub block_numbers: Vec<(u64, u64)>,
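For context, a self-contained sketch of the request shape in the diff above; `Proof` here is a stand-in for raiko's actual proof type, and the surrounding struct name is assumed:

```rust
// Stand-in for raiko's actual proof type (assumption for this sketch).
#[derive(Debug)]
struct Proof {
    data: Vec<u8>,
}

// Hypothetical request shape matching the fields shown in the diff:
// one (block number, l1 inclusion block number) pair per proof.
struct AggregationRequest {
    /// All the proofs to verify.
    proofs: Vec<Proof>,
    /// (block number, l1 inclusion block number) per aggregated block.
    block_numbers: Vec<(u64, u64)>,
}

fn main() {
    let req = AggregationRequest {
        proofs: vec![Proof { data: vec![0u8] }],
        block_numbers: vec![(100, 42)],
    };
    // The two lists are expected to stay in lockstep.
    assert_eq!(req.proofs.len(), req.block_numbers.len());
}
```

The review question below is exactly about whether the `block_numbers` field should exist at all, or whether the request should carry only the list of proofs.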
I also added this in the design document, but doesn't it make more sense for this to just take a list of proofs?
Copying the Notion reply here so we can discuss it in this thread.
Yes, this is what I called the streaming style: raiko generates proofs at the same pace as before, but reports an aggregated one when an aggregation proof is requested. It is better at runtime, with less bursty resource requirements, but somewhat more complex status management.
We talked about this with the client before; they thought the list is easier for initial testing and less effort in development, so we decided to implement it quickly to get aggregation onto devnet. But I think the streaming style will eventually replace it.
I think the streaming style is what you want, but perhaps I misunderstand your optimistic mode; correct me if so.
@Brechtpd It does make more sense to keep this as a separate route that only aggregates existing proofs, but as @smtmfft and I discussed, for the initial version it is easier for the client to test with a similar request style that proves all the requested blocks.
I will create an aggregate-only endpoint to handle the optimal scenario, but for now this just allows a quicker path to integration, unless we want to go with the more optimal approach right away.
If there are other considerations then of course I'm okay with the current approach; I'm just trying to avoid additional work in case going from the current block indices to the proofs-based one would be significantly different, meaning some code written now would be deprecated quite quickly.
Signed-off-by: smtmfft <smtm@taiko.xyz>
* fix(raiko): refine error return
* fix lint
* fix zk entrypoint log level
* Update host/src/server/api/v2/mod.rs

Signed-off-by: smtmfft <smtm@taiko.xyz>
Co-authored-by: Petar Vujović <petarvujovic98@gmail.com>
…aiko into proof-aggregation-api
link to #347