Expected serialization performance #559
-
Hey, I have a quick question regarding the expected performance when serializing a big circuit (30M constraints; is this big?): I'm running the prover on a 48-CPU, 96 GB RAM machine and everything is fine. On the other hand, when I try to serialize the circuit to disk, memory blows up. My question would be: why is the machine able to compile the circuit and run the prover within these limits, but not able to stream the circuit to disk? Thanks for your help
-
hey -- there were a couple of issues with serialization (which we did in a very basic way) in the past. For example, we incorrectly serialized a lot of debug info, including string stack traces, up until a PR from Dec 2022. Similarly, we recently noticed that circuits with hints can serialize the same info many times over (hence the memory blow-up). That's something we need to address internally for PlonK in the next few weeks, for circuits larger than 30M constraints -- I suspect the improvements and design choices here will ripple through to the R1CS builder too.