[Stress-06] Price Finalize Opcodes Based On Operand Types #2281
Conversation
Just a minor style comment. I would also encourage implementing a maximum finalize fee, similar in spirit to https://github.com/AleoHQ/snarkVM/pull/2271
synthesizer/src/vm/helpers/cost.rs (outdated)

use std::collections::HashMap;

// Finalize Costs for compute heavy operations. Used as BASE_COST + PER_BYTE_COST * SIZE_IN_BYTES.
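To make the `BASE_COST + PER_BYTE_COST * SIZE_IN_BYTES` comment concrete, here is a minimal sketch of what such a `HashMap`-backed cost table could look like. The opcode names and values below are illustrative assumptions, not the actual constants in snarkVM's `cost.rs`.

```rust
use std::collections::HashMap;

/// Hypothetical cost table: opcode -> (BASE_COST, PER_BYTE_COST), in microcredits.
/// Values are illustrative, taken from the tiers discussed in this PR.
fn finalize_cost_table() -> HashMap<&'static str, (u64, u64)> {
    HashMap::from([
        ("hash.bhp256", (50_000, 300)),
        ("hash.psd2", (40_000, 75)),
        ("get", (10_000, 10)),
        ("set", (10_000, 100)),
    ])
}

/// BASE_COST + PER_BYTE_COST * SIZE_IN_BYTES, as the comment above describes.
fn finalize_cost(
    table: &HashMap<&'static str, (u64, u64)>,
    opcode: &str,
    size_in_bytes: u64,
) -> Option<u64> {
    table
        .get(opcode)
        .map(|&(base, per_byte)| base + per_byte * size_in_bytes)
}

fn main() {
    let table = finalize_cost_table();
    // e.g. a SET over 8192 bytes: 10_000 + 100 * 8192 = 829_200 microcredits.
    println!("{:?}", finalize_cost(&table, "set", 8192));
}
```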
How were the numbers backed into? I realize the target was ~100 credits per second of finalize runtime, but was there a formula for determining the base cost and per byte cost in relation to the observed runtime?
In addition, did we take into account that the scaling of runtime was linear for increases to the number of bytes of the operands? Because the fee model assumes the linear growth.
> In addition, did we take into account that the scaling of runtime was linear for increases to the number of bytes of the operands? Because the fee model assumes the linear growth.
Looking at the data, seems like the runtimes scale fairly linearly with the input size.
> How were the numbers backed into? I realize the target was ~100 credits per second of finalize runtime, but was there a formula for determining the base cost and per byte cost in relation to the observed runtime?
The values were fit with linear regression, using operand bytes as the independent variable and benchmarked runtime as the dependent variable. The resulting equations are of the form:

runtime(bytes) = A*bytes + C

A and C were then extracted as the per-byte cost multiplier and the base cost for the opcode, respectively. These values were scaled to target ~100 credits per second of runtime.
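The fitting procedure described above can be sketched as ordinary least squares over the benchmark samples, followed by a scaling step. Note 100 credits per second works out to 100,000,000 microcredits per 10^9 ns, i.e. 0.1 microcredits per nanosecond. This is a sketch of the stated method, not the actual analysis notebook.

```rust
/// Ordinary least squares fit of runtime(bytes) = A*bytes + C, with operand
/// bytes as the independent variable and benchmarked runtime (ns) as the
/// dependent variable. Returns (A, C).
fn fit_runtime(samples: &[(f64, f64)]) -> (f64, f64) {
    let n = samples.len() as f64;
    let (sx, sy) = samples
        .iter()
        .fold((0.0f64, 0.0f64), |(ax, ay), &(x, y)| (ax + x, ay + y));
    let sxx: f64 = samples.iter().map(|&(x, _)| x * x).sum();
    let sxy: f64 = samples.iter().map(|&(x, y)| x * y).sum();
    let a = (n * sxy - sx * sy) / (n * sxx - sx * sx); // per-byte slope (ns/byte)
    let c = (sy - a * sx) / n; // base runtime (ns)
    (a, c)
}

/// Scale a fitted runtime cost to the ~100 credits/second target:
/// 100 credits = 100_000_000 microcredits per 1e9 ns = 0.1 microcredits/ns.
fn ns_to_microcredits(ns: f64) -> f64 {
    ns * 0.1
}

fn main() {
    // Exactly linear toy data: runtime = 30*bytes + 10_000 ns.
    let samples = [(0.0, 10_000.0), (100.0, 13_000.0), (200.0, 16_000.0)];
    let (a, c) = fit_runtime(&samples);
    println!("per-byte: {a} ns, base: {c} ns");
    println!("base cost: {} microcredits", ns_to_microcredits(c));
}
```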
> In addition, did we take into account that the scaling of runtime was linear for increases to the number of bytes of the operands? Because the fee model assumes the linear growth.

If you look at the data you will indeed find some outliers, but as you say it scales "mostly" linearly.
Converting the base to
Responded to @raychu86's questions about how the pricing equations were arrived at.
Force-pushed …t/fee-based-on-operand-size from e6b29ad to 40e29a7.
^Note: The cost of
(Leaving a gist for myself.) Fee amount changes are based on analysis and benchmarks (see the Python notebook) and three considerations:
Prelude
This PR presents initial suggestions for revised finalize pricing. This is in no way final or authoritative, but rather a basis for community discussion on how pricing of operations in finalize scopes should work.
Motivation
All opcodes currently have fixed pricing regardless of inputs. For opcodes that encapsulate simple operations, this is acceptable. However, for opcodes that encapsulate complex computations such as cryptographic hashes or mapping reads & writes, runtime is significant and affected non-trivially by the size of the opcode's operands.
This motivates pricing opcodes based on the size of their operands so that network performance is not impacted by inefficient or malicious use of complex opcodes.
Analysis
To quantify the runtime costs of Aleo opcodes, a benchmark analysis of each opcode was performed on hardware with recommended validator specifications (Intel Xeon Platinum 8175, 128 CPUs, 128 GB RAM).
The analysis can be found here (Credit: @miazn - @iamalwaysuncomfortable)
Results
Key results from the benchmark analysis are summarized below.
Simple Opcodes
Most opcodes have runtimes under 10,000 nanoseconds, typically in the range of 1,000-3,000 nanoseconds.
Outliers include:
- mul(field,field) ~ 100,000 ns
- inv ~ 10,000 ns
- div(field,field) ~ 15,000 ns
- squareroot ~ 25,000 ns
- pow(field) ~ 15,000 ns
Hashes & Commits
Runtimes scale per byte of input.
Set/Get/Contain
Initial Pricing Conclusions
1. Pricing should allow finalize scopes to be affordable to use
One of the core motivations of this PR is to discourage overuse of high-runtime opcodes in finalize scopes, by pricing these opcodes per byte, to ensure network performance and stability.
The other core motivation is to price opcodes fairly so that effective use of this scope for useful applications remains affordable for the foreseeable future. This motivates pricing computationally cheap opcodes at an affordable flat rate.
In pricing both "simple" and "complex" opcodes, this PR suggests that 1 second of runtime in a finalize scope should cost roughly ~100 credits.
2. Flat Pricing for Simple Opcodes
Runtimes for simple opcodes are low enough to justify grouping these opcodes together and pricing them at a flat low rate. This PR proposes a rate of 500 microcredits per opcode as an average pricing.
Outliers
Suggested pricing per byte for outliers mentioned above:
- mul(field,field) ~ 10,000 microcredits
- inv ~ 1,000 microcredits
- div(field,field) ~ 1,500 microcredits
- squareroot ~ 2,500 microcredits
- pow(field) ~ 1,500 microcredits
3. Byte based pricing for complex opcodes (cast, commit, hash, set/get/contain)
For Casts, Commits, Hashes and GET/SET/CONTAIN operations, pricing should be based on the number of bytes in the operands. Since each opcode has a base cost, each should follow an equation of the form

base_cost + cost_per_byte * num_bytes

Six cost tiers are proposed based on benchmark analysis:
- 10,000 + 30*input_bytes (i.e. 0.8 microcredits for BASE hashes over 8192 bytes)
- 50,000 + 300*input_bytes (i.e. 16 microcredits for BHP hashes over 8192 bytes)
- 40,000 + 75*input_bytes (i.e. 4 microcredits for PSD hashes over 8192 bytes)
- 500 + 30*input_bytes (i.e. ~0.25 microcredits for CASTs over 8192 bytes)
- 10,000 + 10*input_bytes (i.e. 0.8 microcredits for GETs/CONTAINs over 8192 bytes)
- 10,000 + 100*input_bytes (i.e. 2.4 microcredits for SETs over 8192 bytes)

SET operations are limited to under 10 invocations per finalize block, so despite incurring a heavy cost per byte, they are priced lower due to their central importance for Aleo applications.
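The six tiers above can be sketched as a single pricing function. The tier names below are my own labels for this illustration, not snarkVM identifiers; the base and per-byte values are the ones proposed in this PR, evaluated literally (in microcredits).

```rust
/// Hypothetical labels for the six proposed cost tiers.
enum CostTier {
    BaseHash,
    BhpHash,
    PsdHash,
    Cast,
    GetContains,
    Set,
}

/// (base_cost, cost_per_byte) in microcredits, per the PR's proposed tiers.
fn tier_params(tier: &CostTier) -> (u64, u64) {
    match tier {
        CostTier::BaseHash => (10_000, 30),
        CostTier::BhpHash => (50_000, 300),
        CostTier::PsdHash => (40_000, 75),
        CostTier::Cast => (500, 30),
        CostTier::GetContains => (10_000, 10),
        CostTier::Set => (10_000, 100),
    }
}

/// base_cost + cost_per_byte * num_bytes, as proposed above.
fn tier_cost(tier: &CostTier, input_bytes: u64) -> u64 {
    let (base, per_byte) = tier_params(tier);
    base + per_byte * input_bytes
}

fn main() {
    // Cost of each tier at the 8192-byte operand size used in the examples.
    for (name, tier) in [
        ("BASE hash", CostTier::BaseHash),
        ("BHP hash", CostTier::BhpHash),
        ("PSD hash", CostTier::PsdHash),
        ("CAST", CostTier::Cast),
        ("GET/CONTAINS", CostTier::GetContains),
        ("SET", CostTier::Set),
    ] {
        println!("{name}: {} microcredits", tier_cost(&tier, 8192));
    }
}
```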