A year ago, in 7367f00, I added a calibration for Ed25519 operations that was based on a range of possible scalars fed into a Curve25519 operation, such as the one an Ed25519 signature verification does. This is visible in the `EdwardsPointCurve25519ScalarMulRun` calibration routine. In March of this year it was removed, along with the `EdwardsPointCurve25519ScalarMul` cost type, in 349dab8. The idea in adding it, I think, was to charge more carefully for the cost of an Ed25519 signature verification.
Anyway, looking at the git history, it appears this finer-grained cost type was never actually used when charging for Ed25519 signature verification. That might be fine, or it might not be: it depends on how variable signature verification is. It is a variable-time operation that varies with the size of the scalar component, which I think is usually a randomly distributed 256-bit number, so perhaps 50% likely to have its high bit set, and presumably that might hit the worst, or at least a dominant-cost, case. But I'm not sure. It might be that the worst-case scalar multiplication cost is abnormally high -- higher than the average, or whatever the calibration code is testing -- and users could cause us to undercharge by submitting that worst case. This should be investigated.
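To make the question concrete, here is a minimal, self-contained timing sketch. It assumes the `curve25519-dalek` crate and is not the repo's calibration code; the helper name is illustrative only. It times the underlying scalar multiplication for a small scalar versus a scalar with all high bits set, i.e. one candidate for the worst case described above:

```rust
use curve25519_dalek::constants::ED25519_BASEPOINT_POINT;
use curve25519_dalek::scalar::Scalar;
use std::time::{Duration, Instant};

/// Time `iters` multiplications of the Ed25519 basepoint by `scalar`.
/// (Illustrative helper, not part of the soroban-env-host calibration code.)
fn time_scalar_mul(scalar: &Scalar, iters: u32) -> Duration {
    let start = Instant::now();
    for _ in 0..iters {
        // The Curve25519 operation whose cost the calibration tries to model.
        let _ = ED25519_BASEPOINT_POINT * scalar;
    }
    start.elapsed()
}

fn main() {
    let iters = 10_000;

    // A "small" scalar: only the low bit set.
    let small = Scalar::from(1u64);

    // A "large" scalar: all bytes 0xff, reduced mod the group order, so the
    // high bits are set -- a candidate worst (or at least dominant) cost case.
    let large = Scalar::from_bytes_mod_order([0xff; 32]);

    println!("small scalar: {:?}", time_scalar_mul(&small, iters));
    println!("large scalar: {:?}", time_scalar_mul(&large, iters));
}
```

If the two timings differ noticeably, a flat per-verification charge calibrated on average inputs could indeed undercharge the worst case.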
### What
Set up a bench framework to run experiments
- Experimental bench code is located in `experimental` (`benches/common`
and `src/cost_runner` dirs)
- Aggregate `ContractCostType`, `WasmInsnType` and a new **`ExperimentalCostType`** into a top-level `CostType` (see the first sketch after this list)

Resolves #1030
- Bring back `EdwardsPointCurve25519ScalarMul` as an experimental cost type and run a variation histogram on it (see the second sketch after this list)
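The aggregation of cost types mentioned above could look roughly like the hypothetical sketch below; the real `ContractCostType` and `WasmInsnType` definitions live in the crate and are only stubbed here for illustration.

```rust
// Hypothetical shape of the top-level aggregation; the variant lists are
// stubs, not the crate's actual definitions.

#[derive(Debug, Clone, Copy)]
enum ContractCostType {
    // ... existing production cost types ...
    WasmInsnExec,
}

#[derive(Debug, Clone, Copy)]
enum WasmInsnType {
    // ... per-instruction wasm cost types ...
    I64Add,
}

#[derive(Debug, Clone, Copy)]
enum ExperimentalCostType {
    // The cost type brought back by this PR, for experiments only.
    EdwardsPointCurve25519ScalarMul,
}

/// Top-level cost type the experimental bench framework iterates over.
#[derive(Debug, Clone, Copy)]
enum CostType {
    Contract(ContractCostType),
    WasmInsn(WasmInsnType),
    Experimental(ExperimentalCostType),
}

fn main() {
    // Example: the experimental cost type this PR runs a histogram on.
    let ct = CostType::Experimental(ExperimentalCostType::EdwardsPointCurve25519ScalarMul);
    println!("{:?}", ct);
}
```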
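And here is a self-contained sketch of what such a variation histogram could measure, again assuming `curve25519-dalek` rather than the actual `cost_runner` machinery: time the scalar multiplication across a spread of scalar values and bucket the per-operation timings, so any outlier (e.g. a worst-case scalar) shows up as a separate bar.

```rust
use curve25519_dalek::constants::ED25519_BASEPOINT_POINT;
use curve25519_dalek::scalar::Scalar;
use std::time::Instant;

fn main() {
    let samples = 256usize;
    let iters_per_sample = 1_000u32;
    let mut timings_ns: Vec<u128> = Vec::with_capacity(samples);

    // A deterministic spread of scalars: each sample repeats one byte value
    // 0x00..=0xff across all 32 bytes, then reduces mod the group order.
    for b in 0..samples {
        let scalar = Scalar::from_bytes_mod_order([b as u8; 32]);
        let start = Instant::now();
        for _ in 0..iters_per_sample {
            let _ = ED25519_BASEPOINT_POINT * scalar;
        }
        timings_ns.push(start.elapsed().as_nanos() / iters_per_sample as u128);
    }

    // Bucket the per-operation timings into a simple 10-bin histogram.
    let min = *timings_ns.iter().min().unwrap();
    let max = *timings_ns.iter().max().unwrap();
    let bins: u128 = 10;
    let width = ((max - min) / bins).max(1);
    let mut histogram = vec![0u32; bins as usize];
    for t in &timings_ns {
        let idx = (((t - min) / width) as usize).min(bins as usize - 1);
        histogram[idx] += 1;
    }
    for (i, count) in histogram.iter().enumerate() {
        let lo = min + i as u128 * width;
        println!("{:>8}..{:>8} ns | {}", lo, lo + width, "#".repeat(*count as usize));
    }
}
```

If the histogram is tightly clustered, a single flat cost parameter is probably fine; a long tail or a second cluster would suggest the worst case deserves its own treatment.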
### Why
Issue #1030 asks whether the cost of Ed25519 signature verification varies enough with the scalar value that a worst-case scalar could cause us to undercharge. The experimental bench framework added here makes it possible to measure that variation directly.
### Known limitations
[TODO or N/A]