
Post tx-cost benchmark results verbatim into PR #340

Merged
13 commits merged into master from ch1bo/benchmark-results on May 5, 2022

Conversation

@ch1bo (Collaborator) commented May 3, 2022

❄️ Adds a GitHub Action to publish the benchmark results

❄️ Reduces the number of data points in tx-cost


github-actions bot commented May 3, 2022

Unit Test Results

5 files ±0 · 80 suites ±0 · 6m 29s ⏱️ +16s
213 tests ±0 · 211 ✔️ ±0 · 2 💤 ±0 · 0 ❌ ±0

Results for commit 4eeaa0c. ± Comparison against base commit e69644a.

♻️ This comment has been updated with latest results.


github-actions bot commented May 3, 2022

Transactions Costs

Sizes and execution budgets for Hydra protocol transactions. Note that unlisted parameters currently use arbitrary values, so results are neither fully deterministic nor directly comparable to previous runs.

Metadata
Generated at: 2022-05-05 08:11:52.005469248 UTC
Max. memory units: 14000000.00
Max. CPU units: 10000000000.00
Max. tx size (bytes): 16384

Cost of Init Transaction

Parties  Tx size
      1     4646
      2     4845
      3     5044
      5     5445
     10     6446
     30    10447
     59    16248

Cost of Commit Transaction

Uses ada-only UTxO for better comparability.

UTxO  Tx size  % max Mem  % max CPU
   1     5862      17.98       8.81

Cost of CollectCom Transaction

Parties  Tx size  % max Mem  % max CPU
      1    12000      20.77      11.21
      2    12682      40.90      22.52
      3    12561      53.92      29.59
      4    13332      84.39      46.92

Cost of Close Transaction

Parties  Tx size  % max Mem  % max CPU
      1     7681       7.34       3.95
      2     7948       9.83       5.27
      3     7906       8.59       4.59
      5     8331      14.78       7.88
     10     9450      23.35      12.46
     30    12458      48.67      25.91
     94    16111       0.00       0.00

Cost of Abort Transaction

Parties  Tx size  % max Mem  % max CPU

Cost of FanOut Transaction

Involves spending head output and burning head tokens. Uses ada-only UTxO for better comparability.

UTxO  Tx size  % max Mem  % max CPU
   1    11530       8.41       4.87
   2    11624      12.34       7.17
   3    11630      13.63       7.91
   5    11696      16.16       9.33
  10    11863      23.98      13.85
  50    13240      83.92      48.28
 100    14863      52.94      32.37
 100    14888      53.19      32.51
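For readers of these tables: the "% max Mem" and "% max CPU" columns express a transaction's script execution units as a percentage of the protocol limits listed under Metadata above. A minimal sketch of that calculation; the function name and execution-unit values below are illustrative (not taken from the tx-cost executable), chosen so the printed percentages land on the Commit row above (17.98 / 8.81):

import Text.Printf (printf)

-- Protocol limits as listed in the Metadata section above.
maxMemUnits, maxCpuUnits :: Double
maxMemUnits = 14000000
maxCpuUnits = 10000000000

-- Express used execution units as a percentage of the corresponding limit.
percentOfBudget :: Double -> Double -> Double
percentOfBudget used limit = 100 * used / limit

main :: IO ()
main = do
  -- Hypothetical measured execution units for a single transaction.
  let memUsed = 2517000
      cpuUsed = 881000000
  printf "%% max Mem: %.2f\n" (percentOfBudget memUsed maxMemUnits)
  printf "%% max CPU: %.2f\n" (percentOfBudget cpuUsed maxCpuUnits)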

@ch1bo ch1bo marked this pull request as ready for review May 4, 2022 07:09
@ch1bo ch1bo force-pushed the ch1bo/benchmark-results branch from e29313d to 939216e on May 4, 2022 10:19
@ch1bo ch1bo self-assigned this May 4, 2022
@ch1bo ch1bo force-pushed the ch1bo/benchmark-results branch from 939216e to 52a2da3 on May 4, 2022 11:36
@ch1bo ch1bo requested review from abailly-iohk and KtorZ May 4, 2022 12:52
)
where
compute numUTxO sz = do
-- FIXME: genSimpleUTxOOfSize only produces ada-only, so the separation of
-- generating and resizing is moot
@ch1bo (Collaborator, Author) commented:

@abailly-iohk what do you think about this? Shall we move to just ada-only utxos and simplify the results by having a single dimension?

Contributor:

Yeah, makes a lot of sense, let's do that
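For illustration, a self-contained QuickCheck sketch of what a single-dimension, ada-only generator could look like; the types and names below are simplified stand-ins, not the genSimpleUTxOOfSize or cardano-api definitions used in the repository:

import Test.QuickCheck (Gen, choose, vectorOf)

-- Simplified stand-in for an ada-only output: just a lovelace amount.
newtype Lovelace = Lovelace Integer deriving (Show)

-- Generate n ada-only outputs; the only remaining dimension is how many
-- outputs there are, which is what the benchmark tables vary.
genAdaOnlyUTxO :: Int -> Gen [Lovelace]
genAdaOnlyUTxO numOutputs =
  vectorOf numOutputs (Lovelace <$> choose (1000000, 100000000))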

@ch1bo ch1bo force-pushed the ch1bo/benchmark-results branch from 011008d to 7f6429c on May 4, 2022 14:09
We had used ada-only outputs already so no point in having these two
dimensions.
@abailly (Contributor) left a comment

LGTM. This begs for outputting the benchmark results in JSON for easier manipulation and formatting, so that we can publish only the highest values for each type of tx, but that's totally addressable in another PR.
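For what it's worth, a hypothetical aeson sketch of the kind of machine-readable output this suggests; the record shape and field names are made up for illustration and are not part of this PR:

{-# LANGUAGE DeriveGeneric #-}

import Data.Aeson (ToJSON, encode)
import qualified Data.ByteString.Lazy.Char8 as LBS
import GHC.Generics (Generic)

-- One benchmark data point; field names are illustrative only.
data TxCost = TxCost
  { parties :: Int
  , txSizeBytes :: Int
  , percentMaxMem :: Double
  , percentMaxCpu :: Double
  }
  deriving (Show, Generic)

instance ToJSON TxCost

main :: IO ()
main =
  -- Values copied from the CollectCom table above (2 parties).
  LBS.putStrLn (encode (TxCost 2 12682 40.90 22.52))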

@@ -167,6 +167,51 @@ jobs:
name: benchmarks-and-haddocks
path: ./docs

publish-benchmark-results:
name: Publish benchmark results
# TODO: this is actually only requires the tx-cost benchmark results
Contributor:

typo: remove is

else pure Nothing
)
computeCloseCost = do
interesting <- catMaybes <$> mapM compute [1, 2, 3, 5, 10, 30]
Contributor:

nice variable name :)
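For context on the snippet above: judging from the "else pure Nothing" branch, compute appears to yield Just a data point when the transaction still fits the execution budget and Nothing otherwise, so catMaybes keeps only the interesting ones. A simplified, stand-alone sketch of that pattern (not the actual tx-cost code, which presumably builds and evaluates a real Close transaction):

import Data.Maybe (catMaybes)

-- Stand-in cost model: pretend memory use grows linearly with the number of
-- parties and drop any data point that would exceed the budget.
compute :: Int -> IO (Maybe (Int, Double))
compute numParties = do
  let percentMaxMem = fromIntegral numParties * 4.9
  pure $
    if percentMaxMem <= 100
      then Just (numParties, percentMaxMem)
      else Nothing

computeCloseCost :: IO [(Int, Double)]
computeCloseCost = catMaybes <$> mapM compute [1, 2, 3, 5, 10, 30]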

@KtorZ (Collaborator) commented May 5, 2022

Maybe the Merkle-tree results are a bit off-topic? We could move them back to the plutus-merkle-tree package and leave them there.

@KtorZ KtorZ force-pushed the ch1bo/benchmark-results branch from 6233e08 to c4f03b2 on May 5, 2022 07:56
@KtorZ KtorZ force-pushed the ch1bo/benchmark-results branch from c4f03b2 to 4eeaa0c on May 5, 2022 07:56
@ch1bo (Collaborator, Author) commented May 5, 2022

> Maybe the Merkle-tree results are a bit off-topic? We could move them back to the plutus-merkle-tree package and leave them there.

I would agree, but I was not prioritizing this work for now. There is an item for this in our red bin.

Edit: I see you have addressed this. Nice.

@ch1bo ch1bo merged commit a023269 into master May 5, 2022
@ch1bo ch1bo deleted the ch1bo/benchmark-results branch May 5, 2022 08:43