Performance dashboard: track and document performance improvements. #689
Comments
I've had a bit of a discussion with Petar on how to tackle this; he found a Continuous Benchmarking GitHub Action that we could use, or at least base ourselves on. It has ways to display graphs and highlight regressions in benchmarks, so I think it's an important first step. I'll set up the action at some point; in the meantime, you can expect many upcoming benchmarks from Petar to give us metrics to measure :)
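For context, here is a minimal sketch of how such a workflow could be wired up, assuming the action in question is benchmark-action/github-action-benchmark and that the benchmarks are standard Go testing benchmarks (the file name, Go version, and alert threshold below are assumptions):

```yaml
# Hypothetical workflow, e.g. .github/workflows/benchmark.yml
name: Benchmarks
on:
  push:
    branches: [master]

jobs:
  benchmark:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-go@v5
        with:
          go-version: '1.21'
      # The action understands raw `go test -bench` output.
      - name: Run benchmarks
        run: go test -bench=. -benchmem ./... | tee bench.txt
      - name: Store results, draw graphs, alert on regressions
        uses: benchmark-action/github-action-benchmark@v1
        with:
          tool: 'go'
          output-file-path: bench.txt
          github-token: ${{ secrets.GITHUB_TOKEN }}
          auto-push: true          # publish the history/graphs to gh-pages
          alert-threshold: '150%'  # flag results 1.5x slower than the previous run
          comment-on-alert: true   # leave a comment on the offending commit
```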
Great, let's give it a try for a while and see how it goes.
I gave the continuous benchmarking action you mentioned here a try, and I discovered some issues:
Seems weird that we're getting blocked for using too many resources (looking through your tests, they were killed at different times with SIGTERM?). Is this documented anywhere in the GitHub Actions docs? I couldn't find anything. Might be worth a shot trying to
If we do have to take the self-hosted route, I think we can at the very least have the benchmarks run on master without issue. As for pull requests, AFAIK action runs from external contributors still have to be manually launched from the PR, so it may not be so much of an issue after all?
This is the only info I found about the runners dying with code 143: actions/runner-images#6680. (Exit code 143 corresponds to termination by SIGTERM: 128 + 15.)
Yeah, I was thinking of that as a solution. Any code merged into master could be considered safe to run.
It's pretty BS that this behaviour is only documented in that issue, and in vague terms. Nice to see Microsoft hasn't abandoned its old mantra of sweeping bad bits of their software under the rug until they come up as unpleasant surprises. I think if we set up a machine only to do benchmarking, it should be fine to use it on PRs as well, since external PRs require team approval to run workflows anyway.
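If a dedicated benchmarking machine is set up, the only change to a workflow like the sketch above would be pointing the job at it; the `benchmarks` label below is a hypothetical name for that runner:

```yaml
jobs:
  benchmark:
    # 'benchmarks' is an assumed label for the dedicated self-hosted
    # runner; external PRs still require approval before workflows run.
    runs-on: [self-hosted, benchmarks]
```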
We've created a new repository for benchmarks and tools to track everything: https://github.com/gnolang/benchmarks. |
Recognize the significance of performance in our project by creating dedicated documentation, review rules, and contributing guidelines.
I'd like @peter7891 to define basic rules for experimentation in his PRs.
Idea curation framework: propose defining a framework for tracking performance improvements. While there are many good ideas to improve performance, it is important to prioritize them and consider reasons not to optimize certain parts.
Review framework: establish review rules, including the format for sharing performance-improvement diffs and a clear set of constraints associated with each improvement. Updates to the CONTRIBUTING.md? (See the benchmark/benchstat sketch after this list.)
Evolution framework: it would be beneficial to have a performance history with graphs depicting progress over time, to identify regressions and highlight performance improvements.
Bonus: Write a technical blog post outlining upcoming challenges and our proposed framework, and diving into select parts of the project.
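To illustrate one possible diff format (the benchmark below is a stand-in, not taken from the gno codebase): benchmarks written against Go's standard testing package can be run with `-count=10` before and after a change, and benchstat (golang.org/x/perf/cmd/benchstat) turns the two outputs into a significance-tested comparison table that could be pasted into PR descriptions.

```go
package perf_test

import (
	"strings"
	"testing"
)

// BenchmarkJoin is a stand-in benchmark. The shape is what matters for
// review: setup happens outside the timed loop and allocations are
// reported, so benchstat can compare ns/op and allocs/op across runs.
func BenchmarkJoin(b *testing.B) {
	parts := make([]string, 100)
	for i := range parts {
		parts[i] = "gno"
	}
	b.ReportAllocs()
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		_ = strings.Join(parts, "/")
	}
}
```

Usage would look like: `go test -bench=Join -count=10 . > old.txt` on master, the same command redirected to `new.txt` on the branch, then `benchstat old.txt new.txt`.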
Relevant discussions: