Confirm collected pystats are reliable #511
Now that pystats collection is automated, we should confirm that we trust the results. We did notice that the new results are quite a bit different from the manually generated ones from a few months ago. The expectation is that the new ones don't include the pyperformance/pyperf machinery itself, but the differences seem wildly off.

Comments
I think I've found the crux of the problem: when the process is in the "stats off" state (after calling [...]), the stats are not dumped. There are a couple of ways to solve this: [...]

@markshannon Thoughts?
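A minimal sketch of the toggling in question, assuming a CPython interpreter built with `--enable-pystats` (which is what exposes `sys._stats_on()`, `sys._stats_off()`, `sys._stats_clear()`, and `sys._stats_dump()`); the harness shape here is illustrative, not the actual pyperformance/pyperf code:

```python
import sys

def run_benchmark(workload):
    have_stats = hasattr(sys, "_stats_on")  # only true on --enable-pystats builds
    if have_stats:
        sys._stats_clear()   # drop stats accumulated during interpreter startup
        sys._stats_on()
    try:
        workload()
    finally:
        if have_stats:
            sys._stats_off()
            # If the interpreter only writes stats at exit while they are "on",
            # switching them off here is what loses the data; an explicit dump
            # before the process exits is one way around that.
            sys._stats_dump()
```

The real hooks may differ; the point is only that a dump has to happen somewhere after `_stats_off()` for the benchmark's numbers to survive.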
Stats should be dumped, even if stats are off. Because stats are on at startup, any non-benchmark process will dump stats, so that also needs to be fixed. Maybe start with stats off by default, and turn them on during startup if an [...]. That way the stats should work for pyperformance as it is now, but we still have the option to gather stats for a whole program.
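A rough sketch of that proposed startup behaviour, purely as illustration: the environment variable name `PYTHON_WHOLE_PROGRAM_STATS` below is hypothetical (the comment above doesn't name the actual switch), and the real logic would live inside interpreter startup rather than in Python code.

```python
import os
import sys

def maybe_enable_stats_at_startup():
    # In this scheme, stats start out off by default.
    if not hasattr(sys, "_stats_on"):
        return  # not a --enable-pystats build
    # Hypothetical opt-in switch for whole-program stats.
    if os.environ.get("PYTHON_WHOLE_PROGRAM_STATS"):
        sys._stats_on()
    # Otherwise stats stay off until a harness such as pyperformance
    # turns them on around the code it is measuring.
```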
It seems the .json files are not comparable to each other because they have keys like [...]. Couldn't [...]?
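For illustration, a small sketch of why mismatched key sets break a naive comparison of two pystats `.json` dumps: compare only the shared keys and report the rest. The flat key-to-value layout is an assumption about the file format, not a description of it.

```python
import json

def compare_stats(path_a, path_b):
    with open(path_a) as f:
        a = json.load(f)
    with open(path_b) as f:
        b = json.load(f)

    only_a = sorted(set(a) - set(b))
    only_b = sorted(set(b) - set(a))
    if only_a or only_b:
        # Keys present in one dump but not the other make the files
        # non-comparable without extra handling.
        print("only in", path_a, ":", only_a[:10])
        print("only in", path_b, ":", only_b[:10])

    for key in sorted(set(a) & set(b)):
        if a[key] != b[key]:
            print(f"{key}: {a[key]} -> {b[key]}")
```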
The [...]. I'm not sure there's a lot of value in making this data format well-defined and permanent. The use case for comparing really only makes sense with adjacent revisions anyway. Personally, I see the requirement to use a matching version of the script as a feature -- we can change the semantics of how this works as we understand the problems better, without concern for backward compatibility. All that said, we should probably agree on a policy about this and document it somewhere.