
Performance tests may not be capturing file open time in brackets-shell #12192

Open
core-ai-bot opened this issue Aug 31, 2021 · 9 comments
Comments

@core-ai-bot
Member

Issue by peterflynn
Tuesday Sep 18, 2012 at 22:31 GMT
Originally opened as adobe/brackets#1687


Looking at our file-open performance data:
https://docs.google.com/spreadsheet/ccc?key=0Aras0diokeHxdEc5RGtOeVI0V0xGU3FPUXBuX3ZYTlE#gid=5

There was a dramatic drop in the recorded time to open large files when we cut over to brackets-shell. There's now almost no difference between different file sizes.

That seems a little suspicious given that most disk operations have seemed slightly slower in -shell. I'm wondering if the greater degree of asynchronicity could be exposing a bug in where we've placed the instrumentation. Seems worth investigating.
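The suspected instrumentation bug can be sketched in a few lines. This is a hypothetical illustration, not the actual Brackets code: `openFileBroken`, `openFileFixed`, `readFileAsync`, and `render` are made-up names. The point is that if brackets-shell completes file reads via an async callback, a timer stopped before that callback fires reports near-zero time regardless of file size, which would match the data above.

```javascript
// Hypothetical sketch of how extra asynchronicity can break timer
// placement in file-open instrumentation.

function openFileBroken(path, readFileAsync, render) {
    var start = Date.now();
    readFileAsync(path, function (text) {
        render(text); // happens later, after the timer has stopped
    });
    // BUG: the timer stops before the async read/render finishes,
    // so every file appears to "open" in near-zero time.
    return Date.now() - start;
}

function openFileFixed(path, readFileAsync, render, done) {
    var start = Date.now();
    readFileAsync(path, function (text) {
        render(text);
        done(Date.now() - start); // measured after read + render
    });
}
```

With a simulated slow read, the broken version reports essentially zero elapsed time while the fixed version includes the read and render cost.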

@core-ai-bot
Member Author

Comment by pthiess
Tuesday Sep 18, 2012 at 23:56 GMT


Hi @peterflynn, for now I would consider this a medium priority; we need to be able to trust the performance measurements. What's your take?

@core-ai-bot
Member Author

Comment by peterflynn
Wednesday Sep 19, 2012 at 00:43 GMT


@pthiess: sure, medium seems fair. There's also a slight chance this is a symptom of a functional async bug, so it definitely seems worth investigating.

However, IMHO we shouldn't really trust any of these performance measurements :-) because they capture so little of the rendering/layout cost, which can be substantial. I think the camera tests suggested that these file-open measurements are inaccurate by 10-30%, depending on file size.

@core-ai-bot
Member Author

Comment by pthiess
Wednesday Sep 19, 2012 at 01:16 GMT


Agreed, we probably have no handle on measuring the layout cost accurately. Maybe using a set of files to compute a compound average could minimize the experimental variance.
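The compound-average idea could look something like this sketch. It is a hypothetical helper, not part of Brackets: `compoundAverage` is a made-up name, and it assumes timings have already been collected per file over several runs.

```javascript
// Hypothetical sketch: average each file's runs first, then average
// across files, so no single file or noisy run dominates the result.

function compoundAverage(timingsPerFile) {
    // timingsPerFile: { "file.js": [t1, t2, ...], ... } in milliseconds
    var names = Object.keys(timingsPerFile);
    var perFileMeans = names.map(function (name) {
        var runs = timingsPerFile[name];
        var total = runs.reduce(function (a, b) { return a + b; }, 0);
        return total / runs.length;
    });
    var sum = perFileMeans.reduce(function (a, b) { return a + b; }, 0);
    return sum / perFileMeans.length;
}
```

For example, `compoundAverage({ "a.js": [10, 20], "b.js": [30, 30] })` averages per-file means 15 and 30 to give 22.5.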

@core-ai-bot
Member Author

Comment by pthiess
Wednesday Sep 19, 2012 at 01:20 GMT


@peterflynn @jasonsanjose Raised priority to medium.

@core-ai-bot
Member Author

Comment by pthiess
Wednesday Sep 26, 2012 at 22:18 GMT


Reviewed

@core-ai-bot
Member Author

Comment by jasonsanjose
Tuesday Oct 16, 2012 at 23:27 GMT


I'm not seeing any functional issues in the perf instrumentation as-is. I traced through and debugged the call stack to be sure that we do add the editor to the DOM synchronously before we stop our perf timer. I also didn't notice the disk slowdown that @peterflynn mentioned when we moved to CEF3. Is there something else that I'm missing?

@core-ai-bot
Member Author

Comment by pthiess
Monday Oct 22, 2012 at 18:57 GMT


@jasonsanjose Maybe do some perceptual testing using an older build with brackets-app vs. the current one.

@core-ai-bot
Member Author

Comment by lkcampbell
Sunday Jan 05, 2014 at 17:45 GMT


@jasonsanjose and @pthiess, this issue is really stale and no longer seems like a medium priority. Do we ever reassign priorities based on the age of an issue? It seems like a really old medium-priority issue should eventually become either a low priority or a high priority issue.

@core-ai-bot
Member Author

Comment by pthiess
Monday Jan 06, 2014 at 18:27 GMT


@lkcampbell Great question - I frequently ask the team to review their medium priority bugs. @jasonsanjose I wonder if we should just close this issue for now.
