
Analyze React+Redux perf timings using recording analysis #9410

Closed
wants to merge 41 commits

Conversation

markerikson
Collaborator

This PR:

  • Adds a new "React+Redux Perf" sidebar panel
  • Adds a button that, when clicked:
    • Finds all Redux dispatches
    • Finds all calls to useSelector
    • Finds all unique selector functions passed to useSelector
    • Finds all individual calls to each selector function and calculates their durations
    • Sums up the total execution time for each selector function
    • Finds all calls to notify React-Redux subscribers and checks the time spent notifying subscribers
    • Finds all calls to re-render a React component
    • Buckets the data by dispatch() start/end times, and groups all the selector calls, notification calls, and queued re-renders triggered by that dispatch

In other words, this is a major first step towards a useful React+Redux perf analysis tool.
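To make the bucketing step concrete, here's a minimal sketch of the idea in TypeScript. The type names and helper below are illustrative only, not the actual PR code, which works in terms of Replay protocol execution points:

```ts
// Hypothetical shapes; the real analysis uses Replay protocol execution points.
interface TimedCall {
  time: number; // timestamp within the focus window
  label: string;
}

interface DispatchWindow {
  actionType: string;
  startTime: number;
  endTime: number;
}

// Group every selector call / subscriber notification / queued re-render
// into the dispatch whose [start, end] window contains it.
function bucketCallsByDispatch(
  dispatches: DispatchWindow[],
  calls: TimedCall[]
): Map<DispatchWindow, TimedCall[]> {
  const buckets = new Map<DispatchWindow, TimedCall[]>(
    dispatches.map((d): [DispatchWindow, TimedCall[]] => [d, []])
  );
  for (const call of calls) {
    const owner = dispatches.find(
      d => call.time >= d.startTime && call.time <= d.endTime
    );
    if (owner) {
      buckets.get(owner)!.push(call);
    }
  }
  return buckets;
}
```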

Caveats:

  • Right now I'm just logging the grouped data, and haven't tried to display or visualize any of this yet
  • It currently takes ~30s to calculate the results for a 2s focus region with ~40 dispatches. The most expensive parts are running evaluations to find all the selector functions, and calling Debugger.findStepOutTarget to determine the execution time for a couple thousand function calls.

It's very likely that the execution speed would be faster if we moved this into a backend postprocessing routine, and also added some additional backend APIs to allow requesting step-out results in batches instead of individual requests.
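In the meantime, one client-side mitigation is to issue the per-call requests with bounded concurrency rather than strictly one at a time. A rough sketch, where `fetch` stands in for whatever wrapper issues the existing per-call protocol request (the helper name and shape are hypothetical):

```ts
// Run an async lookup over many points with at most `limit` requests in
// flight at once. `fetch` is a stand-in for the per-call protocol request
// (e.g. a wrapper around Debugger.findStepOutTarget); names are illustrative.
async function mapWithConcurrency<P, R>(
  points: P[],
  fetch: (point: P) => Promise<R>,
  limit = 10
): Promise<R[]> {
  const results: R[] = new Array(points.length);
  let nextIndex = 0;

  async function worker(): Promise<void> {
    while (nextIndex < points.length) {
      const i = nextIndex++;
      results[i] = await fetch(points[i]);
    }
  }

  await Promise.all(
    Array.from({ length: Math.min(limit, points.length) }, () => worker())
  );
  return results;
}
```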

Here's an example of what the output looks like for a given dispatch in our own front-end codebase:

[screenshot: grouped analysis output for a single dispatch]

That one dispatch:

  • queued a re-render (by scheduling one component to update)
  • spent 15.3ms notifying subscribers
  • cascaded through 10 nested connect components notifying their children
  • ran 114 separate selector calls
  • of those, only 14 selector calls had a meaningful non-zero duration
  • had a total selector time of 0.3ms, which means there's 15.0ms of subscriber overhead (!)

@vercel

vercel bot commented Jun 29, 2023

The latest updates on your projects. Learn more about Vercel for Git ↗︎

devtools: ✅ Ready, updated Sep 27, 2023 6:05pm (UTC)

@replay-io

replay-io bot commented Jun 29, 2023

E2E Tests

75 replays were recorded for fa1db6b.

0 Failed, 75 Passed

View test run on Replay ↗︎

Snapshot Tests

90 replays were recorded for fa1db6b.

0 Failed, 90 Passed

View test run on Replay ↗︎

@ryota-murakami
Contributor

@markerikson Is packages/replay-next meant to be a Next.js-exclusive package?

@markerikson
Collaborator Author

@ryota-murakami : no. All our app source code originally lived in the src folder. When Brian began rewriting some of the pieces using newer React techniques like Suspense, he created a packages/bvaughn-architecture folder and put the new code there. Later, we renamed it to packages/replay-next, to indicate that it's "the next generation of Replay app source code".

@markerikson
Collaborator Author

I just updated this to capture details on the start and end of the related React commit, plus counts of how many function and class components got rendered overall:

[screenshot: React commit start/end timing and component render counts]

as well as inspecting the rendered components and determining the details for each unique component type:

[screenshot: per-component render details for each unique component type]

This gives us a foundation for doing real-world React render performance analysis on production apps, with no further profiling setup needed!

@ryota-murakami
Contributor

@markerikson Great job!

Brian began rewriting some of the pieces

This might be the tweet 😅 https://twitter.com/brian_d_vaughn/status/1674214956668014593

@bvaughn
Contributor

bvaughn commented Jul 6, 2023

Is this worth keeping open as a draft or was it just something you wanted to test/share?

@markerikson
Collaborator Author

Keep it as a draft for now. Like the other "React Render" panel, it's something I intend to merge and feature flag, but I need to do some cleanup before it's ready to at least have in the codebase.

@bvaughn
Contributor

bvaughn commented Aug 22, 2023

Cleaning up the PR queue by closing old/stale PRs. Re-open it once it's ready to be reviewed :)

bvaughn closed this Aug 22, 2023
@markerikson
Collaborator Author

Time to play with this again...

@markerikson
Collaborator Author

There's a lot of conceptual overlap between what I was doing in the first iteration of this "Redux+React Perf Profiler" PR, and what I did in the original POC for the "React Render Timeline" panel.

Both rely on fetching hit points for internals. Both require fetching points for the current focus window. And my POC for the Redux perf work wanted to know when React had rendered, and when we queued renders, so that we could know if the dispatch did actually result in components rendering.

Over this weekend, I've updated this PR:

  • Rebased against main
  • Extracted the basic Redux dispatch() checks and timings as an interval cache
  • Made improvements to the function detail formatting logic
  • Rewrote the React Render Timeline (just tonight) to also use an interval cache, switched it to the new Session.getPointStack API for stack frames, and turned the items into a virtualized list (see the sketch after this list)
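
For context, here's a hand-rolled sketch of what an interval cache over dispatch hits looks like conceptually. The real code builds on the caching utilities already used in the app; this version is deliberately naive and just illustrates the loaded-range bookkeeping:

```ts
// Illustrative, hand-rolled interval cache; names and shapes are not the PR's.
interface DispatchHit {
  time: number;
  actionType: string;
}

type LoadRange = (begin: number, end: number) => Promise<DispatchHit[]>;

function createDispatchIntervalCache(loadRange: LoadRange) {
  // Ranges we have already loaded, plus the values found in them.
  const loaded: Array<{ begin: number; end: number }> = [];
  const values: DispatchHit[] = [];

  return {
    async getValuesInRange(begin: number, end: number): Promise<DispatchHit[]> {
      const covered = loaded.some(r => r.begin <= begin && r.end >= end);
      if (!covered) {
        // Naive strategy: load the whole requested range. A real interval
        // cache only loads the uncovered gaps, merges overlapping ranges,
        // and avoids storing duplicate values.
        const newValues = await loadRange(begin, end);
        loaded.push({ begin, end });
        values.push(...newValues);
        values.sort((a, b) => a.time - b.time);
      }
      return values.filter(v => v.time >= begin && v.time <= end);
    },
  };
}
```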

The result is that the React Render Timeline is showing up significantly faster (5-10s for several hundred hits), whereas before it would take seemingly forever for values to slowly come in.

I don't have any UI for the Redux data yet, but that should be trivial to add now that I've got the cache for the values.

From there, the next step would be to start adding logic to query for perf stats related to a given dispatch: was a render queued, how long did it take, how many components rendered, how long did those take, which ones were most expensive, and which selectors ran and what their timings were. Rather than fetching all of that up front, fetching it for just one render on demand ought to be reasonably fast.
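
A sketch of what that lazy, per-dispatch query could look like (the result shape and names are illustrative, not the PR's actual types):

```ts
// Illustrative result shape for a single-dispatch drill-down; the real
// implementation would pull these numbers from the Replay protocol caches.
interface DispatchPerfStats {
  renderQueued: boolean;
  renderDurationMs?: number;
  componentsRendered?: number;
  selectorCalls?: Array<{ name: string; calls: number; totalMs: number }>;
}

// Fetch stats lazily, only when a dispatch row is expanded, and memoize
// the pending promise so repeated expansions don't redo the expensive analysis.
const statsByDispatchId = new Map<string, Promise<DispatchPerfStats>>();

function getDispatchPerfStats(
  dispatchId: string,
  compute: (id: string) => Promise<DispatchPerfStats>
): Promise<DispatchPerfStats> {
  let pending = statsByDispatchId.get(dispatchId);
  if (!pending) {
    pending = compute(dispatchId);
    statsByDispatchId.set(dispatchId, pending);
  }
  return pending;
}
```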

Here's what the render timeline looks like atm:

[screenshot: React Render Timeline panel]


@markerikson
Collaborator Author

I've finally built out some UI for the Redux Dispatch Perf panel:

[screenshot: Redux Dispatch Perf panel]

Things I can now do with this info:

  • set focus ranges from the timestamps
  • determine if any React renders were queued by the notifications
  • fetch info about selectors, although that gets trickier because I really need to know what selectors were in use prior to the dispatch

And similarly, with React renders, I could pull together info on:

  • which components rendered
  • how long those each took

I know there are all kinds of caveats around sync renders vs. concurrent rendering, components being thrown away, etc. This is nowhere close to being a "real" profiler. But I've got demo code (even if commented out atm) that shows I can get plausible interpretations of that info. So it's a starting point.

@markerikson
Collaborator Author

Superseded by #9775. The Redux perf half went into #9776, but we've decided not to merge that into the codebase for now. I'll rebase it, then close the PR.
