[Experiment] Reuse memo cache after interruption #28878
Conversation
Adds an experimental feature flag to the implementation of useMemoCache, the internal cache used by the React Compiler (Forget).

When enabled, instead of treating the cache as copy-on-write, like we do with fibers, we share the same cache instance across all render attempts, even if the component is interrupted before it commits.

If an update is interrupted, either because it suspended or because of another update, we can reuse the memoized computations from the previous attempt. We can do this because the React Compiler performs atomic writes to the memo cache, i.e. it will not record the inputs to a memoization without also recording its output.

This gives us a form of "resuming" within components and hooks.

This only works when updating a component that already mounted. It has no impact during initial render, because the memo cache is stored on the fiber, and since we have not implemented resuming for fibers, it's always a fresh memo cache, anyway.

However, this alone is pretty useful: it happens whenever you update the UI with fresh data after a mutation/action, which is extremely common in a Suspense-driven (e.g. RSC or Relay) app.

So the impact of this feature is faster data mutations/actions (when the React Compiler is used).
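To make the "atomic writes" point concrete, here is a rough sketch of the shape of code the React Compiler generates for a memoized value. The `_c` helper, the component, and the slot layout are illustrative stand-ins, not the actual generated output; the real code goes through the internal useMemoCache hook.

```js
import { useMemo } from 'react';

// Illustrative stand-in for the compiler's memo-cache helper: one array of
// slots per component instance. (The real helper is the internal
// useMemoCache hook; this is just enough to make the sketch runnable.)
function _c(size) {
  return useMemo(() => new Array(size), []);
}

function TodoList({ todos, filter }) {
  const $ = _c(3); // three memo cache slots for this component

  let filtered;
  if ($[0] !== todos || $[1] !== filter) {
    filtered = todos.filter((t) => t.status === filter);
    // The inputs and the output are written together, never separately, so
    // an interrupted render attempt can never leave a slot half-recorded.
    $[0] = todos;
    $[1] = filter;
    $[2] = filtered;
  } else {
    filtered = $[2];
  }

  return (
    <ul>
      {filtered.map((t) => (
        <li key={t.id}>{t.text}</li>
      ))}
    </ul>
  );
}
```

Because a slot either holds a complete inputs-plus-output pair or nothing usable, a later render attempt that sees matching inputs can safely take the cached output, which is what sharing the cache instance across attempts relies on.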
Comparing: f5ce642...eb8b3e8

Critical size changes (includes critical production bundles, as well as any change greater than 2%):

Significant size changes (includes any change greater than 0.2%):
Awesome! My intuition is that this is overall better, and it certainly is an improvement in the test case. The main concern would be a situation where the memo slots being updated in the low- and high-pri updates overlap. So the low-priority update starts and clears out some values but suspends partway, then the high-pri update happens and has to redo some work (that would still have been memoized with the feature flag off), then the low-pri update resumes and has to redo that work again (since the high-pri update just undid it). Again, my intuition is that this scenario is rare enough that this is overall a win; I just wanted to flag it. It makes sense to test this, though we have enough things up in the air right now that it might not be the best time. We'll chat in our internal sync next week and follow up with you.
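For illustration, here is a rough sketch that maps the overlap scenario above onto plain React APIs. The component, `expensiveSearch`, and the data are made up for the example, and whether the low-pri attempt is actually interrupted at the interesting point depends on timing; this just spells out the three steps in code.

```js
import { startTransition, useState } from 'react';

const ALL_ITEMS = ['react', 'redux', 'relay', 'remix'];

// Hypothetical expensive pure computation that would occupy a memo cache
// slot keyed on `query` once the component is compiled.
function expensiveSearch(query) {
  return ALL_ITEMS.filter((item) => item.includes(query));
}

function SearchResults({ query }) {
  const results = expensiveSearch(query);
  return (
    <ul>
      {results.map((r) => (
        <li key={r}>{r}</li>
      ))}
    </ul>
  );
}

function App() {
  const [query, setQuery] = useState('re');
  const [count, setCount] = useState(0);
  return (
    <>
      <button
        onClick={() => {
          // 1. Low-pri: start rendering SearchResults with query='redux';
          //    the shared memo slot is overwritten mid-attempt before the
          //    attempt yields or suspends.
          startTransition(() => setQuery('redux'));
          // 2. High-pri: an unrelated urgent update re-renders the tree with
          //    the committed query='re'. With copy-on-write, the committed
          //    slot still holds the 're' result; with the shared cache, it
          //    now holds the 'redux' inputs, so the search is redone.
          setCount((c) => c + 1);
          // 3. The interrupted transition then restarts with 'redux' and has
          //    to redo that work once more, since step 2 overwrote the slot.
        }}
      >
        Search for "redux"
      </button>
      <p>Clicks: {count}</p>
      <SearchResults query={query} />
    </>
  );
}
```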
Yeah, I remember you and I discussed this at some point, maybe even in the original. Since it's rare for high-pri updates to "switch back" to the current state (compared to the scenario in the test case I added), I agree this is the better trade-off. (Also, comparing only against the current UI values is already a form of this trade-off, since theoretically we could cache every previous set of inputs we ever received.)
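The "cache every previous set of inputs" alternative in that parenthetical would look roughly like the contrast below. This is a generic memoization sketch, not React code, just to show the trade-off between memory and recomputation.

```js
// Depth-1 memoization: remembers only the most recent input, analogous to
// comparing against the current UI values. Switching back and forth between
// two inputs always recomputes.
function memoizeLast(fn) {
  let hasValue = false;
  let lastArg;
  let lastResult;
  return (arg) => {
    if (!hasValue || arg !== lastArg) {
      lastArg = arg;
      lastResult = fn(arg);
      hasValue = true;
    }
    return lastResult;
  };
}

// Unbounded memoization: never recomputes a previously seen input, at the
// cost of retaining every result ever produced.
function memoizeAll(fn) {
  const cache = new Map();
  return (arg) => {
    if (!cache.has(arg)) {
      cache.set(arg, fn(arg));
    }
    return cache.get(arg);
  };
}
```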
DiffTrain build for commit ea26e38.