Replies: 8 comments 17 replies
-
i really like this! i would love to test this (on the client). currently some libraries that i work on use use-asset, which either closes over a cache or uses a naive global cache. letting users create cache boundaries would be more than welcome.
-
Obligatory obvious question: without getting into any implementation details, would a Redux store or similar be a potentially valid "backing mutable data store" in this scenario?
-
I think it might be useful to include a more thorough upfront explanation of the problem(s) that the built-in cache is solving. At first glance, the two statements seem sort of contradictory and make me wonder: if I've already got a layer outside of React, why do I need one inside? Of course the answer is for eviction + consistency etc., but going into a little more detail about why those are trickier with concurrency might be useful.
-
I've edited the post to include a few sandboxes (https://codesandbox.io/s/sad-banach-tcnim). Hope these help give some intuition behind how this API works. (At least for now.)
-
Does the Cache API support cancellation? One challenge right now is that libraries don't know when to clean up resources that are allocated during render, requiring things like setting a long timeout to clear an entry if it doesn't end up being used. This can happen when a tree starts rendering, suspends, and then never finishes (due to some other state change). Our hope was that the Cache API would offer some cleanup mechanism, but I don't see that described here or in the code - thoughts on this?
-
This feature is interesting to me!
I would like to understand this. So, does this mean, even if
A codesandbox repro: https://codesandbox.io/s/reverent-easley-x2uxo
-
Can't refresh individual Suspense caches
The current behavior of unstable_useCacheRefresh is that refreshing invalidates every cache within the boundary, not just the one that needs fresh data.
When does this matter?
For example, the React DevTools uses the new Suspense APIs to "inspect" a component (load its props/state for inspection) as well as to load source and parse hook names. (These features use separate Suspense caches.) When an element's props/state change, DevTools "refreshes" that cache in order to load new data. This has the unfortunate side effect of also blowing away the cached hook names (which are much more expensive to recompute). In this case, both caches are managed by DevTools, so it could better coordinate the refreshing of one cache to e.g. pre-seed the other, but what if I were using an external library (like Relay?) that also used the new caching APIs? It seems like it would be easy for application and library code (or different libraries) to interfere with each other in unexpected ways.
Current workarounds
One workaround, which I believe is similar to what Relay itself uses under the hood, would be to manage a 2nd (module-level?) cache for named hooks data. Then the Suspense code related to it would be something like this:
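(A rough sketch only; loadHookNames, the record shape, and keying by element id are illustrative, not DevTools' actual implementation.)

```js
// Illustrative stub: in DevTools this would load source maps and parse hook names.
async function loadHookNames(element) {
  return [];
}

// A module-level cache that survives React's built-in cache refreshes.
const hookNamesCache = new Map(); // element id -> record

function readHookNames(element) {
  let record = hookNamesCache.get(element.id);
  if (record === undefined) {
    // Expensive: load the component's source and parse out the hook names.
    const thenable = loadHookNames(element);
    record = {status: 'pending', value: thenable};
    thenable.then(
      names => {
        record.status = 'resolved';
        record.value = names;
      },
      error => {
        record.status = 'rejected';
        record.value = error;
      }
    );
    hookNamesCache.set(element.id, record);
  }
  if (record.status !== 'resolved') {
    throw record.value; // Suspend while pending; rethrow on error.
  }
  return record.value;
}
```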
This seems pretty complicated for a user-space solution, although maybe we don't anticipate the cache API being used directly by applications very often?
Preferred built-in solution
The behavior I anticipated from the refresh API was that React would only refresh the cache identified by the create function you pass to it (the same function used as the key for unstable_getCacheForType). Example of using the "refresh" function returned by unstable_useCacheRefresh:
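(A sketch of the anticipated usage; createInspectedElementCache and loadInspectedElementData are illustrative names, not real DevTools helpers.)

```jsx
import {unstable_useCacheRefresh} from 'react';

// Illustrative cache type and loader.
const createInspectedElementCache = () => new Map();
const loadInspectedElementData = element => new Map([[element.id, element.latestData]]);

function useRefreshInspectedElement(element) {
  const refresh = unstable_useCacheRefresh();
  return () => {
    // Anticipated behavior: passing the cache's create function identifies which
    // cache to refresh (seeded here with freshly loaded data), so the separate,
    // much more expensive hook-names cache would be left untouched.
    refresh(createInspectedElementCache, loadInspectedElementData(element));
  };
}
```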
-
@acdlite @gaearon i have tried to implement the new caching model; the only regression i can make out is preloading and peeking. is there a plan for that? previously i could run:

```js
// Global space
asset.preload(id);
```

or in the render phase (fetching the next 3 items for instance, which completely removed waiting interruptions). i used to do this inside a useEffect:

```jsx
function Post({ id }) {
  useEffect(() => {
    // Pre-load the next 3 posts
    for (let i = 0; i < 3; i++) asset.preload(id + 1 + i);
  }, [id]);
  // ...
}
```

i take it i am allowed to preload in the render function, is that correct? preload won't throw, so it should be safe?

```jsx
function Post({ id }) {
  // Pre-load the next 3 posts
  // Not 100% sure if running this in the render function is considered ok
  for (let i = 0; i < PRELOAD; i++) asset.preload(id + 1 + i);
  // ...
}
```

it works, though. so it's probably fine ...
-
We recently landed an initial implementation of an experimental, built-in Suspense cache.
PR: facebook/react#20456
If you have experience building a Suspense cache in user space, you will likely find this feature interesting.
The built-in Cache isn't part of the alphas (yet, at least) because we haven't tested it anywhere except for the Server Components demos. We have had some initial discussions with the Relay team about how they might use it in their implementation, but that's it. There are also some known gaps in functionality that we need to fill.
However, I think what's already implemented could be useful enough on its own to start delivering value. Or at least to start thinking about how it might fit into your existing cache implementations.
If we can get some folks to try out the new APIs and give us feedback, that would help us fill in those gaps, especially if you already maintain a Suspense-based data framework.
I've likely omitted or glossed over some important detail in this summary, so please ask for clarifications when necessary. There's a lot to share and some parts are more refined than others.
In addition to reading this overview, the best way to familiarize yourself with the basic API is to read through the Server Components demo, which uses most of these features: https://github.com/reactjs/server-components-demo
You can also check out a few sandboxes using the same APIs on the client, e.g. https://codesandbox.io/s/sad-banach-tcnim
API Overview
unstable_getCacheForType(createInitialCache)
Returns the cache for a given type, or creates a new one if it doesn't exist. The "create" function acts as the type/key/identity for the cache. The type corresponds either to a type of data or the framework responsible for fetching it.
React stores the cache in an internal queue. You have full control over populating the entries for a given type, but React owns and handles the lifetime of the cache object itself.
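As a rough illustration, a minimal "text cache" built on this API might look like the following. The readText and createTextCache helpers and the record shape are illustrative, not part of React; only unstable_getCacheForType (imported from the experimental build of react) is the real API.

```js
import {unstable_getCacheForType} from 'react';

// The create function is both the factory and the identity/key for this cache type.
function createTextCache() {
  return new Map();
}

// Hypothetical Suspense-style read, called during render. On a cache miss it
// stores a pending record and throws the promise; because React reuses the same
// cache object across suspended render attempts, the request isn't duplicated.
function readText(url) {
  const cache = unstable_getCacheForType(createTextCache);
  let record = cache.get(url);
  if (record === undefined) {
    const thenable = fetch(url).then(response => response.text());
    record = {status: 'pending', value: thenable};
    thenable.then(
      text => {
        record.status = 'resolved';
        record.value = text;
      },
      error => {
        record.status = 'rejected';
        record.value = error;
      }
    );
    cache.set(url, record);
  }
  if (record.status !== 'resolved') {
    throw record.value; // Suspend while pending; rethrow on error.
  }
  return record.value;
}
```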
Crucially, the cache object persists across multiple suspended render attempts. React will reuse the same cache (and therefore won't issue duplicate requests) on each attempt — even before the UI has mounted. This is a key feature that isn't currently possible in a user space implementation.
Previously, to work around this problem, a user space implementation would need to store the cache outside of React, rather than in React state (for example, with a built-in hook like useState). But this made evicting existing entries extremely complicated — there was basically no correct way to do it in a fully concurrent-friendly way without de-opting to synchronous rendering.
With the built-in API, all that complexity is now handled by React.
If you've built a Suspense cache before, you may know there are some "cache laws" for how to properly populate the cache. We'll document these at some point, but just know for now that they haven't changed: you can add new entries to the cache, but you can't modify or delete existing ones. To evict stale data, you need to use the next feature...
unstable_useCacheRefresh
This will invalidate the current cache and create a fresh one in the background. The data in the old cache will continue to be displayed on screen until the new data is ready to display. Crucially, because both the old cache and new cache can exist simultaneously, the UI remains interactive even while we're waiting for the requests to finish in the background.
The canonical use case is to refresh after performing a server mutation (POST, UPDATE, etc.). You want to rerender the UI with fresh data from the server. Often, the server mutation request will include fresh data in its response. For example, when you edit a field in a form, the server will return the updated version of that field. So we include a mechanism to seed the new cache with this initial data. (This feature needs a lot more research, though.)
We don't intend to provide support for refreshing specific entries. The idea is that you refresh everything, and rely on an additional, backing cache layer — the browser request cache, a mutable data store, etc — to deduplicate requests. Again, still lots of research required to figure out the details, but that's the basic model.
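As a sketch of that canonical use case: saveProfile is a hypothetical mutation helper, and the cache-seeding variant is deliberately omitted here since that part of the API is still being researched. Only unstable_useCacheRefresh (from the experimental build) is the real API.

```jsx
import {unstable_useCacheRefresh} from 'react';

// Hypothetical mutation helper; the server responds with the updated record.
async function saveProfile(draft) {
  const response = await fetch('/api/profile', {
    method: 'POST',
    body: JSON.stringify(draft),
  });
  return response.json();
}

function SaveProfileButton({draft}) {
  const refresh = unstable_useCacheRefresh();

  async function handleSave() {
    await saveProfile(draft);
    // Invalidate the caches in this boundary and refetch in the background.
    // The old data stays on screen (and interactive) until the new data is ready.
    refresh();
  }

  return <button onClick={handleSave}>Save</button>;
}
```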
<Cache />
A Cache component defines a boundary within which all components must read consistent data. If the same data is requested in multiple parts of the tree — say, the current user's profile picture — then they must always display the same thing:
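For example (an illustrative sketch: ProfilePicture, Message, and MessageThread are stand-ins for components that read the current user's photo through the cache; the boundary component is exposed as unstable_Cache in the experimental builds):

```jsx
import {unstable_Cache as Cache} from 'react';

function ChatScreen({userId}) {
  return (
    <Cache>
      <MessageThread>
        {/* Both messages read the same user's profile photo from the cache, */}
        {/* so within this boundary they always render the same image. */}
        <Message text="hi" avatar={<ProfilePicture userId={userId} />} />
        <Message text="hello" avatar={<ProfilePicture userId={userId} />} />
      </MessageThread>
    </Cache>
  );
}
```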
Because these components share a cache boundary, they will always show the same image — even if the user has just changed their profile picture. In that scenario, both components will either display the old (stale) picture or the new (fresh) picture. Imagine you're building a chat UI, where each message is accompanied by the author's photo. You'd rather show the old photo next to each message than show the new photo next to some and the old photo next to others.
However, since this is actually the default behavior of the built-in cache, I prefer to think of a Cache boundary as defining which parts of the UI are allowed to be inconsistent.
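Continuing the sketch above (component names are again illustrative), the two boundaries might be arranged like this:

```jsx
function ChatApp({userId}) {
  return (
    <>
      {/* The toolbar has its own boundary, so it may show a different
          (e.g. fresher) photo than the thread below. */}
      <Cache>
        <Toolbar avatar={<ProfilePicture userId={userId} />} />
      </Cache>
      {/* Everything inside this boundary stays mutually consistent. */}
      <Cache>
        <MessageThread userId={userId} />
      </Cache>
    </>
  );
}
```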
In this example, the message thread photos will always be consistent — either both the old photo or both the new one. But the photo in the toolbar is allowed to be different.
Data in separate boundaries can be fetched independently. That doesn't mean they will be fetched independently, though; that would be wasteful. So in this example, on initial render, React will use the same cache for both boundaries. But after initial render, you can use useCacheRefresh to load the new photo in the message thread without bothering to also refresh the toolbar.
I expect developers won't often interact with Cache boundaries directly. They'll likely be baked into some other part of the app architecture. A router is a natural place to put these — each route is wrapped with its own boundary.
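A sketch of what that might look like (the Route component, its props, the key choice, and PageSpinner are all hypothetical):

```jsx
import {Suspense, unstable_Cache as Cache} from 'react';

function Route({path, children}) {
  return (
    // Each route gets its own Cache boundary, so navigating can load fresh
    // data for the new screen without invalidating what's cached elsewhere.
    <Cache key={path}>
      <Suspense fallback={<PageSpinner />}>{children}</Suspense>
    </Cache>
  );
}
```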
There's a lot more to discuss here that I won't get into in this overview. @sebmarkbage wrote a highly influential post internally called "Here and There" that does an excellent job explaining the UX principles behind our caching model; I'll encourage him to share it in this group.
Integrating the Cache APIs into your existing Suspense cache
The React-provided cache isn't designed to act as persistent storage. There's always some layer beneath it that you will read from in order to populate the cache. Often that layer is itself a kind of cache — like the browser network cache. Or, the backing layer is a mutable data store, owned and managed by a data framework that lives outside React.
The React cache doesn't replace those layers, but it could make it easier for those layers to work with Suspense.
I like to think of the React cache as a lazily-populated, immutable snapshot of an external data source. Because React will always read from the snapshot, data in the external store can be mutated, changed, manipulated, or deleted without affecting the current UI or breaking the requirements of Concurrent React. React manages the lifetime of these snapshots, deduplicates requests, and provides high-level APIs for controlling when a new snapshot should be taken. In theory, this should shift implementation complexity away from the data framework and into React itself.