Simplify EntityStore CacheGroup logic to improve InMemoryCache result caching. #7887
Conversation
The presence of this permanent `Layer` wrapping the `Root` `EntityStore` allows us to read from the store optimistically (that is, registering optimistic dependencies rather than just Root-level non-optimistic dependencies) even when no optimistic `Layer`s are currently active. Previously, those reads would register only non-optimistic dependencies, because they read directly from the `Root` store. Now, optimistic reads will read "through" the `Stump`, thereby registering dependencies in the same `CacheGroup` shared by other optimistic layers. The cached results of these optimistic reads can later be invalidated by optimistic writes, which was not possible before.

This fixes a long-standing source of confusion/complexity when working with optimistic updates, by allowing optimistic queries to read from the store in a consistently optimistic fashion, rather than sometimes reading optimistically and sometimes non-optimistically, depending on the dynamic presence or absence of optimistic `Layer` objects.

I chose the name `Stump` because it's short, and a stump is the part of a tree that's left over (above ground, not counting the roots) after you cut down (or prune back) a tree.
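To make the read entry point concrete, here is a minimal sketch (hypothetical names like `topOfLayerStack` and `storeForRead`; not the PR's actual code):

```ts
// Hypothetical sketch: optimistic reads enter the store at the top of the
// layer stack, which is now never lower than the Stump; non-optimistic reads
// still enter directly at the Root.
interface EntityStore {
  lookup(id: string): unknown;
}

declare const root: EntityStore; // registers non-optimistic dependencies
declare const topOfLayerStack: EntityStore; // the Stump when no Layers exist

function storeForRead(optimistic: boolean): EntityStore {
  // Before this change, an optimistic read with no active Layers would fall
  // back to the Root, silently registering only non-optimistic dependencies.
  return optimistic ? topOfLayerStack : root;
}
```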
This change means the cached results of optimistic reads can be invalidated by non-optimistic writes, which makes sense because the non-optimistic writes potentially affect data that was previously inherited by optimistic layers and consumed by optimistic reads. I'm not convinced this is absolutely necessary, but it's generally safe to err on the side of over-invalidation.
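A tiny sketch of that invalidation direction (hypothetical helper names, shown only to illustrate the over-invalidation being described):

```ts
// Hypothetical sketch: a non-optimistic write to the Root dirties both
// dependency groups, because optimistic layers inherit Root data.
function writeToRoot(
  id: string,
  dirtyRootGroup: (id: string) => void,
  dirtyOptimisticGroup: (id: string) => void,
) {
  // ...apply the write to the Root's data here...
  dirtyRootGroup(id); // invalidate cached non-optimistic reads of `id`
  dirtyOptimisticGroup(id); // and cached optimistic reads that inherited it
}
```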
This `maybeBroadcastWatch` logic was introduced in #6387 to cope with the possibility that a cache watcher might switch between reading optimistically and non-optimistically over time, depending on the presence/absence of optimistic layers when `broadcastWatches` was called. Now that cache watchers read consistently optimistically or consistently non-optimistically (thanks to the `Stump` technique introduced earlier in this PR), this trick should no longer be necessary.
The goal of `broadcastWatches` is to notify cache watchers of any new data resulting from cache writes. However, it's possible for cache writes to invalidate watched queries in a way that does not result in any differences in the resulting data, so this `watch.lastDiff` caching saves us from triggering a redundant broadcast of exactly the same data again. Note: thanks to #7439, when two result objects are deeply equal to one another, they will automatically also be `===` to each other, which is what allows us to get away with the `!==` check in this code.
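A hedged sketch of the kind of guard being described (assumed shapes such as `Diff` and `Watch`; not the actual `maybeBroadcastWatch` implementation):

```ts
// Sketch: skip the broadcast when the new diff's result is identical to the
// last one delivered. Because deeply-equal results are canonized into the
// same object (#7439), a simple !== reference check suffices here.
interface Diff { result: unknown; complete: boolean; }
interface Watch { callback(diff: Diff): void; }

const lastDiffs = new WeakMap<Watch, Diff>();

function maybeBroadcastWatch(watch: Watch, diff: Diff) {
  const last = lastDiffs.get(watch);
  if (!last || last.result !== diff.result) {
    lastDiffs.set(watch, diff);
    watch.callback(diff);
  }
}
```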
@jcreighton @brainkim Though this PR might interest you, I flagged only @hwillson for review, because this is a pretty gnarly area of the cache (result caching + optimistic updates) and he has context from reviewing previous PRs. If you have any thoughts on this, I'm all 👂s, but no need to puzzle your way through this esoteric PR if you don't feel like it.
```ts
expect(receivedCallbackResults).toEqual([
  received1,
  received1,
  received2,
  received3,
  received4,
  // New results:
  received1,
  received2AllCaps,
  received3,
  received4,
]);
```
I remember thinking it was unfortunate these results were getting delivered multiple times when I was first writing these tests!
Awesome stuff @benjamn - thanks!
```ts
// TODO It's a regrettable accident of history that cache.readQuery is
// non-optimistic by default. Perhaps the default can be swapped to true
// in the next major version of Apollo Client.
optimistic: true,
```
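For reference, a usage sketch of opting in explicitly today (assuming the Apollo Client 3 `readQuery(options, optimistic?)` signature; `MY_QUERY` is a placeholder):

```ts
import { gql, InMemoryCache } from "@apollo/client";

const cache = new InMemoryCache();
const MY_QUERY = gql`
  query {
    me { id name }
  }
`;

// The second argument is the optimistic flag, which currently defaults to
// false; pass true to read through any active optimistic layers.
const data = cache.readQuery({ query: MY_QUERY }, true);
```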
I've added this to our 4.0 planning.
```ts
// reading optimistically (and registering optimistic dependencies) even when
// no optimistic layers are currently active. The stump.group CacheGroup object
// is shared by any/all Layer objects added on top of the Stump.
class Stump extends Layer {
  // ...
}
```
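Judging from the description above, the class needs to uphold two invariants: the Stump can never be removed, and it never stores data of its own. A plausible sketch of that body (assumed method names like `removeLayer` and `merge`; the authoritative version is in this PR's diff):

```ts
class Stump extends Layer {
  // Removing optimistic layers must never remove the Stump itself.
  public removeLayer(layerId: string): EntityStore {
    return this;
  }

  // The Stump never accumulates data of its own: any writes that reach it
  // are forwarded down to the Root.
  public merge(older: string | StoreObject, newer: string | StoreObject) {
    return this.parent.merge(older, newer);
  }
}
```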
Haha I love this name / concept!
I'll be the first person to admit I don't love the complexity added by the `CacheGroup` concept I introduced in #5648, but it remains useful for separating result caching (#3394) dependencies into two distinct groups: optimistic and non-optimistic. This separation is important because it allows cache readers who only care about non-optimistic data to continue to benefit from `InMemoryCache` result caching even while optimistic cache updates are taking place, because the optimistic dependencies that get invalidated by optimistic cache writes are tracked in a separate `CacheGroup` from the group used to track the non-optimistic dependencies registered by non-optimistic reads.
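To illustrate the two-group idea, here is a simplified sketch (a hypothetical `CacheGroup` shape, not Apollo's actual implementation, which builds on the optimism library):

```ts
// Each group tracks which cached results depend on which entity IDs, so
// dirtying a key in one group never invalidates results cached in the other.
class CacheGroup {
  private dependents = new Map<string, Set<string>>(); // entity ID -> result keys

  depend(entityId: string, resultKey: string) {
    let keys = this.dependents.get(entityId);
    if (!keys) this.dependents.set(entityId, (keys = new Set<string>()));
    keys.add(resultKey);
  }

  dirty(entityId: string, invalidate: (resultKey: string) => void) {
    this.dependents.get(entityId)?.forEach(invalidate);
    this.dependents.delete(entityId);
  }
}

// Non-optimistic reads register their dependencies here...
const nonOptimisticGroup = new CacheGroup();
// ...while optimistic reads register in a separate group, so optimistic
// writes can invalidate them without disturbing non-optimistic results.
const optimisticGroup = new CacheGroup();
```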
In the process of writing tests for #7827, I realized this `CacheGroup` story had a missing piece (or two): during optimistic cache reads (where the reader indicates they can accept optimistic data by passing `optimistic: true`), optimistic dependencies were getting registered only when there were any active optimistic `Layer` objects (that is, only during mutations with optimistic responses). When no optimistic updates were in progress (that is, most of the time), an optimistic cache read would register exactly the same (non-optimistic) dependencies as a non-optimistic read, so future optimistic updates would be mistakenly prevented from invalidating the cached results of those optimistic reads.
By adding a permanent `Stump` layer between the `Root` store and any additional `Layer` objects, we can ensure that optimistic reads always register optimistic dependencies, because those reads now always read through the `Stump` layer (which owns the `CacheGroup` shared by all `Layer` objects), even when there are no active `Layer`s currently on top of the `Stump`. Likewise, non-optimistic reads always read directly from the `Root` layer, ignoring the `Stump`.
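Reusing the `CacheGroup` sketch above, the layering might look like this (assumed shapes, not the PR's actual classes):

```ts
// Root <- Stump <- Layer*: optimistic reads enter at the top of the stack,
// so they always pass through the Stump's shared optimistic group.
class Root {
  readonly group = new CacheGroup(); // non-optimistic dependencies
  readonly data = new Map<string, object>();
  lookup(id: string, resultKey: string): object | undefined {
    this.group.depend(id, resultKey);
    return this.data.get(id);
  }
}

class Layer {
  readonly data = new Map<string, object>();
  constructor(readonly parent: Root | Layer, readonly group: CacheGroup) {}
  lookup(id: string, resultKey: string): object | undefined {
    this.group.depend(id, resultKey); // always an optimistic dependency
    return this.data.has(id)
      ? this.data.get(id)
      : this.parent.lookup(id, resultKey);
  }
}

const root = new Root();
// The Stump: a permanent, always-empty Layer whose group is shared (by
// construction) with every Layer later stacked on top of it.
const stump = new Layer(root, new CacheGroup());
const mutationLayer = new Layer(stump, stump.group);
```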
This insight may not simplify the `CacheGroup` logic much, but it means optimistic cache reads will consistently register optimistic dependencies (by reading through the `Stump`), rather than registering them only sometimes (depending on whether optimistic updates are currently in progress, a dangerously dynamic condition).
More importantly, since optimistic cache watchers no longer switch their `CacheGroup` when optimistic updates start or stop happening, optimistic result objects previously read from the cache (when there were no optimistic updates happening) can be immediately reused as optimistic results while optimistic updates are in progress, as long as none of the fields involved have been invalidated. This improvement should resolve the concerns raised by @stephenh in #4141 (comment).

In addition to introducing the concept of a `Stump`, I noticed a few more opportunities for related improvements. For more details, please read the commit messages.