Test that cache evictions propagate to parent queries. #6412
Conversation
When an object is evicted from the cache, common intuition says that any dangling references to that object should be proactively removed from elsewhere in the cache. Thankfully, this intuition is misguided, because a much simpler and more efficient approach to handling dangling references is possible without any new cache features.

As the tests added in this commit demonstrate, the cleanup of dangling references can be postponed until the next time the affected fields are read from the cache, simply by defining a custom read function that performs any necessary cleanup, in whatever way makes sense for the logic of the particular field. This lazy approach is vastly more efficient than scanning the entire cache for dangling references, because it kicks in only for fields you actually care about, the next time you ask for their values.

For example, you might have a list of references that should be filtered to exclude the dangling ones, you might want the dangling references to be nullified in place (without filtering), or you might have a single reference that should default to something else if it becomes invalid. All of these options are matters of application-level logic, so the cache cannot choose the right default strategy in all cases. By default, references are left untouched unless you define custom logic to do something else.

It may actually be unwise or destructive to remove dangling references from the cache, because the evicted data could be written back into the cache at some later time, restoring the validity of those references. Since eviction is not necessarily final, dangling references should be preserved by default after eviction, and filtered out just in time to keep them from causing problems. And even if you ultimately decide to prune the dangling references, proactively finding and removing them is far more work than letting a read function handle them on demand.

This system works because the result caching system (#3394, #5617) tracks hierarchical field dependencies in a way that causes read functions to be reinvoked any time the field in question is affected by updates to the cache, even if the changes are nested many layers deep within the field. It also helps that custom read functions are consistently invoked for a given field any time that field is read from the cache, so you don't have to worry about dangling references leaking out by other means.

I recommend reading through this test not only because it demonstrates important capabilities of InMemoryCache, but also because the mythological subject matter contains some good jokes, IMHO.
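The lazy-cleanup idea described above can be sketched in plain TypeScript, without importing Apollo Client. The miniature normalized store, the `Deity` type, and the `canRead` helper below are illustrative stand-ins for the real InMemoryCache machinery, but the read function has the same shape as a field policy `read` function: filtering happens only when the field is next read, not at eviction time.

```typescript
// A minimal sketch of lazy dangling-reference cleanup, assuming a
// simplified normalized store. Names (Deity, childrenPolicy) are invented.

type Reference = { __ref: string };

// Simulated normalized store: "Deity:Ares" has been evicted, so any
// stored reference to it now dangles.
const store: Record<string, unknown> = {
  "Deity:Zeus": { __typename: "Deity", name: "Zeus" },
  "Deity:Apollo": { __typename: "Deity", name: "Apollo" },
};

// canRead reports whether a reference still points at stored data,
// approximating the helper passed to read functions.
const canRead = (ref: Reference): boolean => ref.__ref in store;

// The kind of custom read function the commit describes: dangling
// references are dropped lazily, the next time the field is read.
const childrenPolicy = {
  read(
    existing: Reference[] | undefined,
    helpers: { canRead: (r: Reference) => boolean },
  ): Reference[] {
    return existing ? existing.filter(helpers.canRead) : [];
  },
};

// The stored list still contains the dangling Ares reference...
const storedChildren: Reference[] = [
  { __ref: "Deity:Apollo" },
  { __ref: "Deity:Ares" },
];

// ...but reading through the policy filters it out on demand.
const visibleChildren = childrenPolicy.read(storedChildren, { canRead });
console.log(visibleChildren); // only the Apollo reference survives
```

Note that `storedChildren` itself is never mutated, which matches the argument above: if the evicted entity is later written back, the preserved reference becomes valid again with no extra work.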
This looks awesome @benjamn, and thanks for the Greek mythology refresher! 😂
// Fun fact: Apollo is the only major Greco-Roman deity whose name
// is the same in both traditions.
TIL
Branch updated from 06dced3 to 018ca84.
The story we're telling in #6412 about using custom read functions to filter out dangling references works best if there's an easy way to check whether a Reference points to existing data in the cache. Although we could have introduced a new options.isValidReference helper function, I think it makes sense to let the existing options.isReference function handle this use case as well.

I also ended up refactoring how the toReference function gets created and passed around, since I want toReference and isReference to remain together as much as possible. I considered making isReference a property of EntityStore (like toReference used to be), but that would not have worked, because the new isReference(ref, true) functionality needs access to the topmost layer of the cache, which only InMemoryCache knows about. Long story short, both isReference and toReference are now methods of InMemoryCache rather than EntityStore.
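The two questions the comment folds into one helper can be illustrated in plain TypeScript. This is a sketch of the described behavior, not Apollo's implementation: `isReference(value)` asks "is this shaped like a Reference?", while `isReference(value, true)` additionally asks "does it resolve against the topmost cache layer?" (the part only InMemoryCache can answer). The `topmostLayer` object below is an invented stand-in for the real EntityStore.

```typescript
// Hypothetical sketch of the overloaded isReference helper described above.

type Reference = { __ref: string };

// Stand-in for the topmost layer of the cache, which only
// InMemoryCache (not EntityStore) has access to.
const topmostLayer: Record<string, object> = {
  "Deity:Apollo": { __typename: "Deity", name: "Apollo" },
};

function isReference(value: unknown, mustBeValid = false): value is Reference {
  // First question: is this value shaped like a Reference at all?
  const shaped =
    typeof value === "object" &&
    value !== null &&
    typeof (value as Reference).__ref === "string";
  if (!shaped) return false;
  // Second question (only when requested): does the reference point
  // at data that actually exists in the topmost layer?
  return mustBeValid ? (value as Reference).__ref in topmostLayer : true;
}

console.log(isReference({ __ref: "Deity:Apollo" }));       // true: shaped
console.log(isReference({ __ref: "Deity:Ares" }));         // true: shaped
console.log(isReference({ __ref: "Deity:Ares" }, true));   // false: dangling
console.log(isReference("Deity:Apollo"));                  // false: not a Reference
```

Reusing one helper for both checks keeps the read-function API small, at the cost of a slightly overloaded second argument.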
Hi, thanks for this. I was struggling with the best way to handle (intentional) dangling references without having the queries fail. I had settled on just using the

Couple of questions:

(2) What is the recommended approach to checking deeply-nested objects for dangling references, ideally without having to create type policies all the way down/up? For example, say I have a

Overall, I do think the lazy approach makes a lot more sense, and I hadn't thought about just filtering in
  }),
})).toBe(true);

// You didn't think we were going to let Apollo be garbage-collected,
@benjamn this test should go straight to the documentation :)
  },
});

const apolloRulerResult = cache.readQuery<{
@benjamn maybe adding another expect(cache.extract()).toEqual() after readQuery, to further show that reading a query with type policies does not write it to the cache?
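The property this suggested assertion would pin down can be sketched without Apollo Client: a read-time filter derives the query result from the stored data but writes nothing back, so the extracted store contents are identical before and after the read. The miniature store, `ROOT_QUERY` layout, and `readDeities` function below are invented stand-ins for `cache.extract()` and `cache.readQuery`.

```typescript
// Hedged sketch: reading through a filtering read function does not
// mutate the underlying store. Names and store shape are illustrative.

type Reference = { __ref: string };

const storeContents: Record<string, unknown> = {
  ROOT_QUERY: {
    deities: [{ __ref: "Deity:Apollo" }, { __ref: "Deity:Ares" }],
  },
  "Deity:Apollo": { __typename: "Deity", name: "Apollo" },
  // "Deity:Ares" was evicted; the stored list still holds his reference.
};

const canRead = (ref: Reference): boolean => ref.__ref in storeContents;

// Read-time filtering: derive a result, leave the store untouched.
function readDeities(): Reference[] {
  const root = storeContents.ROOT_QUERY as { deities: Reference[] };
  return root.deities.filter(canRead);
}

const snapshotBefore = JSON.stringify(storeContents);
const result = readDeities();

// The result excludes the dangling Ares reference...
console.log(result.map((r) => r.__ref)); // only "Deity:Apollo"
// ...but the extracted contents are unchanged by the read.
console.log(JSON.stringify(storeContents) === snapshotBefore); // true
```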
Yes, I would say it's not necessary to update the

cache.modify({
  // As @darkbasic has suggested before, this ID should be easier to obtain.
  id: theCurrentEntityID,
  // This will permanently update elements to exclude any dangling references.
  nameOfTheList(elements, { canRead }) {
    return elements.filter(canRead);
  },
  // Important if you want the removal to be a quiet side-effect.
  broadcast: false,
});

You're right that the filtering should happen in the
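The key difference between the modify-style call quoted above and a read function is persistence: modify rewrites the stored list once and permanently, while a read function recomputes a filtered view on every read. That contrast can be sketched in plain TypeScript; the miniature store and the `List`/`Item` names are invented, and `pruneElements` stands in for the quoted `cache.modify` call.

```typescript
// Contrast sketch, under assumed names: permanent pruning (modify-style)
// mutates the stored data, unlike lazy read-time filtering.

type Reference = { __ref: string };

const store: Record<string, { elements?: Reference[]; [key: string]: unknown }> = {
  "List:1": { elements: [{ __ref: "Item:1" }, { __ref: "Item:2" }] },
  "Item:1": { value: "still cached" },
  // "Item:2" has been evicted, so its reference dangles.
};

const canRead = (ref: Reference): boolean => ref.__ref in store;

// Modify-style: filter once, permanently changing what is stored.
// If Item:2 is ever written back, the pruned reference is gone for good.
function pruneElements(id: string): void {
  const entity = store[id];
  if (entity && entity.elements) {
    entity.elements = entity.elements.filter(canRead);
  }
}

pruneElements("List:1");
console.log(store["List:1"].elements); // only the Item:1 reference remains
```

This is why the thread recommends read-time filtering as the default and reserves the modify-style approach for cases where the removal really should be permanent.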
This commit implements the proposal for automatic filtering of dangling references that I described in #6425 (comment). The filtering functionality demonstrated by #6412 (and updated by #6425) seems useful enough that we might as well make it the default behavior for any array-valued field consumed by a selection set.

Note: the presence of field.selectionSet implies the author of the query expects the elements to be objects (or references) rather than scalar values. A list of scalar values should not be filtered, since it cannot contain dangling references.

By making .filter(canRead) automatic, we free developers from having to worry about manually removing references after evicting entities from the cache. Instead, those dangling references will simply (appear to) disappear from cache results, which is almost always the desired behavior. Fields whose values hold single (non-list) dangling references cannot be automatically filtered in the same way, but you can always write a custom read function for the field, and it's somewhat more likely that a refetch will fix those fields correctly.

In case this automatic filtering is not desired, a custom read function can be used to override it, since read functions run before this filtering happens. This commit includes tests demonstrating several options for replacing/filtering dangling references in non-default ways.
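The decision logic the commit message describes can be sketched as a single function. This is an illustration of the stated rules, not the real implementation: a custom read function runs first and takes precedence; otherwise, arrays consumed by a selection set get `.filter(canRead)` automatically; scalar lists and single references pass through untouched. The `readField` name and its parameters are invented for this sketch.

```typescript
// Sketch of the default filtering rules described above (hypothetical
// readField helper, not Apollo's internal code).

type Reference = { __ref: string };
type ReadFn = (
  existing: unknown,
  helpers: { canRead: (v: unknown) => boolean },
) => unknown;

function readField(
  existing: unknown,
  hasSelectionSet: boolean,
  canRead: (v: unknown) => boolean,
  customRead?: ReadFn,
): unknown {
  // Custom read functions run before the automatic filtering,
  // so they can override it entirely.
  if (customRead) return customRead(existing, { canRead });
  // Automatic .filter(canRead) applies only to arrays consumed by a
  // selection set; a list without one holds scalars, which cannot dangle.
  if (hasSelectionSet && Array.isArray(existing)) {
    return existing.filter(canRead);
  }
  // Single (non-list) values are left alone by the automatic behavior.
  return existing;
}

// Demo: "Deity:Ares" is no longer readable, so it is dropped by default.
const live = new Set(["Deity:Apollo"]);
const canReadRef = (v: unknown): boolean =>
  typeof v === "object" && v !== null && live.has((v as Reference).__ref);

const stored: Reference[] = [{ __ref: "Deity:Apollo" }, { __ref: "Deity:Ares" }];
console.log(readField(stored, true, canReadRef)); // the Ares reference is dropped
```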