
Huge performance regressions in recent versions #4464

Closed
META-DREAMER opened this issue Feb 19, 2019 · 10 comments

Comments

@META-DREAMER
Contributor

META-DREAMER commented Feb 19, 2019

After going through a number of dependency updates in our app, we noticed that the performance of apollo-client is orders of magnitude slower than it used to be. We narrowed it down to v2.4.3 as the culprit. There was another issue opened regarding this, but the "fix" mentioned by @benjamn doesn't really fix the performance problem at all; it's still extremely slow even in the latest version of apollo-client.

Here are some examples to show what I'm talking about. Cache persistence was disabled in all three tests, and the only variable is the apollo-client version:

Using v2.4.2

Using v2.4.3

Using v2.4.11

This issue raised a very valid point as well about using immutable data, and it was rudely dismissed. Adopting something like seamless-immutable or immer is very easy and would yield substantial performance gains by avoiding deep clones (which appear to be the bottleneck). It would not only make apollo-client faster, but also make every real application using apollo-client with React faster, thanks to immutable data and PureComponent optimizations. However, since apollo-client wasn't built from the ground up with this in mind, it might not be a feasible change to make now, which is a shame.
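As a sketch of the update style this comment advocates, here is plain JavaScript with no library; immer and seamless-immutable automate the same structural-sharing pattern, which is what makes PureComponent reference checks cheap:

```javascript
// Minimal sketch of a structural-sharing update: copy only the path that
// changes, share every untouched subtree by reference. No deep clone needed.
const cached = {
  user: { id: 1, name: "Ada" },
  posts: [{ id: 10, title: "Hello" }],
};

// "Update" by creating a new root and a new user object only.
const next = {
  ...cached,
  user: { ...cached.user, name: "Ada Lovelace" },
};

console.log(next !== cached);             // true: new root identity
console.log(next.posts === cached.posts); // true: untouched branch is shared
```

Because `next.posts` keeps its identity, a PureComponent rendering the posts list would skip re-rendering entirely.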

Nevertheless, the newer versions of apollo-client are unusable for our app, and the performance regressions need to be addressed. There should be automated testing in place for performance regressions to prevent these kinds of things from happening in the first place.

@vadistic

vadistic commented Feb 19, 2019

Just adding a brief observation from the betas:

apollo-client: 1.5.0-beta.1
react-apollo: 1.5.0-beta.1
apollo-cache-inmemory: 1.5.0-beta.1

I have an absolutely basic setup (at least on the apollo-client side). Let's take an example type:

  • 20 own fields
  • 5 relations with few non-nested fields each
  • around 100 records of this type in cache

  • client.writeQuery (of only a single object, used for an optimistic response) takes ~400ms
  • client.mutate responses resolve after ~800ms
  • my server response time is only ~80-100ms

(measured with window.performance.now())

It's very likely I'm doing something wrong, but it's a huge bottleneck for me. Please take a look :)

@hwillson
Member

@HammadJ @vadistic Any chance you'd be able to share a small runnable reproduction that shows these performance issues?

@hwillson hwillson added this to the Release 2.5.0 milestone Feb 19, 2019
@peterbraden

To add to this, the issue in #4097 is still a major problem for us. It appears that a cloneDeep happens in the render call tree, which is an issue with large props and frequent renders.

I don't have a minimal reproduction to offer, but the numbers in ticket #4097 are still representative.

@peterbraden

Here's a call tree illustrating the problem:

[Screenshot: profiler call tree showing cloneDeep during render, captured 2019-02-19 13:34]

@benjamn
Member

benjamn commented Feb 26, 2019

What browser/engine version(s) were used to run these profiles? My guess is that the Map used to track previously seen arrays/objects is slow, since that's the primary difference between this version of cloneDeep and the previous one. Are you polyfilling Map in your application? If so, can you share the non-native Map implementation that you're using?
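For context, a cycle-safe deep clone of the kind under discussion typically uses a Map to remember objects it has already copied, so shared references and cycles are cloned once. This is a rough sketch in that spirit, not apollo-utilities' actual implementation; a slow polyfilled Map would make every such clone slower:

```javascript
// Cycle-safe deep clone that tracks previously seen objects in a Map.
function cloneDeep(value, seen = new Map()) {
  if (value === null || typeof value !== "object") return value;
  if (seen.has(value)) return seen.get(value); // reuse: shared refs & cycles
  if (Array.isArray(value)) {
    const copy = [];
    seen.set(value, copy); // register before recursing, so cycles terminate
    value.forEach((item, i) => {
      copy[i] = cloneDeep(item, seen);
    });
    return copy;
  }
  // Plain objects only; Dates, Maps, etc. would need their own cases.
  const copy = {};
  seen.set(value, copy);
  Object.keys(value).forEach((key) => {
    copy[key] = cloneDeep(value[key], seen);
  });
  return copy;
}
```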

My other guess is that cloneDeep is just being called more often than before, due to #4069. I would love to avoid making those defensive copies of lastResult, but that's the only default behavior that guarantees safety—a lesson we learned the hard way (see #3992).

I would be willing to consider a PR that allows disabling the defensive clones of lastResult, but you'd have to opt into that behavior, and it would mean taking full responsibility for not destructively modifying the result objects. Would that tradeoff appeal to this group? If you started encountering bugs like #3992, and reenabling the defensive cloning fixed the bugs (but made things slower), I trust the implications would be obvious?

Mandating immutability is a luxury we simply do not have, as a library used by a diverse audience of developers with different styles and values, who have almost nothing universally in common except GraphQL and JavaScript. However, I believe we can work with you, if you are comfortable (and your team is comfortable) writing your code in a non-destructive style.

@jync

jync commented Feb 27, 2019

Having the option for devs to mandate immutability is, I think, the best of both worlds, so long as the rest of Apollo Client doesn't make any assumptions that require mutability.

@peterbraden

This was in Chrome 72 using the native Map implementation.

@META-DREAMER
Contributor Author

META-DREAMER commented Feb 28, 2019

In our setup, we are running it in react-native and using apollo-cache-hermes for the cache implementation (apollo-cache-inmemory was far too slow for our needs).

It's not only because cloneDeep is being called more often due to #4069; the performance issues are very bad from v2.4.3 through v2.4.11, as can be seen in the GIFs in the original post. v2.4.2 is the only version with reasonable performance, and it's orders of magnitude better than the others. For that reason, I don't think disabling defensive cloning will help the issue; it seems directly related to cloneDeep.

@benjamn
Member

benjamn commented Feb 28, 2019

For that reason, I don't think disabling defensive cloning will help the issue; it seems directly related to cloneDeep.

Please take a moment to appreciate why this sentence doesn't make any sense: defensive cloning is the entire reason cloneDeep is being called. Any performance problems that cloneDeep might have would be eliminated completely if we could somehow stop calling it.

Something else you may not realize: the cloning happens in apollo-client here and here, not in apollo-cache-inmemory. So your use of apollo-cache-hermes is unrelated to the cloneDeep discussion.

I honestly think you're muddying your own understanding of the problem by lumping all performance problems into one bucket, and using imprecise terminology like "very bad" and "orders of magnitude better" without offering any actionable details. Your performance problems are real, but they have multiple distinct components, each of which is worth addressing individually. I don't understand why the existence of other problems should keep you from caring about calling cloneDeep less often. If it's just that you think you have to complain loudly to get your problems addressed, you can save your energy—we hear you!

benjamn added a commit that referenced this issue Feb 28, 2019
…bility.

Part of the plan I outlined in this comment:
#4464 (comment)

If we could trust application code not to modify cache results, we
wouldn't have to save deep snapshots of past results in order to implement
isDifferentFromLastResult correctly (see #4069).

Aside: why doesn't the cache just return defensive copies of all results?
#4031 (comment)

While you might agree that immutability is a worthwhile aspiration, it can
be hard to maintain that discipline across your entire application over
time, especially in a team of multiple developers.

This commit implements a new freezeResults option for the InMemoryCache
constructor, which (when true) causes all cache results to be frozen in
development, so you can more easily detect accidental mutations.

Note: mutating frozen objects only throws in strict mode, whereas it fails
silently in non-strict code. ECMAScript module code automatically runs in
strict mode, and most module transforms add "use strict" at the top of the
generated code, so you're probably already using strict mode everywhere,
though you might want to double-check.

The beauty of this implementation is that it does not need to repeatedly
freeze entire results, because it can shallow-freeze the root of each
subtree when that object is first created.

Thanks to result caching, those frozen objects can be shared between
multiple different result trees without any additional freezing, and the
entire result always ends up deeply frozen.

The freezing happens only in non-production environments, so there is no
runtime cost to using { freezeResults: true } in production. Please keep
this in mind when benchmarking cache performance!
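A sketch of the shallow-freeze-on-creation idea the commit message describes, with the helper name and NODE_ENV check being illustrative rather than Apollo's actual code:

```javascript
"use strict";
// Each object is shallow-frozen once, at creation time. Since its children
// were frozen when they were created, the whole result tree ends up deeply
// frozen without any repeated traversals.
function maybeFreeze(obj) {
  if (process.env.NODE_ENV !== "production") {
    Object.freeze(obj); // shallow freeze; no runtime cost in production
  }
  return obj;
}

const child = maybeFreeze({ id: 1 });
const parent = maybeFreeze({ child });

// In strict mode (as here), mutating a frozen result throws a TypeError
// instead of failing silently, surfacing the accidental mutation.
let threw = false;
try {
  parent.child = null;
} catch (e) {
  threw = e instanceof TypeError;
}
```

Note the registration order: freezing happens per object at creation, so result caching can share frozen subtrees between result trees for free, exactly as the commit message explains.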
benjamn added a commit that referenced this issue Mar 6, 2019
Part of the plan I outlined in this comment:
#4464 (comment)

Passing { assumeImmutableResults: true } to the ApolloClient constructor
should probably always be accompanied by passing { freezeResults: true }
to the InMemoryCache constructor (see #4514), though of course the use of
InMemoryCache is optional, and other cache implementations may not support
that option.
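Putting the two options from these commits together, a client opting into both might be configured like this sketch (API as described in this thread for apollo-client 2.6 / apollo-cache-inmemory 1.6; the link setup is illustrative):

```javascript
const { ApolloClient } = require("apollo-client");
const { InMemoryCache } = require("apollo-cache-inmemory");
const { HttpLink } = require("apollo-link-http");

const client = new ApolloClient({
  // Skip the defensive deep clones of lastResult; the application promises
  // never to mutate result objects destructively.
  assumeImmutableResults: true,
  cache: new InMemoryCache({
    // Freeze cache results in development so accidental mutations throw
    // (in strict mode) instead of corrupting shared state.
    freezeResults: true,
  }),
  link: new HttpLink({ uri: "/graphql" }),
});
```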
@benjamn
Member

benjamn commented May 21, 2019

The functionality described in PRs #4514 and #4543 has been officially released in apollo-client@2.6.0 and apollo-cache-inmemory@1.6.0, which should help address the very real concerns that were raised in this issue. Since this was only a minor version bump, the new functionality is not enabled by default, but we hope to make it the default behavior in Apollo Client 3.0.

See CHANGELOG.md if you're curious about the other changes in this release.

@benjamn benjamn closed this as completed May 21, 2019
@benjamn benjamn modified the milestones: Release 2.5.0, Release 2.6.0 May 21, 2019
@github-actions github-actions bot locked as resolved and limited conversation to collaborators Feb 1, 2023