Optimize write rate in Gcp Firestore #1458
Merged
What this PR does / why we need it: Write throughput to the online store in GCP Firestore used to be below 100 QPS. This was due to issuing one request per entity (get & put) instead of the batch variants (get_multi & put_multi). In addition, we removed the read-before-write step entirely: new data is now written directly without comparing timestamps, so it is up to the user to make sure newer data is not overwritten by older data. In the future we'll add a flag that, when enabled by the user, performs this check on every write, but the performance hit is too high to leave it on by default.
The new functionality benchmarks at ~1.95K writes per second.
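The batching described above can be sketched roughly as follows, assuming the `google-cloud-datastore` Python client (the helper names and batch size here are illustrative, not the PR's actual code; Datastore caps a single commit at 500 entities, so writes are chunked before calling `put_multi`):

```python
# Sketch: replace one RPC per entity with batched writes via put_multi.
# `chunked` and `write_entities` are hypothetical helper names.
from itertools import islice

WRITE_BATCH_SIZE = 500  # Datastore limit: at most 500 entities per commit


def chunked(iterable, size):
    """Yield successive lists of at most `size` items."""
    it = iter(iterable)
    while batch := list(islice(it, size)):
        yield batch


def write_entities(client, entities):
    """Write all entities using put_multi, one RPC per batch.

    Note there is no read-back or timestamp comparison here: entities
    are written unconditionally, matching the PR's behavior.
    """
    for batch in chunked(entities, WRITE_BATCH_SIZE):
        client.put_multi(batch)  # one commit per batch, not per entity
```

Compared with calling `client.put(entity)` in a loop, this amortizes the per-request overhead across up to 500 entities, which is where the bulk of the throughput gain comes from.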
Which issue(s) this PR fixes:
Fixes #
Does this PR introduce a user-facing change?: