Force-pushed 0790725 to 7c09f27.
```diff
@@ -136,6 +136,12 @@ mapbox::sqlite::Statement& OfflineDatabase::getStatement(const char* sql) {
     return *it->second;
 }
 
+void OfflineDatabase::batch(BatchedFn&& fn) {
```
Instead of exposing a generic function like this, can we pass a list/collection of `Resource`/`Response` pairs and handle iteration over those, and transaction management, internally? E.g. have a public `putRegionResources()` function that internally calls `putRegionResource` for every item, wrapping everything in a transaction.
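The shape being suggested here might look like the following sketch. Everything in it is illustrative: `Transaction`, `Resource`, `Response`, and the counters are stand-ins, not the actual mapbox-gl-native types, and the real `putRegionResource` would execute SQLite statements rather than bump a counter.

```cpp
#include <cassert>
#include <cstdint>
#include <string>
#include <utility>
#include <vector>

struct Resource { std::string url; };
struct Response { std::string data; };

// Minimal RAII transaction stand-in: the real one would BEGIN in the
// constructor, COMMIT in commit(), and ROLLBACK in the destructor if
// commit() was never called.
struct Transaction {
    explicit Transaction(int& commits_) : commits(commits_) {}
    void commit() { committed = true; ++commits; }
    ~Transaction() { /* would ROLLBACK here if !committed */ }
    int& commits;
    bool committed = false;
};

class OfflineDatabase {
public:
    // Public entry point: one transaction wraps all per-item puts, so the
    // caller never deals with transaction management directly.
    void putRegionResources(int64_t regionID,
                            const std::vector<std::pair<Resource, Response>>& items) {
        Transaction transaction(commits);
        for (const auto& item : items) {
            putRegionResource(regionID, item.first, item.second);
        }
        transaction.commit();
    }

    int puts = 0;
    int commits = 0;

private:
    // Per-item insert; stubbed out here to count calls.
    void putRegionResource(int64_t, const Resource&, const Response&) { ++puts; }
};
```

The design benefit is that transaction scope becomes an implementation detail of the database class rather than something each call site must get right.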
I just discovered that you can … This takes advantage of the fact that … However, there does not appear to be a way of determining whether such a statement replaced or inserted, which …
@ivovandongen Can you benchmark this approach versus https://github.com/mapbox/mapbox-gl-native/compare/no-transaction? I'm not sure whether small automatic deferred transactions or batched immediate transactions will be faster.
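For context on the deferred-versus-immediate distinction: SQLite's `BEGIN DEFERRED` (the default) takes no lock until the first statement inside the transaction actually reads or writes, while `BEGIN IMMEDIATE` acquires a write lock at `BEGIN` time, so a conflict with another writer surfaces immediately as `SQLITE_BUSY` instead of mid-transaction. A tiny hypothetical helper (not from this PR) mapping a mode to the SQL it would issue:

```cpp
#include <cassert>
#include <string>

// Hypothetical helper, based on SQLite's documented BEGIN semantics:
//  - DEFERRED:  no lock until the first read/write statement executes.
//  - IMMEDIATE: acquires the write lock immediately, failing fast
//    (SQLITE_BUSY) if another writer already holds it.
enum class TransactionMode { Deferred, Immediate };

std::string beginStatement(TransactionMode mode) {
    switch (mode) {
    case TransactionMode::Deferred:  return "BEGIN DEFERRED";
    case TransactionMode::Immediate: return "BEGIN IMMEDIATE";
    }
    return "BEGIN"; // unreachable; plain BEGIN defaults to DEFERRED
}
```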
@jfirebaugh Between 95-100 seconds on the Pixel XL 2. The Galaxy Nexus is still charging, but I don't think it will be much better, I'm afraid.
I'm not clear how 95-100 seconds compares. Can you summarize the results from the three approaches (…
Force-pushed 7c09f27 to eb649c9.
Tested with …
Great -- I think that `offline-download-batches` is the best balance between improved performance and code complexity.
Force-pushed eb649c9 to f565e9c.
\o/
iOS changelog added in #12086.
Fixes #11217
Closes #10252
Alternative to #11235, using batching of resource updates/inserts. Somewhat slower: +/- 30 seconds on the Pixel XL 2, 130 seconds on the Galaxy Nexus. Higher memory consumption; the batch size had to be kept relatively small to avoid running out of memory on old devices.
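The batch-size constraint described above amounts to a chunking loop: split the work into fixed-size batches and wrap each batch in its own transaction, so memory held per transaction stays bounded. A sketch under assumed names (`kBatchSize` and `processBatch` are illustrative, not the PR's actual identifiers):

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

// Illustrative batch size; the PR tuned its value for low-memory devices.
constexpr std::size_t kBatchSize = 64;

// Process `items` in fixed-size chunks, invoking `processBatch` once per
// chunk. In the real code each invocation would run inside one SQLite
// transaction, bounding the number of pending rows held in memory.
template <typename T, typename Fn>
std::size_t processInBatches(const std::vector<T>& items, Fn&& processBatch) {
    std::size_t transactions = 0;
    for (std::size_t i = 0; i < items.size(); i += kBatchSize) {
        const std::size_t end = std::min(items.size(), i + kBatchSize);
        processBatch(items.data() + i, items.data() + end);
        ++transactions;
    }
    return transactions;
}
```

This trades a few extra commits for a memory ceiling, which matches the observed result: slightly slower than one giant transaction, but safe on old devices.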