store/cachekv: why should (*Store).Write sort the keys before deletion yet ordering doesn't seem important? #10487
Comments
Hi, thanks for asking.
Well, we only need to deterministically write new insertions and deletions to the database. Edits of existing fields can be done in any order. So I guess we could track these two things separately.
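A minimal sketch of that idea, tracking key-set changes (insertions and deletions) separately from edits of existing keys, so that only the former are flushed in sorted order. The type and field names here are hypothetical, not the SDK's actual cachekv API:

```go
package cachekvsketch

import "sort"

// parentStore is a minimal stand-in for the parent KV store's write interface.
type parentStore interface {
	Set(key, value []byte)
	Delete(key []byte)
}

// splitCache is a hypothetical cache layer that tracks key-set changes
// (insertions and deletions) separately from edits of existing keys.
type splitCache struct {
	parent    parentStore
	edits     map[string][]byte   // overwrites of keys already present in parent
	inserts   map[string][]byte   // keys not yet present in parent
	deletions map[string]struct{} // keys to remove from parent
}

// Write flushes the cache: only insertions and deletions, which change the
// parent's key set, are applied in sorted (deterministic) order; edits of
// existing keys are flushed in whatever order the map yields.
func (c *splitCache) Write() {
	ordered := make([]string, 0, len(c.inserts)+len(c.deletions))
	for k := range c.inserts {
		ordered = append(ordered, k)
	}
	for k := range c.deletions {
		ordered = append(ordered, k)
	}
	sort.Strings(ordered)

	for _, k := range ordered {
		if _, deleted := c.deletions[k]; deleted {
			c.parent.Delete([]byte(k))
			continue
		}
		c.parent.Set([]byte(k), c.inserts[k])
	}

	for k, v := range c.edits {
		c.parent.Set([]byte(k), v)
	}
}
```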
Also, once we go to SMT, I don't think the parent store in that case requires deterministic writes? (There is no insertion-order dependency in the tree structure.) If so, then perhaps we need to expose whether this is a requirement or not for the parent store's DB.
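One hypothetical way to expose that would be an optional capability interface the parent store can implement; nothing like this exists in the SDK today, it is only an illustration:

```go
package cachekvsketch

// deterministicWriteRequirer is a hypothetical optional interface a parent
// store could implement to advertise whether the cache layer must flush
// writes to it in a deterministic (sorted) order. An IAVL-backed store would
// return true; an SMT-backed store, with no insertion-order dependency,
// might return false.
type deterministicWriteRequirer interface {
	RequiresDeterministicWrites() bool
}

// needsSortedKeys defaults to true so that unknown parent stores stay safe.
func needsSortedKeys(parent interface{}) bool {
	if r, ok := parent.(deterministicWriteRequirer); ok {
		return r.RequiresDeterministicWrites()
	}
	return true
}
```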
Correct! So we need to do that change in
On further thought, I'm also relatively suspicious of this benchmark dominating execution time. It sounds like it's not on an IAVL backend, where the file opens should normally be the bottleneck.
perf: store/cachekv: avoid a map lookup if unnecessary, clear maps fast (#10486)

We can shave off some milliseconds and also cut down some megabytes of RAM by only requesting from the cache if needed, and by using the map clearing idiom, which is recognized by the compiler and turned into fast code. Noticed in profiles from Tharsis' Ethermint per evmos/ethermint#710.

- Before

* Memory profiles

```shell
   19.50MB    19.50MB    134:	store.cache = make(map[string]*cValue)
   18.50MB    18.50MB    135:	store.deleted = make(map[string]struct{})
   15.50MB    15.50MB    136:	store.unsortedCache = make(map[string]struct{})
```

* CPU profiles

```go
         .          .    118:	// TODO: Consider allowing usage of Batch, which would allow the write to
         .          .    119:	// at least happen atomically.
     150ms      150ms    120:	for _, key := range keys {
     220ms      3.64s    121:		cacheValue := store.cache[key]
         .          .    122:
         .          .    123:		switch {
         .      250ms    124:		case store.isDeleted(key):
         .          .    125:			store.parent.Delete([]byte(key))
     210ms      210ms    126:		case cacheValue.value == nil:
         .          .    127:			// Skip, it already doesn't exist in parent.
         .          .    128:		default:
     240ms     27.94s    129:			store.parent.Set([]byte(key), cacheValue.value)
         .          .    130:		}
         .          .    131:	}
...
      10ms       60ms    134:	store.cache = make(map[string]*cValue)
         .       40ms    135:	store.deleted = make(map[string]struct{})
         .       50ms    136:	store.unsortedCache = make(map[string]struct{})
         .      110ms    137:	store.sortedCache = dbm.NewMemDB()
```

- After

* Memory profiles

```shell
         .          .    130:	// Clear the cache using the map clearing idiom
         .          .    131:	// and not allocating fresh objects.
         .          .    132:	// Please see https://bencher.orijtech.com/perfclinic/mapclearing/
         .          .    133:	for key := range store.cache {
         .          .    134:		delete(store.cache, key)
         .          .    135:	}
         .          .    136:	for key := range store.deleted {
         .          .    137:		delete(store.deleted, key)
         .          .    138:	}
         .          .    139:	for key := range store.unsortedCache {
         .          .    140:		delete(store.unsortedCache, key)
         .          .    141:	}
```

* CPU profiles

```shell
         .          .    111:	// TODO: Consider allowing usage of Batch, which would allow the write to
         .          .    112:	// at least happen atomically.
     110ms      110ms    113:	for _, key := range keys {
         .      210ms    114:		if store.isDeleted(key) {
         .          .    115:			// We use []byte(key) instead of conv.UnsafeStrToBytes because we cannot
         .          .    116:			// be sure if the underlying store might do a save with the byteslice or
         .          .    117:			// not. Once we get confirmation that .Delete is guaranteed not to
         .          .    118:			// save the byteslice, then we can assume only a read-only copy is sufficient.
         .          .    119:			store.parent.Delete([]byte(key))
         .          .    120:			continue
         .          .    121:		}
         .          .    122:
      50ms      2.45s    123:		cacheValue := store.cache[key]
     910ms      920ms    124:		if cacheValue.value != nil {
         .          .    125:			// It already exists in the parent, hence delete it.
     120ms     29.56s    126:			store.parent.Set([]byte(key), cacheValue.value)
         .          .    127:		}
         .          .    128:	}
         .          .    129:
         .          .    130:	// Clear the cache using the map clearing idiom
         .          .    131:	// and not allocating fresh objects.
         .          .    132:	// Please see https://bencher.orijtech.com/perfclinic/mapclearing/
         .      210ms    133:	for key := range store.cache {
         .          .    134:		delete(store.cache, key)
         .          .    135:	}
         .       10ms    136:	for key := range store.deleted {
         .          .    137:		delete(store.deleted, key)
         .          .    138:	}
         .      170ms    139:	for key := range store.unsortedCache {
         .          .    140:		delete(store.unsortedCache, key)
         .          .    141:	}
         .      260ms    142:	store.sortedCache = dbm.NewMemDB()
         .       10ms    143:}
```

Fixes #10487
Updates evmos/ethermint#710
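For reference, the "map clearing idiom" the commit message relies on is the pattern the Go compiler recognizes and lowers to a single runtime map-clear, reusing the already-allocated buckets instead of building a fresh map. A standalone illustration (not SDK code):

```go
package main

import "fmt"

func main() {
	cache := map[string][]byte{
		"a": []byte("1"),
		"b": []byte("2"),
	}

	// Ranging over a map and deleting every key is recognized by the Go
	// compiler and compiled down to a single runtime map-clear call, which
	// keeps the existing buckets instead of allocating a brand new map.
	for k := range cache {
		delete(cache, k)
	}

	fmt.Println(len(cache)) // 0, with the map's storage retained for reuse
}
```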
perf: store/cachekv: avoid a map lookup if unnecessary, clear maps fast, avoid keys sort (backport #10486) (#10848): the same change backported, cherry-picked from commit 5399e72, with conflicts in CHANGELOG.md resolved. Co-authored-by: Emmanuel T Odeke <emmanuel@orijtech.com> and Aleksandr Bezobchuk <aleks.bezobchuk@gmail.com>.
At release v0.44.3
Summary of Bug
Brought to my attention by the Tharsis team: store/cachekv.(*Store).Write is quite slow and, for example, has the CPU and memory profiles linked under References below. The biggest CPU-consuming culprit is the keys sort, where the subject code is:
cosmos-sdk/store/cachekv/store.go, lines 104 to 127 at d83a3bf
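The embedded snippet does not render here; roughly, the referenced part of Write looks like the sketch below, reconstructed from this thread's discussion and profile listings rather than copied verbatim from the source at that commit:

```go
// Approximate reconstruction of store/cachekv.(*Store).Write at that commit
// (locking and a few details omitted); treat it as an illustration, not the
// exact code.
func (store *Store) Write() {
	// Collect the dirty keys and sort them so that writes reach the
	// parent store in a deterministic order.
	keys := make([]string, 0, len(store.cache))
	for key, dbValue := range store.cache {
		if dbValue.dirty {
			keys = append(keys, key)
		}
	}
	sort.Strings(keys)

	// TODO: Consider allowing usage of Batch, which would allow the write to
	// at least happen atomically.
	for _, key := range keys {
		cacheValue := store.cache[key]

		switch {
		case store.isDeleted(key):
			store.parent.Delete([]byte(key))
		case cacheValue.value == nil:
			// Skip, it already doesn't exist in parent.
		default:
			store.parent.Set([]byte(key), cacheValue.value)
		}
	}

	// Reset the caches by allocating fresh maps (the part the later
	// optimization replaces with the map clearing idiom).
	store.cache = make(map[string]*cValue)
	store.deleted = make(map[string]struct{})
	store.unsortedCache = make(map[string]struct{})
	store.sortedCache = dbm.NewMemDB()
}
```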
Notice in there that if we encounter a key that was already deleted, we just invoke store.parent.Delete([]byte(key)); it doesn't seem to me like we need to have the keys sorted at all. Removing sort.Strings reduces CPU time from 52.57s down to about 36.59s, along with some optimizations I have started in #10486.

References
CPU Profile
Memory profile
Kindly /cc-ing @fedekunze @ValarDragon @robert-zaremba @marbar3778