Presently, processor insertions are nondeterministically ordered when concurrent processing tasks are enabled, so the processor must be pinned to a single thread to enforce a total ordering of transactions.
In practice this slows down processing, doubling or even tripling the time to sync to chain tip, for example with the Econia Data Service Stack (https://econia.dev/off-chain/dss/data-service-stack).
One relatively simple implementation is to add a reduce step. Currently we create multiple threads (map) that write directly to the database, but if we instead computed the results in each thread and then ran a reduce against the data already in the database, we could achieve ordering.
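A minimal sketch of this map/reduce split, using Python threads in place of the processor's actual task model (`process_batch` and the version-keyed rows are hypothetical stand-ins, not the real processor API):

```python
import concurrent.futures

# Hypothetical stand-in for per-batch processing: each worker computes its
# rows but does NOT write to the database directly.
def process_batch(batch):
    start_version, txns = batch
    return start_version, [f"row-for-txn-{v}" for v in txns]

def run(batches):
    # Map: process batches concurrently; workers may finish in any order.
    with concurrent.futures.ThreadPoolExecutor(max_workers=4) as pool:
        results = list(pool.map(process_batch, batches))
    # Reduce: sort by starting transaction version, then apply writes in
    # order against the database (emulated here with a list).
    db = []
    for _, rows in sorted(results, key=lambda r: r[0]):
        db.extend(rows)
    return db

batches = [(10, [10, 11]), (0, [0, 1]), (5, [5, 6])]
print(run(batches))
# → rows for versions 0, 1, 5, 6, 10, 11, in that order
```

The key property is that nondeterminism is confined to the map phase; the single-threaded reduce is the only writer, so the database always observes writes in transaction-version order.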
Myself
Cache insertions in a Postgres table and only insert into the main tables once colliding threads are complete
Execute each insertion as a subtransaction of an overall Postgres transaction, committing once colliding threads are complete
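The first option (a staging table) can be sketched as follows; SQLite stands in for Postgres here, and the table and column names are illustrative only:

```python
import sqlite3

# SQLite stands in for Postgres; "staging" plays the role of the cache table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE staging (version INTEGER, payload TEXT)")
conn.execute("CREATE TABLE main (version INTEGER, payload TEXT)")

# Each concurrent thread writes its results into the staging table as soon
# as it finishes, in arbitrary (nondeterministic) order.
conn.executemany(
    "INSERT INTO staging VALUES (?, ?)",
    [(5, "e"), (0, "a"), (3, "c")],
)

# Once all colliding threads are complete, move the rows into the main
# table in transaction-version order, then clear the cache.
conn.execute("INSERT INTO main SELECT * FROM staging ORDER BY version")
conn.execute("DELETE FROM staging")
conn.commit()

print(conn.execute("SELECT version FROM main ORDER BY rowid").fetchall())
```

The second option would replace the staging table with `SAVEPOINT`-based subtransactions inside one enclosing Postgres transaction, releasing them in version order before the final commit; the ordering guarantee is the same, but no intermediate rows are ever visible outside the transaction.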
Offline notes and suggestions: @banool, @bowenyang007, myself.
cc @CRBl69