26599: kv: pipeline transactional writes r=nvanbenschoten a=nvanbenschoten

This change pipelines transactional writes using the approach presented in #16026 (comment).

## Approach

The change introduces transactional pipelining by creating a new `txnReqInterceptor` called `txnPipeliner` that hooks into the `TxnCoordSender`. `txnPipeliner` pipelines transactional writes by using asynchronous consensus. The interceptor tracks all writes that have been asynchronously proposed through Raft and ensures that all interfering requests chain on to them by first proving that the async writes succeeded. The interceptor also ensures that, when committing, all outstanding writes that have been proposed but not yet proven to have succeeded are first checked.

Chaining on to in-flight async writes is important to the txnPipeliner for two main reasons:

1. Requests proposed to Raft will not necessarily succeed. For any number of reasons, a request may make it through Raft and be discarded, or may fail to ever be replicated at all. A transaction must therefore check that all async writes succeeded before committing. However, when these proposals do fail, their errors aren't particularly interesting to a transaction. This is because these errors are not deterministic Transaction-domain errors that a transaction must adhere to for correctness, such as conditional-put errors or other symptoms of constraint violations. Those kinds of errors are all discovered during write *evaluation*, which an async write performs synchronously before consensus. Any error during consensus is outside of the Transaction-domain and can always trigger a transaction retry.

2. Transport layers beneath the txnPipeliner do not provide strong enough ordering guarantees between concurrent requests in the same transaction to avoid needing explicit chaining. For instance, DistSender uses unary gRPC requests instead of gRPC streams, so it can't natively expose strong ordering guarantees. Perhaps more importantly, even once a command has entered the command queue and evaluated on a Replica, it is not guaranteed to be applied before interfering commands. This is because the command may be retried outside of the serialization of the command queue for any number of reasons, such as leaseholder changes. When the command re-enters the command queue, interfering commands may jump ahead of it. To combat this, the txnPipeliner uses chaining to throw an error when these re-orderings would have affected the order in which transactional requests evaluate (see the sketch after this list).
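Below is a minimal, self-contained Go sketch of the chaining idea, not the actual implementation: all names here (`Request`, `txnPipeliner.prove`, etc.) are illustrative stand-ins for the real `TxnCoordSender` interceptor and request types. The pipeliner records each asynchronously proposed write, forces any interfering request to first prove that the earlier write succeeded, and proves all remaining outstanding writes before a commit may proceed.

```go
package main

import "fmt"

// Request is a simplified stand-in for a transactional request.
type Request struct {
	Key      string
	IsWrite  bool
	IsCommit bool
}

// txnPipeliner tracks writes that were proposed through consensus
// asynchronously but have not yet been proven to have succeeded.
type txnPipeliner struct {
	outstanding map[string]bool
	// prove checks that the async write at key succeeded; a
	// hypothetical hook standing in for the real proving mechanism.
	prove func(key string) error
}

// Send mirrors the interceptor pattern: chain interfering requests on
// to outstanding writes, then let the request continue down the stack.
func (p *txnPipeliner) Send(req Request) error {
	if req.IsCommit {
		// Before committing, prove every write not yet proven.
		for key := range p.outstanding {
			if err := p.prove(key); err != nil {
				return fmt.Errorf("async write to %q failed: %w", key, err)
			}
			delete(p.outstanding, key)
		}
		return nil
	}
	if p.outstanding[req.Key] {
		// Interfering request: prove the earlier write first so that
		// re-orderings beneath us can't change evaluation order.
		if err := p.prove(req.Key); err != nil {
			return err
		}
		delete(p.outstanding, req.Key)
	}
	if req.IsWrite {
		// Propose asynchronously (elided) and record the in-flight write.
		p.outstanding[req.Key] = true
	}
	return nil
}

func main() {
	p := &txnPipeliner{
		outstanding: map[string]bool{},
		prove:       func(key string) error { return nil }, // assume success
	}
	_ = p.Send(Request{Key: "a", IsWrite: true}) // async write, unproven
	_ = p.Send(Request{Key: "a"})                // interfering: chains on "a"
	fmt.Println(p.Send(Request{IsCommit: true})) // proves remaining writes
}
```

Note how the error path lines up with point 1 above: a failed proof is never a Transaction-domain error, so it can always be surfaced as a transaction retry.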
## Testing/Benchmarking

This change will require a lot of testing (warning: very little unit testing is included so far) and benchmarking. The first batch of benchmarking has been very promising in terms of reducing transactional latency, but it hasn't shown much effect on transactional throughput. This is somewhat expected: the approach to transactional pipelining trades a transaction performing slightly more work (in parallel) for a significantly smaller contention footprint.

The first sanity-check benchmark was to run the change against a geo-distributed cluster running TPCC. A node was located in each of us-east1-b, us-west1-b, and europe-west2-b. The load generator was located in us-east1-b and all leases were moved to this zone as well. The test first limited all operations to newOrders, but soon expanded to full TPCC after seeing desirable results.

#### Without txn pipelining

```
_elapsed_______tpmC____efc__avg(ms)__p50(ms)__p90(ms)__p95(ms)__p99(ms)_pMax(ms)
  180.0s      198.3 154.2%    916.5    486.5   2818.6   4160.7   5637.1   5905.6
```

#### With txn pipelining

```
_elapsed_______tpmC____efc__avg(ms)__p50(ms)__p90(ms)__p95(ms)__p99(ms)_pMax(ms)
  180.0s      200.0 155.5%    388.2    184.5   1208.0   1946.2   2684.4   2818.6
```

One big caveat is that all foreign key checks needed to be disabled for this experiment to show much of an improvement. The reason for this is subtle. TPCC's schema is structured as a large hierarchy of tables, and most of its transactions insert down a lineage of tables. With FKs enabled, we see a query to a table immediately after it is inserted into, when its child table is inserted into next. This effectively causes a pipeline stall immediately after each write, which eliminated a lot of the benefit that we expect to see here.

Luckily, this is a problem that can be avoided by being a little smarter about foreign keys at the SQL level. We should not be performing these scans over all possible column families for a row just to check for existence (resulting kv count > 0) if we know that we just inserted into that row. We need some kind of row-level existence cache in SQL to avoid these scans (a rough sketch of the idea follows). This will be generally useful, but will be especially useful to avoid these pipeline stalls.
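No such cache exists yet; purely to illustrate the idea, here is a hypothetical sketch in which a transaction records the rows it has inserted and consults that record before falling back to an FK existence scan. All names here (`existenceCache`, `noteInsert`, `fkCheck`) are made up for the example.

```go
package main

import "fmt"

// rowKey identifies a row by table and primary-key value (simplified).
type rowKey struct {
	table string
	pk    string
}

// existenceCache remembers rows the current transaction has written.
type existenceCache map[rowKey]bool

// noteInsert records that the transaction inserted the given row.
func (c existenceCache) noteInsert(table, pk string) {
	c[rowKey{table, pk}] = true
}

// fkCheck reports whether the referenced row exists, consulting the
// cache before falling back to a KV scan (represented by scan).
func (c existenceCache) fkCheck(table, pk string, scan func() bool) bool {
	if c[rowKey{table, pk}] {
		return true // cache hit: no scan, no pipeline stall
	}
	return scan()
}

func main() {
	c := existenceCache{}
	c.noteInsert("warehouse", "w1")
	// The child-table insert's FK check hits the cache and skips the
	// scan that would otherwise stall the write pipeline.
	ok := c.fkCheck("warehouse", "w1", func() bool { return false })
	fmt.Println("FK satisfied:", ok)
}
```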
Co-authored-by: Nathan VanBenschoten <nvanbenschoten@gmail.com>