TiKV (pronounced /'taɪkeɪvi:/, "tai-K-V"; etymology: titanium) is a distributed Key-Value database based on the designs of Google Spanner and HBase, but much simpler, with no dependency on any distributed file system. TiKV implements the Raft consensus algorithm in Rust and stores the consensus state in RocksDB, which guarantees data consistency. The Placement Driver, introduced to implement sharding, enables automatic data migration. The transaction model is similar to Google's Percolator, with some performance improvements. TiKV also provides snapshot isolation (SI), snapshot isolation with lock (the SQL equivalent of `SELECT ... FOR UPDATE`), and externally consistent reads and writes in distributed transactions. See the TiKV-server software stack below for more information.

TiKV has the following primary features:
- **Geo-Replication**: TiKV uses Raft and the Placement Driver to support Geo-Replication.
- **Horizontal scalability**: With the Placement Driver and carefully designed Raft groups, TiKV excels in horizontal scalability and can easily scale to 100+ TBs of data.
- **Consistent distributed transactions**: Similar to Google's Spanner, TiKV supports externally consistent distributed transactions (see the sketches after this list).
- **Coprocessor support**: Similar to HBase, TiKV implements a coprocessor framework to support distributed computing.
- **Working with TiDB**: Thanks to internal optimizations, TiKV and TiDB work together as a database system with horizontal scalability, externally consistent transactions, and support for both traditional RDBMS and NoSQL workloads.
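To make the Key-Value and transaction interfaces concrete, here are two minimal sketches. Both assume a running cluster whose Placement Driver listens on `127.0.0.1:2379` and use the community `tikv-client` Rust crate; the crate, the endpoint, and the keys are illustrative assumptions, not part of this README.

```rust
// A minimal raw Key-Value sketch, assuming the `tikv-client` crate
// and a PD endpoint at 127.0.0.1:2379 (illustrative assumptions).
use tikv_client::RawClient;

#[tokio::main]
async fn main() -> Result<(), tikv_client::Error> {
    // Connect through PD, which routes requests to Region leaders.
    let client = RawClient::new(vec!["127.0.0.1:2379"]).await?;

    // Put and get a single key; keys and values are byte strings.
    client.put("company".to_owned(), "PingCAP".to_owned()).await?;
    let value = client.get("company".to_owned()).await?;
    println!("company = {:?}", value);

    Ok(())
}
```

The transactional interface follows the Percolator-style model described above: reads see a consistent snapshot, and buffered writes are committed atomically.

```rust
// A hedged sketch of a distributed transaction, under the same
// assumptions (the `tikv-client` crate, a local PD endpoint).
use tikv_client::TransactionClient;

#[tokio::main]
async fn main() -> Result<(), tikv_client::Error> {
    let client = TransactionClient::new(vec!["127.0.0.1:2379"]).await?;

    // Reads inside the transaction see one consistent snapshot of
    // the database (snapshot isolation).
    let mut txn = client.begin_optimistic().await?;
    let balance = txn.get("balance".to_owned()).await?;
    println!("balance at snapshot: {:?}", balance);

    // Writes are buffered locally and committed atomically via a
    // Percolator-style two-phase commit.
    txn.put("balance".to_owned(), "100".to_owned()).await?;
    txn.commit().await?;

    Ok(())
}
```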
Rust nightly is required. TiKV is currently tested mainly with `rust-nightly-2018-01-12`; however, we would like to track nightly, so please report any new breakage.
```bash
# Get rustup from rustup.rs, then in your `tikv` folder:
rustup override set nightly-2018-01-12
cargo +nightly-2018-01-12 install rustfmt-nightly --version 0.3.4
```
The following figure represents the TiKV server software stack.
- Placement Driver: Placement Driver (PD) is the cluster manager of TiKV. PD periodically checks replication constraints and balances load and data automatically.
- Store: Each Store contains a RocksDB instance and stores data on the local disk.
- Region: A Region is the basic unit of Key-Value data movement. Each Region is replicated to multiple Nodes, and these replicas form a Raft group.
- Node: A physical node in the cluster. Within each Node there are one or more Stores, and within each Store there are many Regions.
When a Node starts, the metadata of the Node, its Stores, and its Regions is registered with PD. The status of each Region and Store is reported to PD regularly.
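The relationships above can be summarized in a toy data model. The sketch below is a simplified illustration only; all names and fields are invented for this example and do not mirror TiKV's actual internal types.

```rust
// A toy model of the software stack described above. All names and
// fields are invented for illustration; they are not TiKV's real
// internal types.

/// The basic unit of Key-Value data movement: a contiguous key range.
/// Each Region is replicated to multiple Nodes; the replicas form one
/// Raft group.
struct Region {
    id: u64,
    start_key: Vec<u8>,
    end_key: Vec<u8>,
    /// IDs of the Stores holding a replica (one Raft group).
    replicas: Vec<u64>,
}

/// A Store wraps one RocksDB instance on a local disk and hosts many
/// Region replicas.
struct Store {
    id: u64,
    regions: Vec<Region>,
}

/// A physical node in the cluster, containing one or more Stores.
/// On startup its metadata is registered with the Placement Driver,
/// and Store/Region status is reported to PD regularly.
struct Node {
    id: u64,
    stores: Vec<Store>,
}

fn main() {
    let node = Node {
        id: 1,
        stores: vec![Store {
            id: 10,
            regions: vec![Region {
                id: 100,
                start_key: b"a".to_vec(),
                end_key: b"m".to_vec(),
                replicas: vec![10, 20, 30],
            }],
        }],
    };
    println!("node {} has {} store(s)", node.id, node.stores.len());
}
```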
TiKV is a component of the TiDB project; you must build and run it together with TiDB and PD.
If you want to use TiDB in production, see the deployment build guide to build the TiDB project first.
If you want to dive into TiDB, see the development build guide on how to build the TiDB project.
- Read the deployment doc on how to run the TiDB project.
- Read the configuration explanations.
- Use Docker to run the TiDB project.
See CONTRIBUTING for details on submitting patches and the contribution workflow.
TiKV is under the Apache 2.0 license. See the LICENSE file for details.
- Thanks etcd for providing some great open source tools.
- Thanks RocksDB for their powerful storage engine.
- Thanks mio for providing a metal I/O library for Rust.
- Thanks rust-clippy. We do love the great project.