
What are Distributed Hash Tables?

Adrien Béraud edited this page Mar 14, 2018 · 21 revisions

OpenDHT implements a Distributed Hash Table (DHT) of the Kademlia kind.

A DHT can be viewed as a dictionary service distributed over a network: it provides access to a common, shared key->value data store spread across the participating nodes, without any central server, while remaining efficient and scalable as the network grows.

From a user perspective, a DHT essentially provides a map interface with two main operations: put(key, value) and get(key). Get retrieves the values stored at a given key, while put (often called announce) stores a value on the network. Note that many values can be stored under the same key.

The Kademlia DHT algorithm requires contacting only O(log N) nodes for a get operation, N being the number of nodes in the network. This property makes DHTs very scalable, as demonstrated, for instance, by the mainline BitTorrent DHT, which runs with tens of millions of nodes: even at that scale, a lookup only needs to contact on the order of log2(10^7) ≈ 23 nodes.

Underlying principle

Every node in the network is identified by a randomly chosen ID (of 160 bits for OpenDHT). Every value stored on the network is identified by a randomly distributed key of the same size.

Kademlia DHTs define XOR as the distance operator between keys and node IDs. That means the arbitrarily defined "distance" d between a node with ID Na and a value with key Kb can be computed as d = Na XOR Kb.

Every node has partial knowledge of the network, maintained in a routing table of known node IDs and their IP addresses. The routing table, a binary tree of node IDs, is divided into buckets containing a maximum of 8 nodes. Only the bucket containing the node's own ID is split to fit more nodes, so the routing table contains a number of nodes roughly proportional to log(N), concentrated "around" the node's own ID according to the XOR metric.

Every node also stores some of the data. To find the nodes globally responsible for storing a key H, a node contacts the nodes in its routing table that are closest to H according to the XOR metric. Requests include the key H, and answers include the values for H (if any) plus the IDs and IP addresses of the nodes closest to H in the answering node's routing table. These newly learned nodes are then contacted in the same way, and so on, until the 8 closest known nodes have all answered.

The same iterative process is used when a new node joins the network: the node performs the lookup algorithm with its own node ID as the key to find its neighbours. In doing so, the contacted neighbours also become aware of the new node and its IP address through the requests it sends them.

This allows every node to read and write values in the distributed data store by sending messages to the same 8 nodes, those whose IDs are closest to the target key: because readers and writers converge on the same nodes, a get for a key finds the values that puts stored there.