# Raft library

Raft is a protocol with which a cluster of nodes can maintain a replicated state machine.
The state machine is kept in sync through the use of a replicated log.
For more details on Raft, see "In Search of an Understandable Consensus Algorithm"
(https://ramcloud.stanford.edu/raft.pdf) by Diego Ongaro and John Ousterhout.

This Raft library is stable and feature complete. As of 2016, it is **the most widely used** Raft library in production, serving tens of thousands of clusters each day. It powers distributed systems such as etcd, Kubernetes, Docker Swarm, Cloud Foundry Diego, CockroachDB, TiDB, Project Calico, Flannel, and more.

Most Raft implementations have a monolithic design, including storage handling, messaging serialization, and network transport. This library instead follows a minimalistic design philosophy by only implementing the core raft algorithm. This minimalism buys flexibility, determinism, and performance.

To keep the codebase small as well as provide flexibility, the library only implements the Raft algorithm; both network and disk IO are left to the user. Library users must implement their own transportation layer for message passing between Raft peers over the wire. Similarly, users must implement their own storage layer to persist the Raft log and state.

In order to easily test the Raft library, its behavior should be deterministic. To achieve this determinism, the library models Raft as a state machine. The state machine takes a `Message` as input. A message can either be a local timer update or a network message sent from a remote peer. The state machine's output is a 3-tuple `{[]Messages, []LogEntries, NextState}` consisting of an array of `Messages`, `log entries`, and `Raft state changes`. For state machines with the same state, the same state machine input should always generate the same state machine output.

A simple example application, _raftexample_, is also available to help illustrate how to use this package in practice: https://github.com/coreos/etcd/tree/master/contrib/raftexample
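To make the deterministic contract described above concrete, the 3-tuple output can be pictured as a plain Go struct. This is only an illustrative sketch, not a type exposed by this package; in the real API the same data is surfaced through the `Ready` struct covered in the Usage section below.

```go
  // Illustrative sketch only; not part of the package's API.
  // Given the node's current state and one input raftpb.Message (a local tick
  // or a message from a peer), the state machine deterministically produces:
  type output struct {
    Messages []raftpb.Message // messages to send to remote peers
    Entries  []raftpb.Entry   // log entries to append to stable storage
    // NextState: the resulting Raft state changes (e.g. term, vote, commit index).
  }
```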
# Features

This raft implementation is a full-featured implementation of the Raft protocol. Features include:

- Leader election
- Log replication
- Log compaction
- Membership changes
- Leadership transfer extension
- Efficient linearizable read-only queries served by both the leader and followers
    - the leader checks with a quorum and bypasses the Raft log before processing read-only queries
    - followers ask the leader to get a safe read index before processing read-only queries
- More efficient lease-based linearizable read-only queries served by both the leader and followers
    - the leader bypasses the Raft log and processes read-only queries locally
    - followers ask the leader to get a safe read index before processing read-only queries
    - this approach relies on the clocks of all the machines in the raft group

This raft implementation also includes a few optional enhancements:

- Optimistic pipelining to reduce log replication latency
- Flow control for log replication
- Batching Raft messages to reduce synchronized network I/O calls
- Batching log entries to reduce synchronized disk I/O
- Writing to the leader's disk in parallel
- Internal proposal redirection from followers to the leader
- Automatic stepping down when the leader loses quorum

## Notable Users

- [cockroachdb](https://github.com/cockroachdb/cockroach) A Scalable, Survivable, Strongly-Consistent SQL Database
- [dgraph](https://github.com/dgraph-io/dgraph) A Scalable, Distributed, Low Latency, High Throughput Graph Database
- [etcd](https://github.com/coreos/etcd) A distributed reliable key-value store
- [tikv](https://github.com/pingcap/tikv) A distributed transactional key-value database powered by Rust and Raft
- [swarmkit](https://github.com/docker/swarmkit) A toolkit for orchestrating distributed systems at any scale
- [chain core](https://github.com/chain/chain) Software for operating permissioned, multi-asset blockchain networks

## Usage

The primary object in raft is a Node. Either start a Node from scratch using raft.StartNode or start a Node from some initial state using raft.RestartNode.

To start a three-node cluster:
```go
  storage := raft.NewMemoryStorage()
  c := &raft.Config{
    ID:              0x01,
    ElectionTick:    10,
    HeartbeatTick:   1,
    Storage:         storage,
    MaxSizePerMsg:   4096,
    MaxInflightMsgs: 256,
  }
  // Set peer list to the other nodes in the cluster.
  // Note that they need to be started separately as well.
  n := raft.StartNode(c, []raft.Peer{{ID: 0x02}, {ID: 0x03}})
```

To start a single-node cluster:
```go
  // Create storage and config as shown above.
  // Set peer list to itself, so this node can become the leader of this single-node cluster.
  peers := []raft.Peer{{ID: 0x01}}
  n := raft.StartNode(c, peers)
```

To allow a new node to join this cluster, do not pass in any peers. First, add the node to the existing cluster by calling `ProposeConfChange` on any existing node inside the cluster (a sketch of this proposal follows the restart example below). Then, start the node with an empty peer list, like so:
```go
  // Create storage and config as shown above.
  n := raft.StartNode(c, nil)
```

To restart a node from previous state:
```go
  storage := raft.NewMemoryStorage()

  // Recover the in-memory storage from persistent snapshot, state and entries.
  storage.ApplySnapshot(snapshot)
  storage.SetHardState(state)
  storage.Append(entries)

  c := &raft.Config{
    ID:              0x01,
    ElectionTick:    10,
    HeartbeatTick:   1,
    Storage:         storage,
    MaxSizePerMsg:   4096,
    MaxInflightMsgs: 256,
  }

  // Restart raft without peer information.
  // Peer information is already included in the storage.
  n := raft.RestartNode(c)
```
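As referenced in the join-a-cluster example above, the membership change itself is proposed from a node that is already part of the cluster. A minimal sketch of that step, where the joining node's ID of 0x04 is purely illustrative:

```go
  // Run on any existing member; 0x04 is an illustrative ID for the joining node.
  cc := raftpb.ConfChange{
    Type:   raftpb.ConfChangeAddNode,
    NodeID: 0x04,
  }
  n.ProposeConfChange(ctx, cc)
```

Once this entry is committed and applied (see ApplyConfChange below), the new node can be started with an empty peer list as shown above.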
After creating a Node, the user has a few responsibilities:

First, read from the Node.Ready() channel and process the updates it contains. These steps may be performed in parallel, except as noted in step 2.

1. Write Entries, HardState and Snapshot to persistent storage in order, i.e. Entries first, then HardState and Snapshot if they are not empty. If persistent storage supports atomic writes then all of them can be written together. Note that when writing an Entry with Index i, any previously-persisted entries with Index >= i must be discarded.

2. Send all Messages to the nodes named in the To field. It is important that no messages be sent until the latest HardState has been persisted to disk, and all Entries written by any previous Ready batch (Messages may be sent while entries from the same batch are being persisted). To reduce the I/O latency, an optimization can be applied to make the leader write to disk in parallel with its followers (as explained in section 10.2.1 of the Raft thesis). If any Message has type MsgSnap, call Node.ReportSnapshot() after it has been sent (these messages may be large). Note: Marshalling messages is not thread-safe; it is important to make sure that no new entries are persisted while marshalling. The easiest way to achieve this is to serialise the messages directly inside the main raft loop.

3. Apply Snapshot (if any) and CommittedEntries to the state machine. If any committed Entry has Type EntryConfChange, call Node.ApplyConfChange() to apply it to the node. The configuration change may be cancelled at this point by setting the NodeID field to zero before calling ApplyConfChange (but ApplyConfChange must be called one way or the other, and the decision to cancel must be based solely on the state machine and not external information such as the observed health of the node).

4. Call Node.Advance() to signal readiness for the next batch of updates. This may be done at any time after step 1, although all updates must be processed in the order they were returned by Ready.

Second, all persisted log entries must be made available via an implementation of the Storage interface. The provided MemoryStorage type can be used for this (if repopulating its state upon a restart), or a custom disk-backed implementation can be supplied.

Third, after receiving a message from another node, pass it to Node.Step:

```go
  func recvRaftRPC(ctx context.Context, m raftpb.Message) {
    n.Step(ctx, m)
  }
```

Finally, call `Node.Tick()` at regular intervals (probably via a `time.Ticker`). Raft has two important timeouts: heartbeat and the election timeout. However, internally to the raft package time is represented by an abstract "tick".
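The handling loop shown next relies on application-defined helpers such as `saveToStorage` and `send`. As one possible sketch, `saveToStorage` might simply mirror the Ready data into the `MemoryStorage` created in the examples above (error handling elided); note that a real application must also write this data to durable storage, honoring the ordering rules from step 1:

```go
  // Illustrative only: mirror a Ready batch into the in-memory MemoryStorage.
  func saveToStorage(hardState raftpb.HardState, entries []raftpb.Entry, snapshot raftpb.Snapshot) {
    if !raft.IsEmptySnap(snapshot) {
      // Apply the snapshot before appending the entries that follow it.
      storage.ApplySnapshot(snapshot)
    }
    if !raft.IsEmptyHardState(hardState) {
      storage.SetHardState(hardState)
    }
    storage.Append(entries)
  }
```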
The total state machine handling loop will look something like this:

```go
  for {
    select {
    case <-s.Ticker:
      n.Tick()
    case rd := <-s.Node.Ready():
      saveToStorage(rd.HardState, rd.Entries, rd.Snapshot)
      send(rd.Messages)
      if !raft.IsEmptySnap(rd.Snapshot) {
        processSnapshot(rd.Snapshot)
      }
      for _, entry := range rd.CommittedEntries {
        process(entry)
        if entry.Type == raftpb.EntryConfChange {
          var cc raftpb.ConfChange
          cc.Unmarshal(entry.Data)
          s.Node.ApplyConfChange(cc)
        }
      }
      s.Node.Advance()
    case <-s.done:
      return
    }
  }
```

To propose changes to the state machine, serialize the application data into a byte slice and call:

```go
  n.Propose(ctx, data)
```

If the proposal is committed, data will appear in committed entries with type raftpb.EntryNormal. There is no guarantee that a proposed command will be committed; the command may have to be reproposed after a timeout.

To add or remove a node in a cluster, build a ConfChange struct 'cc' and call:

```go
  n.ProposeConfChange(ctx, cc)
```

After the configuration change is committed, some committed entry with type raftpb.EntryConfChange will be returned. This must be applied to the node through:

```go
  var cc raftpb.ConfChange
  cc.Unmarshal(data)
  n.ApplyConfChange(cc)
```

Note: An ID represents a unique node in a cluster for all time. A given ID MUST be used only once even if the old node has been removed. This means that, for example, IP addresses make poor node IDs since they may be reused. Node IDs must be non-zero.

## Implementation notes

This implementation is up to date with the final Raft thesis (https://ramcloud.stanford.edu/~ongaro/thesis.pdf), although this implementation of the membership change protocol differs somewhat from that described in chapter 4. The key invariant that membership changes happen one node at a time is preserved, but in our implementation the membership change takes effect when its entry is applied, not when it is added to the log (so the entry is committed under the old membership instead of the new). This is equivalent in terms of safety, since the old and new configurations are guaranteed to overlap.

To ensure there is no attempt to commit two membership changes at once by matching log positions (which would be unsafe since they should have different quorum requirements), any proposed membership change is simply disallowed while any uncommitted change appears in the leader's log.

This approach introduces a problem when removing a member from a two-member cluster: if one of the members dies before the other one receives the commit of the confchange entry, then the member cannot be removed any more since the cluster cannot make progress. For this reason it is highly recommended to use three or more nodes in every cluster.
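For reference, removing a member goes through the same `ProposeConfChange` path shown in the Usage section. A minimal sketch, where the removed node's ID of 0x02 is purely illustrative; per the note above, make sure the remaining members can still form a quorum:

```go
  // Run on any member of the cluster; 0x02 is an illustrative ID of the node to remove.
  cc := raftpb.ConfChange{
    Type:   raftpb.ConfChangeRemoveNode,
    NodeID: 0x02,
  }
  n.ProposeConfChange(ctx, cc)
```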