# raftexample

raftexample is an example usage of etcd's [raft library](../../raft). It provides a simple REST API for a key-value store cluster backed by the [Raft][raft] consensus algorithm.

[raft]: http://raftconsensus.github.io/

## Getting Started

### Building raftexample

Clone `etcd` to `<directory>/src/go.etcd.io/etcd`:

```sh
export GOPATH=<directory>
cd <directory>/src/go.etcd.io/etcd/contrib/raftexample
go build -o raftexample
```

### Running single node raftexample

First start a single-member cluster of raftexample:

```sh
raftexample --id 1 --cluster http://127.0.0.1:12379 --port 12380
```

Each raftexample process maintains a single raft instance and a key-value server.
The process takes its comma-separated list of raft peers (--cluster), its raft ID as an index into that peer list (--id), and the port for its HTTP key-value server (--port) from the command line.
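
As a rough illustration (not the exact contents of `main.go`), wiring these options up with Go's standard `flag` package looks something like:

```go
package main

import (
	"flag"
	"fmt"
	"strings"
)

func main() {
	// Flag names mirror the options documented above; defaults are illustrative.
	cluster := flag.String("cluster", "http://127.0.0.1:12379", "comma-separated cluster peers")
	id := flag.Int("id", 1, "node ID (1-based index into the peer list)")
	kvport := flag.Int("port", 12380, "key-value server port")
	join := flag.Bool("join", false, "join an existing cluster")
	flag.Parse()

	peers := strings.Split(*cluster, ",")
	fmt.Printf("node %d of %d, key-value server on :%d (join=%v)\n",
		*id, len(peers), *kvport, *join)
}
```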

Next, store a value ("hello") to a key ("my-key"):

```sh
curl -L http://127.0.0.1:12380/my-key -XPUT -d hello
```

Finally, retrieve the stored key:

```sh
curl -L http://127.0.0.1:12380/my-key
```

### Running a local cluster

First install [goreman](https://github.com/mattn/goreman), which manages Procfile-based applications.

The [Procfile script](./Procfile) will set up a local example cluster. Start it with:

```sh
goreman start
```

This will bring up three raftexample instances.
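
The Procfile entries follow the single-node invocation above; a sketch of what the three members look like, with ports matching the commands shown in the reconfiguration section below (see [./Procfile](./Procfile) for the actual contents):

```
raftexample1: ./raftexample --id 1 --cluster http://127.0.0.1:12379,http://127.0.0.1:22379,http://127.0.0.1:32379 --port 12380
raftexample2: ./raftexample --id 2 --cluster http://127.0.0.1:12379,http://127.0.0.1:22379,http://127.0.0.1:32379 --port 22380
raftexample3: ./raftexample --id 3 --cluster http://127.0.0.1:12379,http://127.0.0.1:22379,http://127.0.0.1:32379 --port 32380
```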

Now it's possible to write a key-value pair to any member of the cluster and likewise retrieve it from any member.
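
For example, a pair written through the first member (port 12380) can be read back from the third (port 32380):

```sh
curl -L http://127.0.0.1:12380/my-key -XPUT -d hello
curl -L http://127.0.0.1:32380/my-key
```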

### Fault Tolerance

To test cluster recovery, first start a cluster and write a value "foo":

```sh
goreman start
curl -L http://127.0.0.1:12380/my-key -XPUT -d foo
```

Next, remove a node and replace the value with "bar" to check cluster availability:

```sh
goreman run stop raftexample2
curl -L http://127.0.0.1:12380/my-key -XPUT -d bar
curl -L http://127.0.0.1:32380/my-key
```

Finally, bring the node back up and verify it recovers with the updated value "bar":

```sh
goreman run start raftexample2
curl -L http://127.0.0.1:22380/my-key
```

### Dynamic cluster reconfiguration

Nodes can be added to or removed from a running cluster using requests to the REST API.

For example, suppose we have a 3-node cluster that was started with the commands:

```sh
raftexample --id 1 --cluster http://127.0.0.1:12379,http://127.0.0.1:22379,http://127.0.0.1:32379 --port 12380
raftexample --id 2 --cluster http://127.0.0.1:12379,http://127.0.0.1:22379,http://127.0.0.1:32379 --port 22380
raftexample --id 3 --cluster http://127.0.0.1:12379,http://127.0.0.1:22379,http://127.0.0.1:32379 --port 32380
```

A fourth node with ID 4 can be added by issuing a POST:

```sh
curl -L http://127.0.0.1:12380/4 -XPOST -d http://127.0.0.1:42379
```

Then the new node can be started as the others were, using the --join option:

```sh
raftexample --id 4 --cluster http://127.0.0.1:12379,http://127.0.0.1:22379,http://127.0.0.1:32379,http://127.0.0.1:42379 --port 42380 --join
```

The new node should join the cluster and be able to service key/value requests.
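
To verify, issue a PUT and a GET against the new node's key-value port (42380, per the command above); the value here is arbitrary:

```sh
curl -L http://127.0.0.1:42380/my-key -XPUT -d baz
curl -L http://127.0.0.1:42380/my-key
```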

We can remove a node using a DELETE request:

```sh
curl -L http://127.0.0.1:12380/3 -XDELETE
```

Node 3 should shut itself down once the cluster has processed this request.

## Design

raftexample consists of three components: a raft-backed key-value store, a REST API server, and a raft consensus server based on etcd's raft implementation.

The raft-backed key-value store is a key-value map that holds all committed key-value pairs.
The store bridges communication between the raft server and the REST server.
Key-value updates are issued through the store to the raft server.
The store updates its map once raft reports that the updates are committed.
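
The store's role can be sketched as follows; names are illustrative and may differ from `kvstore.go`:

```go
package main

import (
	"bytes"
	"encoding/gob"
	"log"
	"sync"
)

// kv is the unit of replication: one proposed key-value update.
type kv struct {
	Key string
	Val string
}

// kvstore sketches the bridge between the REST server and raft.
type kvstore struct {
	proposeC chan<- string     // proposals flow out to the raft server
	mu       sync.RWMutex
	kvStore  map[string]string // the committed key-value pairs
}

// Lookup serves reads directly from the committed map.
func (s *kvstore) Lookup(key string) (string, bool) {
	s.mu.RLock()
	defer s.mu.RUnlock()
	v, ok := s.kvStore[key]
	return v, ok
}

// Propose encodes an update and hands it to raft. Note that the map is
// not touched here; it only changes once the entry comes back committed.
func (s *kvstore) Propose(k, v string) {
	var buf bytes.Buffer
	if err := gob.NewEncoder(&buf).Encode(kv{k, v}); err != nil {
		log.Fatal(err)
	}
	s.proposeC <- buf.String()
}
```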

The REST server exposes the current raft consensus by accessing the raft-backed key-value store.
A GET command looks up a key in the store and returns the value, if any.
A key-value PUT command issues an update proposal to the store.
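
A handler along these lines captures the GET/PUT behavior described above (a sketch with assumed names; see `httpapi.go` for the actual implementation):

```go
package main

import (
	"io"
	"net/http"
)

// Store is the subset of the key-value store the handler needs; an
// illustrative interface, not the concrete type in kvstore.go.
type Store interface {
	Lookup(key string) (string, bool)
	Propose(key, value string)
}

type httpKVAPI struct {
	store Store
}

func (h *httpKVAPI) ServeHTTP(w http.ResponseWriter, r *http.Request) {
	key := r.RequestURI
	switch r.Method {
	case http.MethodPut:
		// PUT: read the body and propose the update through the store.
		v, err := io.ReadAll(r.Body)
		if err != nil {
			http.Error(w, "bad PUT body", http.StatusBadRequest)
			return
		}
		h.store.Propose(key, string(v))
		// Optimistic: the proposal may not have committed yet.
		w.WriteHeader(http.StatusNoContent)
	case http.MethodGet:
		// GET: serve straight from the committed map.
		if v, ok := h.store.Lookup(key); ok {
			w.Write([]byte(v))
		} else {
			http.Error(w, "failed to GET", http.StatusNotFound)
		}
	default:
		w.Header().Set("Allow", "PUT, GET")
		http.Error(w, "method not allowed", http.StatusMethodNotAllowed)
	}
}
```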

The raft server participates in consensus with its cluster peers.
When the REST server submits a proposal, the raft server transmits the proposal to its peers.
When raft reaches a consensus, the server publishes all committed updates over a commit channel.
For raftexample, this commit channel is consumed by the key-value store.
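
Continuing the illustrative store sketch from above (same assumed types), the consuming end of the commit channel might look like this; the real loop in `kvstore.go` also handles snapshots and errors:

```go
// readCommits applies committed entries to the in-memory map. Only
// here, after raft reports commitment, does the store's map change.
func (s *kvstore) readCommits(commitC <-chan string) {
	for data := range commitC {
		var dataKv kv
		dec := gob.NewDecoder(bytes.NewBufferString(data))
		if err := dec.Decode(&dataKv); err != nil {
			log.Fatalf("raftexample: could not decode message (%v)", err)
		}
		s.mu.Lock()
		s.kvStore[dataKv.Key] = dataKv.Val
		s.mu.Unlock()
	}
}
```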