---
title: Performance
---

## Understanding performance

etcd provides stable, sustained high performance. Two factors define performance: latency and throughput. Latency is the time taken to complete an operation. Throughput is the total operations completed within some time period. Usually average latency increases as the overall throughput increases when etcd accepts concurrent client requests. In common cloud environments, like a standard `n-4` on Google Compute Engine (GCE) or a comparable machine type on AWS, a three member etcd cluster finishes a request in less than one millisecond under light load, and can complete more than 30,000 requests per second under heavy load.

etcd uses the Raft consensus algorithm to replicate requests among members and reach agreement. Consensus performance, especially commit latency, is limited by two physical constraints: network IO latency and disk IO latency. The minimum time to finish an etcd request is the network Round Trip Time (RTT) between members, plus the time `fdatasync` requires to commit the data to permanent storage. The RTT within a datacenter may be as long as several hundred microseconds. A typical RTT within the United States is around 50ms, and can be as slow as 400ms between continents. The typical `fdatasync` latency for a spinning disk is about 10ms. For SSDs, the latency is often lower than 1ms. To increase throughput, etcd batches multiple requests together and submits them to Raft. This batching policy lets etcd attain high throughput despite heavy load.
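
Because `fdatasync` latency sets a floor on commit latency, it can be useful to measure the backing disk directly when diagnosing slow writes. The sketch below uses `fio` (an assumption: fio 3.5 or later is installed; the directory and job names are arbitrary) to issue small sequential writes, each followed by an `fdatasync`, roughly approximating etcd's write-ahead log pattern:

```sh
# Measure fdatasync latency on the disk backing ./test-data (hypothetical path).
# fio >= 3.5 reports an "fsync/fdatasync" latency section with percentiles.
mkdir -p test-data
fio --rw=write --ioengine=sync --fdatasync=1 \
    --directory=test-data --size=22m --bs=2300 --name=etcd-fdatasync-probe
```

If the reported percentiles are well above the ~1ms figure quoted above for SSDs, the storage, rather than etcd itself, is likely to dominate commit latency.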

There are other sub-systems which impact the overall performance of etcd. Each serialized etcd request must run through etcd’s boltdb-backed MVCC storage engine, which usually takes tens of microseconds to finish. Periodically etcd incrementally snapshots its recently applied requests, merging them back with the previous on-disk snapshot. This process may lead to a latency spike. Although this is usually not a problem on SSDs, it may double the observed latency on HDD. Likewise, inflight compactions can impact etcd’s performance. Fortunately, the impact is often insignificant since the compaction is staggered so it does not compete for resources with regular requests. The RPC system, gRPC, gives etcd a well-defined, extensible API, but it also introduces additional latency, especially for local reads.

## Benchmarks

Benchmarking etcd performance can be done with the [benchmark](https://github.com/coreos/etcd/tree/master/tools/benchmark) CLI tool included with etcd.
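
The `benchmark` binary is built from the etcd source tree rather than shipped with releases. A minimal sketch for obtaining it, assuming a GOPATH-style Go toolchain (in line with the go 1.8.3 setup used below); with a newer toolchain, building the `tools/benchmark` package from a checkout of the repository accomplishes the same thing:

```sh
# Fetch the etcd sources and install the benchmark tool into $GOPATH/bin
# (GOPATH-style `go get` both downloads and builds the named package).
go get github.com/coreos/etcd/tools/benchmark

# Confirm the binary is reachable before running the commands below.
benchmark --help
```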

For some baseline performance numbers, we consider a three member etcd cluster with the following hardware configuration:

- Google Cloud Compute Engine
- 3 machines of 8 vCPUs + 16GB Memory + 50GB SSD
- 1 machine (client) of 16 vCPUs + 30GB Memory + 50GB SSD
- Ubuntu 17.04
- etcd 3.2.0, go 1.8.3

With this configuration, etcd can approximately write:

| Number of keys | Key size in bytes | Value size in bytes | Number of connections | Number of clients | Target etcd server | Average write QPS | Average latency per request | Average server RSS |
|---------------:|------------------:|--------------------:|----------------------:|------------------:|--------------------|------------------:|----------------------------:|-------------------:|
| 10,000 | 8 | 256 | 1 | 1 | leader only | 583 | 1.6ms | 48 MB |
| 100,000 | 8 | 256 | 100 | 1000 | leader only | 44,341 | 22ms | 124 MB |
| 100,000 | 8 | 256 | 100 | 1000 | all members | 50,104 | 20ms | 126 MB |

Sample commands are:

```sh
# write to leader
benchmark --endpoints=${HOST_1} --target-leader --conns=1 --clients=1 \
    put --key-size=8 --sequential-keys --total=10000 --val-size=256
benchmark --endpoints=${HOST_1} --target-leader --conns=100 --clients=1000 \
    put --key-size=8 --sequential-keys --total=100000 --val-size=256

# write to all members
benchmark --endpoints=${HOST_1},${HOST_2},${HOST_3} --conns=100 --clients=1000 \
    put --key-size=8 --sequential-keys --total=100000 --val-size=256
```

Linearizable read requests go through a quorum of cluster members for consensus to fetch the most recent data. Serializable read requests are cheaper than linearizable reads since they are served by any single etcd member, instead of a quorum of members, in exchange for possibly serving stale data. etcd can read:

| Number of requests | Key size in bytes | Value size in bytes | Number of connections | Number of clients | Consistency | Average read QPS | Average latency per request |
|-------------------:|------------------:|--------------------:|----------------------:|------------------:|-------------|-----------------:|----------------------------:|
| 10,000 | 8 | 256 | 1 | 1 | Linearizable | 1,353 | 0.7ms |
| 10,000 | 8 | 256 | 1 | 1 | Serializable | 2,909 | 0.3ms |
| 100,000 | 8 | 256 | 100 | 1000 | Linearizable | 141,578 | 5.5ms |
| 100,000 | 8 | 256 | 100 | 1000 | Serializable | 185,758 | 2.2ms |

Sample commands are:

```sh
# Single connection read requests
benchmark --endpoints=${HOST_1},${HOST_2},${HOST_3} --conns=1 --clients=1 \
    range YOUR_KEY --consistency=l --total=10000
benchmark --endpoints=${HOST_1},${HOST_2},${HOST_3} --conns=1 --clients=1 \
    range YOUR_KEY --consistency=s --total=10000

# Many concurrent read requests
benchmark --endpoints=${HOST_1},${HOST_2},${HOST_3} --conns=100 --clients=1000 \
    range YOUR_KEY --consistency=l --total=100000
benchmark --endpoints=${HOST_1},${HOST_2},${HOST_3} --conns=100 --clients=1000 \
    range YOUR_KEY --consistency=s --total=100000
```
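
The same consistency trade-off can also be observed outside the benchmark tool with plain `etcdctl` reads. A brief sketch, assuming an `etcdctl` binary speaking the v3 API is pointed at the cluster; the key `foo` is an arbitrary example:

```sh
export ETCDCTL_API=3

etcdctl --endpoints=${HOST_1},${HOST_2},${HOST_3} put foo bar

# Linearizable read (the default): confirmed through a quorum, returns the latest value.
etcdctl --endpoints=${HOST_1},${HOST_2},${HOST_3} get foo --consistency=l

# Serializable read: answered locally by the contacted member, possibly stale but cheaper.
etcdctl --endpoints=${HOST_1},${HOST_2},${HOST_3} get foo --consistency=s
```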

We encourage running the benchmark test when setting up an etcd cluster for the first time in a new environment to ensure the cluster achieves adequate performance; cluster latency and throughput can be sensitive to minor environment differences.
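
For a quick pass/fail sanity check rather than a full benchmark run, newer etcd releases (3.3 and later) also bundle a coarse load test in `etcdctl`. A sketch, assuming such an `etcdctl` is available; `--load=s` selects the tool's small workload preset:

```sh
# Runs a short write/read load against the cluster and reports PASS or FAIL
# based on the observed throughput and latency.
ETCDCTL_API=3 etcdctl --endpoints=${HOST_1},${HOST_2},${HOST_3} check perf --load=s
```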