---
title: Hardware recommendations
---

etcd usually runs well with limited resources for development or testing purposes; it’s common to develop with etcd on a laptop or a cheap cloud machine. However, when running etcd clusters in production, some hardware guidelines are useful for proper administration. These suggestions are not hard rules; they serve as a good starting point for a robust production deployment. As always, deployments should be tested with simulated workloads before running in production.

## CPUs

Few etcd deployments require a lot of CPU capacity. Typical clusters need two to four cores to run smoothly.
Heavily loaded etcd deployments, serving thousands of clients or tens of thousands of requests per second, tend to be CPU bound since etcd can serve requests from memory. Such heavy deployments usually need eight to sixteen dedicated cores.


## Memory

etcd has a relatively small memory footprint, but its performance still depends on having enough memory. An etcd server will aggressively cache key-value data and spends most of the rest of its memory tracking watchers. Typically 8GB is enough. For heavy deployments with thousands of watchers and millions of keys, allocate 16GB to 64GB of memory accordingly.


## Disks

Fast disks are the most critical factor for etcd deployment performance and stability.

A slow disk will increase etcd request latency and potentially hurt cluster stability. Since etcd’s consensus protocol depends on persistently storing metadata to a log, a majority of etcd cluster members must write every request down to disk. Additionally, etcd will also incrementally checkpoint its state to disk so it can truncate this log. If these writes take too long, heartbeats may time out and trigger an election, undermining the stability of the cluster. In general, to tell whether a disk is fast enough for etcd, use a benchmarking tool such as [fio][fio]; read [here][fio-blog-post] for an example.
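Disk sync latency can also be spot-checked directly. The sketch below (an illustrative Python snippet, not an official etcd tool) appends small blocks to a file and calls `fdatasync` after each write, roughly mimicking etcd's WAL append pattern; etcd's tuning guidance suggests keeping the 99th percentile of WAL fsync duration below roughly 10ms. It assumes a POSIX system where `os.fdatasync` is available:

```python
import os
import tempfile
import time

def p99_fdatasync_ms(path, writes=200, block=2048):
    """Append `writes` small blocks to `path`, fdatasync after each one,
    and return the 99th-percentile latency in milliseconds."""
    latencies = []
    payload = os.urandom(block)
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_APPEND, 0o600)
    try:
        for _ in range(writes):
            start = time.perf_counter()
            os.write(fd, payload)
            os.fdatasync(fd)  # POSIX-only; on other platforms use os.fsync
            latencies.append(time.perf_counter() - start)
    finally:
        os.close(fd)
    latencies.sort()
    return latencies[int(len(latencies) * 0.99) - 1] * 1000

with tempfile.TemporaryDirectory() as d:
    # Point this at the same filesystem etcd's data directory would use;
    # a temporary directory is used here only to keep the sketch self-contained.
    p99 = p99_fdatasync_ms(os.path.join(d, "wal-probe"))
    print(f"p99 fdatasync latency: {p99:.2f} ms")
```

For a realistic reading, run the probe against the device that will actually hold the etcd data directory, since tmpfs or a cached filesystem will report unrealistically low latencies.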

etcd is very sensitive to disk write latency. Typically 50 sequential IOPS (e.g., a 7200 RPM disk) is required. For heavily loaded clusters, 500 sequential IOPS (e.g., a typical local SSD or a high-performance virtualized block device) is recommended. Note that most cloud providers publish concurrent IOPS rather than sequential IOPS; the published concurrent IOPS can be 10x greater than the sequential IOPS. To measure actual sequential IOPS, we suggest using a disk benchmarking tool such as [diskbench][diskbench] or [fio][fio].
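The 50-IOPS figure for a 7200 RPM disk follows from rotational arithmetic: in the worst case, each synced sequential write waits up to a full platter rotation before the target sector comes back under the head. This back-of-the-envelope sketch is illustrative only; it ignores seeks, on-disk caches, and filesystem overhead:

```python
def synced_write_ceiling(rpm: int) -> float:
    """Rough upper bound on fsync'd sequential writes per second for a
    spinning disk, assuming each write waits one full platter rotation."""
    rotation_seconds = 60.0 / rpm
    return 1.0 / rotation_seconds

print(synced_write_ceiling(7200))   # ~120 writes/s ceiling
print(synced_write_ceiling(15000))  # ~250 writes/s ceiling
```

Filesystem journaling and metadata writes consume part of that budget, which is why roughly 50 sequential IOPS is a realistic expectation for a 7200 RPM disk rather than the ~120/s theoretical ceiling.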

etcd requires only modest disk bandwidth, but more disk bandwidth buys faster recovery times when a failed member has to catch up with the cluster. Typically 10MB/s will recover 100MB of data within 15 seconds. For large clusters, 100MB/s or higher is suggested for recovering 1GB of data within 15 seconds.
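The recovery figures above are straightforward bandwidth arithmetic. A hypothetical helper (a lower bound: it ignores snapshot-transfer protocol overhead, so real recoveries take somewhat longer) makes the relationship explicit:

```python
def recovery_seconds(data_mb: float, bandwidth_mb_s: float) -> float:
    """Lower bound on the time to stream `data_mb` of snapshot data to a
    lagging member at `bandwidth_mb_s`, ignoring protocol overhead."""
    return data_mb / bandwidth_mb_s

print(recovery_seconds(100, 10))    # 10.0 s, within the 15 s target
print(recovery_seconds(1024, 100))  # 10.24 s for 1GB at 100MB/s
```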

When possible, back etcd’s storage with an SSD. An SSD usually provides lower write latencies with less variance than a spinning disk, improving the stability and reliability of etcd. If using a spinning disk, get the fastest disks possible (15,000 RPM). Using RAID 0 is also an effective way to increase disk speed, for both spinning disks and SSDs. With at least three cluster members, mirroring and/or parity variants of RAID are unnecessary; etcd's consistent replication already provides high availability.


## Network

Multi-member etcd deployments benefit from a fast and reliable network. Because etcd is both consistent and partition tolerant, an unreliable network with partitioning outages will lead to poor availability. Low latency ensures etcd members can communicate quickly. High bandwidth can reduce the time to recover a failed etcd member. 1GbE is sufficient for common etcd deployments. For large etcd clusters, a 10GbE network will reduce mean time to recovery.

Deploy etcd members within a single data center when possible to avoid latency overheads and lessen the possibility of partitioning events. If a failure domain in another data center is required, choose a data center closer to the existing one. Please also read the [tuning][tuning] documentation for more information on cross-data-center deployment.


## Example hardware configurations

Here are a few example hardware setups on AWS and GCE environments. As mentioned before, but worth stressing regardless, administrators should test an etcd deployment with a simulated workload before putting it into production.

Note that these configurations assume these machines are totally dedicated to etcd. Running other applications along with etcd on these machines may cause resource contention and lead to cluster instability.

### Small cluster

A small cluster serves fewer than 100 clients, fewer than 200 requests per second, and stores no more than 100MB of data.

Example application workload: A 50-node Kubernetes cluster

| Provider | Type | vCPUs | Memory (GB) | Max concurrent IOPS | Disk bandwidth (MB/s) |
|----------|------|-------|-------------|---------------------|-----------------------|
| AWS | m4.large | 2 | 8 | 3600 | 56.25 |
| GCE | n1-standard-2 + 50GB PD SSD | 2 | 7.5 | 1500 | 25 |


### Medium cluster

A medium cluster serves fewer than 500 clients, fewer than 1,000 requests per second, and stores no more than 500MB of data.

Example application workload: A 250-node Kubernetes cluster

| Provider | Type | vCPUs | Memory (GB) | Max concurrent IOPS | Disk bandwidth (MB/s) |
|----------|------|-------|-------------|---------------------|-----------------------|
| AWS | m4.xlarge | 4 | 16 | 6000 | 93.75 |
| GCE | n1-standard-4 + 150GB PD SSD | 4 | 15 | 4500 | 75 |


### Large cluster

A large cluster serves fewer than 1,500 clients, fewer than 10,000 requests per second, and stores no more than 1GB of data.

Example application workload: A 1,000-node Kubernetes cluster

| Provider | Type | vCPUs | Memory (GB) | Max concurrent IOPS | Disk bandwidth (MB/s) |
|----------|------|-------|-------------|---------------------|-----------------------|
| AWS | m4.2xlarge | 8 | 32 | 8000 | 125 |
| GCE | n1-standard-8 + 250GB PD SSD | 8 | 30 | 7500 | 125 |


### xLarge cluster

An xLarge cluster serves more than 1,500 clients, more than 10,000 requests per second, and stores more than 1GB of data.

Example application workload: A 3,000-node Kubernetes cluster

| Provider | Type | vCPUs | Memory (GB) | Max concurrent IOPS | Disk bandwidth (MB/s) |
|----------|------|-------|-------------|---------------------|-----------------------|
| AWS | m4.4xlarge | 16 | 64 | 16,000 | 250 |
| GCE | n1-standard-16 + 500GB PD SSD | 16 | 60 | 15,000 | 250 |


[diskbench]: https://github.com/ongardie/diskbenchmark
[fio]: https://github.com/axboe/fio
[fio-blog-post]: https://www.ibm.com/blogs/bluemix/2019/04/using-fio-to-tell-whether-your-storage-is-fast-enough-for-etcd/
[tuning]: ../tuning.md