**This is the documentation for etcd2 releases. Read [etcd3 doc][v3-docs] for etcd3 releases.**

[v3-docs]: ../docs.md#documentation
[auth]: auth_api.md
[backup-datastore]: admin_guide.md#backing-up-the-datastore
[v2.0]: https://github.com/coreos/etcd/releases/tag/v2.0.13

# Upgrade etcd to 2.1

In the general case, upgrading from etcd 2.0 to 2.1 can be a zero-downtime, rolling upgrade:
 - one by one, stop the etcd v2.0 processes and replace them with etcd v2.1 processes
 - after you are running all v2.1 processes, new features in v2.1 are available to the cluster

Before [starting an upgrade](#upgrade-procedure), read through the rest of this guide to prepare.

## Upgrade Checklists

### Upgrade Requirements

To upgrade an existing etcd deployment to 2.1, you must be running 2.0. If you're running a version of etcd before 2.0, you must upgrade to [2.0][v2.0] before upgrading to 2.1.

Also, to ensure a smooth rolling upgrade, the running cluster must be healthy. You can check the health of the cluster with the `etcdctl cluster-health` command.

### Preparedness

Before upgrading etcd in production, always test the services relying on etcd in a staging environment.

You might also want to [back up your data directory][backup-datastore] for a potential [downgrade](#downgrade).

etcd 2.1 introduces a new [authentication][auth] feature, which is disabled by default. If your deployment will depend on it, you may want to test the auth feature before enabling it in production.

### Mixed Versions

While upgrading, an etcd cluster supports mixed versions of etcd members. The cluster is only considered upgraded once all of its members are upgraded to 2.1.

Internally, etcd members negotiate with each other to determine the overall cluster version, which controls the reported cluster version and the supported features. For example, if you are mid-upgrade, any 2.1 features (such as the authentication feature mentioned above) won't be available yet.
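For instance, while the upgrade is still in progress, a member already running v2.1 keeps reporting the old cluster version. Querying the version endpoint used in the [upgrade procedure](#upgrade-procedure) below would show something like:

```
$ curl http://127.0.0.1:4001/version
{"etcdserver":"2.1.x","etcdcluster":"2.0.0"}
```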
### Limitations

If you encounter any issues during the upgrade, you can attempt to restart the affected etcd process with the newer v2.1 binary to solve the problem. One known issue is that etcd v2.0.0 and v2.0.2 may panic during rolling upgrades due to a bug that has been fixed since etcd v2.0.3.

It might take up to 2 minutes for a newly upgraded member to catch up with the existing cluster when the total data size is larger than 50MB (you can check the size of the existing snapshot for a rough estimate of the data size). In other words, it is safest to wait 2 minutes before upgrading the next member.

If you have even more data, this might take more time. If your data size is larger than 100MB, you should contact us before upgrading, so we can make sure the upgrade works smoothly.

### Downgrade

Once all members have been upgraded to v2.1, the cluster is upgraded to v2.1, and downgrade is **not possible**. As long as any member is still v2.0, the cluster remains at v2.0, and you can go back to using the v2.0 binary.

Please [back up the data directory][backup-datastore] of all etcd members if you want to be able to downgrade the cluster, even after it has been upgraded.

### Upgrade Procedure

#### 1. Check upgrade requirements

```
$ etcdctl cluster-health
cluster is healthy
member 6e3bd23ae5f1eae0 is healthy
member 924e2e83e93f2560 is healthy
member a8266ecf031671f3 is healthy

$ curl http://127.0.0.1:4001/version
etcd 2.0.x
```

#### 2. Stop the existing etcd process

When you stop a member, the other etcd processes in your cluster will log errors like the following. This is normal, since you just shut down a member.

```
2015/06/23 15:45:09 sender: error posting to 6e3bd23ae5f1eae0: dial tcp 127.0.0.1:7002: connection refused
2015/06/23 15:45:09 sender: the connection with 6e3bd23ae5f1eae0 became inactive
2015/06/23 15:45:11 rafthttp: encountered error writing to server log stream: write tcp 127.0.0.1:53783: broken pipe
2015/06/23 15:45:11 rafthttp: server streaming to 6e3bd23ae5f1eae0 at term 2 has been stopped
2015/06/23 15:45:11 stream: error sending message: stopped
2015/06/23 15:45:11 stream: stopping the stream server...
```

You may want to [back up your data directory][backup-datastore] for data safety:

```
$ etcdctl backup \
      --data-dir /var/lib/etcd \
      --backup-dir /tmp/etcd_backup
```

#### 3. Drop in the etcd v2.1 binary and start the new etcd process

You will see the new etcd process publish its information to the cluster:

```
2015/06/23 15:45:39 etcdserver: published {Name:infra2 ClientURLs:[http://localhost:4002]} to cluster e9c7614f68f35fb2
```

Verify that the cluster becomes healthy again:

```
$ etcdctl cluster-health
cluster is healthy
member 6e3bd23ae5f1eae0 is healthy
member 924e2e83e93f2560 is healthy
member a8266ecf031671f3 is healthy
```

#### 4. Repeat steps 2 and 3 for all other members

#### 5. Finish

When all members are upgraded, you will see that the cluster has upgraded to 2.1 successfully:

```
2015/06/23 15:46:35 etcdserver: updated the cluster version from 2.0.0 to 2.1.0
```

```
$ curl http://127.0.0.1:4001/version
{"etcdserver":"2.1.x","etcdcluster":"2.1.0"}
```
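Once the cluster reports 2.1.0, the new v2.1 features are available, such as the [authentication][auth] feature mentioned in the checklist. A minimal sketch of turning it on, assuming your `etcdctl` binary is also v2.1 and that you have tested auth in staging first (see [Preparedness](#preparedness)):

```
$ etcdctl user add root    # create the root user first; you will be prompted for a password
$ etcdctl auth enable      # then enable authentication cluster-wide
```

The corresponding HTTP endpoints are documented in the [auth API][auth] guide.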