Each release of Ceph may have additional steps. Refer to the `release notes
document of your release`_ to identify release-specific procedures for your
cluster before using the upgrade procedures.
You can upgrade daemons in your Ceph cluster while the cluster is online and in
service! Certain types of daemons depend upon others. For example, Ceph Metadata
Servers and Ceph Object Gateways depend upon Ceph Monitors and Ceph OSD Daemons.
We recommend upgrading in this order:

#. Ceph Monitors
#. Ceph OSD Daemons
#. Ceph Metadata Servers
#. Ceph Object Gateways
As a general rule, we recommend upgrading all the daemons of a specific type
(e.g., all ``ceph-mon`` daemons, all ``ceph-osd`` daemons, etc.) to ensure that
they are all on the same release. We also recommend that you upgrade all the
daemons in your cluster before you try to exercise new functionality in a
release.
The `Upgrade Procedures`_ are relatively simple, but do look at the `release
notes document of your release`_ before upgrading. The basic process involves
three steps:

#. Use ``ceph-deploy`` on your admin node to upgrade the packages for
   multiple hosts (using the ``ceph-deploy install`` command), or log in to each
   host and upgrade the Ceph package `using your distro's package manager`_.
   For example, when `Upgrading Monitors`_, the ``ceph-deploy`` syntax might
   look like this::

     ceph-deploy install --release {release-name} ceph-node1[ ceph-node2]
     ceph-deploy install --release firefly mon1 mon2 mon3

   **Note:** The ``ceph-deploy install`` command upgrades the packages
   on the specified node(s) from the old release to the release you specify.
   There is no ``ceph-deploy upgrade`` command.
#. Log in to each Ceph node and restart each Ceph daemon.
   See `Operating a Cluster`_ for details.
#. Ensure your cluster is healthy. See `Monitoring a Cluster`_ for details.
.. important:: Once you upgrade a daemon, you cannot downgrade it.
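The three basic steps can be sketched as one shell function run from the admin node. This is an illustrative sketch only: the host name, the release name, and the sysvinit restart syntax are placeholders for your environment, not part of the official procedure.

```shell
#!/bin/sh
# Hypothetical rolling-upgrade helper: upgrade packages on one host,
# restart the daemon there, then wait for the cluster to report healthy
# before moving on to the next host. All names here are placeholders.
upgrade_host() {
    host=$1      # e.g. mon1
    daemon=$2    # e.g. mon.mon1
    # Step 1: upgrade the Ceph packages on the host from the admin node.
    ceph-deploy install --release firefly "$host" || return 1
    # Step 2: restart the daemon on that host (sysvinit syntax shown).
    ssh "$host" sudo /etc/init.d/ceph restart "$daemon" || return 1
    # Step 3: do not continue until the cluster is healthy again.
    until ceph health | grep -q HEALTH_OK; do
        sleep 5
    done
}
```

You would call it once per daemon, e.g. ``upgrade_host mon1 mon.mon1``, finishing all monitors before moving on to OSDs.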
Before upgrading Ceph daemons, upgrade the ``ceph-deploy`` tool. ::

   sudo pip install -U ceph-deploy

Or::

   sudo apt-get install ceph-deploy

Or::

   sudo yum install ceph-deploy python-pushy
Upgrade Procedures
==================

The following sections describe the upgrade process.

.. important:: Each release of Ceph may have some additional steps. Refer to
   the `release notes document of your release`_ for details **BEFORE** you
   begin upgrading daemons.
Upgrading Monitors
------------------

To upgrade monitors, perform the following steps:

#. Upgrade the Ceph package for each daemon instance.

   You may use ``ceph-deploy`` to address all monitor nodes at once.
   For example::

     ceph-deploy install --release {release-name} ceph-node1[ ceph-node2]
     ceph-deploy install --release hammer mon1 mon2 mon3

   You may also use the package manager for your Linux distribution on
   each individual node. To upgrade packages manually on each Debian/Ubuntu
   host, perform the following steps::

     sudo apt-get update && sudo apt-get install ceph

   On CentOS/Red Hat hosts, perform the following steps::

     sudo yum update && sudo yum install ceph
#. Restart each monitor. For Ubuntu distributions, use::

     sudo systemctl restart ceph-mon@{hostname}.service

   For CentOS/Red Hat/Debian distributions, use::

     sudo /etc/init.d/ceph restart {mon-id}

   For CentOS/Red Hat distributions deployed with ``ceph-deploy``,
   the monitor ID is usually ``mon.{hostname}``.
#. Ensure each monitor has rejoined the quorum::

     ceph mon stat

Ensure that you have completed the upgrade cycle for all of your Ceph Monitors.
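When scripting the monitor upgrade, you can poll for a restarted monitor to rejoin before touching the next one. A minimal sketch, assuming quorum membership appears in the ``ceph mon stat`` output (e.g. ``... quorum 0,1,2 mon1,mon2,mon3``); the grep is a loose, illustrative match, not an official interface:

```shell
#!/bin/sh
# wait_for_mon: block until the named monitor shows up in the quorum
# line printed by `ceph mon stat`. Output-format match is an assumption.
wait_for_mon() {
    mon=$1
    until ceph mon stat | grep -q "quorum.*$mon"; do
        echo "waiting for $mon to rejoin the quorum..."
        sleep 5
    done
}
```

For example, ``wait_for_mon mon2`` after restarting ``mon2``.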
Upgrading an OSD
----------------

To upgrade a Ceph OSD Daemon, perform the following steps:

#. Upgrade the Ceph OSD Daemon package.

   You may use ``ceph-deploy`` to address all Ceph OSD Daemon nodes at
   once. For example::

     ceph-deploy install --release {release-name} ceph-node1[ ceph-node2]
     ceph-deploy install --release hammer osd1 osd2 osd3

   You may also use the package manager on each node to upgrade packages
   `using your distro's package manager`_. For Debian/Ubuntu hosts, perform the
   following steps on each host::

     sudo apt-get update && sudo apt-get install ceph

   For CentOS/Red Hat hosts, perform the following steps::

     sudo yum update && sudo yum install ceph
#. Restart the OSD, where ``N`` is the OSD number. For Ubuntu, use::

     sudo systemctl restart ceph-osd@{N}.service

   For multiple OSDs on a host, you may restart all of them with systemd. ::

     sudo systemctl restart ceph-osd

   For CentOS/Red Hat/Debian distributions, use::

     sudo /etc/init.d/ceph restart N
#. Ensure each upgraded Ceph OSD Daemon has rejoined the cluster::

     ceph osd stat

Ensure that you have completed the upgrade cycle for all of your
Ceph OSD Daemons.
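If a host runs several OSDs, restarting them one at a time and letting the cluster settle in between keeps more replicas available during the upgrade. A sketch, assuming systemd-managed OSDs; the unit name and the ``HEALTH_OK`` check are assumptions for a systemd-based host, so adjust for sysvinit clusters:

```shell
#!/bin/sh
# restart_osds_one_by_one: restart each given OSD id, waiting for
# HEALTH_OK between restarts so only one OSD is down at a time.
restart_osds_one_by_one() {
    for id in "$@"; do
        sudo systemctl restart "ceph-osd@$id.service" || return 1
        # Wait for recovery to finish before restarting the next OSD.
        until ceph health | grep -q HEALTH_OK; do
            sleep 10
        done
    done
}
```

For example, ``restart_osds_one_by_one 0 1 2`` on a host carrying OSDs 0 through 2.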
Upgrading a Metadata Server
---------------------------
To upgrade a Ceph Metadata Server, perform the following steps:

#. Upgrade the Ceph Metadata Server package. You may use ``ceph-deploy`` to
   address all Ceph Metadata Server nodes at once, or use the package manager
   on each node. For example::

     ceph-deploy install --release {release-name} ceph-node1
     ceph-deploy install --release hammer mds1

   To upgrade packages manually, perform the following steps on each
   Debian/Ubuntu host::

     sudo apt-get update && sudo apt-get install ceph-mds

   Or the following steps on CentOS/Red Hat hosts::

     sudo yum update && sudo yum install ceph-mds
#. Restart the metadata server. For Ubuntu, use::

     sudo systemctl restart ceph-mds@{hostname}.service

   For CentOS/Red Hat/Debian distributions, use::

     sudo /etc/init.d/ceph restart mds.{hostname}

   For clusters deployed with ``ceph-deploy``, the name is usually either
   the name you specified on creation or the hostname.
#. Ensure the metadata server is up and running::

     ceph mds stat
Upgrading a Client
------------------

Once you have upgraded the packages and restarted daemons on your Ceph
cluster, we recommend upgrading ``ceph-common`` and client libraries
(``librbd1`` and ``librados2``) on your client nodes too.

#. Upgrade the package::

     sudo apt-get update && sudo apt-get install ceph-common librados2 librbd1 python-rados python-rbd
#. Ensure that you have the latest version::

     ceph --version

If you do not have the latest version, you may need to uninstall the packages,
autoremove unused dependencies, and reinstall.
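To script the client-side check, you can extract the installed version and compare it against the release you just rolled out. A sketch, assuming the usual ``ceph version X.Y.Z (hash)`` output format; the version number shown is illustrative:

```shell
#!/bin/sh
# installed_ceph_version: print only the numeric version from
# `ceph --version`, whose output looks like
# "ceph version 0.94.1 (e4bfad3...)" on the releases this guide covers.
installed_ceph_version() {
    ceph --version | awk '{ print $3 }'
}

# Example check against an expected (illustrative) version:
# [ "$(installed_ceph_version)" = "0.94.1" ] || echo "client not upgraded yet"
```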
.. _using your distro's package manager: ../install-storage-cluster/
.. _Operating a Cluster: ../../rados/operations/operating
.. _Monitoring a Cluster: ../../rados/operations/monitoring
.. _release notes document of your release: ../../releases