================
 Upgrading Ceph
================

Each release of Ceph may have additional steps. Refer to the `release notes
document of your release`_ to identify release-specific procedures for your
cluster before using the upgrade procedures.


Summary
=======

You can upgrade daemons in your Ceph cluster while the cluster is online and in
service! Certain types of daemons depend upon others. For example, Ceph Metadata
Servers and Ceph Object Gateways depend upon Ceph Monitors and Ceph OSD Daemons.
We recommend upgrading in this order:

#. `Ceph Deploy`_
#. Ceph Monitors
#. Ceph OSD Daemons
#. Ceph Metadata Servers
#. Ceph Object Gateways

As a general rule, we recommend upgrading all the daemons of a specific type
(e.g., all ``ceph-mon`` daemons, all ``ceph-osd`` daemons, etc.) to ensure that
they are all on the same release. We also recommend that you upgrade all the
daemons in your cluster before you try to exercise new functionality in a
release.

The `Upgrade Procedures`_ are relatively simple, but do look at the `release
notes document of your release`_ before upgrading. The basic process involves
three steps:
#. Use ``ceph-deploy`` on your admin node to upgrade the packages for
   multiple hosts (using the ``ceph-deploy install`` command), or log in to each
   host and upgrade the Ceph package `using your distro's package manager`_.
   For example, when `Upgrading Monitors`_, the ``ceph-deploy`` syntax might
   look like this::

      ceph-deploy install --release {release-name} ceph-node1[ ceph-node2]
      ceph-deploy install --release firefly mon1 mon2 mon3

   **Note:** The ``ceph-deploy install`` command will upgrade the packages
   in the specified node(s) from the old release to the release you specify.
   There is no ``ceph-deploy upgrade`` command.

#. Log in to each Ceph node and restart each Ceph daemon.
   See `Operating a Cluster`_ for details.

#. Ensure your cluster is healthy. See `Monitoring a Cluster`_ for details.
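
   For example, a quick check might look like this (a minimal sketch using the
   standard status commands; ``HEALTH_OK`` is the expected result once every
   daemon has rejoined)::

      ceph health
      ceph -s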

.. important:: Once you upgrade a daemon, you cannot downgrade it.


Ceph Deploy
===========

Before upgrading Ceph daemons, upgrade the ``ceph-deploy`` tool. ::

    sudo pip install -U ceph-deploy

Or::

    sudo apt-get install ceph-deploy

Or::

    sudo yum install ceph-deploy python-pushy
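
After upgrading the tool, you can verify which version is installed; this is
just a quick sanity check (the exact version string depends on your release)::

    ceph-deploy --version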


Upgrade Procedures
==================

The following sections describe the upgrade process.

.. important:: Each release of Ceph may have some additional steps. Refer to
   the `release notes document of your release`_ for details **BEFORE** you
   begin upgrading daemons.


Upgrading Monitors
------------------

To upgrade monitors, perform the following steps:

#. Upgrade the Ceph package for each daemon instance.

   You may use ``ceph-deploy`` to address all monitor nodes at once.
   For example::

      ceph-deploy install --release {release-name} ceph-node1[ ceph-node2]
      ceph-deploy install --release hammer mon1 mon2 mon3

   You may also use the package manager for your Linux distribution on
   each individual node. To upgrade packages manually on each Debian/Ubuntu
   host, perform the following steps::

      ssh {mon-host}
      sudo apt-get update && sudo apt-get install ceph

   On CentOS/Red Hat hosts, perform the following steps::

      ssh {mon-host}
      sudo yum update && sudo yum install ceph


#. Restart each monitor. For Ubuntu distributions, use::

      sudo systemctl restart ceph-mon@{hostname}.service

   For CentOS/Red Hat/Debian distributions, use::

      sudo /etc/init.d/ceph restart {mon-id}

   For CentOS/Red Hat distributions deployed with ``ceph-deploy``,
   the monitor ID is usually ``mon.{hostname}``.
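
   For example, on the ``mon1`` host from the example above (substitute your
   own monitor hostname), the restart command might look like this::

      sudo /etc/init.d/ceph restart mon.mon1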

#. Ensure each monitor has rejoined the quorum::

      ceph mon stat

Ensure that you have completed the upgrade cycle for all of your Ceph Monitors.
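
If you also want to confirm which release each monitor is actually running,
the following commands can help; ``ceph quorum_status`` reports the current
quorum membership, and ``ceph versions`` (available on Luminous and later
clusters) reports the running version of each daemon type::

    ceph quorum_status
    ceph versions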


Upgrading an OSD
----------------

To upgrade a Ceph OSD Daemon, perform the following steps:

#. Upgrade the Ceph OSD Daemon package.

   You may use ``ceph-deploy`` to address all Ceph OSD Daemon nodes at
   once. For example::

      ceph-deploy install --release {release-name} ceph-node1[ ceph-node2]
      ceph-deploy install --release hammer osd1 osd2 osd3

   You may also upgrade the packages manually on each node `using your
   distro's package manager`_. For Debian/Ubuntu hosts, perform the
   following steps on each host::

      ssh {osd-host}
      sudo apt-get update && sudo apt-get install ceph

   For CentOS/Red Hat hosts, perform the following steps::

      ssh {osd-host}
      sudo yum update && sudo yum install ceph


#. Restart the OSD, where ``N`` is the OSD number. For Ubuntu, use::

      sudo systemctl restart ceph-osd@{N}.service

   For multiple OSDs on a host, you may restart all of them at once via the
   systemd target. ::

      sudo systemctl restart ceph-osd.target

   For CentOS/Red Hat/Debian distributions, use::

      sudo /etc/init.d/ceph restart N


#. Ensure each upgraded Ceph OSD Daemon has rejoined the cluster::

      ceph osd stat

Ensure that you have completed the upgrade cycle for all of your
Ceph OSD Daemons.
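
As with the monitors, you can cross-check the result; for example, ``ceph osd
tree`` shows whether every OSD is ``up``, and on Luminous and later clusters
``ceph versions`` reports the release each daemon type is running::

    ceph osd tree
    ceph versions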


Upgrading a Metadata Server
---------------------------

To upgrade a Ceph Metadata Server, perform the following steps:

#. Upgrade the Ceph Metadata Server package. You may use ``ceph-deploy`` to
   address all Ceph Metadata Server nodes at once, or use the package manager
   on each node. For example::

      ceph-deploy install --release {release-name} ceph-node1
      ceph-deploy install --release hammer mds1

   To upgrade packages manually, perform the following steps on each
   Debian/Ubuntu host::

      ssh {mds-host}
      sudo apt-get update && sudo apt-get install ceph-mds

   Or the following steps on CentOS/Red Hat hosts::

      ssh {mds-host}
      sudo yum update && sudo yum install ceph-mds


#. Restart the metadata server. For Ubuntu, use::

      sudo systemctl restart ceph-mds@{hostname}.service

   For CentOS/Red Hat/Debian distributions, use::

      sudo /etc/init.d/ceph restart mds.{hostname}

   For clusters deployed with ``ceph-deploy``, the name is usually either
   the name you specified on creation or the hostname.

#. Ensure the metadata server is up and running::

      ceph mds stat
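
   On recent releases, ``ceph fs status`` gives a more detailed view of each
   file system and its active metadata servers, which can be a useful
   cross-check after the restart::

      ceph fs status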


Upgrading a Client
------------------

Once you have upgraded the packages and restarted daemons on your Ceph
cluster, we recommend upgrading ``ceph-common`` and client libraries
(``librbd1`` and ``librados2``) on your client nodes too.

#. Upgrade the package::

      ssh {client-host}
      sudo apt-get update && sudo apt-get install ceph-common librados2 librbd1 python-rados python-rbd

#. Ensure that you have the latest version::

      ceph --version

   If you do not have the latest version, you may need to uninstall,
   autoremove dependencies, and reinstall.
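
   On Debian/Ubuntu hosts, for example, that recovery sequence might look like
   this (using the package names listed above; adjust for your distribution)::

      sudo apt-get purge ceph-common librados2 librbd1
      sudo apt-get autoremove
      sudo apt-get update && sudo apt-get install ceph-common librados2 librbd1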


.. _using your distro's package manager: ../install-storage-cluster/
.. _Operating a Cluster: ../../rados/operations/operating
.. _Monitoring a Cluster: ../../rados/operations/monitoring
.. _release notes document of your release: ../../releases