============================
Deploying a new Ceph cluster
============================

Cephadm creates a new Ceph cluster by "bootstrapping" on a single
host, expanding the cluster to encompass any additional hosts, and
then deploying the needed services.

.. highlight:: console

Requirements
============

- Systemd
- Podman or Docker for running containers
- Time synchronization (such as chrony or NTP)
- LVM2 for provisioning storage devices

Any modern Linux distribution should be sufficient. Dependencies
are installed automatically by the bootstrap process below.

.. _get-cephadm:

Install cephadm
===============

The ``cephadm`` command can (1) bootstrap a new cluster, (2)
launch a containerized shell with a working Ceph CLI, and (3) aid in
debugging containerized Ceph daemons.

There are a few ways to install cephadm:

* Use ``curl`` to fetch the most recent version of the
  standalone script::

    # curl --silent --remote-name --location https://github.com/ceph/ceph/raw/octopus/src/cephadm/cephadm
    # chmod +x cephadm

  This script can be run directly from the current directory with::

    # ./cephadm <arguments...>

* Although the standalone script is sufficient to get a cluster started, it is
  convenient to have the ``cephadm`` command installed on the host. To install
  the packages that provide the ``cephadm`` command for the current Octopus
  release::

    # ./cephadm add-repo --release octopus
    # ./cephadm install

  Confirm that ``cephadm`` is now in your PATH with::

    # which cephadm

* Some commercial Linux distributions (e.g., RHEL, SLE) may already
  include up-to-date Ceph packages. In that case, you can install
  cephadm directly. For example::

    # dnf install -y cephadm     # or
    # zypper install -y cephadm


Bootstrap a new cluster
=======================

You need to know which *IP address* to use for the cluster's first
monitor daemon. This is normally just the IP for the first host. If there
are multiple networks and interfaces, be sure to choose one that will
be accessible by any host accessing the Ceph cluster.

To bootstrap the cluster::

  # mkdir -p /etc/ceph
  # cephadm bootstrap --mon-ip *<mon-ip>*

This command will:

* Create a monitor and manager daemon for the new cluster on the local
  host.
* Generate a new SSH key for the Ceph cluster and add it to the root
  user's ``/root/.ssh/authorized_keys`` file.
* Write a minimal configuration file needed to communicate with the
  new cluster to ``/etc/ceph/ceph.conf``.
* Write a copy of the ``client.admin`` administrative (privileged!)
  secret key to ``/etc/ceph/ceph.client.admin.keyring``.
* Write a copy of the public key to
  ``/etc/ceph/ceph.pub``.

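For illustration only, the minimal ``/etc/ceph/ceph.conf`` written by
bootstrap typically contains little more than the cluster ``fsid`` and
the monitor address; the values below are placeholders and will differ
on your cluster::

  [global]
  # placeholder values; your fsid and monitor IP will differ
  fsid = a7f64266-0894-4f1e-a635-d0aeaca0e993
  mon_host = [v2:10.1.2.10:3300/0,v1:10.1.2.10:6789/0]
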
The default bootstrap behavior will work for the vast majority of
users. See below for a few options that may be useful for some users,
or run ``cephadm bootstrap -h`` to see all available options:

* Bootstrap writes the files needed to access the new cluster to
  ``/etc/ceph`` for convenience, so that any Ceph packages installed
  on the host itself (e.g., to access the command line interface) can
  easily find them.

  Daemon containers deployed with cephadm, however, do not need
  ``/etc/ceph`` at all. Use the ``--output-dir *<directory>*`` option
  to put them in a different directory (like ``.``), avoiding any
  potential conflicts with existing Ceph configuration (cephadm or
  otherwise) on the same host.

* You can pass any initial Ceph configuration options to the new
  cluster by putting them in a standard ini-style configuration file
  and using the ``--config *<config-file>*`` option.
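
  For example, such a configuration file might look like the following
  sketch (the option shown is just an illustrative Ceph setting, not a
  required one)::

    [global]
    # start with a smaller default replica count, for example
    osd pool default size = 2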


Enable Ceph CLI
===============

Cephadm does not require any Ceph packages to be installed on the
host. However, we recommend enabling easy access to the ``ceph``
command. There are several ways to do this:

* The ``cephadm shell`` command launches a bash shell in a container
  with all of the Ceph packages installed. By default, if
  configuration and keyring files are found in ``/etc/ceph`` on the
  host, they are passed into the container environment so that the
  shell is fully functional::

    # cephadm shell

* It may be helpful to create an alias::

    # alias ceph='cephadm shell -- ceph'
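
  With the alias in place, ``ceph`` commands typed on the host run
  inside a temporary container, e.g.::

    # ceph status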

* You can install the ``ceph-common`` package, which contains all of the
  ceph commands, including ``ceph``, ``rbd``, ``mount.ceph`` (for mounting
  CephFS file systems), etc.::

    # cephadm add-repo --release octopus
    # cephadm install ceph-common

  Confirm that the ``ceph`` command is accessible with::

    # ceph -v

  Confirm that the ``ceph`` command can connect to the cluster and report
  its status with::

    # ceph status


Add hosts to the cluster
========================

To add each new host to the cluster, perform two steps:

#. Install the cluster's public SSH key in the new host's root user's
   ``authorized_keys`` file::

     # ssh-copy-id -f -i /etc/ceph/ceph.pub root@*<new-host>*

   For example::

     # ssh-copy-id -f -i /etc/ceph/ceph.pub root@host2
     # ssh-copy-id -f -i /etc/ceph/ceph.pub root@host3

#. Tell Ceph that the new node is part of the cluster::

     # ceph orch host add *newhost*

   For example::

     # ceph orch host add host2
     # ceph orch host add host3


Deploy additional monitors (optional)
=====================================

A typical Ceph cluster has three or five monitor daemons spread
across different hosts. We recommend deploying five
monitors if there are five or more nodes in your cluster.

.. _CIDR: https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing#CIDR_notation

When Ceph knows what IP subnet the monitors should use, it can automatically
deploy and scale monitors as the cluster grows (or contracts). By default,
Ceph assumes that other monitors should use the same subnet as the first
monitor's IP.

If your Ceph monitors (or the entire cluster) live on a single subnet,
then by default cephadm automatically adds up to 5 monitors as you add new
hosts to the cluster. No further steps are necessary.

* If there is a specific IP subnet that should be used by monitors, you
  can configure that in `CIDR`_ format (e.g., ``10.1.2.0/24``) with::

    # ceph config set mon public_network *<mon-cidr-network>*

  For example::

    # ceph config set mon public_network 10.1.2.0/24

  Cephadm deploys new monitor daemons only on hosts that have IPs in
  the configured subnet.

* If you want to adjust the default of 5 monitors::

    # ceph orch apply mon *<number-of-monitors>*

* To deploy monitors on a specific set of hosts::

    # ceph orch apply mon *<host1,host2,host3,...>*

  Be sure to include the first (bootstrap) host in this list.

* You can control which hosts the monitors run on by making use of
  host labels. To set the ``mon`` label to the appropriate
  hosts::

    # ceph orch host label add *<hostname>* mon

  To view the current hosts and labels::

    # ceph orch host ls

  For example::

    # ceph orch host label add host1 mon
    # ceph orch host label add host2 mon
    # ceph orch host label add host3 mon
    # ceph orch host ls
    HOST   ADDR   LABELS  STATUS
    host1  host1  mon
    host2  host2  mon
    host3  host3  mon
    host4  host4
    host5  host5

  Tell cephadm to deploy monitors based on the label::

    # ceph orch apply mon label:mon

* You can explicitly specify the IP address or CIDR network for each monitor
  and control where it is placed. To disable automated monitor deployment::

    # ceph orch apply mon --unmanaged

  To deploy each additional monitor::

    # ceph orch daemon add mon *<host1:ip-or-network1> [<host1:ip-or-network-2>...]*

  For example, to deploy a second monitor on ``newhost1`` using an IP
  address ``10.1.2.123`` and a third monitor on ``newhost2`` in
  network ``10.1.2.0/24``::

    # ceph orch apply mon --unmanaged
    # ceph orch daemon add mon newhost1:10.1.2.123
    # ceph orch daemon add mon newhost2:10.1.2.0/24


Deploy OSDs
===========

An inventory of storage devices on all cluster hosts can be displayed with::

  # ceph orch device ls

A storage device is considered *available* if all of the following
conditions are met:

* The device must have no partitions.
* The device must not have any LVM state.
* The device must not be mounted.
* The device must not contain a file system.
* The device must not contain a Ceph BlueStore OSD.
* The device must be larger than 5 GB.

Ceph refuses to provision an OSD on a device that is not available.

There are a few ways to create new OSDs:

* Tell Ceph to consume any available and unused storage device::

    # ceph orch apply osd --all-available-devices

* Create an OSD from a specific device on a specific host::

    # ceph orch daemon add osd *<host>*:*<device-path>*

  For example::

    # ceph orch daemon add osd host1:/dev/sdb

* Use :ref:`drivegroups` to describe device(s) to consume
  based on their properties, such as device type (SSD or HDD), device
  model names, size, or the hosts on which the devices exist::

    # ceph orch osd create -i spec.yml

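  As a rough sketch of what such a ``spec.yml`` might contain, the
  following (hypothetical) drive group places OSD data on all rotational
  devices and the BlueStore DB on non-rotational devices on every host;
  the ``service_id`` is an arbitrary name::

    service_type: osd
    service_id: example_drive_group   # arbitrary name for this drive group
    placement:
      host_pattern: '*'               # match every host in the cluster
    data_devices:
      rotational: 1                   # HDDs receive the data
    db_devices:
      rotational: 0                   # SSDs receive the BlueStore DB
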

Deploy MDSs
===========

One or more MDS daemons are required to use the CephFS file system.
These are created automatically if the newer ``ceph fs volume``
interface is used to create a new file system. For more information,
see :ref:`fs-volumes-and-subvolumes`.

To deploy metadata servers::

  # ceph orch apply mds *<fs-name>* *<num-daemons>* [*<host1>* ...]

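For example, to deploy two MDS daemons for a (hypothetical) file system
named ``cephfs`` on ``host1`` and ``host2``::

  # ceph orch apply mds cephfs 2 host1 host2
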
Deploy RGWs
===========

Cephadm deploys radosgw as a collection of daemons that manage a
particular *realm* and *zone*. (For more information about realms and
zones, see :ref:`multisite`.)

Note that with cephadm, radosgw daemons are configured via the monitor
configuration database instead of via a ``ceph.conf`` or the command line. If
that configuration isn't already in place (usually in the
``client.rgw.<realmname>.<zonename>`` section), then the radosgw
daemons will start up with default settings (e.g., binding to port
80).

If a realm has not been created yet, first create a realm::

  # radosgw-admin realm create --rgw-realm=<realm-name> --default

Next create a new zonegroup::

  # radosgw-admin zonegroup create --rgw-zonegroup=<zonegroup-name> --master --default

Next create a zone::

  # radosgw-admin zone create --rgw-zonegroup=<zonegroup-name> --rgw-zone=<zone-name> --master --default

To deploy a set of radosgw daemons for a particular realm and zone::

  # ceph orch apply rgw *<realm-name>* *<zone-name>* *<num-daemons>* [*<host1>* ...]

For example, to deploy 2 rgw daemons serving the *myorg* realm and the *us-east-1*
zone on *myhost1* and *myhost2*::

  # radosgw-admin realm create --rgw-realm=myorg --default
  # radosgw-admin zonegroup create --rgw-zonegroup=default --master --default
  # radosgw-admin zone create --rgw-zonegroup=default --rgw-zone=us-east-1 --master --default
  # ceph orch apply rgw myorg us-east-1 2 myhost1 myhost2