============================
Deploying a new Ceph cluster
============================

Cephadm creates a new Ceph cluster by "bootstrapping" on a single
host, expanding the cluster to encompass any additional hosts, and
then deploying the needed services.

.. highlight:: console

Requirements
============

- Systemd
- Podman or Docker for running containers
- Time synchronization (such as chrony or NTP)
- LVM2 for provisioning storage devices

Any modern Linux distribution should be sufficient. Dependencies
are installed automatically by the bootstrap process below.

.. _get-cephadm:

Install cephadm
===============

The ``cephadm`` command can (1) bootstrap a new cluster, (2)
launch a containerized shell with a working Ceph CLI, and (3) aid in
debugging containerized Ceph daemons.

There are a few ways to install cephadm:

* Use ``curl`` to fetch the most recent version of the
  standalone script::

    # curl --silent --remote-name --location https://github.com/ceph/ceph/raw/octopus/src/cephadm/cephadm
    # chmod +x cephadm

  This script can be run directly from the current directory with::

    # ./cephadm <arguments...>

* Although the standalone script is sufficient to get a cluster started, it is
  convenient to have the ``cephadm`` command installed on the host. To install
  the packages that provide the ``cephadm`` command for the current Octopus
  release::

    # ./cephadm add-repo --release octopus
    # ./cephadm install

  Confirm that ``cephadm`` is now in your PATH with::

    # which cephadm

* Some commercial Linux distributions (e.g., RHEL, SLE) may already
  include up-to-date Ceph packages. In that case, you can install
  cephadm directly. For example::

    # dnf install -y cephadm     # or
    # zypper install -y cephadm


Bootstrap a new cluster
=======================

You need to know which *IP address* to use for the cluster's first
monitor daemon. This is normally just the IP for the first host. If there
are multiple networks and interfaces, be sure to choose one that will
be accessible by any host accessing the Ceph cluster.

To bootstrap the cluster::

  # mkdir -p /etc/ceph
  # cephadm bootstrap --mon-ip *<mon-ip>*

This command will:

* Create a monitor and manager daemon for the new cluster on the local
  host.
* Generate a new SSH key for the Ceph cluster and add it to the root
  user's ``/root/.ssh/authorized_keys`` file.
* Write a minimal configuration file needed to communicate with the
  new cluster to ``/etc/ceph/ceph.conf``.
* Write a copy of the ``client.admin`` administrative (privileged!)
  secret key to ``/etc/ceph/ceph.client.admin.keyring``.
* Write a copy of the public key to
  ``/etc/ceph/ceph.pub``.

The default bootstrap behavior will work for the vast majority of
users. See below for a few options that may be useful for some users,
or run ``cephadm bootstrap -h`` to see all available options:

* Bootstrap writes the files needed to access the new cluster to
  ``/etc/ceph`` for convenience, so that any Ceph packages installed
  on the host itself (e.g., to access the command line interface) can
  easily find them.

  Daemon containers deployed with cephadm, however, do not need
  ``/etc/ceph`` at all. Use the ``--output-dir *<directory>*`` option
  to put them in a different directory (like ``.``), avoiding any
  potential conflicts with existing Ceph configuration (cephadm or
  otherwise) on the same host.

* You can pass any initial Ceph configuration options to the new
  cluster by putting them in a standard ini-style configuration file
  and using the ``--config *<config-file>*`` option.

* You can choose the SSH user cephadm will use to connect to hosts by
  using the ``--ssh-user *<user>*`` option. The SSH key will be added
  to ``/home/*<user>*/.ssh/authorized_keys``. This user will require
  passwordless sudo access.

* If you are using a container image from an authenticated registry that
  requires login, you may add the three arguments ``--registry-url <url of
  registry>``, ``--registry-username <username of account on registry>``, and
  ``--registry-password <password of account on registry>``, OR the single
  argument ``--registry-json <json file with login info>``. Cephadm will
  attempt to log in to this registry so it can pull your container image, and
  will then store the login info in its config database so that other hosts
  added to the cluster may also make use of the authenticated registry. A
  sketch combining several of these options follows this list.
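
For illustration, here is one way several of these options might be combined
at bootstrap time. This is a sketch, not a prescription: the configuration
option, SSH user name, registry hostname, and credentials below are all
placeholder values::

  # cat initial-ceph.conf
  [global]
  osd_pool_default_size = 2

  # cat registry.json
  {"url": "registry.example.com", "username": "myregistryuser", "password": "myregistrypass"}

  # cephadm bootstrap --mon-ip *<mon-ip>* \
        --config initial-ceph.conf \
        --ssh-user cephadmin \
        --registry-json registry.json
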
Enable Ceph CLI
===============

Cephadm does not require any Ceph packages to be installed on the
host. However, we recommend enabling easy access to the ``ceph``
command. There are several ways to do this:

* The ``cephadm shell`` command launches a bash shell in a container
  with all of the Ceph packages installed. By default, if
  configuration and keyring files are found in ``/etc/ceph`` on the
  host, they are passed into the container environment so that the
  shell is fully functional. Note that when executed on a MON host,
  ``cephadm shell`` will infer the ``config`` from the MON container
  instead of using the default configuration. If ``--mount <path>``
  is given, then the host ``<path>`` (file or directory) will appear
  under ``/mnt`` inside the container (see the sketch after this
  list)::

    # cephadm shell

* To execute ``ceph`` commands, you can also run them like so::

    # cephadm shell -- ceph -s

* You can install the ``ceph-common`` package, which contains all of the
  ceph commands, including ``ceph``, ``rbd``, ``mount.ceph`` (for mounting
  CephFS file systems), etc.::

    # cephadm add-repo --release octopus
    # cephadm install ceph-common

  Confirm that the ``ceph`` command is accessible with::

    # ceph -v

  Confirm that the ``ceph`` command can connect to the cluster and report
  its status with::

    # ceph status
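
As mentioned above, ``cephadm shell`` can make host files visible inside
the container. A minimal sketch, assuming a placeholder host directory
``/root/specs`` that holds service specification files::

  # cephadm shell --mount /root/specs

Inside the container, the directory's contents appear under ``/mnt``.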

Add hosts to the cluster
========================

To add each new host to the cluster, perform two steps:

#. Install the cluster's public SSH key in the new host's root user's
   ``authorized_keys`` file::

     # ssh-copy-id -f -i /etc/ceph/ceph.pub root@*<new-host>*

   For example::

     # ssh-copy-id -f -i /etc/ceph/ceph.pub root@host2
     # ssh-copy-id -f -i /etc/ceph/ceph.pub root@host3

#. Tell Ceph that the new node is part of the cluster::

     # ceph orch host add *newhost*

   For example::

     # ceph orch host add host2
     # ceph orch host add host3

.. _deploy_additional_monitors:

Deploy additional monitors (optional)
=====================================

A typical Ceph cluster has three or five monitor daemons spread
across different hosts. We recommend deploying five
monitors if there are five or more nodes in your cluster.

.. _CIDR: https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing#CIDR_notation

When Ceph knows what IP subnet the monitors should use, it can
automatically deploy and scale monitors as the cluster grows (or
contracts). By default, Ceph assumes that other monitors should use
the same subnet as the first monitor's IP.

If your Ceph monitors (or the entire cluster) live on a single subnet,
then by default cephadm automatically adds up to 5 monitors as you add new
hosts to the cluster. No further steps are necessary.

* If there is a specific IP subnet that should be used by monitors, you
  can configure that in `CIDR`_ format (e.g., ``10.1.2.0/24``) with::

    # ceph config set mon public_network *<mon-cidr-network>*

  For example::

    # ceph config set mon public_network 10.1.2.0/24

  Cephadm deploys new monitor daemons only on hosts that have IPs in
  the configured subnet.

* If you want to adjust the default of 5 monitors::

    # ceph orch apply mon *<number-of-monitors>*

* To deploy monitors on a specific set of hosts::

    # ceph orch apply mon *<host1,host2,host3,...>*

  Be sure to include the first (bootstrap) host in this list.

* You can control which hosts the monitors run on by making use of
  host labels. To set the ``mon`` label to the appropriate
  hosts::

    # ceph orch host label add *<hostname>* mon

  To view the current hosts and labels::

    # ceph orch host ls

  For example::

    # ceph orch host label add host1 mon
    # ceph orch host label add host2 mon
    # ceph orch host label add host3 mon
    # ceph orch host ls
    HOST   ADDR   LABELS  STATUS
    host1         mon
    host2         mon
    host3         mon
    host4
    host5

  Tell cephadm to deploy monitors based on the label::

    # ceph orch apply mon label:mon

* You can explicitly specify the IP address or CIDR network for each monitor
  and control where it is placed. To disable automated monitor deployment::

    # ceph orch apply mon --unmanaged

  To deploy each additional monitor::

    # ceph orch daemon add mon *<host1:ip-or-network1> [<host1:ip-or-network-2>...]*

  For example, to deploy a second monitor on ``newhost1`` using an IP
  address ``10.1.2.123`` and a third monitor on ``newhost2`` in
  network ``10.1.2.0/24``::

    # ceph orch apply mon --unmanaged
    # ceph orch daemon add mon newhost1:10.1.2.123
    # ceph orch daemon add mon newhost2:10.1.2.0/24

.. note::
   The **apply** command can be confusing. For this reason, we recommend using
   YAML specifications.

   Each ``ceph orch apply mon`` command supersedes the one before it.
   This means that you must use the proper comma-separated list-based
   syntax when you want to apply monitors to more than one host.
   If you do not use the proper syntax, you will clobber your work
   as you go.

   For example::

     # ceph orch apply mon host1
     # ceph orch apply mon host2
     # ceph orch apply mon host3

   This results in only one host having a monitor applied to it: host3.

   (The first command creates a monitor on host1. Then the second command
   clobbers the monitor on host1 and creates a monitor on host2. Then the
   third command clobbers the monitor on host2 and creates a monitor on
   host3. In this scenario, at this point, there is a monitor ONLY on
   host3.)

   To make certain that a monitor is applied to each of these three hosts,
   run a command like this::

     # ceph orch apply mon "host1,host2,host3"

   Instead of using the ``ceph orch apply mon`` commands, you can apply a
   YAML specification with a command like this::

     # ceph orch apply -i file.yaml

   Here is a sample **file.yaml** file::

     service_type: mon
     placement:
       hosts:
         - host1
         - host2
         - host3


Deploy OSDs
===========

An inventory of storage devices on all cluster hosts can be displayed with::

  # ceph orch device ls

A storage device is considered *available* if all of the following
conditions are met:

* The device must have no partitions.
* The device must not have any LVM state.
* The device must not be mounted.
* The device must not contain a file system.
* The device must not contain a Ceph BlueStore OSD.
* The device must be larger than 5 GB.

Ceph refuses to provision an OSD on a device that is not available.

There are a few ways to create new OSDs:

* Tell Ceph to consume any available and unused storage device::

    # ceph orch apply osd --all-available-devices

* Create an OSD from a specific device on a specific host::

    # ceph orch daemon add osd *<host>*:*<device-path>*

  For example::

    # ceph orch daemon add osd host1:/dev/sdb

* Use :ref:`drivegroups` to describe device(s) to consume
  based on their properties, such as device type (SSD or HDD), device
  model names, size, or the hosts on which the devices exist, and apply
  the resulting specification (see the sketch below)::

    # ceph orch apply osd -i spec.yml
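
A minimal sketch of such a ``spec.yml``; the service id and the rotational
filter here are illustrative assumptions, and :ref:`drivegroups` describes
the full set of filters::

  service_type: osd
  service_id: default_drive_group   # hypothetical name
  placement:
    host_pattern: '*'               # consider every host
  data_devices:
    rotational: 1                   # use spinning disks for data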

Deploy MDSs
===========

One or more MDS daemons are required to use the CephFS file system.
These are created automatically if the newer ``ceph fs volume``
interface is used to create a new file system. For more information,
see :ref:`fs-volumes-and-subvolumes`.

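For example, creating a volume with that interface (using a hypothetical
file system name *cephfs*) deploys the required MDS daemons
automatically::

  # ceph fs volume create cephfs
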
To deploy metadata servers::

  # ceph orch apply mds *<fs-name>* --placement="*<num-daemons>* [*<host1>* ...]"

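For example, to deploy three MDS daemons for a (hypothetical) file system
*cephfs* on three named hosts::

  # ceph orch apply mds cephfs --placement="3 host1 host2 host3"
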
See :ref:`orchestrator-cli-placement-spec` for details of the placement specification.

Deploy RGWs
===========

Cephadm deploys radosgw as a collection of daemons that manage a
particular *realm* and *zone*. (For more information about realms and
zones, see :ref:`multisite`.)

Note that with cephadm, radosgw daemons are configured via the monitor
configuration database instead of via a ``ceph.conf`` or the command line. If
that configuration isn't already in place (usually in the
``client.rgw.<realmname>.<zonename>`` section), then the radosgw
daemons will start up with default settings (e.g., binding to port
80).

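For example, such a setting can be stored in that section of the
configuration database before deployment. A sketch, assuming the *myorg*
realm and *us-east-1* zone used below and a nondefault port of 8080::

  # ceph config set client.rgw.myorg.us-east-1 rgw_frontends "beast port=8080"
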
To deploy a set of radosgw daemons for a particular realm and zone::

  # ceph orch apply rgw *<realm-name>* *<zone-name>* --placement="*<num-daemons>* [*<host1>* ...]"

For example, to deploy 2 rgw daemons serving the *myorg* realm and the *us-east-1*
zone on *myhost1* and *myhost2*::

  # ceph orch apply rgw myorg us-east-1 --placement="2 myhost1 myhost2"

Cephadm will wait for a healthy cluster and will automatically create the
supplied realm and zone if they do not exist before deploying the rgw
daemon(s).

Alternatively, the realm, zonegroup, and zone can be manually created using
``radosgw-admin`` commands::

  # radosgw-admin realm create --rgw-realm=<realm-name> --default

  # radosgw-admin zonegroup create --rgw-zonegroup=<zonegroup-name> --master --default

  # radosgw-admin zone create --rgw-zonegroup=<zonegroup-name> --rgw-zone=<zone-name> --master --default

  # radosgw-admin period update --rgw-realm=<realm-name> --commit

See :ref:`orchestrator-cli-placement-spec` for details of the placement specification.


.. _deploy-cephadm-nfs-ganesha:

Deploying NFS Ganesha
=====================

Cephadm deploys NFS Ganesha using a pre-defined RADOS *pool* and an
optional *namespace*.

To deploy an NFS Ganesha gateway::

  # ceph orch apply nfs *<svc_id>* *<pool>* *<namespace>* --placement="*<num-daemons>* [*<host1>* ...]"

For example, to deploy NFS with a service id of *foo* that uses the
RADOS pool *nfs-ganesha* and namespace *nfs-ns*::

  # ceph orch apply nfs foo nfs-ganesha nfs-ns

.. note::
   Create the *nfs-ganesha* pool first if it doesn't exist.

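For example, the pool might be created like so (a sketch; with the PG
autoscaler enabled, no explicit PG count is needed)::

  # ceph osd pool create nfs-ganesha
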
See :ref:`orchestrator-cli-placement-spec` for details of the placement specification.

Deploying custom containers
===========================

It is also possible to choose different containers than the default
containers to deploy Ceph. See :ref:`containers` for information about
your options in this regard.
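
For example, the container image used at bootstrap can be overridden with
the global ``--image`` option. A sketch, with a placeholder registry and
image tag::

  # cephadm --image registry.example.com/ceph/ceph:v15 bootstrap --mon-ip *<mon-ip>*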