1 ***********
2 OSD Service
3 ***********
4 .. _device management: ../rados/operations/devices
5 .. _libstoragemgmt: https://github.com/libstorage/libstoragemgmt
6
7 List Devices
8 ============
9
``ceph-volume`` scans each host in the cluster from time to time in order
to determine which devices are present and whether they are eligible to be
used as OSDs.
13
14 To print a list of devices discovered by ``cephadm``, run this command:
15
16 .. prompt:: bash #
17
18 ceph orch device ls [--hostname=...] [--wide] [--refresh]
19
20 Example
21 ::
22
23 Hostname Path Type Serial Size Health Ident Fault Available
24 srv-01 /dev/sdb hdd 15P0A0YFFRD6 300G Unknown N/A N/A No
25 srv-01 /dev/sdc hdd 15R0A08WFRD6 300G Unknown N/A N/A No
26 srv-01 /dev/sdd hdd 15R0A07DFRD6 300G Unknown N/A N/A No
27 srv-01 /dev/sde hdd 15P0A0QDFRD6 300G Unknown N/A N/A No
28 srv-02 /dev/sdb hdd 15R0A033FRD6 300G Unknown N/A N/A No
29 srv-02 /dev/sdc hdd 15R0A05XFRD6 300G Unknown N/A N/A No
30 srv-02 /dev/sde hdd 15R0A0ANFRD6 300G Unknown N/A N/A No
31 srv-02 /dev/sdf hdd 15R0A06EFRD6 300G Unknown N/A N/A No
32 srv-03 /dev/sdb hdd 15R0A0OGFRD6 300G Unknown N/A N/A No
33 srv-03 /dev/sdc hdd 15R0A0P7FRD6 300G Unknown N/A N/A No
34 srv-03 /dev/sdd hdd 15R0A0O7FRD6 300G Unknown N/A N/A No
35
36 Using the ``--wide`` option provides all details relating to the device,
37 including any reasons that the device might not be eligible for use as an OSD.
38
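For example, to trigger a fresh inventory scan and include the extended detail
columns, the options shown above can be combined (an illustrative invocation):

.. prompt:: bash #

   ceph orch device ls --wide --refresh
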
39 In the above example you can see fields named "Health", "Ident", and "Fault".
40 This information is provided by integration with `libstoragemgmt`_. By default,
41 this integration is disabled (because `libstoragemgmt`_ may not be 100%
42 compatible with your hardware). To make ``cephadm`` include these fields,
enable cephadm's "enhanced device scan" option as follows:
44
45 .. prompt:: bash #
46
47 ceph config set mgr mgr/cephadm/device_enhanced_scan true
48
49 .. warning::
50 Although the libstoragemgmt library performs standard SCSI inquiry calls,
51 there is no guarantee that your firmware fully implements these standards.
52 This can lead to erratic behaviour and even bus resets on some older
53 hardware. It is therefore recommended that, before enabling this feature,
54 you test your hardware's compatibility with libstoragemgmt first to avoid
55 unplanned interruptions to services.
56
57 There are a number of ways to test compatibility, but the simplest may be
58 to use the cephadm shell to call libstoragemgmt directly - ``cephadm shell
59 lsmcli ldl``. If your hardware is supported you should see something like
60 this:
61
62 ::
63
64 Path | SCSI VPD 0x83 | Link Type | Serial Number | Health Status
65 ----------------------------------------------------------------------------
66 /dev/sda | 50000396082ba631 | SAS | 15P0A0R0FRD6 | Good
67 /dev/sdb | 50000396082bbbf9 | SAS | 15P0A0YFFRD6 | Good
68
69
70 After you have enabled libstoragemgmt support, the output will look something
71 like this:
72
73 ::
74
75 # ceph orch device ls
76 Hostname Path Type Serial Size Health Ident Fault Available
77 srv-01 /dev/sdb hdd 15P0A0YFFRD6 300G Good Off Off No
78 srv-01 /dev/sdc hdd 15R0A08WFRD6 300G Good Off Off No
79 :
80
81 In this example, libstoragemgmt has confirmed the health of the drives and the ability to
82 interact with the Identification and Fault LEDs on the drive enclosures. For further
83 information about interacting with these LEDs, refer to `device management`_.
84
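Once these fields are populated, the enclosure LEDs can be controlled with the
``ceph device`` commands described in `device management`_. For example (the
device id below is only illustrative; list the real ids with ``ceph device ls``):

.. prompt:: bash #

   ceph device light on VendorA_Model_15P0A0YFFRD6 ident
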
85 .. note::
   The current release of `libstoragemgmt`_ (1.8.8) supports SCSI, SAS, and SATA based
   local disks only. There is no official support for NVMe devices (PCIe).
88
89 .. _cephadm-deploy-osds:
90
91 Deploy OSDs
92 ===========
93
94 Listing Storage Devices
95 -----------------------
96
97 In order to deploy an OSD, there must be a storage device that is *available* on
98 which the OSD will be deployed.
99
100 Run this command to display an inventory of storage devices on all cluster hosts:
101
102 .. prompt:: bash #
103
104 ceph orch device ls
105
106 A storage device is considered *available* if all of the following
107 conditions are met:
108
109 * The device must have no partitions.
110 * The device must not have any LVM state.
111 * The device must not be mounted.
112 * The device must not contain a file system.
113 * The device must not contain a Ceph BlueStore OSD.
114 * The device must be larger than 5 GB.
115
116 Ceph will not provision an OSD on a device that is not available.
117
118 Creating New OSDs
119 -----------------
120
121 There are a few ways to create new OSDs:
122
123 * Tell Ceph to consume any available and unused storage device:
124
125 .. prompt:: bash #
126
127 ceph orch apply osd --all-available-devices
128
129 * Create an OSD from a specific device on a specific host:
130
131 .. prompt:: bash #
132
133 ceph orch daemon add osd *<host>*:*<device-path>*
134
135 For example:
136
137 .. prompt:: bash #
138
139 ceph orch daemon add osd host1:/dev/sdb
140
141 * You can use :ref:`drivegroups` to categorize device(s) based on their
142 properties. This might be useful in forming a clearer picture of which
143 devices are available to consume. Properties include device type (SSD or
144 HDD), device model names, size, and the hosts on which the devices exist:
145
146 .. prompt:: bash #
147
148 ceph orch apply -i spec.yml
149
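For that last approach, a minimal ``spec.yml`` might look like the following
sketch (the service id is arbitrary and the filters are only an example; see
:ref:`drivegroups` for the full syntax):

.. code-block:: yaml

   service_type: osd
   service_id: example_drive_group    # illustrative name
   placement:
     host_pattern: '*'
   data_devices:
     rotational: 1
   db_devices:
     rotational: 0
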
150 Dry Run
151 -------
152
153 The ``--dry-run`` flag causes the orchestrator to present a preview of what
154 will happen without actually creating the OSDs.
155
156 For example:
157
158 .. prompt:: bash #
159
160 ceph orch apply osd --all-available-devices --dry-run
161
162 ::
163
164 NAME HOST DATA DB WAL
165 all-available-devices node1 /dev/vdb - -
166 all-available-devices node2 /dev/vdc - -
167 all-available-devices node3 /dev/vdd - -
168
169 .. _cephadm-osd-declarative:
170
171 Declarative State
172 -----------------
173
174 The effect of ``ceph orch apply`` is persistent. This means that drives that
175 are added to the system after the ``ceph orch apply`` command completes will be
176 automatically found and added to the cluster. It also means that drives that
177 become available (by zapping, for example) after the ``ceph orch apply``
178 command completes will be automatically found and added to the cluster.
179
180 We will examine the effects of the following command:
181
182 .. prompt:: bash #
183
184 ceph orch apply osd --all-available-devices
185
186 After running the above command:
187
188 * If you add new disks to the cluster, they will automatically be used to
189 create new OSDs.
190 * If you remove an OSD and clean the LVM physical volume, a new OSD will be
191 created automatically.
192
If you want to avoid this behavior (that is, if you want to disable the
automatic creation of OSDs on available devices), use the ``unmanaged``
parameter:
197
198 .. prompt:: bash #
199
200 ceph orch apply osd --all-available-devices --unmanaged=true
201
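To hand management of available devices back to cephadm later, re-apply the
service with the flag reversed (this mirrors the command above and simply
flips the boolean):

.. prompt:: bash #

   ceph orch apply osd --all-available-devices --unmanaged=false
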
202 .. note::
203
204 Keep these three facts in mind:
205
206 - The default behavior of ``ceph orch apply`` causes cephadm constantly to reconcile. This means that cephadm creates OSDs as soon as new drives are detected.
207
208 - Setting ``unmanaged: True`` disables the creation of OSDs. If ``unmanaged: True`` is set, nothing will happen even if you apply a new OSD service.
209
210 - ``ceph orch daemon add`` creates OSDs, but does not add an OSD service.
211
212 * For cephadm, see also :ref:`cephadm-spec-unmanaged`.
213
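If you manage OSD services through spec files, the same behavior can be
expressed declaratively by setting ``unmanaged`` in the specification (a
sketch; the service id and filter are illustrative):

.. code-block:: yaml

   service_type: osd
   service_id: osd_manual_only    # illustrative name
   unmanaged: true
   placement:
     host_pattern: '*'
   data_devices:
     all: true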
214
215 Remove an OSD
216 =============
217
218 Removing an OSD from a cluster involves two steps:
219
#. evacuating all placement groups (PGs) from the OSD
221 #. removing the PG-free OSD from the cluster
222
223 The following command performs these two steps:
224
225 .. prompt:: bash #
226
227 ceph orch osd rm <osd_id(s)> [--replace] [--force]
228
229 Example:
230
231 .. prompt:: bash #
232
233 ceph orch osd rm 0
234
235 Expected output::
236
237 Scheduled OSD(s) for removal
238
239 OSDs that are not safe to destroy will be rejected.
240
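To check in advance whether a particular OSD can be taken out safely, you can
ask the cluster directly (the id is illustrative):

.. prompt:: bash #

   ceph osd safe-to-destroy osd.0
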
241 Monitoring OSD State
242 --------------------
243
244 You can query the state of OSD operation with the following command:
245
246 .. prompt:: bash #
247
248 ceph orch osd rm status
249
250 Expected output::
251
252 OSD_ID HOST STATE PG_COUNT REPLACE FORCE STARTED_AT
253 2 cephadm-dev done, waiting for purge 0 True False 2020-07-17 13:01:43.147684
254 3 cephadm-dev draining 17 False True 2020-07-17 13:01:45.162158
255 4 cephadm-dev started 42 False True 2020-07-17 13:01:45.162158
256
257
258 When no PGs are left on the OSD, it will be decommissioned and removed from the cluster.
259
260 .. note::
261 After removing an OSD, if you wipe the LVM physical volume in the device used by the removed OSD, a new OSD will be created.
262 For more information on this, read about the ``unmanaged`` parameter in :ref:`cephadm-osd-declarative`.
263
264 Stopping OSD Removal
265 --------------------
266
267 It is possible to stop queued OSD removals by using the following command:
268
269 .. prompt:: bash #
270
271 ceph orch osd rm stop <osd_id(s)>
272
273 Example:
274
275 .. prompt:: bash #
276
277 ceph orch osd rm stop 4
278
279 Expected output::
280
281 Stopped OSD(s) removal
282
This resets the OSD to its initial state and takes it off the removal queue.
284
285
286 Replacing an OSD
287 ----------------
288
289 .. prompt:: bash #
290
   ceph orch osd rm <osd_id(s)> --replace [--force]
292
293 Example:
294
295 .. prompt:: bash #
296
297 ceph orch osd rm 4 --replace
298
299 Expected output::
300
301 Scheduled OSD(s) for replacement
302
This follows the same procedure as in the "Remove an OSD" section, with one
exception: the OSD is not permanently removed from the CRUSH hierarchy, but is
instead assigned a 'destroyed' flag.
306
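The replaced OSD remains in the CRUSH hierarchy with a ``destroyed`` status
until a new disk takes over its id; you can verify this with:

.. prompt:: bash #

   ceph osd tree
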
307 **Preserving the OSD ID**
308
309 The 'destroyed' flag is used to determine which OSD ids will be reused in the
310 next OSD deployment.
311
312 If you use OSDSpecs for OSD deployment, your newly added disks will be assigned
313 the OSD ids of their replaced counterparts. This assumes that the new disks
314 still match the OSDSpecs.
315
316 Use the ``--dry-run`` flag to make certain that the ``ceph orch apply osd``
317 command does what you want it to. The ``--dry-run`` flag shows you what the
318 outcome of the command will be without making the changes you specify. When
319 you are satisfied that the command will do what you want, run the command
320 without the ``--dry-run`` flag.
321
322 .. tip::
323
324 The name of your OSDSpec can be retrieved with the command ``ceph orch ls``
325
326 Alternatively, you can use your OSDSpec file:
327
328 .. prompt:: bash #
329
330 ceph orch apply osd -i <osd_spec_file> --dry-run
331
332 Expected output::
333
334 NAME HOST DATA DB WAL
335 <name_of_osd_spec> node1 /dev/vdb - -
336
337
338 When this output reflects your intention, omit the ``--dry-run`` flag to
339 execute the deployment.
340
341
342 Erasing Devices (Zapping Devices)
343 ---------------------------------
344
345 Erase (zap) a device so that it can be reused. ``zap`` calls ``ceph-volume
346 zap`` on the remote host.
347
348 .. prompt:: bash #
349
   ceph orch device zap <hostname> <path>
351
352 Example command:
353
354 .. prompt:: bash #
355
356 ceph orch device zap my_hostname /dev/sdx
357
358 .. note::
359 If the unmanaged flag is unset, cephadm automatically deploys drives that
360 match the DriveGroup in your OSDSpec. For example, if you use the
361 ``all-available-devices`` option when creating OSDs, when you ``zap`` a
362 device the cephadm orchestrator automatically creates a new OSD in the
363 device. To disable this behavior, see :ref:`cephadm-osd-declarative`.
364
365
366 .. _osd_autotune:
367
368 Automatically tuning OSD memory
369 ===============================
370
371 OSD daemons will adjust their memory consumption based on the
372 ``osd_memory_target`` config option (several gigabytes, by
373 default). If Ceph is deployed on dedicated nodes that are not sharing
374 memory with other services, cephadm can automatically adjust the per-OSD
375 memory consumption based on the total amount of RAM and the number of deployed
376 OSDs.
377
378 This option is enabled globally with::
379
380 ceph config set osd osd_memory_target_autotune true
381
Cephadm will start with a fraction
(``mgr/cephadm/autotune_memory_target_ratio``, which defaults to
``.7``) of the total RAM in the system, subtract off any memory
consumed by non-autotuned daemons (non-OSDs and OSDs for which
``osd_memory_target_autotune`` is false), and then divide the
remainder by the number of autotuned OSDs.
388
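As a rough, purely illustrative calculation (all numbers below are assumptions,
not defaults or measurements)::

    total RAM on the host               128 GiB
    budget = 0.7 * 128 GiB             ~ 89.6 GiB
    minus non-autotuned daemons        ~  9.6 GiB  (e.g. mon/mgr containers)
    remaining for 10 autotuned OSDs    =  80 GiB
    resulting osd_memory_target        ~   8 GiB per OSD
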
389 The final targets are reflected in the config database with options like::
390
391 WHO MASK LEVEL OPTION VALUE
392 osd host:foo basic osd_memory_target 126092301926
393 osd host:bar basic osd_memory_target 6442450944
394
395 Both the limits and the current memory consumed by each daemon are visible from
396 the ``ceph orch ps`` output in the ``MEM LIMIT`` column::
397
398 NAME HOST PORTS STATUS REFRESHED AGE MEM USED MEM LIMIT VERSION IMAGE ID CONTAINER ID
399 osd.1 dael running (3h) 10s ago 3h 72857k 117.4G 17.0.0-3781-gafaed750 7015fda3cd67 9e183363d39c
400 osd.2 dael running (81m) 10s ago 81m 63989k 117.4G 17.0.0-3781-gafaed750 7015fda3cd67 1f0cc479b051
401 osd.3 dael running (62m) 10s ago 62m 64071k 117.4G 17.0.0-3781-gafaed750 7015fda3cd67 ac5537492f27
402
403 To exclude an OSD from memory autotuning, disable the autotune option
404 for that OSD and also set a specific memory target. For example,
405
406 .. prompt:: bash #
407
408 ceph config set osd.123 osd_memory_target_autotune false
409 ceph config set osd.123 osd_memory_target 16G
410
411
412 .. _drivegroups:
413
414 Advanced OSD Service Specifications
415 ===================================
416
417 :ref:`orchestrator-cli-service-spec`\s of type ``osd`` are a way to describe a
418 cluster layout, using the properties of disks. Service specifications give the
419 user an abstract way to tell Ceph which disks should turn into OSDs with which
420 configurations, without knowing the specifics of device names and paths.
421
422 Service specifications make it possible to define a yaml or json file that can
423 be used to reduce the amount of manual work involved in creating OSDs.
424
425 For example, instead of running the following command:
426
427 .. prompt:: bash [monitor.1]#
428
429 ceph orch daemon add osd *<host>*:*<path-to-device>*
430
431 for each device and each host, we can define a yaml or json file that allows us
432 to describe the layout. Here's the most basic example.
433
434 Create a file called (for example) ``osd_spec.yml``:
435
436 .. code-block:: yaml
437
438 service_type: osd
   service_id: default_drive_group  # name of the drive_group (name can be custom)
   placement:
     host_pattern: '*'              # which hosts to target, currently only supports globs
   data_devices:                    # the type of devices you are applying specs to
     all: true                      # a filter, check below for a full list
444
This means:

#. Turn any available device (ceph-volume decides what 'available' is) into an
   OSD on all hosts that match the glob pattern '*'. (The glob pattern matches
   against the registered hosts from ``ceph orch host ls``.) A more detailed
   section on host_pattern is available below.

#. Then pass the spec to ``ceph orch apply osd`` like this:
453
454 .. prompt:: bash [monitor.1]#
455
456 ceph orch apply osd -i /path/to/osd_spec.yml
457
458 This instruction will be issued to all the matching hosts, and will deploy
459 these OSDs.
460
461 Setups more complex than the one specified by the ``all`` filter are
462 possible. See :ref:`osd_filters` for details.
463
464 A ``--dry-run`` flag can be passed to the ``apply osd`` command to display a
465 synopsis of the proposed layout.
466
467 Example
468
469 .. prompt:: bash [monitor.1]#
470
471 ceph orch apply osd -i /path/to/osd_spec.yml --dry-run
472
473
474
475 .. _osd_filters:
476
477 Filters
478 -------
479
480 .. note::
481 Filters are applied using an `AND` gate by default. This means that a drive
482 must fulfill all filter criteria in order to get selected. This behavior can
483 be adjusted by setting ``filter_logic: OR`` in the OSD specification.
484
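For example, a spec that accepts a drive when it matches *any* of its filters
could look like this (a sketch; the service id and filter values are
illustrative):

.. code-block:: yaml

   service_type: osd
   service_id: osd_filter_logic_or    # illustrative name
   placement:
     host_pattern: '*'
   filter_logic: OR
   data_devices:
     rotational: 1
     size: '2TB:'
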
485 Filters are used to assign disks to groups, using their attributes to group
486 them.
487
The attributes are based on ceph-volume's disk query. You can retrieve
489 information about the attributes with this command:
490
491 .. code-block:: bash
492
493 ceph-volume inventory </path/to/disk>
494
495 Vendor or Model
496 ^^^^^^^^^^^^^^^
497
498 Specific disks can be targeted by vendor or model:
499
500 .. code-block:: yaml
501
502 model: disk_model_name
503
504 or
505
506 .. code-block:: yaml
507
508 vendor: disk_vendor_name
509
510
511 Size
512 ^^^^
513
514 Specific disks can be targeted by `Size`:
515
516 .. code-block:: yaml
517
518 size: size_spec
519
520 Size specs
521 __________
522
523 Size specifications can be of the following forms:
524
525 * LOW:HIGH
526 * :HIGH
527 * LOW:
528 * EXACT
529
530 Concrete examples:
531
532 To include disks of an exact size
533
534 .. code-block:: yaml
535
536 size: '10G'
537
538 To include disks within a given range of size:
539
540 .. code-block:: yaml
541
542 size: '10G:40G'
543
544 To include disks that are less than or equal to 10G in size:
545
546 .. code-block:: yaml
547
548 size: ':10G'
549
550 To include disks equal to or greater than 40G in size:
551
552 .. code-block:: yaml
553
554 size: '40G:'
555
Sizes don't have to be specified exclusively in Gigabytes (G).

Other units of size are supported: Megabyte (M), Gigabyte (G) and Terabyte (T).
Appending the (B) for byte is also supported: ``MB``, ``GB``, ``TB``.
560
561
562 Rotational
563 ^^^^^^^^^^
564
565 This operates on the 'rotational' attribute of the disk.
566
567 .. code-block:: yaml
568
569 rotational: 0 | 1
570
571 `1` to match all disks that are rotational
572
`0` to match all disks that are non-rotational (SSD, NVMe, etc.)
574
575
576 All
577 ^^^
578
This will take all disks that are 'available'.
580
581 Note: This is exclusive for the data_devices section.
582
583 .. code-block:: yaml
584
585 all: true
586
587
588 Limiter
589 ^^^^^^^
590
591 If you have specified some valid filters but want to limit the number of disks that they match, use the ``limit`` directive:
592
593 .. code-block:: yaml
594
595 limit: 2
596
597 For example, if you used `vendor` to match all disks that are from `VendorA`
598 but want to use only the first two, you could use `limit`:
599
600 .. code-block:: yaml
601
602 data_devices:
603 vendor: VendorA
604 limit: 2
605
606 Note: `limit` is a last resort and shouldn't be used if it can be avoided.
607
608
609 Additional Options
610 ------------------
611
612 There are multiple optional settings you can use to change the way OSDs are deployed.
You can add these options to the base level of a DriveGroup for them to take effect.
614
615 This example would deploy all OSDs with encryption enabled.
616
617 .. code-block:: yaml
618
619 service_type: osd
620 service_id: example_osd_spec
621 placement:
622 host_pattern: '*'
623 data_devices:
624 all: true
625 encrypted: true
626
See a full list of options in the ``DriveGroupSpec`` reference below.
628
629 .. py:currentmodule:: ceph.deployment.drive_group
630
631 .. autoclass:: DriveGroupSpec
632 :members:
633 :exclude-members: from_json
634
635 Examples
636 --------
637
638 The simple case
639 ^^^^^^^^^^^^^^^
640
641 All nodes with the same setup
642
643 .. code-block:: none
644
645 20 HDDs
646 Vendor: VendorA
647 Model: HDD-123-foo
648 Size: 4TB
649
650 2 SSDs
651 Vendor: VendorB
652 Model: MC-55-44-ZX
653 Size: 512GB
654
655 This is a common setup and can be described quite easily:
656
657 .. code-block:: yaml
658
659 service_type: osd
660 service_id: osd_spec_default
661 placement:
662 host_pattern: '*'
663 data_devices:
     model: HDD-123-foo    # note that HDD-123 would also be valid
   db_devices:
     model: MC-55-44-ZX    # same here, MC-55-44 is valid
667
668 However, we can improve it by reducing the filters on core properties of the drives:
669
670 .. code-block:: yaml
671
672 service_type: osd
673 service_id: osd_spec_default
674 placement:
675 host_pattern: '*'
676 data_devices:
677 rotational: 1
678 db_devices:
679 rotational: 0
680
Now all rotating devices are declared as 'data devices' and all non-rotating devices will be used as shared devices (wal, db).
682
If you know that drives larger than 2 TB should always be used as the slower data devices, you can also filter by size:
684
685 .. code-block:: yaml
686
687 service_type: osd
688 service_id: osd_spec_default
689 placement:
690 host_pattern: '*'
691 data_devices:
692 size: '2TB:'
693 db_devices:
694 size: ':2TB'
695
696 Note: All of the above DriveGroups are equally valid. Which of those you want to use depends on taste and on how much you expect your node layout to change.
697
698
699 The advanced case
700 ^^^^^^^^^^^^^^^^^
701
702 Here we have two distinct setups
703
704 .. code-block:: none
705
706 20 HDDs
707 Vendor: VendorA
708 Model: HDD-123-foo
709 Size: 4TB
710
711 12 SSDs
712 Vendor: VendorB
713 Model: MC-55-44-ZX
714 Size: 512GB
715
716 2 NVMEs
717 Vendor: VendorC
718 Model: NVME-QQQQ-987
719 Size: 256GB
720
721
722 * 20 HDDs should share 2 SSDs
723 * 10 SSDs should share 2 NVMes
724
725 This can be described with two layouts.
726
727 .. code-block:: yaml
728
729 service_type: osd
730 service_id: osd_spec_hdd
731 placement:
732 host_pattern: '*'
733 data_devices:
     rotational: 1
   db_devices:
     model: MC-55-44-ZX
     limit: 2    # db_slots would be preferable here, but it is not implemented yet
738 ---
739 service_type: osd
740 service_id: osd_spec_ssd
741 placement:
742 host_pattern: '*'
743 data_devices:
     model: MC-55-44-ZX
745 db_devices:
746 vendor: VendorC
747
This would create the desired layout by using all HDDs as data_devices, with two SSDs assigned as dedicated db/wal devices.
The remaining ten SSDs will be data_devices that have the 'VendorC' NVMEs assigned as dedicated db/wal devices.
750
751 The advanced case (with non-uniform nodes)
752 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
753
754 The examples above assumed that all nodes have the same drives. That's however not always the case.
755
756 Node1-5
757
758 .. code-block:: none
759
760 20 HDDs
761 Vendor: Intel
762 Model: SSD-123-foo
763 Size: 4TB
764 2 SSDs
765 Vendor: VendorA
766 Model: MC-55-44-ZX
767 Size: 512GB
768
769 Node6-10
770
771 .. code-block:: none
772
773 5 NVMEs
774 Vendor: Intel
775 Model: SSD-123-foo
776 Size: 4TB
777 20 SSDs
778 Vendor: VendorA
779 Model: MC-55-44-ZX
780 Size: 512GB
781
You can use the 'host_pattern' key in the layout to target certain nodes. Glob-style patterns keep the targeting simple.
783
784
785 .. code-block:: yaml
786
787 service_type: osd
788 service_id: osd_spec_node_one_to_five
789 placement:
790 host_pattern: 'node[1-5]'
791 data_devices:
792 rotational: 1
793 db_devices:
794 rotational: 0
795 ---
796 service_type: osd
797 service_id: osd_spec_six_to_ten
798 placement:
799 host_pattern: 'node[6-10]'
800 data_devices:
     model: MC-55-44-ZX
802 db_devices:
803 model: SSD-123-foo
804
805 This applies different OSD specs to different hosts depending on the `host_pattern` key.
806
807 Dedicated wal + db
808 ^^^^^^^^^^^^^^^^^^
809
810 All previous cases co-located the WALs with the DBs.
811 It's however possible to deploy the WAL on a dedicated device as well, if it makes sense.
812
813 .. code-block:: none
814
815 20 HDDs
816 Vendor: VendorA
817 Model: SSD-123-foo
818 Size: 4TB
819
820 2 SSDs
821 Vendor: VendorB
822 Model: MC-55-44-ZX
823 Size: 512GB
824
825 2 NVMEs
826 Vendor: VendorC
827 Model: NVME-QQQQ-987
828 Size: 256GB
829
830
831 The OSD spec for this case would look like the following (using the `model` filter):
832
833 .. code-block:: yaml
834
835 service_type: osd
836 service_id: osd_spec_default
837 placement:
838 host_pattern: '*'
839 data_devices:
     model: MC-55-44-ZX
841 db_devices:
842 model: SSD-123-foo
843 wal_devices:
844 model: NVME-QQQQ-987
845
846
It is also possible to specify device paths directly for specific hosts, as in the following:
848
849 .. code-block:: yaml
850
851 service_type: osd
852 service_id: osd_using_paths
853 placement:
854 hosts:
855 - Node01
856 - Node02
857 data_devices:
858 paths:
859 - /dev/sdb
860 db_devices:
861 paths:
862 - /dev/sdc
863 wal_devices:
864 paths:
865 - /dev/sdd
866
867
868 This can easily be done with other filters, like `size` or `vendor` as well.
869
870 Activate existing OSDs
871 ======================
872
If the operating system of a host has been reinstalled, the existing OSDs on
that host need to be activated again. For this use case, cephadm provides a
wrapper for :ref:`ceph-volume-lvm-activate` that activates all existing OSDs
on a host.
876
877 .. prompt:: bash #
878
879 ceph cephadm osd activate <host>...
880
881 This will scan all existing disks for OSDs and deploy corresponding daemons.
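
For example, to bring back the OSDs on a freshly reinstalled host (the hostname
is illustrative):

.. prompt:: bash #

   ceph cephadm osd activate host1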