4 .. _device management: ../rados/operations/devices
5 .. _libstoragemgmt: https://github.com/libstorage/libstoragemgmt
10 ``ceph-volume`` scans each host in the cluster from time to time in order
to determine which devices are present and whether they are eligible to be used as OSDs.
14 To print a list of devices discovered by ``cephadm``, run this command:
18 ceph orch device ls [--hostname=...] [--wide] [--refresh]
23 Hostname Path Type Serial Size Health Ident Fault Available
24 srv-01 /dev/sdb hdd 15P0A0YFFRD6 300G Unknown N/A N/A No
25 srv-01 /dev/sdc hdd 15R0A08WFRD6 300G Unknown N/A N/A No
26 srv-01 /dev/sdd hdd 15R0A07DFRD6 300G Unknown N/A N/A No
27 srv-01 /dev/sde hdd 15P0A0QDFRD6 300G Unknown N/A N/A No
28 srv-02 /dev/sdb hdd 15R0A033FRD6 300G Unknown N/A N/A No
29 srv-02 /dev/sdc hdd 15R0A05XFRD6 300G Unknown N/A N/A No
30 srv-02 /dev/sde hdd 15R0A0ANFRD6 300G Unknown N/A N/A No
31 srv-02 /dev/sdf hdd 15R0A06EFRD6 300G Unknown N/A N/A No
32 srv-03 /dev/sdb hdd 15R0A0OGFRD6 300G Unknown N/A N/A No
33 srv-03 /dev/sdc hdd 15R0A0P7FRD6 300G Unknown N/A N/A No
34 srv-03 /dev/sdd hdd 15R0A0O7FRD6 300G Unknown N/A N/A No
36 Using the ``--wide`` option provides all details relating to the device,
37 including any reasons that the device might not be eligible for use as an OSD.
39 In the above example you can see fields named "Health", "Ident", and "Fault".
40 This information is provided by integration with `libstoragemgmt`_. By default,
41 this integration is disabled (because `libstoragemgmt`_ may not be 100%
42 compatible with your hardware). To make ``cephadm`` include these fields,
enable cephadm's "enhanced device scan" option as follows:
47 ceph config set mgr mgr/cephadm/device_enhanced_scan true
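After enabling the option, you can either wait for the next periodic device scan or ask
cephadm to refresh its device information immediately:

.. prompt:: bash #

   ceph orch device ls --refresh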
50 Although the libstoragemgmt library performs standard SCSI inquiry calls,
51 there is no guarantee that your firmware fully implements these standards.
52 This can lead to erratic behaviour and even bus resets on some older
hardware. It is therefore recommended that you test your hardware's
compatibility with libstoragemgmt before enabling this feature, in order to
avoid unplanned interruptions to services.
57 There are a number of ways to test compatibility, but the simplest may be
to use the cephadm shell to call libstoragemgmt directly: ``cephadm shell
lsmcli ldl``. If your hardware is supported, you should see something like
this:
64 Path | SCSI VPD 0x83 | Link Type | Serial Number | Health Status
65 ----------------------------------------------------------------------------
66 /dev/sda | 50000396082ba631 | SAS | 15P0A0R0FRD6 | Good
67 /dev/sdb | 50000396082bbbf9 | SAS | 15P0A0YFFRD6 | Good
After you have enabled libstoragemgmt support, the output will look something
like this:
76 Hostname Path Type Serial Size Health Ident Fault Available
77 srv-01 /dev/sdb hdd 15P0A0YFFRD6 300G Good Off Off No
78 srv-01 /dev/sdc hdd 15R0A08WFRD6 300G Good Off Off No
81 In this example, libstoragemgmt has confirmed the health of the drives and the ability to
82 interact with the Identification and Fault LEDs on the drive enclosures. For further
83 information about interacting with these LEDs, refer to `device management`_.
The current release of `libstoragemgmt`_ (1.8.8) supports SCSI, SAS, and SATA-based
local disks only. There is no official support for NVMe devices (PCIe).
89 .. _cephadm-deploy-osds:
94 Listing Storage Devices
95 -----------------------
97 In order to deploy an OSD, there must be a storage device that is *available* on
98 which the OSD will be deployed.
100 Run this command to display an inventory of storage devices on all cluster hosts:
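.. prompt:: bash #

   ceph orch device ls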
A storage device is considered *available* if all of the following conditions are met:
109 * The device must have no partitions.
110 * The device must not have any LVM state.
111 * The device must not be mounted.
112 * The device must not contain a file system.
113 * The device must not contain a Ceph BlueStore OSD.
114 * The device must be larger than 5 GB.
116 Ceph will not provision an OSD on a device that is not available.
121 There are a few ways to create new OSDs:
123 * Tell Ceph to consume any available and unused storage device:
127 ceph orch apply osd --all-available-devices
129 * Create an OSD from a specific device on a specific host:
133 ceph orch daemon add osd *<host>*:*<device-path>*
139 ceph orch daemon add osd host1:/dev/sdb
141 Advanced OSD creation from specific devices on a specific host:
145 ceph orch daemon add osd host1:data_devices=/dev/sda,/dev/sdb,db_devices=/dev/sdc,osds_per_device=2
147 * Create an OSD on a specific LVM logical volume on a specific host:
151 ceph orch daemon add osd *<host>*:*<lvm-path>*
157 ceph orch daemon add osd host1:/dev/vg_osd/lvm_osd1701
159 * You can use :ref:`drivegroups` to categorize device(s) based on their
160 properties. This might be useful in forming a clearer picture of which
161 devices are available to consume. Properties include device type (SSD or
162 HDD), device model names, size, and the hosts on which the devices exist:
166 ceph orch apply -i spec.yml
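For example, a minimal ``spec.yml`` might turn every available rotational (HDD) device on
every registered host into an OSD. This is only a sketch: the service id is a name of your
choosing, and the full syntax is described in :ref:`drivegroups` and in the Advanced OSD
Service Specifications section below.

.. code-block:: yaml

    service_type: osd
    service_id: example_hdd_only      # arbitrary, user-chosen name
    placement:
      host_pattern: '*'               # all registered hosts
    spec:
      data_devices:
        rotational: 1                 # only spinning (HDD) devices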
171 The ``--dry-run`` flag causes the orchestrator to present a preview of what
172 will happen without actually creating the OSDs.
178 ceph orch apply osd --all-available-devices --dry-run
182 NAME HOST DATA DB WAL
183 all-available-devices node1 /dev/vdb - -
184 all-available-devices node2 /dev/vdc - -
185 all-available-devices node3 /dev/vdd - -
187 .. _cephadm-osd-declarative:
192 The effect of ``ceph orch apply`` is persistent. This means that drives that
193 are added to the system after the ``ceph orch apply`` command completes will be
194 automatically found and added to the cluster. It also means that drives that
195 become available (by zapping, for example) after the ``ceph orch apply``
196 command completes will be automatically found and added to the cluster.
198 We will examine the effects of the following command:
202 ceph orch apply osd --all-available-devices
204 After running the above command:
* If you add new disks to the cluster, they will automatically be used to create new OSDs.
208 * If you remove an OSD and clean the LVM physical volume, a new OSD will be
209 created automatically.
211 If you want to avoid this behavior (disable automatic creation of OSD on available devices), use the ``unmanaged`` parameter:
215 ceph orch apply osd --all-available-devices --unmanaged=true
219 Keep these three facts in mind:
221 - The default behavior of ``ceph orch apply`` causes cephadm constantly to reconcile. This means that cephadm creates OSDs as soon as new drives are detected.
- Setting ``unmanaged: True`` disables the automatic creation of OSDs. If ``unmanaged: True`` is set, nothing will happen even if you apply a new OSD service spec (see the example spec after this list).
225 - ``ceph orch daemon add`` creates OSDs, but does not add an OSD service.
227 * For cephadm, see also :ref:`cephadm-spec-unmanaged`.
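For example, the following sketch (with a hypothetical service id) defines an OSD service
that cephadm stores but never acts on automatically; OSDs can still be created manually,
for example with ``ceph orch daemon add``:

.. code-block:: yaml

    service_type: osd
    service_id: example_unmanaged    # hypothetical name
    unmanaged: true                  # do not create OSDs automatically
    placement:
      host_pattern: '*'
    spec:
      data_devices:
        all: true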
229 .. _cephadm-osd-removal:
234 Removing an OSD from a cluster involves two steps:
236 #. evacuating all placement groups (PGs) from the cluster
237 #. removing the PG-free OSD from the cluster
239 The following command performs these two steps:
243 ceph orch osd rm <osd_id(s)> [--replace] [--force]
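For example:

.. prompt:: bash #

   ceph orch osd rm 4

Expected output::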
253 Scheduled OSD(s) for removal
255 OSDs that are not safe to destroy will be rejected.
258 After removing OSDs, if the drives the OSDs were deployed on once again
259 become available, cephadm may automatically try to deploy more OSDs
260 on these drives if they match an existing drivegroup spec. If you deployed
261 the OSDs you are removing with a spec and don't want any new OSDs deployed on
262 the drives after removal, it's best to modify the drivegroup spec before removal.
263 Either set ``unmanaged: true`` to stop it from picking up new drives at all,
264 or modify it in some way that it no longer matches the drives used for the
265 OSDs you wish to remove. Then re-apply the spec. For more info on drivegroup
specs, see :ref:`drivegroups`. For more info on the declarative nature of
cephadm in reference to deploying OSDs, see :ref:`cephadm-osd-declarative`.
You can query the state of OSD removal operations with the following command:
276 ceph orch osd rm status
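Expected output::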
280 OSD_ID HOST STATE PG_COUNT REPLACE FORCE STARTED_AT
281 2 cephadm-dev done, waiting for purge 0 True False 2020-07-17 13:01:43.147684
282 3 cephadm-dev draining 17 False True 2020-07-17 13:01:45.162158
283 4 cephadm-dev started 42 False True 2020-07-17 13:01:45.162158
286 When no PGs are left on the OSD, it will be decommissioned and removed from the cluster.
289 After removing an OSD, if you wipe the LVM physical volume in the device used by the removed OSD, a new OSD will be created.
290 For more information on this, read about the ``unmanaged`` parameter in :ref:`cephadm-osd-declarative`.
295 It is possible to stop queued OSD removals by using the following command:
299 ceph orch osd rm stop <osd_id(s)>
305 ceph orch osd rm stop 4
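Expected output::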
309 Stopped OSD(s) removal
311 This resets the initial state of the OSD and takes it off the removal queue.
313 .. _cephadm-replacing-an-osd:
320 ceph orch osd rm <osd_id(s)> --replace [--force]
326 ceph orch osd rm 4 --replace
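Expected output::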
330 Scheduled OSD(s) for replacement
This follows the same procedure as the one described in the "Remove OSD" section, with
333 one exception: the OSD is not permanently removed from the CRUSH hierarchy, but is
334 instead assigned a 'destroyed' flag.
337 The new OSD that will replace the removed OSD must be created on the same host
338 as the OSD that was removed.
340 **Preserving the OSD ID**
The 'destroyed' flag is used to determine which OSD ids will be reused in the next OSD deployment.
345 If you use OSDSpecs for OSD deployment, your newly added disks will be assigned
346 the OSD ids of their replaced counterparts. This assumes that the new disks
347 still match the OSDSpecs.
349 Use the ``--dry-run`` flag to make certain that the ``ceph orch apply osd``
350 command does what you want it to. The ``--dry-run`` flag shows you what the
351 outcome of the command will be without making the changes you specify. When
352 you are satisfied that the command will do what you want, run the command
353 without the ``--dry-run`` flag.
The name of your OSDSpec can be retrieved with the command ``ceph orch ls``.
359 Alternatively, you can use your OSDSpec file:
363 ceph orch apply -i <osd_spec_file> --dry-run
367 NAME HOST DATA DB WAL
368 <name_of_osd_spec> node1 /dev/vdb - -
371 When this output reflects your intention, omit the ``--dry-run`` flag to
372 execute the deployment.
375 Erasing Devices (Zapping Devices)
376 ---------------------------------
378 Erase (zap) a device so that it can be reused. ``zap`` calls ``ceph-volume
379 zap`` on the remote host.
383 ceph orch device zap <hostname> <path>
389 ceph orch device zap my_hostname /dev/sdx
If the unmanaged flag is unset, cephadm automatically deploys OSDs on drives that
match the OSDSpec. For example, if you use the
``all-available-devices`` option when creating OSDs, when you ``zap`` a
device the cephadm orchestrator automatically creates a new OSD on the
device. To disable this behavior, see :ref:`cephadm-osd-declarative`.
401 Automatically tuning OSD memory
402 ===============================
404 OSD daemons will adjust their memory consumption based on the
405 ``osd_memory_target`` config option (several gigabytes, by
406 default). If Ceph is deployed on dedicated nodes that are not sharing
407 memory with other services, cephadm can automatically adjust the per-OSD
memory consumption based on the total amount of RAM and the number of deployed OSDs.
411 .. warning:: Cephadm sets ``osd_memory_target_autotune`` to ``true`` by default which is unsuitable for hyperconverged infrastructures.
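If your OSD nodes do share memory with other services, you can either exclude OSDs from
autotuning (see the end of this section) or lower the fraction of RAM that cephadm budgets
for OSDs via the ratio option described below; the value here is only an example:

.. prompt:: bash #

   ceph config set mgr mgr/cephadm/autotune_memory_target_ratio 0.2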
413 Cephadm will start with a fraction
414 (``mgr/cephadm/autotune_memory_target_ratio``, which defaults to
415 ``.7``) of the total RAM in the system, subtract off any memory
consumed by non-autotuned daemons (non-OSDs, and OSDs for which
``osd_memory_target_autotune`` is false), and then divide the result by the
number of remaining OSDs.
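For example, on a host with 128 GiB of RAM that runs 10 OSDs and whose non-autotuned
daemons consume about 10 GiB, each OSD would receive a target of roughly
(0.7 * 128 GiB - 10 GiB) / 10, or about 8 GiB (illustrative numbers).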
420 The final targets are reflected in the config database with options like::
422 WHO MASK LEVEL OPTION VALUE
423 osd host:foo basic osd_memory_target 126092301926
424 osd host:bar basic osd_memory_target 6442450944
426 Both the limits and the current memory consumed by each daemon are visible from
427 the ``ceph orch ps`` output in the ``MEM LIMIT`` column::
429 NAME HOST PORTS STATUS REFRESHED AGE MEM USED MEM LIMIT VERSION IMAGE ID CONTAINER ID
430 osd.1 dael running (3h) 10s ago 3h 72857k 117.4G 17.0.0-3781-gafaed750 7015fda3cd67 9e183363d39c
431 osd.2 dael running (81m) 10s ago 81m 63989k 117.4G 17.0.0-3781-gafaed750 7015fda3cd67 1f0cc479b051
432 osd.3 dael running (62m) 10s ago 62m 64071k 117.4G 17.0.0-3781-gafaed750 7015fda3cd67 ac5537492f27
434 To exclude an OSD from memory autotuning, disable the autotune option
435 for that OSD and also set a specific memory target. For example,
439 ceph config set osd.123 osd_memory_target_autotune false
440 ceph config set osd.123 osd_memory_target 16G
445 Advanced OSD Service Specifications
446 ===================================
448 :ref:`orchestrator-cli-service-spec`\s of type ``osd`` are a way to describe a
449 cluster layout, using the properties of disks. Service specifications give the
450 user an abstract way to tell Ceph which disks should turn into OSDs with which
451 configurations, without knowing the specifics of device names and paths.
453 Service specifications make it possible to define a yaml or json file that can
454 be used to reduce the amount of manual work involved in creating OSDs.
456 For example, instead of running the following command:
458 .. prompt:: bash [monitor.1]#
460 ceph orch daemon add osd *<host>*:*<path-to-device>*
462 for each device and each host, we can define a yaml or json file that allows us
463 to describe the layout. Here's the most basic example.
465 Create a file called (for example) ``osd_spec.yml``:
.. code-block:: yaml

    service_type: osd
    service_id: default_drive_group  # custom name of the osd spec
    placement:
      host_pattern: '*'              # which hosts to target
    spec:
      data_devices:                  # the type of devices you are applying specs to
        all: true                    # a filter, check below for a full list
#. Turn any available device (ceph-volume decides what 'available' is) into an
   OSD on all hosts that match the glob pattern '*'. (The glob pattern matches
   against the registered hosts from `host ls`.) A more detailed section on
   host_pattern is available below.
484 #. Then pass it to `osd create` like this:
486 .. prompt:: bash [monitor.1]#
488 ceph orch apply -i /path/to/osd_spec.yml
This instruction will be issued to all the matching hosts, and will deploy these OSDs.
493 Setups more complex than the one specified by the ``all`` filter are
494 possible. See :ref:`osd_filters` for details.
496 A ``--dry-run`` flag can be passed to the ``apply osd`` command to display a
497 synopsis of the proposed layout.
501 .. prompt:: bash [monitor.1]#
503 ceph orch apply -i /path/to/osd_spec.yml --dry-run
513 Filters are applied using an `AND` gate by default. This means that a drive
514 must fulfill all filter criteria in order to get selected. This behavior can
515 be adjusted by setting ``filter_logic: OR`` in the OSD specification.
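For example, a sketch with ``OR`` logic that selects a drive if it is either rotational or
at least 2TB in size (illustrative values):

.. code-block:: yaml

    spec:
      filter_logic: OR
      data_devices:
        rotational: 1
        size: '2TB:'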
Filters are used to assign disks to groups, using their attributes to group them.

The attributes are based on ceph-volume's disk query. You can retrieve
521 information about the attributes with this command:
525 ceph-volume inventory </path/to/disk>
Specific disks can be targeted by vendor or model:

.. code-block:: yaml

    model: disk_model_name

or

.. code-block:: yaml

    vendor: disk_vendor_name
546 Specific disks can be targeted by `Size`:
555 Size specifications can be of the following forms:
To include disks of an exact size:
To include disks within a given range of sizes:
576 To include disks that are less than or equal to 10G in size:
582 To include disks equal to or greater than 40G in size:
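For example, within ``data_devices`` (the sizes shown are only illustrative):

.. code-block:: yaml

    data_devices:
      size: '40G:'   # at least 40G; other accepted forms are '10G' (exactly 10G),
                     # '10G:40G' (between 10G and 40G), and ':10G' (at most 10G)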
Sizes don't have to be specified exclusively in Gigabytes (G).

Other units of size are supported: Megabytes (M), Gigabytes (G), and Terabytes (T).
Appending the (B) for byte is also supported: ``MB``, ``GB``, ``TB``.
597 This operates on the 'rotational' attribute of the disk.
603 `1` to match all disks that are rotational
`0` to match all disks that are non-rotational (SSD, NVMe, etc.)
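For example, a sketch that uses spinning drives for data and solid-state drives for DB
devices:

.. code-block:: yaml

    data_devices:
      rotational: 1   # HDDs
    db_devices:
      rotational: 0   # SSDs/NVMe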
This will take all disks that are 'available'.

.. note:: This filter is exclusive to the ``data_devices`` section.
623 If you have specified some valid filters but want to limit the number of disks that they match, use the ``limit`` directive:
629 For example, if you used `vendor` to match all disks that are from `VendorA`
630 but want to use only the first two, you could use `limit`:
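A sketch of that example:

.. code-block:: yaml

    data_devices:
      vendor: VendorA
      limit: 2   # use at most two of the matching disks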
638 .. note:: `limit` is a last resort and shouldn't be used if it can be avoided.
644 There are multiple optional settings you can use to change the way OSDs are deployed.
You can add these options to the base level of an OSD spec for them to take effect.
647 This example would deploy all OSDs with encryption enabled.
.. code-block:: yaml

    service_type: osd
    service_id: example_osd_spec
    placement:
      host_pattern: '*'
    spec:
      data_devices:
        all: true
      encrypted: true
660 See a full list in the DriveGroupSpecs
662 .. py:currentmodule:: ceph.deployment.drive_group
664 .. autoclass:: DriveGroupSpec
666 :exclude-members: from_json
675 All nodes with the same setup
689 This is a common setup and can be described quite easily:
.. code-block:: yaml

    service_type: osd
    service_id: osd_spec_default
    placement:
      host_pattern: '*'
    spec:
      data_devices:
        model: HDD-123-foo   # Note, HDD-123 would also be valid
      db_devices:
        model: MC-55-44-XZ   # Same here, MC-55-44 is valid
703 However, we can improve it by reducing the filters on core properties of the drives:
.. code-block:: yaml

    service_type: osd
    service_id: osd_spec_default
    placement:
      host_pattern: '*'
    spec:
      data_devices:
        rotational: 1
      db_devices:
        rotational: 0
Now, all rotating devices are declared as 'data devices' and all non-rotating devices will be used as shared devices (wal, db).
719 If you know that drives with more than 2TB will always be the slower data devices, you can also filter by size:
.. code-block:: yaml

    service_type: osd
    service_id: osd_spec_default
    placement:
      host_pattern: '*'
    spec:
      data_devices:
        size: '2TB:'
      db_devices:
        size: ':2TB'
733 .. note:: All of the above OSD specs are equally valid. Which of those you want to use depends on taste and on how much you expect your node layout to change.
736 Multiple OSD specs for a single host
737 ------------------------------------
Here we have two distinct setups:
759 * 20 HDDs should share 2 SSDs
760 * 10 SSDs should share 2 NVMes
762 This can be described with two layouts.
767 service_id: osd_spec_hdd
775 limit: 2 # db_slots is actually to be favoured here, but it's not implemented yet
778 service_id: osd_spec_ssd
This would create the desired layout by using all HDDs as data_devices with two SSDs assigned as dedicated db/wal devices.
The remaining SSDs (10) will be data_devices that have the 'VendorC' NVMe devices assigned as dedicated db/wal devices.
790 Multiple hosts with the same disk layout
791 ----------------------------------------
Assuming the cluster has different kinds of hosts, each kind with a similar disk
layout, it is recommended to apply different OSD specs, each matching only one
set of hosts. Typically you will have a spec for multiple hosts with the same layout.

The service id serves as the unique key: if a new OSD spec with an already-applied
service id is applied, the existing OSD spec will be superseded.
cephadm will then create new OSD daemons based on the new spec
definition. Existing OSD daemons will not be affected. See :ref:`cephadm-osd-declarative`.
829 You can use the 'placement' key in the layout to target certain nodes.
834 service_id: disk_layout_a
844 service_id: disk_layout_b
854 This applies different OSD specs to different hosts depending on the `placement` key.
See :ref:`orchestrator-cli-placement-spec`.
859 Assuming each host has a unique disk layout, each OSD
spec needs to have a different service id.
866 All previous cases co-located the WALs with the DBs.
It is, however, possible to deploy the WAL on a dedicated device as well, if it makes sense.
The OSD spec for this case would look like the following (using the `model` filter;
the model names below are placeholders):

.. code-block:: yaml

    service_type: osd
    service_id: osd_spec_default
    placement:
      host_pattern: '*'
    spec:
      data_devices:
        model: HDD-MODEL       # placeholder model name for the data drives
      db_devices:
        model: SSD-MODEL       # placeholder model name for the DB drives
      wal_devices:
        model: NVME-MODEL      # placeholder model name for the WAL drives
It is also possible to specify device paths directly for specific hosts, as in the following example:
909 service_id: osd_using_paths
926 This can easily be done with other filters, like `size` or `vendor` as well.
928 It's possible to specify the `crush_device_class` parameter within the
DriveGroup spec, and it's applied to all the devices defined by the `paths` keyword:
935 service_id: osd_using_paths
940 crush_device_class: ssd
953 The `crush_device_class` parameter, however, can be defined for each OSD passed
954 using the `paths` keyword with the following syntax:
.. code-block:: yaml

    service_type: osd
    service_id: osd_using_paths
    placement:
      hosts:
        - node01                     # placeholder hostnames
        - node02
    crush_device_class: ssd          # default class for this spec
    spec:
      data_devices:
        paths:
          - path: /dev/sdb           # placeholder device paths
            crush_device_class: ssd
          - path: /dev/sdc
            crush_device_class: nvme
979 .. _cephadm-osd-activate:
981 Activate existing OSDs
982 ======================
If the operating system of a host has been reinstalled, the existing OSDs on that host
need to be activated again. For this use case, cephadm provides a wrapper for :ref:`ceph-volume-lvm-activate` that
986 activates all existing OSDs on a host.
990 ceph cephadm osd activate <host>...
992 This will scan all existing disks for OSDs and deploy corresponding daemons.
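For example (with a hypothetical hostname):

.. prompt:: bash #

   ceph cephadm osd activate host1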