***********
OSD Service
***********
.. _device management: ../rados/operations/devices
.. _libstoragemgmt: https://github.com/libstorage/libstoragemgmt

List Devices
============

``ceph-volume`` scans each host in the cluster from time to time in order
to determine which devices are present and whether they are eligible to be
used as OSDs.

To print a list of devices discovered by ``cephadm``, run this command:

.. prompt:: bash #

   ceph orch device ls [--hostname=...] [--wide] [--refresh]

Example::

   Hostname  Path      Type  Serial        Size  Health   Ident  Fault  Available
   srv-01    /dev/sdb  hdd   15P0A0YFFRD6  300G  Unknown  N/A    N/A    No
   srv-01    /dev/sdc  hdd   15R0A08WFRD6  300G  Unknown  N/A    N/A    No
   srv-01    /dev/sdd  hdd   15R0A07DFRD6  300G  Unknown  N/A    N/A    No
   srv-01    /dev/sde  hdd   15P0A0QDFRD6  300G  Unknown  N/A    N/A    No
   srv-02    /dev/sdb  hdd   15R0A033FRD6  300G  Unknown  N/A    N/A    No
   srv-02    /dev/sdc  hdd   15R0A05XFRD6  300G  Unknown  N/A    N/A    No
   srv-02    /dev/sde  hdd   15R0A0ANFRD6  300G  Unknown  N/A    N/A    No
   srv-02    /dev/sdf  hdd   15R0A06EFRD6  300G  Unknown  N/A    N/A    No
   srv-03    /dev/sdb  hdd   15R0A0OGFRD6  300G  Unknown  N/A    N/A    No
   srv-03    /dev/sdc  hdd   15R0A0P7FRD6  300G  Unknown  N/A    N/A    No
   srv-03    /dev/sdd  hdd   15R0A0O7FRD6  300G  Unknown  N/A    N/A    No

Using the ``--wide`` option provides all details relating to the device,
including any reasons that the device might not be eligible for use as an OSD.

In the above example you can see fields named "Health", "Ident", and "Fault".
This information is provided by integration with `libstoragemgmt`_. By default,
this integration is disabled (because `libstoragemgmt`_ may not be 100%
compatible with your hardware). To make ``cephadm`` include these fields,
enable cephadm's "enhanced device scan" option as follows:

.. prompt:: bash #

   ceph config set mgr mgr/cephadm/device_enhanced_scan true

.. warning::
   Although the libstoragemgmt library performs standard SCSI inquiry calls,
   there is no guarantee that your firmware fully implements these standards.
   This can lead to erratic behaviour and even bus resets on some older
   hardware. It is therefore recommended that, before enabling this feature,
   you test your hardware's compatibility with libstoragemgmt first to avoid
   unplanned interruptions to services.

   There are a number of ways to test compatibility, but the simplest may be
   to use the cephadm shell to call libstoragemgmt directly - ``cephadm shell
   lsmcli ldl``. If your hardware is supported you should see something like
   this:

   ::

      Path     | SCSI VPD 0x83    | Link Type | Serial Number | Health Status
      ----------------------------------------------------------------------------
      /dev/sda | 50000396082ba631 | SAS       | 15P0A0R0FRD6  | Good
      /dev/sdb | 50000396082bbbf9 | SAS       | 15P0A0YFFRD6  | Good


After you have enabled libstoragemgmt support, the output will look something
like this:

::

   # ceph orch device ls
   Hostname  Path      Type  Serial        Size  Health  Ident  Fault  Available
   srv-01    /dev/sdb  hdd   15P0A0YFFRD6  300G  Good    Off    Off    No
   srv-01    /dev/sdc  hdd   15R0A08WFRD6  300G  Good    Off    Off    No
   :

In this example, libstoragemgmt has confirmed the health of the drives and the ability to
interact with the Identification and Fault LEDs on the drive enclosures. For further
information about interacting with these LEDs, refer to `device management`_.

.. note::
   The current release of `libstoragemgmt`_ (1.8.8) supports SCSI, SAS, and SATA based
   local disks only. There is no official support for NVMe devices (PCIe).

.. _cephadm-deploy-osds:

Deploy OSDs
===========

Listing Storage Devices
-----------------------

In order to deploy an OSD, there must be a storage device that is *available* on
which the OSD will be deployed.

Run this command to display an inventory of storage devices on all cluster hosts:

.. prompt:: bash #

   ceph orch device ls

A storage device is considered *available* if all of the following
conditions are met:

* The device must have no partitions.
* The device must not have any LVM state.
* The device must not be mounted.
* The device must not contain a file system.
* The device must not contain a Ceph BlueStore OSD.
* The device must be larger than 5 GB.

Ceph will not provision an OSD on a device that is not available.

Creating New OSDs
-----------------

There are a few ways to create new OSDs:

* Tell Ceph to consume any available and unused storage device:

  .. prompt:: bash #

     ceph orch apply osd --all-available-devices

* Create an OSD from a specific device on a specific host:

  .. prompt:: bash #

     ceph orch daemon add osd *<host>*:*<device-path>*

  For example:

  .. prompt:: bash #

     ceph orch daemon add osd host1:/dev/sdb

  Advanced OSD creation from specific devices on a specific host:

  .. prompt:: bash #

     ceph orch daemon add osd host1:data_devices=/dev/sda,/dev/sdb,db_devices=/dev/sdc,osds_per_device=2

* You can use :ref:`drivegroups` to categorize device(s) based on their
  properties. This might be useful in forming a clearer picture of which
  devices are available to consume. Properties include device type (SSD or
  HDD), device model names, size, and the hosts on which the devices exist.
  A minimal example spec is sketched just after this list; pass such a file
  to the orchestrator with:

  .. prompt:: bash #

     ceph orch apply -i spec.yml

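For instance, a spec like the following minimal sketch (the service id here is an
arbitrary illustrative name, and the ``rotational`` filter is just one of the
filters described in :ref:`drivegroups`) would turn every available rotational
(HDD) device on every registered host into an OSD:

.. code-block:: yaml

   service_type: osd
   service_id: example_hdd_only     # arbitrary example name
   placement:
     host_pattern: '*'              # all registered hosts
   spec:
     data_devices:
       rotational: 1                # only rotational (HDD) devices become data devices
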
Dry Run
-------

The ``--dry-run`` flag causes the orchestrator to present a preview of what
will happen without actually creating the OSDs.

For example:

  .. prompt:: bash #

     ceph orch apply osd --all-available-devices --dry-run

  ::

     NAME                   HOST   DATA      DB  WAL
     all-available-devices  node1  /dev/vdb  -   -
     all-available-devices  node2  /dev/vdc  -   -
     all-available-devices  node3  /dev/vdd  -   -

.. _cephadm-osd-declarative:

Declarative State
-----------------

The effect of ``ceph orch apply`` is persistent. This means that drives that
are added to the system after the ``ceph orch apply`` command completes will be
automatically found and added to the cluster. It also means that drives that
become available (by zapping, for example) after the ``ceph orch apply``
command completes will be automatically found and added to the cluster.

We will examine the effects of the following command:

   .. prompt:: bash #

      ceph orch apply osd --all-available-devices

After running the above command:

* If you add new disks to the cluster, they will automatically be used to
  create new OSDs.
* If you remove an OSD and clean the LVM physical volume, a new OSD will be
  created automatically.

To disable the automatic creation of OSDs on available devices, use the
``unmanaged`` parameter:

.. prompt:: bash #

   ceph orch apply osd --all-available-devices --unmanaged=true

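The same setting can be expressed declaratively by putting ``unmanaged: true``
at the top level of a service specification and re-applying it. The following
is a sketch; the service id shown assumes the spec created by
``--all-available-devices``, so substitute the id reported by ``ceph orch ls``:

.. code-block:: yaml

   service_type: osd
   service_id: all-available-devices   # assumed id; check ``ceph orch ls`` for the real one
   unmanaged: true                     # cephadm stops creating new OSDs for this spec
   placement:
     host_pattern: '*'
   spec:
     data_devices:
       all: true
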
.. note::

   Keep these three facts in mind:

   - The default behavior of ``ceph orch apply`` causes cephadm to constantly reconcile. This means that cephadm creates OSDs as soon as new drives are detected.

   - Setting ``unmanaged: True`` disables the creation of OSDs. If ``unmanaged: True`` is set, nothing will happen even if you apply a new OSD service.

   - ``ceph orch daemon add`` creates OSDs, but does not add an OSD service.

See also :ref:`cephadm-spec-unmanaged`.

.. _cephadm-osd-removal:

Remove an OSD
=============

Removing an OSD from a cluster involves two steps:

#. evacuating all placement groups (PGs) from the OSD
#. removing the PG-free OSD from the cluster

The following command performs these two steps:

.. prompt:: bash #

   ceph orch osd rm <osd_id(s)> [--replace] [--force]

Example:

.. prompt:: bash #

   ceph orch osd rm 0

Expected output::

   Scheduled OSD(s) for removal

OSDs that are not safe to destroy will be rejected.

.. note::
   After removing OSDs, if the drives the OSDs were deployed on once again
   become available, cephadm may automatically try to deploy more OSDs
   on these drives if they match an existing drivegroup spec. If you deployed
   the OSDs you are removing with a spec and don't want any new OSDs deployed on
   the drives after removal, it's best to modify the drivegroup spec before removal.
   Either set ``unmanaged: true`` to stop it from picking up new drives at all,
   or modify it in some way that it no longer matches the drives used for the
   OSDs you wish to remove. Then re-apply the spec. For more info on drivegroup
   specs see :ref:`drivegroups`. For more info on the declarative nature of
   cephadm in reference to deploying OSDs, see :ref:`cephadm-osd-declarative`.

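As an illustration of the second option, if the spec that created these OSDs
matched every available device, narrowing its ``data_devices`` filter before
re-applying keeps the freed drives from matching again. This is a sketch only;
the service id and the size bound are hypothetical:

.. code-block:: yaml

   service_type: osd
   service_id: osd_spec_default   # hypothetical; use the name shown by ``ceph orch ls``
   placement:
     host_pattern: '*'
   spec:
     data_devices:
       size: '7TB:'               # narrowed filter; assumes the freed drives are smaller than 7TB
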
Monitoring OSD State
--------------------

You can query the state of OSD operation with the following command:

.. prompt:: bash #

   ceph orch osd rm status

Expected output::

   OSD_ID  HOST         STATE                    PG_COUNT  REPLACE  FORCE  STARTED_AT
   2       cephadm-dev  done, waiting for purge  0         True     False  2020-07-17 13:01:43.147684
   3       cephadm-dev  draining                 17        False    True   2020-07-17 13:01:45.162158
   4       cephadm-dev  started                  42        False    True   2020-07-17 13:01:45.162158


When no PGs are left on the OSD, it will be decommissioned and removed from the cluster.

.. note::
   After removing an OSD, if you wipe the LVM physical volume in the device used by the removed OSD, a new OSD will be created.
   For more information on this, read about the ``unmanaged`` parameter in :ref:`cephadm-osd-declarative`.

Stopping OSD Removal
--------------------

It is possible to stop queued OSD removals by using the following command:

.. prompt:: bash #

   ceph orch osd rm stop <osd_id(s)>

Example:

.. prompt:: bash #

   ceph orch osd rm stop 4

Expected output::

   Stopped OSD(s) removal

This resets the state of the OSD and takes it off the removal queue.


Replacing an OSD
----------------

.. prompt:: bash #

   ceph orch osd rm <osd_id(s)> --replace [--force]

Example:

.. prompt:: bash #

   ceph orch osd rm 4 --replace

Expected output::

   Scheduled OSD(s) for replacement

This follows the same procedure as the one described in the "Remove an OSD" section, with
one exception: the OSD is not permanently removed from the CRUSH hierarchy, but is
instead assigned a 'destroyed' flag.

.. note::
   The new OSD that will replace the removed OSD must be created on the same host
   as the OSD that was removed.

**Preserving the OSD ID**

The 'destroyed' flag is used to determine which OSD ids will be reused in the
next OSD deployment.

If you use OSDSpecs for OSD deployment, your newly added disks will be assigned
the OSD ids of their replaced counterparts. This assumes that the new disks
still match the OSDSpecs.

Use the ``--dry-run`` flag to make certain that the ``ceph orch apply osd``
command does what you want it to. The ``--dry-run`` flag shows you what the
outcome of the command will be without making the changes you specify. When
you are satisfied that the command will do what you want, run the command
without the ``--dry-run`` flag.

.. tip::

   The name of your OSDSpec can be retrieved with the command ``ceph orch ls``

Alternatively, you can use your OSDSpec file:

.. prompt:: bash #

   ceph orch apply -i <osd_spec_file> --dry-run

Expected output::

   NAME                HOST   DATA      DB  WAL
   <name_of_osd_spec>  node1  /dev/vdb  -   -


When this output reflects your intention, omit the ``--dry-run`` flag to
execute the deployment.


Erasing Devices (Zapping Devices)
---------------------------------

Erase (zap) a device so that it can be reused. ``zap`` calls ``ceph-volume
zap`` on the remote host.

.. prompt:: bash #

   ceph orch device zap <hostname> <path>

Example command:

.. prompt:: bash #

   ceph orch device zap my_hostname /dev/sdx

.. note::
   If the unmanaged flag is unset, cephadm automatically deploys OSDs on drives that
   match the OSDSpec. For example, if you use the
   ``all-available-devices`` option when creating OSDs, when you ``zap`` a
   device the cephadm orchestrator automatically creates a new OSD on the
   device. To disable this behavior, see :ref:`cephadm-osd-declarative`.


.. _osd_autotune:

Automatically tuning OSD memory
===============================

OSD daemons will adjust their memory consumption based on the
``osd_memory_target`` config option (several gigabytes, by
default). If Ceph is deployed on dedicated nodes that are not sharing
memory with other services, cephadm can automatically adjust the per-OSD
memory consumption based on the total amount of RAM and the number of deployed
OSDs.

.. warning:: Cephadm sets ``osd_memory_target_autotune`` to ``true`` by default, which is unsuitable for hyperconverged infrastructures.

Cephadm will start with a fraction
(``mgr/cephadm/autotune_memory_target_ratio``, which defaults to
``.7``) of the total RAM in the system, subtract off any memory
consumed by non-autotuned daemons (non-OSDs, and OSDs for which
``osd_memory_target_autotune`` is false), and then divide the result by the
remaining OSDs.

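In rough terms, cephadm computes each autotuned OSD's target as follows (the
figures in the example are a hypothetical illustration, not output from a real
cluster)::

   osd_memory_target = (total RAM * autotune_memory_target_ratio
                        - memory used by non-autotuned daemons) / number of autotuned OSDs

   e.g. with 384 GiB RAM, the default ratio of 0.7, 16 GiB used by other daemons, and 10 OSDs:
   (384 GiB * 0.7 - 16 GiB) / 10 = 25.28 GiB per OSD
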
The final targets are reflected in the config database with options like::

   WHO   MASK      LEVEL  OPTION             VALUE
   osd   host:foo  basic  osd_memory_target  126092301926
   osd   host:bar  basic  osd_memory_target  6442450944

Both the limits and the current memory consumed by each daemon are visible from
the ``ceph orch ps`` output in the ``MEM LIMIT`` column::

   NAME   HOST  PORTS  STATUS         REFRESHED  AGE  MEM USED  MEM LIMIT  VERSION                IMAGE ID      CONTAINER ID
   osd.1  dael         running (3h)   10s ago    3h   72857k    117.4G     17.0.0-3781-gafaed750  7015fda3cd67  9e183363d39c
   osd.2  dael         running (81m)  10s ago    81m  63989k    117.4G     17.0.0-3781-gafaed750  7015fda3cd67  1f0cc479b051
   osd.3  dael         running (62m)  10s ago    62m  64071k    117.4G     17.0.0-3781-gafaed750  7015fda3cd67  ac5537492f27

To exclude an OSD from memory autotuning, disable the autotune option
for that OSD and also set a specific memory target. For example,

  .. prompt:: bash #

     ceph config set osd.123 osd_memory_target_autotune false
     ceph config set osd.123 osd_memory_target 16G


.. _drivegroups:

Advanced OSD Service Specifications
===================================

:ref:`orchestrator-cli-service-spec`\s of type ``osd`` are a way to describe a
cluster layout, using the properties of disks. Service specifications give the
user an abstract way to tell Ceph which disks should turn into OSDs with which
configurations, without knowing the specifics of device names and paths.

Service specifications make it possible to define a yaml or json file that can
be used to reduce the amount of manual work involved in creating OSDs.

For example, instead of running the following command:

.. prompt:: bash [monitor.1]#

   ceph orch daemon add osd *<host>*:*<path-to-device>*

for each device and each host, we can define a yaml or json file that allows us
to describe the layout. Here's the most basic example.

Create a file called (for example) ``osd_spec.yml``:

.. code-block:: yaml

   service_type: osd
   service_id: default_drive_group  # custom name of the osd spec
   placement:
     host_pattern: '*'              # which hosts to target
   spec:
     data_devices:                  # the type of devices you are applying specs to
       all: true                    # a filter, check below for a full list

This means:

#. Turn any available device (ceph-volume decides what 'available' is) into an
   OSD on all hosts that match the glob pattern '*'. (The glob pattern matches
   against the registered hosts from `host ls`.) A more detailed section on
   host_pattern is available below.

#. Then pass the file to ``ceph orch apply`` like this:

   .. prompt:: bash [monitor.1]#

      ceph orch apply -i /path/to/osd_spec.yml

   This instruction will be issued to all the matching hosts, and will deploy
   these OSDs.

   Setups more complex than the one specified by the ``all`` filter are
   possible. See :ref:`osd_filters` for details.

   A ``--dry-run`` flag can be passed to the ``apply osd`` command to display a
   synopsis of the proposed layout.

Example:

.. prompt:: bash [monitor.1]#

   ceph orch apply -i /path/to/osd_spec.yml --dry-run


.. _osd_filters:

Filters
-------

.. note::
   Filters are applied using an `AND` gate by default. This means that a drive
   must fulfill all filter criteria in order to get selected. This behavior can
   be adjusted by setting ``filter_logic: OR`` in the OSD specification.

Filters are used to assign disks to groups, using their attributes to group
them.

The attributes are based on ceph-volume's disk query. You can retrieve
information about the attributes with this command:

.. code-block:: bash

   ceph-volume inventory </path/to/disk>

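As an illustration of ``filter_logic``, the following sketch (the service id,
model, and size bound here are hypothetical) selects drives that either match
the given model *or* are at least 2TB in size:

.. code-block:: yaml

   service_type: osd
   service_id: osd_spec_or_example    # hypothetical name
   placement:
     host_pattern: '*'
   spec:
     filter_logic: OR                 # a drive needs to satisfy only one of the filters below
     data_devices:
       model: MC-55-44-ZX
       size: '2TB:'
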
Vendor or Model
^^^^^^^^^^^^^^^

Specific disks can be targeted by vendor or model:

.. code-block:: yaml

   model: disk_model_name

or

.. code-block:: yaml

   vendor: disk_vendor_name


Size
^^^^

Specific disks can be targeted by `Size`:

.. code-block:: yaml

   size: size_spec

Size specs
__________

Size specifications can be of the following forms:

* LOW:HIGH
* :HIGH
* LOW:
* EXACT

Concrete examples:

To include disks of an exact size:

.. code-block:: yaml

   size: '10G'

To include disks within a given range of size:

.. code-block:: yaml

   size: '10G:40G'

To include disks that are less than or equal to 10G in size:

.. code-block:: yaml

   size: ':10G'

To include disks equal to or greater than 40G in size:

.. code-block:: yaml

   size: '40G:'

Sizes don't have to be specified exclusively in Gigabytes (G).

Other units of size are supported: Megabyte (M), Gigabyte (G), and Terabyte (T).
Appending the (B) for byte is also supported: ``MB``, ``GB``, ``TB``.


Rotational
^^^^^^^^^^

This operates on the 'rotational' attribute of the disk.

.. code-block:: yaml

   rotational: 0 | 1

`1` to match all disks that are rotational

`0` to match all disks that are non-rotational (SSD, NVMe, etc.)


All
^^^

This will take all disks that are 'available'.

.. note:: This filter can be used only in the ``data_devices`` section.

.. code-block:: yaml

   all: true


Limiter
^^^^^^^

If you have specified some valid filters but want to limit the number of disks that they match, use the ``limit`` directive:

.. code-block:: yaml

   limit: 2

For example, if you used `vendor` to match all disks that are from `VendorA`
but want to use only the first two, you could use `limit`:

.. code-block:: yaml

   data_devices:
     vendor: VendorA
     limit: 2

.. note:: `limit` is a last resort and shouldn't be used if it can be avoided.


Additional Options
------------------

There are multiple optional settings you can use to change the way OSDs are deployed.
You can add these options to the base level of an OSD spec for them to take effect.

This example would deploy all OSDs with encryption enabled.

.. code-block:: yaml

   service_type: osd
   service_id: example_osd_spec
   placement:
     host_pattern: '*'
   spec:
     data_devices:
       all: true
     encrypted: true

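Another option is ``osds_per_device``, which appeared in the ``ceph orch daemon
add`` example earlier in this page. As a sketch (the service id is
hypothetical), this spec would create two OSDs on each matching non-rotational
device:

.. code-block:: yaml

   service_type: osd
   service_id: example_multi_osd_spec   # hypothetical name
   placement:
     host_pattern: '*'
   spec:
     data_devices:
       rotational: 0          # non-rotational devices only
     osds_per_device: 2       # create two OSDs on each matching device
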
See the full list of options in ``DriveGroupSpec``:

.. py:currentmodule:: ceph.deployment.drive_group

.. autoclass:: DriveGroupSpec
   :members:
   :exclude-members: from_json

Examples
========

The simple case
---------------

All nodes with the same setup:

.. code-block:: none

   20 HDDs
   Vendor: VendorA
   Model: HDD-123-foo
   Size: 4TB

   2 SSDs
   Vendor: VendorB
   Model: MC-55-44-ZX
   Size: 512GB

This is a common setup and can be described quite easily:

.. code-block:: yaml

   service_type: osd
   service_id: osd_spec_default
   placement:
     host_pattern: '*'
   spec:
     data_devices:
       model: HDD-123-foo   # Note, HDD-123 would also be valid
     db_devices:
       model: MC-55-44-ZX   # Same here, MC-55-44 is valid

However, we can improve it by reducing the filters on core properties of the drives:

.. code-block:: yaml

   service_type: osd
   service_id: osd_spec_default
   placement:
     host_pattern: '*'
   spec:
     data_devices:
       rotational: 1
     db_devices:
       rotational: 0

Now all rotating devices are declared as 'data devices' and all non-rotating
devices will be used as shared devices (wal, db).

If you know that drives with more than 2TB will always be the slower data devices, you can also filter by size:

.. code-block:: yaml

   service_type: osd
   service_id: osd_spec_default
   placement:
     host_pattern: '*'
   spec:
     data_devices:
       size: '2TB:'
     db_devices:
       size: ':2TB'

.. note:: All of the above OSD specs are equally valid. Which of those you want to use depends on taste and on how much you expect your node layout to change.


Multiple OSD specs for a single host
------------------------------------

Here we have two distinct setups:

.. code-block:: none

   20 HDDs
   Vendor: VendorA
   Model: HDD-123-foo
   Size: 4TB

   12 SSDs
   Vendor: VendorB
   Model: MC-55-44-ZX
   Size: 512GB

   2 NVMEs
   Vendor: VendorC
   Model: NVME-QQQQ-987
   Size: 256GB


* 20 HDDs should share 2 SSDs
* 10 SSDs should share 2 NVMes

This can be described with two layouts.

.. code-block:: yaml

   service_type: osd
   service_id: osd_spec_hdd
   placement:
     host_pattern: '*'
   spec:
     data_devices:
       rotational: 1
     db_devices:
       model: MC-55-44-ZX
       limit: 2               # db_slots is actually to be favoured here, but it's not implemented yet
   ---
   service_type: osd
   service_id: osd_spec_ssd
   placement:
     host_pattern: '*'
   spec:
     data_devices:
       model: MC-55-44-ZX
     db_devices:
       vendor: VendorC

This would create the desired layout by using all HDDs as data_devices with two SSDs assigned as dedicated db/wal devices.
The remaining SSDs (10) will be data_devices that have the 'VendorC' NVMEs assigned as dedicated db/wal devices.

Multiple hosts with the same disk layout
----------------------------------------

Assuming the cluster has different kinds of hosts, each with a similar disk
layout, it is recommended to apply different OSD specs matching only one
set of hosts. Typically you will have a spec for multiple hosts with the
same layout.

The service id is the unique key: if a new OSD spec with an already
applied service id is applied, the existing OSD spec will be superseded.
cephadm will then create new OSD daemons based on the new spec
definition. Existing OSD daemons will not be affected. See :ref:`cephadm-osd-declarative`.

Node1-5

.. code-block:: none

   20 HDDs
   Vendor: Intel
   Model: SSD-123-foo
   Size: 4TB
   2 SSDs
   Vendor: VendorA
   Model: MC-55-44-ZX
   Size: 512GB

Node6-10

.. code-block:: none

   5 NVMEs
   Vendor: Intel
   Model: SSD-123-foo
   Size: 4TB
   20 SSDs
   Vendor: VendorA
   Model: MC-55-44-ZX
   Size: 512GB

You can use the 'placement' key in the layout to target certain nodes.

.. code-block:: yaml

   service_type: osd
   service_id: disk_layout_a
   placement:
     label: disk_layout_a
   spec:
     data_devices:
       rotational: 1
     db_devices:
       rotational: 0
   ---
   service_type: osd
   service_id: disk_layout_b
   placement:
     label: disk_layout_b
   spec:
     data_devices:
       model: MC-55-44-ZX
     db_devices:
       model: SSD-123-foo

This applies different OSD specs to different hosts depending on the `placement` key.
See :ref:`orchestrator-cli-placement-spec`.

.. note::

   Assuming each host has a unique disk layout, each OSD
   spec needs to have a different service id.


Dedicated wal + db
------------------

All previous cases co-located the WALs with the DBs.
It is, however, possible to deploy the WAL on a dedicated device as well, if it makes sense.

.. code-block:: none

   20 HDDs
   Vendor: VendorA
   Model: SSD-123-foo
   Size: 4TB

   2 SSDs
   Vendor: VendorB
   Model: MC-55-44-ZX
   Size: 512GB

   2 NVMEs
   Vendor: VendorC
   Model: NVME-QQQQ-987
   Size: 256GB


The OSD spec for this case would look like the following (using the `model` filter):

.. code-block:: yaml

   service_type: osd
   service_id: osd_spec_default
   placement:
     host_pattern: '*'
   spec:
     data_devices:
       model: MC-55-44-ZX
     db_devices:
       model: SSD-123-foo
     wal_devices:
       model: NVME-QQQQ-987


It is also possible to specify device paths directly on specific hosts, as in the following:

.. code-block:: yaml

   service_type: osd
   service_id: osd_using_paths
   placement:
     hosts:
       - Node01
       - Node02
   spec:
     data_devices:
       paths:
         - /dev/sdb
     db_devices:
       paths:
         - /dev/sdc
     wal_devices:
       paths:
         - /dev/sdd


This can easily be done with other filters, like `size` or `vendor`, as well.

.. _cephadm-osd-activate:

Activate existing OSDs
======================

In case the OS of a host was reinstalled, existing OSDs need to be activated
again. For this use case, cephadm provides a wrapper for :ref:`ceph-volume-lvm-activate` that
activates all existing OSDs on a host.

.. prompt:: bash #

   ceph cephadm osd activate <host>...

This will scan all existing disks for OSDs and deploy corresponding daemons.

Further Reading
===============

* :ref:`ceph-volume`
* :ref:`rados-index`