=========================
OSD Service Specification
=========================
:ref:`orchestrator-cli-service-spec` of type ``osd`` are a way to describe a cluster layout using the properties of disks.
It gives the user an abstract way to tell Ceph which disks should turn into an OSD
with which configuration, without knowing the specifics of device names and paths.
Instead of doing this::

    [monitor 1] # ceph orch daemon add osd *<host>*:*<path-to-device>*

for each device and each host, we can define a yaml or json file that allows us to describe
the layout. Here's the most basic example.
Create a file called (for example) ``osd_spec.yml``::

    service_type: osd
    service_id: default_drive_group  <- name of the drive_group (name can be custom)
    placement:
      host_pattern: '*'              <- which hosts to target, currently only supports globs
    data_devices:                    <- the type of devices you are applying specs to
      all: true                      <- a filter, check below for a full list
This would translate to:

Turn any available device (ceph-volume decides what 'available' is) into an OSD on all hosts that match
the glob pattern '*'. (The glob pattern matches against the registered hosts from `host ls`.)
There will be a more detailed section on host_pattern down below.
and pass it to `ceph orch apply osd` like so::

    [monitor 1] # ceph orch apply osd -i /path/to/osd_spec.yml

This will go out on all the matching hosts and deploy these OSDs.
For more complex setups, there are more filters than just the 'all' filter.

Also, there is a `--dry-run` flag that can be passed to the `apply osd` command, which gives you a synopsis
of the proposed layout::

    [monitor 1] # ceph orch apply osd -i /path/to/osd_spec.yml --dry-run
Filters are applied using an `AND` gate by default. This means that a drive
needs to fulfill all filter criteria in order to get selected.
If you wish to change this behavior, you can set

`filter_logic: OR`  # valid arguments are `AND`, `OR`

in the OSD Specification.
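For example, with `OR` logic the following sketch would turn a drive into a data device if it matches *either* of the two filters (the service_id, model string and size bound are illustrative; both filters are explained below)::

    service_type: osd
    service_id: osd_spec_or_example
    placement:
      host_pattern: '*'
    data_devices:
      model: MC-55-44-XZ
      size: '2TB:'
    filter_logic: OR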
You can assign disks to certain groups by their attributes using filters.

The attributes are based on ceph-volume's disk query. You can retrieve information
about the attributes with this command::

    ceph-volume inventory </path/to/disk>
You can target specific disks by their Vendor or by their Model::

    model: disk_model_name

or::

    vendor: disk_vendor_name
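In a spec, these filters go under the device section they should apply to. A minimal sketch (the model string is illustrative)::

    data_devices:
      model: MC-55-44-XZ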
You can also match by disk `Size`.

A size specification can take one of the following forms:

* an exact size
* a size within a given range
* an upper bound, e.g. disks less than or equal to 10G in size
* a lower bound, e.g. disks equal to or greater than 40G in size

Sizes don't have to be specified exclusively in Gigabytes (G).

Supported units are Megabytes (M), Gigabytes (G) and Terabytes (T). Appending (B) for byte is also supported: MB, GB, TB.
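Concretely, each of the forms above maps to one value of the `size` filter. A sketch, with each line showing one alternative (10G and 40G are just example boundaries)::

    size: '10G'        # an exact size
    size: '10G:40G'    # a size within the range
    size: ':10G'       # less than or equal to 10G
    size: '40G:'       # equal to or greater than 40G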
The `rotational` filter operates on the 'rotational' attribute of the disk:

* `1` to match all disks that are rotational
* `0` to match all disks that are non-rotational (SSD, NVME, etc.)
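For example, a sketch that selects only non-rotational devices as data devices::

    data_devices:
      rotational: 0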
The `all` filter will take all disks that are 'available'.

Note: This filter is exclusive to the data_devices section.
When you have specified valid filters but want to limit the number of matching disks, you can use the `limit` directive.

For example, if you used `vendor` to match all disks that are from `VendorA` but only want to use the first two,
you could use `limit`, as sketched below.
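A minimal sketch of such a filter (assuming `VendorA` is the vendor string reported by ceph-volume)::

    data_devices:
      vendor: VendorA
      limit: 2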
Note: Be aware that `limit` is really just a last resort and shouldn't be used if it can be avoided.
There are multiple optional settings you can use to change the way OSDs are deployed.
You can add these options to the base level of a DriveGroup for it to take effect.

This example would deploy all OSDs with encryption enabled::

    service_type: osd
    service_id: example_osd_spec
    placement:
      host_pattern: '*'
    data_devices:
      all: true
    encrypted: true
See a full list in the DriveGroupSpecs.

.. py:currentmodule:: ceph.deployment.drive_group

.. autoclass:: DriveGroupSpec
   :members:
   :exclude-members: from_json
The simple case
---------------

All nodes have the same setup: a number of identical HDDs (model HDD-123-foo) that should become data devices,
and a couple of faster SSDs (model MC-55-44-XZ) that should hold their db/wal.

This is a common setup and can be described quite easily::

    service_type: osd
    service_id: osd_spec_default
    placement:
      host_pattern: '*'
    data_devices:
      model: HDD-123-foo  <- note that HDD-123 would also be valid
    db_devices:
      model: MC-55-44-XZ  <- same here, MC-55-44 is valid
However, we can improve it by reducing the filters on core properties of the drives::

    service_type: osd
    service_id: osd_spec_default
    placement:
      host_pattern: '*'
    data_devices:
      rotational: 1
    db_devices:
      rotational: 0

Now, we enforce all rotating devices to be declared as 'data devices' and all non-rotating devices will be used as shared_devices (wal, db).
If you know that drives with more than 2TB will always be the slower data devices, you can also filter by size::

    service_type: osd
    service_id: osd_spec_default
    placement:
      host_pattern: '*'
    data_devices:
      size: '2TB:'
    db_devices:
      size: ':2TB'
Note: All of the above DriveGroups are equally valid. Which of those you want to use depends on taste and on how much you expect your node layout to change.
The advanced case
-----------------

Here we have two distinct setups:
* 20 HDDs should share 2 SSDs
* 10 SSDs should share 2 NVMes
This can be described with two layouts::

    service_type: osd
    service_id: osd_spec_hdd
    placement:
      host_pattern: '*'
    data_devices:
      rotational: 1
    db_devices:
      model: MC-55-44-XZ
      limit: 2  # db_slots is actually to be favoured here, but it's not implemented yet
    ---
    service_type: osd
    service_id: osd_spec_ssd
    placement:
      host_pattern: '*'
    data_devices:
      model: MC-55-44-XZ
    db_devices:
      vendor: VendorC
This would create the desired layout by using all HDDs as data_devices, with two SSDs assigned as dedicated db/wal devices.
The remaining SSDs (8) will be data_devices that have the 'VendorC' NVMEs assigned as dedicated db/wal devices.
The advanced case (with non-uniform nodes)
------------------------------------------

The examples above assumed that all nodes have the same drives. That, however, is not always the case.
You can use the 'host_pattern' key in the layout to target certain nodes. Salt target notation helps to keep things easy::

    service_type: osd
    service_id: osd_spec_node_one_to_five
    placement:
      host_pattern: 'node[1-5]'
    data_devices:
      rotational: 1
    db_devices:
      rotational: 0
    ---
    service_type: osd
    service_id: osd_spec_six_to_ten
    placement:
      host_pattern: 'node[6-10]'
    data_devices:
      model: MC-55-44-XZ
    db_devices:
      vendor: VendorC
This applies different OSD specs to different hosts depending on the `host_pattern` key.
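If you are unsure which hosts a pattern will match, remember that it is evaluated against the hosts registered with the orchestrator, which you can list with::

    [monitor 1] # ceph orch host ls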
All previous cases co-located the WALs with the DBs.
It is, however, possible to deploy the WAL on a dedicated device as well, if it makes sense.
The OSD spec for this case would look like the following (using the `model` filter)::

    service_type: osd
    service_id: osd_spec_default
    placement:
      host_pattern: '*'
    data_devices:
      model: HDD-123-foo
    db_devices:
      model: MC-55-44-XZ
    wal_devices:
      model: NVME-QQQQ-987

This can easily be done with other filters, like `size` or `vendor`, as well.
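For instance, a rough equivalent mixing `size`, `model` and `vendor` filters could look like this (a sketch; the size boundary and the vendor/model strings are illustrative)::

    service_type: osd
    service_id: osd_spec_default
    placement:
      host_pattern: '*'
    data_devices:
      size: '2TB:'
    db_devices:
      model: MC-55-44-XZ
    wal_devices:
      vendor: VendorC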