1 ============
2 CRUSH Maps
3 ============
4
5 The :abbr:`CRUSH (Controlled Replication Under Scalable Hashing)` algorithm
6 determines how to store and retrieve data by computing data storage locations.
7 CRUSH empowers Ceph clients to communicate with OSDs directly rather than
8 through a centralized server or broker. With an algorithmically determined
9 method of storing and retrieving data, Ceph avoids a single point of failure, a
10 performance bottleneck, and a physical limit to its scalability.
11
12 CRUSH requires a map of your cluster, and uses the CRUSH map to pseudo-randomly
13 store and retrieve data in OSDs with a uniform distribution of data across the
14 cluster. For a detailed discussion of CRUSH, see
15 `CRUSH - Controlled, Scalable, Decentralized Placement of Replicated Data`_
16
17 CRUSH maps contain a list of :abbr:`OSDs (Object Storage Devices)`, a list of
18 'buckets' for aggregating the devices into physical locations, and a list of
19 rules that tell CRUSH how it should replicate data in a Ceph cluster's pools. By
20 reflecting the underlying physical organization of the installation, CRUSH can
21 model—and thereby address—potential sources of correlated device failures.
22 Typical sources include physical proximity, a shared power source, and a shared
23 network. By encoding this information into the cluster map, CRUSH placement
24 policies can separate object replicas across different failure domains while
25 still maintaining the desired distribution. For example, to address the
26 possibility of concurrent failures, it may be desirable to ensure that data
27 replicas are on devices using different shelves, racks, power supplies,
28 controllers, and/or physical locations.
29
30 When you create a configuration file and deploy Ceph with ``ceph-deploy``, Ceph
31 generates a default CRUSH map for your configuration. The default CRUSH map is
32 fine for your Ceph sandbox environment. However, when you deploy a large-scale
33 data cluster, you should give significant consideration to developing a custom
34 CRUSH map, because it will help you manage your Ceph cluster, improve
35 performance and ensure data safety.
36
37 For example, if an OSD goes down, a CRUSH map can help you to locate
38 the physical data center, room, row and rack of the host with the failed OSD in
39 the event you need to use onsite support or replace hardware.
40
41 Similarly, CRUSH may help you identify faults more quickly. For example, if all
42 OSDs in a particular rack go down simultaneously, the fault may lie with a
43 network switch or power to the rack rather than the OSDs themselves.
44
45 A custom CRUSH map can also help you identify the physical locations where
46 Ceph stores redundant copies of data when the placement group(s) associated
47 with a failed host are in a degraded state.
48
49 .. note:: Lines of code in example boxes may extend past the edge of the box.
50 Please scroll when reading or copying longer examples.
51
52
53 CRUSH Location
54 ==============
55
56 The location of an OSD in terms of the CRUSH map's hierarchy is referred to
57 as a 'crush location'. This location specifier takes the form of a list of
58 key and value pairs describing a position. For example, if an OSD is in a
59 particular row, rack, chassis and host, and is part of the 'default' CRUSH
60 tree, its crush location could be described as::
61
62 root=default row=a rack=a2 chassis=a2a host=a2a1
63
64 Note:
65
#. The order of the keys does not matter.
67 #. The key name (left of ``=``) must be a valid CRUSH ``type``. By default
68 these include root, datacenter, room, row, pod, pdu, rack, chassis and host,
69 but those types can be customized to be anything appropriate by modifying
70 the CRUSH map.
71 #. Not all keys need to be specified. For example, by default, Ceph
72 automatically sets a ``ceph-osd`` daemon's location to be
73 ``root=default host=HOSTNAME`` (based on the output from ``hostname -s``).
74
75 ceph-crush-location hook
76 ------------------------
77
78 By default, the ``ceph-crush-location`` utility will generate a CRUSH
79 location string for a given daemon. The location is based on, in order of
80 preference:
81
82 #. A ``TYPE crush location`` option in ceph.conf. For example, this
83 is ``osd crush location`` for OSD daemons.
84 #. A ``crush location`` option in ceph.conf.
85 #. A default of ``root=default host=HOSTNAME`` where the hostname is
86 generated with the ``hostname -s`` command.
87
88 In a typical deployment scenario, provisioning software (or the system
89 administrator) can simply set the 'crush location' field in a host's
90 ceph.conf to describe that machine's location within the datacenter or
cluster. This provides location awareness to Ceph daemons and clients alike.
93
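For example, a host's ceph.conf might carry a snippet along these lines
(reusing the sample location string from above; substitute bucket names that
exist in your own hierarchy)::

        [osd]
        osd crush location = root=default row=a rack=a2 chassis=a2a host=a2a1
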
94 It is possible to manage the CRUSH map entirely manually by toggling
95 the hook off in the configuration::
96
97 osd crush update on start = false
98
99 Custom location hooks
100 ---------------------
101
102 A customized location hook can be used in place of the generic hook for OSD
103 daemon placement in the hierarchy. (On startup, each OSD ensures its position is
104 correct.)::
105
106 osd crush location hook = /path/to/script
107
This hook is passed several arguments (see the invocation below) and should
output a single line to stdout with the CRUSH location description::
110
111 $ ceph-crush-location --cluster CLUSTER --id ID --type TYPE
112
113 where the cluster name is typically 'ceph', the id is the daemon
114 identifier (the OSD number), and the daemon type is typically ``osd``.
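
As an illustration only, a custom hook could be a small shell script that
prints a location line, for example based on a file dropped onto the host by
your provisioning tooling (the path ``/etc/ceph/crush-location.conf`` below is
a hypothetical choice, not a Ceph convention)::

        #!/bin/sh
        # Hypothetical custom CRUSH location hook.
        # Invoked as: <script> --cluster CLUSTER --id ID --type TYPE
        # It must print a single CRUSH location line to stdout.
        if [ -r /etc/ceph/crush-location.conf ]; then
            # e.g. the file contains: row=a rack=a2 chassis=a2a
            echo "root=default $(cat /etc/ceph/crush-location.conf) host=$(hostname -s)"
        else
            echo "root=default host=$(hostname -s)"
        fi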
115
116
117 Editing a CRUSH Map
118 ===================
119
120 To edit an existing CRUSH map:
121
122 #. `Get the CRUSH map`_.
123 #. `Decompile`_ the CRUSH map.
124 #. Edit at least one of `Devices`_, `Buckets`_ and `Rules`_.
125 #. `Recompile`_ the CRUSH map.
126 #. `Set the CRUSH map`_.
127
128 To activate CRUSH map rules for a specific pool, identify the common ruleset
129 number for those rules and specify that ruleset number for the pool. See `Set
130 Pool Values`_ for details.
131
132 .. _Get the CRUSH map: #getcrushmap
133 .. _Decompile: #decompilecrushmap
134 .. _Devices: #crushmapdevices
135 .. _Buckets: #crushmapbuckets
136 .. _Rules: #crushmaprules
137 .. _Recompile: #compilecrushmap
138 .. _Set the CRUSH map: #setcrushmap
139 .. _Set Pool Values: ../pools#setpoolvalues
140
141 .. _getcrushmap:
142
143 Get a CRUSH Map
144 ---------------
145
146 To get the CRUSH map for your cluster, execute the following::
147
148 ceph osd getcrushmap -o {compiled-crushmap-filename}
149
Ceph will output (-o) a compiled CRUSH map to the filename you specified. Since
the CRUSH map is in a compiled form, you must decompile it before you can
edit it.
153
154 .. _decompilecrushmap:
155
156 Decompile a CRUSH Map
157 ---------------------
158
159 To decompile a CRUSH map, execute the following::
160
161 crushtool -d {compiled-crushmap-filename} -o {decompiled-crushmap-filename}
162
163 Ceph will decompile (-d) the compiled CRUSH map and output (-o) it to the
164 filename you specified.
165
166
167 .. _compilecrushmap:
168
169 Compile a CRUSH Map
170 -------------------
171
172 To compile a CRUSH map, execute the following::
173
174 crushtool -c {decompiled-crush-map-filename} -o {compiled-crush-map-filename}
175
176 Ceph will store a compiled CRUSH map to the filename you specified.
177
178
179 .. _setcrushmap:
180
181 Set a CRUSH Map
182 ---------------
183
184 To set the CRUSH map for your cluster, execute the following::
185
186 ceph osd setcrushmap -i {compiled-crushmap-filename}
187
188 Ceph will input the compiled CRUSH map of the filename you specified as the
189 CRUSH map for the cluster.
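
Putting these steps together, a complete edit cycle looks like the following
(the filenames ``crushmap.bin``, ``crushmap.txt``, and ``crushmap-new.bin``
are arbitrary)::

        ceph osd getcrushmap -o crushmap.bin
        crushtool -d crushmap.bin -o crushmap.txt
        # edit crushmap.txt (devices, buckets, and/or rules)
        crushtool -c crushmap.txt -o crushmap-new.bin
        ceph osd setcrushmap -i crushmap-new.bin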
190
191
192
193 CRUSH Map Parameters
194 ====================
195
196 There are four main sections to a CRUSH Map.
197
198 #. **Devices:** Devices consist of any object storage device--i.e., the storage
199 drive corresponding to a ``ceph-osd`` daemon. You should have a device for
200 each OSD daemon in your Ceph configuration file.
201
202 #. **Bucket Types**: Bucket ``types`` define the types of buckets used in your
203 CRUSH hierarchy. Buckets consist of a hierarchical aggregation of storage
204 locations (e.g., rows, racks, chassis, hosts, etc.) and their assigned
205 weights.
206
207 #. **Bucket Instances:** Once you define bucket types, you must declare bucket
208 instances for your hosts, and any other failure domain partitioning
209 you choose.
210
#. **Rules:** Rules define the manner in which CRUSH selects buckets when placing data.
212
213 If you launched Ceph using one of our Quick Start guides, you'll notice
214 that you didn't need to create a CRUSH map. Ceph's deployment tools generate
215 a default CRUSH map that lists devices from the OSDs you defined in your
216 Ceph configuration file, and it declares a bucket for each host you specified
217 in the ``[osd]`` sections of your Ceph configuration file. You should create
218 your own CRUSH maps with buckets that reflect your cluster's failure domains
219 to better ensure data safety and availability.
220
.. note:: The generated CRUSH map doesn't take larger-grained failure
   domains into account, so you should modify your CRUSH map to include
   failure domains such as chassis, racks, rows, data centers, etc.
225
226
227
228 .. _crushmapdevices:
229
230 CRUSH Map Devices
231 -----------------
232
233 To map placement groups to OSDs, a CRUSH map requires a list of OSD devices
234 (i.e., the names of the OSD daemons from the Ceph configuration file). The list
235 of devices appears first in the CRUSH map. To declare a device in the CRUSH map,
236 create a new line under your list of devices, enter ``device`` followed by a
237 unique numeric ID, followed by the corresponding ``ceph-osd`` daemon instance.
A device class can optionally be added to group devices so they can be
conveniently targeted by a CRUSH rule.
240
241 ::
242
243 #devices
244 device {num} {osd.name} [class {class}]
245
246 For example::
247
248 #devices
249 device 0 osd.0 class ssd
250 device 1 osd.1 class hdd
251 device 2 osd.2
252 device 3 osd.3
253
254 As a general rule, an OSD daemon maps to a single storage drive or to a RAID.
255
256
257 CRUSH Map Bucket Types
258 ----------------------
259
260 The second list in the CRUSH map defines 'bucket' types. Buckets facilitate
261 a hierarchy of nodes and leaves. Node (or non-leaf) buckets typically represent
262 physical locations in a hierarchy. Nodes aggregate other nodes or leaves.
263 Leaf buckets represent ``ceph-osd`` daemons and their corresponding storage
264 media.
265
266 .. tip:: The term "bucket" used in the context of CRUSH means a node in
267 the hierarchy, i.e. a location or a piece of physical hardware. It
268 is a different concept from the term "bucket" when used in the
269 context of RADOS Gateway APIs.
270
271 To add a bucket type to the CRUSH map, create a new line under your list of
272 bucket types. Enter ``type`` followed by a unique numeric ID and a bucket name.
By convention, there is one leaf bucket type and it is ``type 0``; however, you
may give it any name you like (e.g., osd, disk, drive, storage, etc.)::
275
276 #types
277 type {num} {bucket-name}
278
279 For example::
280
281 # types
282 type 0 osd
283 type 1 host
284 type 2 chassis
285 type 3 rack
286 type 4 row
287 type 5 pdu
288 type 6 pod
289 type 7 room
290 type 8 datacenter
291 type 9 region
292 type 10 root
293
294
295
296 .. _crushmapbuckets:
297
298 CRUSH Map Bucket Hierarchy
299 --------------------------
300
301 The CRUSH algorithm distributes data objects among storage devices according
302 to a per-device weight value, approximating a uniform probability distribution.
303 CRUSH distributes objects and their replicas according to the hierarchical
304 cluster map you define. Your CRUSH map represents the available storage
305 devices and the logical elements that contain them.
306
307 To map placement groups to OSDs across failure domains, a CRUSH map defines a
308 hierarchical list of bucket types (i.e., under ``#types`` in the generated CRUSH
309 map). The purpose of creating a bucket hierarchy is to segregate the
310 leaf nodes by their failure domains, such as hosts, chassis, racks, power
311 distribution units, pods, rows, rooms, and data centers. With the exception of
312 the leaf nodes representing OSDs, the rest of the hierarchy is arbitrary, and
313 you may define it according to your own needs.
314
We recommend adapting your CRUSH map to your firm's hardware naming conventions
and using instance names that reflect the physical hardware. Your naming
practice can make it easier to administer the cluster and troubleshoot
problems when an OSD and/or other hardware malfunctions and the administrator
needs access to physical hardware.
320
321 In the following example, the bucket hierarchy has a leaf bucket named ``osd``,
322 and two node buckets named ``host`` and ``rack`` respectively.
323
324 .. ditaa::
325 +-----------+
326 | {o}rack |
327 | Bucket |
328 +-----+-----+
329 |
330 +---------------+---------------+
331 | |
332 +-----+-----+ +-----+-----+
333 | {o}host | | {o}host |
334 | Bucket | | Bucket |
335 +-----+-----+ +-----+-----+
336 | |
337 +-------+-------+ +-------+-------+
338 | | | |
339 +-----+-----+ +-----+-----+ +-----+-----+ +-----+-----+
340 | osd | | osd | | osd | | osd |
341 | Bucket | | Bucket | | Bucket | | Bucket |
342 +-----------+ +-----------+ +-----------+ +-----------+
343
344 .. note:: The higher numbered ``rack`` bucket type aggregates the lower
345 numbered ``host`` bucket type.
346
347 Since leaf nodes reflect storage devices declared under the ``#devices`` list
348 at the beginning of the CRUSH map, you do not need to declare them as bucket
349 instances. The second lowest bucket type in your hierarchy usually aggregates
350 the devices (i.e., it's usually the computer containing the storage media, and
351 uses whatever term you prefer to describe it, such as "node", "computer",
352 "server," "host", "machine", etc.). In high density environments, it is
353 increasingly common to see multiple hosts/nodes per chassis. You should account
354 for chassis failure too--e.g., the need to pull a chassis if a node fails may
355 result in bringing down numerous hosts/nodes and their OSDs.
356
357 When declaring a bucket instance, you must specify its type, give it a unique
358 name (string), assign it a unique ID expressed as a negative integer (optional),
359 specify a weight relative to the total capacity/capability of its item(s),
360 specify the bucket algorithm (usually ``straw``), and the hash (usually ``0``,
361 reflecting hash algorithm ``rjenkins1``). A bucket may have one or more items.
362 The items may consist of node buckets or leaves. Items may have a weight that
363 reflects the relative weight of the item.
364
365 You may declare a node bucket with the following syntax::
366
        [bucket-type] [bucket-name] {
                id [a unique negative numeric ID]
                weight [the relative capacity/capability of the item(s)]
                alg [the bucket algorithm: uniform | list | tree | straw ]
                hash [the hash type: 0 by default]
                item [item-name] weight [weight]
        }
374
375 For example, using the diagram above, we would define two host buckets
376 and one rack bucket. The OSDs are declared as items within the host buckets::
377
378 host node1 {
379 id -1
380 alg straw
381 hash 0
382 item osd.0 weight 1.00
383 item osd.1 weight 1.00
384 }
385
386 host node2 {
387 id -2
388 alg straw
389 hash 0
390 item osd.2 weight 1.00
391 item osd.3 weight 1.00
392 }
393
394 rack rack1 {
395 id -3
396 alg straw
397 hash 0
398 item node1 weight 2.00
399 item node2 weight 2.00
400 }
401
.. note:: In the foregoing example, the rack bucket does not contain
   any OSDs. Rather, it contains lower-level host buckets and includes the
   sum of their weights in the item entries.
405
406 .. topic:: Bucket Types
407
408 Ceph supports four bucket types, each representing a tradeoff between
409 performance and reorganization efficiency. If you are unsure of which bucket
410 type to use, we recommend using a ``straw`` bucket. For a detailed
411 discussion of bucket types, refer to
412 `CRUSH - Controlled, Scalable, Decentralized Placement of Replicated Data`_,
413 and more specifically to **Section 3.4**. The bucket types are:
414
415 #. **Uniform:** Uniform buckets aggregate devices with **exactly** the same
416 weight. For example, when firms commission or decommission hardware, they
417 typically do so with many machines that have exactly the same physical
418 configuration (e.g., bulk purchases). When storage devices have exactly
419 the same weight, you may use the ``uniform`` bucket type, which allows
420 CRUSH to map replicas into uniform buckets in constant time. With
421 non-uniform weights, you should use another bucket algorithm.
422
423 #. **List**: List buckets aggregate their content as linked lists. Based on
424 the :abbr:`RUSH (Replication Under Scalable Hashing)` :sub:`P` algorithm,
425 a list is a natural and intuitive choice for an **expanding cluster**:
426 either an object is relocated to the newest device with some appropriate
427 probability, or it remains on the older devices as before. The result is
428 optimal data migration when items are added to the bucket. Items removed
429 from the middle or tail of the list, however, can result in a significant
430 amount of unnecessary movement, making list buckets most suitable for
431 circumstances in which they **never (or very rarely) shrink**.
432
433 #. **Tree**: Tree buckets use a binary search tree. They are more efficient
434 than list buckets when a bucket contains a larger set of items. Based on
435 the :abbr:`RUSH (Replication Under Scalable Hashing)` :sub:`R` algorithm,
436 tree buckets reduce the placement time to O(log :sub:`n`), making them
437 suitable for managing much larger sets of devices or nested buckets.
438
439 #. **Straw:** List and Tree buckets use a divide and conquer strategy
440 in a way that either gives certain items precedence (e.g., those
441 at the beginning of a list) or obviates the need to consider entire
442 subtrees of items at all. That improves the performance of the replica
443 placement process, but can also introduce suboptimal reorganization
behavior when the contents of a bucket change due to an addition, removal,
445 or re-weighting of an item. The straw bucket type allows all items to
446 fairly “compete” against each other for replica placement through a
447 process analogous to a draw of straws.
448
449 .. topic:: Hash
450
451 Each bucket uses a hash algorithm. Currently, Ceph supports ``rjenkins1``.
452 Enter ``0`` as your hash setting to select ``rjenkins1``.
453
454
455 .. _weightingbucketitems:
456
457 .. topic:: Weighting Bucket Items
458
Ceph expresses bucket weights as doubles, which allows for fine-grained
weighting. A weight is a relative measure of device capacity. We
461 recommend using ``1.00`` as the relative weight for a 1TB storage device.
462 In such a scenario, a weight of ``0.5`` would represent approximately 500GB,
463 and a weight of ``3.00`` would represent approximately 3TB. Higher level
464 buckets have a weight that is the sum total of the leaf items aggregated by
465 the bucket.
466
467 A bucket item weight is one dimensional, but you may also calculate your
468 item weights to reflect the performance of the storage drive. For example,
469 if you have many 1TB drives where some have relatively low data transfer
470 rate and the others have a relatively high data transfer rate, you may
471 weight them differently, even though they have the same capacity (e.g.,
472 a weight of 0.80 for the first set of drives with lower total throughput,
473 and 1.20 for the second set of drives with higher total throughput).
474
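As a small worked example of this convention, a hypothetical host ``node3``
containing one 1TB drive (``osd.10``) and one 3TB drive (``osd.11``) could be
declared with item weights ``1.00`` and ``3.00``, and would then be referenced
by its parent bucket with weight ``4.00``::

        host node3 {
                id -10
                alg straw
                hash 0
                item osd.10 weight 1.00   # 1TB drive
                item osd.11 weight 3.00   # 3TB drive
        }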
475
476 .. _crushmaprules:
477
478 CRUSH Map Rules
479 ---------------
480
481 CRUSH maps support the notion of 'CRUSH rules', which are the rules that
482 determine data placement for a pool. For large clusters, you will likely create
483 many pools where each pool may have its own CRUSH ruleset and rules. The default
484 CRUSH map has a rule for each pool, and one ruleset assigned to each of the
485 default pools.
486
487 .. note:: In most cases, you will not need to modify the default rules. When
488 you create a new pool, its default ruleset is ``0``.
489
490
491 CRUSH rules define placement and replication strategies or distribution policies
492 that allow you to specify exactly how CRUSH places object replicas. For
493 example, you might create a rule selecting a pair of targets for 2-way
494 mirroring, another rule for selecting three targets in two different data
495 centers for 3-way mirroring, and yet another rule for erasure coding over six
496 storage devices. For a detailed discussion of CRUSH rules, refer to
497 `CRUSH - Controlled, Scalable, Decentralized Placement of Replicated Data`_,
498 and more specifically to **Section 3.2**.
499
500 A rule takes the following form::
501
502 rule <rulename> {
503
504 ruleset <ruleset>
505 type [ replicated | erasure ]
506 min_size <min-size>
507 max_size <max-size>
508 step take <bucket-name> [class <device-class>]
509 step [choose|chooseleaf] [firstn|indep] <N> <bucket-type>
510 step emit
511 }
512
513
514 ``ruleset``
515
516 :Description: A means of classifying a rule as belonging to a set of rules.
517 Activated by `setting the ruleset in a pool`_.
518
519 :Purpose: A component of the rule mask.
520 :Type: Integer
521 :Required: Yes
522 :Default: 0
523
524 .. _setting the ruleset in a pool: ../pools#setpoolvalues
525
526
527 ``type``
528
:Description: Describes whether the rule applies to a replicated pool
              or an erasure-coded pool.
531
532 :Purpose: A component of the rule mask.
533 :Type: String
534 :Required: Yes
535 :Default: ``replicated``
536 :Valid Values: Currently only ``replicated`` and ``erasure``
537
538 ``min_size``
539
540 :Description: If a pool makes fewer replicas than this number, CRUSH will
541 **NOT** select this rule.
542
543 :Type: Integer
544 :Purpose: A component of the rule mask.
545 :Required: Yes
546 :Default: ``1``
547
548 ``max_size``
549
550 :Description: If a pool makes more replicas than this number, CRUSH will
551 **NOT** select this rule.
552
553 :Type: Integer
554 :Purpose: A component of the rule mask.
555 :Required: Yes
556 :Default: 10
557
558
559 ``step take <bucket-name> [class <device-class>]``
560
561 :Description: Takes a bucket name, and begins iterating down the tree.
562 If the ``device-class`` is specified, it must match
563 a class previously used when defining a device. All
564 devices that do not belong to the class are excluded.
565 :Purpose: A component of the rule.
566 :Required: Yes
567 :Example: ``step take data``
568
569
570 ``step choose firstn {num} type {bucket-type}``
571
572 :Description: Selects the number of buckets of the given type. The number is
573 usually the number of replicas in the pool (i.e., pool size).
574
575 - If ``{num} == 0``, choose ``pool-num-replicas`` buckets (all available).
576 - If ``{num} > 0 && < pool-num-replicas``, choose that many buckets.
577 - If ``{num} < 0``, it means ``pool-num-replicas - {num}``.
578
579 :Purpose: A component of the rule.
580 :Prerequisite: Follows ``step take`` or ``step choose``.
581 :Example: ``step choose firstn 1 type row``
582
583
584 ``step chooseleaf firstn {num} type {bucket-type}``
585
586 :Description: Selects a set of buckets of ``{bucket-type}`` and chooses a leaf
587 node from the subtree of each bucket in the set of buckets. The
588 number of buckets in the set is usually the number of replicas in
589 the pool (i.e., pool size).
590
591 - If ``{num} == 0``, choose ``pool-num-replicas`` buckets (all available).
592 - If ``{num} > 0 && < pool-num-replicas``, choose that many buckets.
593 - If ``{num} < 0``, it means ``pool-num-replicas - {num}``.
594
595 :Purpose: A component of the rule. Usage removes the need to select a device using two steps.
596 :Prerequisite: Follows ``step take`` or ``step choose``.
597 :Example: ``step chooseleaf firstn 0 type row``
598
599
600
601 ``step emit``
602
603 :Description: Outputs the current value and empties the stack. Typically used
604 at the end of a rule, but may also be used to pick from different
605 trees in the same rule.
606
607 :Purpose: A component of the rule.
608 :Prerequisite: Follows ``step choose``.
609 :Example: ``step emit``
610
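As a sketch that ties these steps together (assuming a root bucket named
``default`` and a device class named ``ssd``, as in the device list example
above), the following rule keeps each replica on a separate host and restricts
placement to ``ssd`` devices::

        rule ssd-replicated {
                ruleset 1
                type replicated
                min_size 1
                max_size 10
                step take default class ssd
                step chooseleaf firstn 0 type host
                step emit
        }
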
.. important:: To activate one or more rules with a common ruleset number for a
   pool, set the pool's ruleset number to that value.
613
614
615
616 Primary Affinity
617 ================
618
619 When a Ceph Client reads or writes data, it always contacts the primary OSD in
620 the acting set. For set ``[2, 3, 4]``, ``osd.2`` is the primary. Sometimes an
621 OSD isn't well suited to act as a primary compared to other OSDs (e.g., it has
622 a slow disk or a slow controller). To prevent performance bottlenecks
623 (especially on read operations) while maximizing utilization of your hardware,
624 you can set a Ceph OSD's primary affinity so that CRUSH is less likely to use
625 the OSD as a primary in an acting set. ::
626
627 ceph osd primary-affinity <osd-id> <weight>
628
Primary affinity is ``1`` by default (*i.e.,* an OSD may act as a primary). You
may set the primary affinity anywhere in the range ``0``-``1``, where ``0`` means
that the OSD may **NOT** be used as a primary and ``1`` means that it may. When
the affinity is ``< 1``, it is less likely that CRUSH will select the Ceph OSD
Daemon to act as a primary.
634
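For example, to make CRUSH less likely to select ``osd.2`` (the primary in the
acting set above) as a primary, you could lower its primary affinity::

        ceph osd primary-affinity osd.2 0.5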
635
Placing Different Pools on Different OSDs
=========================================
638
639 Suppose you want to have most pools default to OSDs backed by large hard drives,
640 but have some pools mapped to OSDs backed by fast solid-state drives (SSDs).
641 It's possible to have multiple independent CRUSH hierarchies within the same
642 CRUSH map. Define two hierarchies with two different root nodes--one for hard
643 disks (e.g., "root platter") and one for SSDs (e.g., "root ssd") as shown
644 below::
645
646 device 0 osd.0
647 device 1 osd.1
648 device 2 osd.2
649 device 3 osd.3
650 device 4 osd.4
651 device 5 osd.5
652 device 6 osd.6
653 device 7 osd.7
654
655 host ceph-osd-ssd-server-1 {
656 id -1
657 alg straw
658 hash 0
659 item osd.0 weight 1.00
660 item osd.1 weight 1.00
661 }
662
663 host ceph-osd-ssd-server-2 {
664 id -2
665 alg straw
666 hash 0
667 item osd.2 weight 1.00
668 item osd.3 weight 1.00
669 }
670
671 host ceph-osd-platter-server-1 {
672 id -3
673 alg straw
674 hash 0
675 item osd.4 weight 1.00
676 item osd.5 weight 1.00
677 }
678
679 host ceph-osd-platter-server-2 {
680 id -4
681 alg straw
682 hash 0
683 item osd.6 weight 1.00
684 item osd.7 weight 1.00
685 }
686
687 root platter {
688 id -5
689 alg straw
690 hash 0
691 item ceph-osd-platter-server-1 weight 2.00
692 item ceph-osd-platter-server-2 weight 2.00
693 }
694
695 root ssd {
696 id -6
697 alg straw
698 hash 0
699 item ceph-osd-ssd-server-1 weight 2.00
700 item ceph-osd-ssd-server-2 weight 2.00
701 }
702
703 rule data {
704 ruleset 0
705 type replicated
706 min_size 2
707 max_size 2
708 step take platter
709 step chooseleaf firstn 0 type host
710 step emit
711 }
712
713 rule metadata {
714 ruleset 1
715 type replicated
716 min_size 0
717 max_size 10
718 step take platter
719 step chooseleaf firstn 0 type host
720 step emit
721 }
722
723 rule rbd {
724 ruleset 2
725 type replicated
726 min_size 0
727 max_size 10
728 step take platter
729 step chooseleaf firstn 0 type host
730 step emit
731 }
732
733 rule platter {
734 ruleset 3
735 type replicated
736 min_size 0
737 max_size 10
738 step take platter
739 step chooseleaf firstn 0 type host
740 step emit
741 }
742
743 rule ssd {
744 ruleset 4
745 type replicated
746 min_size 0
747 max_size 4
748 step take ssd
749 step chooseleaf firstn 0 type host
750 step emit
751 }
752
753 rule ssd-primary {
754 ruleset 5
755 type replicated
756 min_size 5
757 max_size 10
758 step take ssd
759 step chooseleaf firstn 1 type host
760 step emit
761 step take platter
762 step chooseleaf firstn -1 type host
763 step emit
764 }
765
You can then set a pool to use the SSD rule with the following command::
767
768 ceph osd pool set <poolname> crush_ruleset 4
769
770 Similarly, using the ``ssd-primary`` rule will cause each placement group in the
771 pool to be placed with an SSD as the primary and platters as the replicas.
772
773 .. _addosd:
774
775 Add/Move an OSD
776 ===============
777
To add or move an OSD in the CRUSH map of a running cluster, execute the
``ceph osd crush set`` command. For Argonaut (v0.48), execute the following::
780
781 ceph osd crush set {id} {name} {weight} pool={pool-name} [{bucket-type}={bucket-name} ...]
782
783 For Bobtail (v 0.56), execute the following::
784
785 ceph osd crush set {id-or-name} {weight} root={pool-name} [{bucket-type}={bucket-name} ...]
786
787 Where:
788
789 ``id``
790
791 :Description: The numeric ID of the OSD.
792 :Type: Integer
793 :Required: Yes
794 :Example: ``0``
795
796
797 ``name``
798
799 :Description: The full name of the OSD.
800 :Type: String
801 :Required: Yes
802 :Example: ``osd.0``
803
804
805 ``weight``
806
807 :Description: The CRUSH weight for the OSD.
808 :Type: Double
809 :Required: Yes
810 :Example: ``2.0``
811
812
813 ``root``
814
815 :Description: The root of the tree in which the OSD resides.
816 :Type: Key/value pair.
817 :Required: Yes
818 :Example: ``root=default``
819
820
821 ``bucket-type``
822
823 :Description: You may specify the OSD's location in the CRUSH hierarchy.
824 :Type: Key/value pairs.
825 :Required: No
826 :Example: ``datacenter=dc1 room=room1 row=foo rack=bar host=foo-bar-1``
827
828
829 The following example adds ``osd.0`` to the hierarchy, or moves the OSD from a
830 previous location. ::
831
832 ceph osd crush set osd.0 1.0 root=default datacenter=dc1 room=room1 row=foo rack=bar host=foo-bar-1
833
834
835 Adjust an OSD's CRUSH Weight
836 ============================
837
838 To adjust an OSD's crush weight in the CRUSH map of a running cluster, execute
839 the following::
840
841 ceph osd crush reweight {name} {weight}
842
843 Where:
844
845 ``name``
846
847 :Description: The full name of the OSD.
848 :Type: String
849 :Required: Yes
850 :Example: ``osd.0``
851
852
853 ``weight``
854
855 :Description: The CRUSH weight for the OSD.
856 :Type: Double
857 :Required: Yes
858 :Example: ``2.0``
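
For example, to set the CRUSH weight of ``osd.0`` to ``2.0`` (e.g., after
moving it to a 2TB drive, following the 1TB = ``1.00`` convention described
above)::

        ceph osd crush reweight osd.0 2.0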
859
860
861 .. _removeosd:
862
863 Remove an OSD
864 =============
865
866 To remove an OSD from the CRUSH map of a running cluster, execute the following::
867
868 ceph osd crush remove {name}
869
870 Where:
871
872 ``name``
873
874 :Description: The full name of the OSD.
875 :Type: String
876 :Required: Yes
877 :Example: ``osd.0``
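
For example, to remove ``osd.0`` from the CRUSH map::

        ceph osd crush remove osd.0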
878
879 Add a Bucket
880 ============
881
882 To add a bucket in the CRUSH map of a running cluster, execute the ``ceph osd crush add-bucket`` command::
883
884 ceph osd crush add-bucket {bucket-name} {bucket-type}
885
886 Where:
887
888 ``bucket-name``
889
890 :Description: The full name of the bucket.
891 :Type: String
892 :Required: Yes
893 :Example: ``rack12``
894
895
896 ``bucket-type``
897
898 :Description: The type of the bucket. The type must already exist in the hierarchy.
899 :Type: String
900 :Required: Yes
901 :Example: ``rack``
902
903
904 The following example adds the ``rack12`` bucket to the hierarchy::
905
906 ceph osd crush add-bucket rack12 rack
907
908 Move a Bucket
909 =============
910
911 To move a bucket to a different location or position in the CRUSH map hierarchy,
912 execute the following::
913
        ceph osd crush move {bucket-name} {bucket-type}={bucket-name} [{bucket-type}={bucket-name} ...]
915
916 Where:
917
918 ``bucket-name``
919
920 :Description: The name of the bucket to move/reposition.
921 :Type: String
922 :Required: Yes
923 :Example: ``foo-bar-1``
924
925 ``bucket-type``
926
927 :Description: You may specify the bucket's location in the CRUSH hierarchy.
928 :Type: Key/value pairs.
929 :Required: No
930 :Example: ``datacenter=dc1 room=room1 row=foo rack=bar host=foo-bar-1``
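
For example, using the sample values above, the following command moves the
``foo-bar-1`` bucket under the ``bar`` rack in row ``foo``, room ``room1``,
datacenter ``dc1``::

        ceph osd crush move foo-bar-1 datacenter=dc1 room=room1 row=foo rack=bar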
931
932 Remove a Bucket
933 ===============
934
935 To remove a bucket from the CRUSH map hierarchy, execute the following::
936
937 ceph osd crush remove {bucket-name}
938
939 .. note:: A bucket must be empty before removing it from the CRUSH hierarchy.
940
941 Where:
942
943 ``bucket-name``
944
945 :Description: The name of the bucket that you'd like to remove.
946 :Type: String
947 :Required: Yes
948 :Example: ``rack12``
949
950 The following example removes the ``rack12`` bucket from the hierarchy::
951
952 ceph osd crush remove rack12
953
954 Tunables
955 ========
956
957 Over time, we have made (and continue to make) improvements to the
958 CRUSH algorithm used to calculate the placement of data. In order to
959 support the change in behavior, we have introduced a series of tunable
960 options that control whether the legacy or improved variation of the
961 algorithm is used.
962
963 In order to use newer tunables, both clients and servers must support
964 the new version of CRUSH. For this reason, we have created
965 ``profiles`` that are named after the Ceph version in which they were
966 introduced. For example, the ``firefly`` tunables are first supported
967 in the firefly release, and will not work with older (e.g., dumpling)
clients. Once a given set of tunables is changed from the legacy
default behavior, the ``ceph-mon`` and ``ceph-osd`` daemons will prevent older
clients that do not support the new CRUSH features from connecting to
the cluster.
972
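If you are unsure which tunables a running cluster is using, recent releases
let you dump the values currently in effect with::

        ceph osd crush show-tunables
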
973 argonaut (legacy)
974 -----------------
975
976 The legacy CRUSH behavior used by argonaut and older releases works
977 fine for most clusters, provided there are not too many OSDs that have
978 been marked out.
979
980 bobtail (CRUSH_TUNABLES2)
981 -------------------------
982
983 The bobtail tunable profile fixes a few key misbehaviors:
984
985 * For hierarchies with a small number of devices in the leaf buckets,
986 some PGs map to fewer than the desired number of replicas. This
987 commonly happens for hierarchies with "host" nodes with a small
988 number (1-3) of OSDs nested beneath each one.
989
* For large clusters, a small percentage of PGs map to fewer than
  the desired number of OSDs. This is more prevalent when there are
  several layers of the hierarchy (e.g., row, rack, host, osd).
993
994 * When some OSDs are marked out, the data tends to get redistributed
995 to nearby OSDs instead of across the entire hierarchy.
996
997 The new tunables are:
998
999 * ``choose_local_tries``: Number of local retries. Legacy value is
1000 2, optimal value is 0.
1001
1002 * ``choose_local_fallback_tries``: Legacy value is 5, optimal value
1003 is 0.
1004
* ``choose_total_tries``: Total number of attempts to choose an item.
  The legacy value is 19; subsequent testing indicates that a value of
  50 is more appropriate for typical clusters. For extremely large
  clusters, a larger value might be necessary.
1009
1010 * ``chooseleaf_descend_once``: Whether a recursive chooseleaf attempt
1011 will retry, or only try once and allow the original placement to
1012 retry. Legacy default is 0, optimal value is 1.
1013
1014 Migration impact:
1015
1016 * Moving from argonaut to bobtail tunables triggers a moderate amount
1017 of data movement. Use caution on a cluster that is already
1018 populated with data.
1019
1020 firefly (CRUSH_TUNABLES3)
1021 -------------------------
1022
1023 The firefly tunable profile fixes a problem
1024 with the ``chooseleaf`` CRUSH rule behavior that tends to result in PG
1025 mappings with too few results when too many OSDs have been marked out.
1026
1027 The new tunable is:
1028
1029 * ``chooseleaf_vary_r``: Whether a recursive chooseleaf attempt will
1030 start with a non-zero value of r, based on how many attempts the
1031 parent has already made. Legacy default is 0, but with this value
1032 CRUSH is sometimes unable to find a mapping. The optimal value (in
1033 terms of computational cost and correctness) is 1.
1034
1035 Migration impact:
1036
1037 * For existing clusters that have lots of existing data, changing
1038 from 0 to 1 will cause a lot of data to move; a value of 4 or 5
1039 will allow CRUSH to find a valid mapping but will make less data
1040 move.
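
For example, to set an intermediate value of 4 you could follow the same
extract/modify/reinject cycle described under "Tuning CRUSH, the hard way"
below::

        ceph osd getcrushmap -o /tmp/crush
        # --enable-unsafe-tunables may be required, as noted below
        crushtool -i /tmp/crush --set-chooseleaf-vary-r 4 --enable-unsafe-tunables -o /tmp/crush.new
        ceph osd setcrushmap -i /tmp/crush.new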
1041
1042 straw_calc_version tunable (introduced with Firefly too)
1043 --------------------------------------------------------
1044
There were some problems with the internal weights calculated and
stored in the CRUSH map for ``straw`` buckets. Specifically, when
there were items with a CRUSH weight of 0, or a mix of different and
duplicated weights, CRUSH would distribute data incorrectly (i.e.,
not in proportion to the weights).
1050
1051 The new tunable is:
1052
1053 * ``straw_calc_version``: A value of 0 preserves the old, broken
1054 internal weight calculation; a value of 1 fixes the behavior.
1055
1056 Migration impact:
1057
1058 * Moving to straw_calc_version 1 and then adjusting a straw bucket
1059 (by adding, removing, or reweighting an item, or by using the
1060 reweight-all command) can trigger a small to moderate amount of
1061 data movement *if* the cluster has hit one of the problematic
1062 conditions.
1063
This tunable option is special because it has absolutely no impact
on the kernel version required on the client side.
1066
1067 hammer (CRUSH_V4)
1068 -----------------
1069
Simply switching to the hammer tunable profile does not affect the
mapping of existing CRUSH maps. However:
1072
1073 * There is a new bucket type (``straw2``) supported. The new
1074 ``straw2`` bucket type fixes several limitations in the original
1075 ``straw`` bucket. Specifically, the old ``straw`` buckets would
1076 change some mappings that should have changed when a weight was
1077 adjusted, while ``straw2`` achieves the original goal of only
1078 changing mappings to or from the bucket item whose weight has
1079 changed.
1080
1081 * ``straw2`` is the default for any newly created buckets.
1082
1083 Migration impact:
1084
1085 * Changing a bucket type from ``straw`` to ``straw2`` will result in
1086 a reasonably small amount of data movement, depending on how much
1087 the bucket item weights vary from each other. When the weights are
1088 all the same no data will move, and when item weights vary
1089 significantly there will be more movement.
1090
1091 jewel (CRUSH_TUNABLES5)
1092 -----------------------
1093
1094 The jewel tunable profile improves the
1095 overall behavior of CRUSH such that significantly fewer mappings
1096 change when an OSD is marked out of the cluster.
1097
1098 The new tunable is:
1099
1100 * ``chooseleaf_stable``: Whether a recursive chooseleaf attempt will
1101 use a better value for an inner loop that greatly reduces the number
1102 of mapping changes when an OSD is marked out. The legacy value is 0,
1103 while the new value of 1 uses the new approach.
1104
1105 Migration impact:
1106
1107 * Changing this value on an existing cluster will result in a very
1108 large amount of data movement as almost every PG mapping is likely
1109 to change.
1110
1111
1112
1113
1114 Which client versions support CRUSH_TUNABLES
1115 --------------------------------------------
1116
1117 * argonaut series, v0.48.1 or later
1118 * v0.49 or later
1119 * Linux kernel version v3.6 or later (for the file system and RBD kernel clients)
1120
1121 Which client versions support CRUSH_TUNABLES2
1122 ---------------------------------------------
1123
1124 * v0.55 or later, including bobtail series (v0.56.x)
1125 * Linux kernel version v3.9 or later (for the file system and RBD kernel clients)
1126
1127 Which client versions support CRUSH_TUNABLES3
1128 ---------------------------------------------
1129
1130 * v0.78 (firefly) or later
1131 * Linux kernel version v3.15 or later (for the file system and RBD kernel clients)
1132
1133 Which client versions support CRUSH_V4
1134 --------------------------------------
1135
1136 * v0.94 (hammer) or later
1137 * Linux kernel version v4.1 or later (for the file system and RBD kernel clients)
1138
1139 Which client versions support CRUSH_TUNABLES5
1140 ---------------------------------------------
1141
1142 * v10.0.2 (jewel) or later
1143 * Linux kernel version v4.5 or later (for the file system and RBD kernel clients)
1144
1145 Warning when tunables are non-optimal
1146 -------------------------------------
1147
1148 Starting with version v0.74, Ceph will issue a health warning if the
1149 current CRUSH tunables don't include all the optimal values from the
1150 ``default`` profile (see below for the meaning of the ``default`` profile).
1151 To make this warning go away, you have two options:
1152
1153 1. Adjust the tunables on the existing cluster. Note that this will
1154 result in some data movement (possibly as much as 10%). This is the
1155 preferred route, but should be taken with care on a production cluster
1156 where the data movement may affect performance. You can enable optimal
1157 tunables with::
1158
1159 ceph osd crush tunables optimal
1160
1161 If things go poorly (e.g., too much load) and not very much
1162 progress has been made, or there is a client compatibility problem
1163 (old kernel cephfs or rbd clients, or pre-bobtail librados
1164 clients), you can switch back with::
1165
1166 ceph osd crush tunables legacy
1167
1168 2. You can make the warning go away without making any changes to CRUSH by
1169 adding the following option to your ceph.conf ``[mon]`` section::
1170
1171 mon warn on legacy crush tunables = false
1172
1173 For the change to take effect, you will need to restart the monitors, or
1174 apply the option to running monitors with::
1175
1176 ceph tell mon.\* injectargs --no-mon-warn-on-legacy-crush-tunables
1177
1178
1179 A few important points
1180 ----------------------
1181
1182 * Adjusting these values will result in the shift of some PGs between
1183 storage nodes. If the Ceph cluster is already storing a lot of
1184 data, be prepared for some fraction of the data to move.
1185 * The ``ceph-osd`` and ``ceph-mon`` daemons will start requiring the
1186 feature bits of new connections as soon as they get
1187 the updated map. However, already-connected clients are
1188 effectively grandfathered in, and will misbehave if they do not
1189 support the new feature.
1190 * If the CRUSH tunables are set to non-legacy values and then later
  changed back to the legacy default values, ``ceph-osd`` daemons will not be
1192 required to support the feature. However, the OSD peering process
1193 requires examining and understanding old maps. Therefore, you
1194 should not run old versions of the ``ceph-osd`` daemon
1195 if the cluster has previously used non-legacy CRUSH values, even if
1196 the latest version of the map has been switched back to using the
1197 legacy defaults.
1198
1199 Tuning CRUSH
1200 ------------
1201
The simplest way to adjust the CRUSH tunables is by changing to a known
profile. Those are:
1204
1205 * ``legacy``: the legacy behavior from argonaut and earlier.
1206 * ``argonaut``: the legacy values supported by the original argonaut release
1207 * ``bobtail``: the values supported by the bobtail release
1208 * ``firefly``: the values supported by the firefly release
* ``optimal``: the best (i.e., optimal) values of the current version of Ceph
1210 * ``default``: the default values of a new cluster installed from
1211 scratch. These values, which depend on the current version of Ceph,
1212 are hard coded and are generally a mix of optimal and legacy values.
  These values generally match the ``optimal`` profile of the previous
  LTS release, or the most recent release for which we expect most
  users to have up-to-date clients.
1216
1217 You can select a profile on a running cluster with the command::
1218
1219 ceph osd crush tunables {PROFILE}
1220
1221 Note that this may result in some data movement.
1222
1223
1224 Tuning CRUSH, the hard way
1225 --------------------------
1226
1227 If you can ensure that all clients are running recent code, you can
1228 adjust the tunables by extracting the CRUSH map, modifying the values,
1229 and reinjecting it into the cluster.
1230
1231 * Extract the latest CRUSH map::
1232
1233 ceph osd getcrushmap -o /tmp/crush
1234
1235 * Adjust tunables. These values appear to offer the best behavior
1236 for both large and small clusters we tested with. You will need to
1237 additionally specify the ``--enable-unsafe-tunables`` argument to
  ``crushtool`` for this to work. Please use this option with
  extreme care::
1240
1241 crushtool -i /tmp/crush --set-choose-local-tries 0 --set-choose-local-fallback-tries 0 --set-choose-total-tries 50 -o /tmp/crush.new
1242
1243 * Reinject modified map::
1244
1245 ceph osd setcrushmap -i /tmp/crush.new
1246
1247 Legacy values
1248 -------------
1249
1250 For reference, the legacy values for the CRUSH tunables can be set
1251 with::
1252
1253 crushtool -i /tmp/crush --set-choose-local-tries 2 --set-choose-local-fallback-tries 5 --set-choose-total-tries 19 --set-chooseleaf-descend-once 0 --set-chooseleaf-vary-r 0 -o /tmp/crush.legacy
1254
1255 Again, the special ``--enable-unsafe-tunables`` option is required.
1256 Further, as noted above, be careful running old versions of the
1257 ``ceph-osd`` daemon after reverting to legacy values as the feature
1258 bit is not perfectly enforced.
1259
1260 .. _CRUSH - Controlled, Scalable, Decentralized Placement of Replicated Data: http://ceph.com/papers/weil-crush-sc06.pdf