The :abbr:`CRUSH (Controlled Replication Under Scalable Hashing)` algorithm
determines how to store and retrieve data by computing data storage locations.
CRUSH empowers Ceph clients to communicate with OSDs directly rather than
through a centralized server or broker. With an algorithmically determined
method of storing and retrieving data, Ceph avoids a single point of failure, a
performance bottleneck, and a physical limit to its scalability.

CRUSH requires a map of your cluster, and uses the CRUSH map to pseudo-randomly
store and retrieve data in OSDs with a uniform distribution of data across the
cluster. For a detailed discussion of CRUSH, see
`CRUSH - Controlled, Scalable, Decentralized Placement of Replicated Data`_.

CRUSH maps contain a list of :abbr:`OSDs (Object Storage Devices)`, a list of
'buckets' for aggregating the devices into physical locations, and a list of
rules that tell CRUSH how it should replicate data in a Ceph cluster's pools. By
reflecting the underlying physical organization of the installation, CRUSH can
model--and thereby address--potential sources of correlated device failures.
Typical sources include physical proximity, a shared power source, and a shared
network. By encoding this information into the cluster map, CRUSH placement
policies can separate object replicas across different failure domains while
still maintaining the desired distribution. For example, to address the
possibility of concurrent failures, it may be desirable to ensure that data
replicas are on devices using different shelves, racks, power supplies,
controllers, and/or physical locations.

When you create a configuration file and deploy Ceph with ``ceph-deploy``, Ceph
generates a default CRUSH map for your configuration. The default CRUSH map is
fine for your Ceph sandbox environment. However, when you deploy a large-scale
data cluster, you should give significant consideration to developing a custom
CRUSH map, because it will help you manage your Ceph cluster, improve
performance, and ensure data safety.

For example, if an OSD goes down, a CRUSH map can help you to locate
the physical data center, room, row and rack of the host with the failed OSD in
the event you need to use onsite support or replace hardware.

Similarly, CRUSH may help you identify faults more quickly. For example, if all
OSDs in a particular rack go down simultaneously, the fault may lie with a
network switch or power to the rack rather than the OSDs themselves.

A custom CRUSH map can also help you identify the physical locations where
Ceph stores redundant copies of data when the placement group(s) associated
with a failed host are in a degraded state.

.. note:: Lines of code in example boxes may extend past the edge of the box.
   Please scroll when reading or copying longer examples.

Crush Locations
===============

The location of an OSD in terms of the CRUSH map's hierarchy is referred to
as a 'crush location'. This location specifier takes the form of a list of
key and value pairs describing a position. For example, if an OSD is in a
particular row, rack, chassis and host, and is part of the 'default' CRUSH
tree, its crush location could be described as::

   root=default row=a rack=a2 chassis=a2a host=a2a1

Note:

#. The order of the keys does not matter.
#. The key name (left of ``=``) must be a valid CRUSH ``type``. By default
   these include root, datacenter, room, row, pod, pdu, rack, chassis and host,
   but those types can be customized to be anything appropriate by modifying
   the CRUSH map.
#. Not all keys need to be specified. For example, by default, Ceph
   automatically sets a ``ceph-osd`` daemon's location to be
   ``root=default host=HOSTNAME`` (based on the output from ``hostname -s``).

ceph-crush-location hook
------------------------

By default, the ``ceph-crush-location`` utility will generate a CRUSH
location string for a given daemon. The location is based on, in order of
preference:

#. A ``TYPE crush location`` option in ceph.conf. For example, this
   is ``osd crush location`` for OSD daemons.
#. A ``crush location`` option in ceph.conf.
#. A default of ``root=default host=HOSTNAME`` where the hostname is
   generated with the ``hostname -s`` command.

In a typical deployment scenario, provisioning software (or the system
administrator) can simply set the 'crush location' field in a host's
ceph.conf to describe that machine's location within the datacenter or
cluster. This will provide location awareness to both Ceph daemons
and clients alike.
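
For example, a host's ceph.conf might pin its OSDs' location like this (a
sketch; the row, rack and chassis names are illustrative)::

   [osd]
   crush location = root=default row=a rack=a2 chassis=a2a host=a2a1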

It is possible to manage the CRUSH map entirely manually by toggling
the hook off in the configuration::

   osd crush update on start = false

Custom location hooks
---------------------

A customized location hook can be used in place of the generic hook for OSD
daemon placement in the hierarchy. (On startup, each OSD ensures its position is
correct.)::

   osd crush location hook = /path/to/script

This hook is passed several arguments (below) and should output a single line
to stdout with the CRUSH location description::

   $ ceph-crush-location --cluster CLUSTER --id ID --type TYPE

where the cluster name is typically 'ceph', the id is the daemon
identifier (the OSD number), and the daemon type is typically ``osd``.
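
A minimal sketch of such a hook (the fixed rack name is an assumption; a real
hook might instead query an inventory database)::

   #!/bin/sh
   # Invoked as: script --cluster CLUSTER --id ID --type TYPE
   # This sketch ignores its arguments and emits a one-line CRUSH
   # location on stdout, hard-coding the rack for this host.
   echo "root=default rack=rack1 host=$(hostname -s)"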

Editing a CRUSH Map
===================

To edit an existing CRUSH map:

#. `Get the CRUSH map`_.
#. `Decompile`_ the CRUSH map.
#. Edit at least one of `Devices`_, `Buckets`_ and `Rules`_.
#. `Recompile`_ the CRUSH map.
#. `Set the CRUSH map`_.

To activate CRUSH map rules for a specific pool, identify the common ruleset
number for those rules and specify that ruleset number for the pool. See `Set
Pool Values`_ for details.
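
Taken together, a typical edit cycle looks like this (the filenames are
illustrative)::

   ceph osd getcrushmap -o crushmap.bin
   crushtool -d crushmap.bin -o crushmap.txt
   # edit crushmap.txt with your editor of choice
   crushtool -c crushmap.txt -o crushmap-new.bin
   ceph osd setcrushmap -i crushmap-new.bin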

.. _Get the CRUSH map: #getcrushmap
.. _Decompile: #decompilecrushmap
.. _Devices: #crushmapdevices
.. _Buckets: #crushmapbuckets
.. _Rules: #crushmaprules
.. _Recompile: #compilecrushmap
.. _Set the CRUSH map: #setcrushmap
.. _Set Pool Values: ../pools#setpoolvalues

.. _getcrushmap:

Get a CRUSH Map
---------------

To get the CRUSH map for your cluster, execute the following::

   ceph osd getcrushmap -o {compiled-crushmap-filename}

Ceph will output (-o) a compiled CRUSH map to the filename you specified. Since
the CRUSH map is in a compiled form, you must decompile it first before you can
edit it.

.. _decompilecrushmap:

Decompile a CRUSH Map
---------------------

To decompile a CRUSH map, execute the following::

   crushtool -d {compiled-crushmap-filename} -o {decompiled-crushmap-filename}

Ceph will decompile (-d) the compiled CRUSH map and output (-o) it to the
filename you specified.

.. _compilecrushmap:

Compile a CRUSH Map
-------------------

To compile a CRUSH map, execute the following::

   crushtool -c {decompiled-crush-map-filename} -o {compiled-crush-map-filename}

Ceph will store a compiled CRUSH map to the filename you specified.
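
Before setting a recompiled map, you can sanity-check it with ``crushtool
--test``, which computes mappings without touching the cluster (the rule
number and replica count here are illustrative)::

   crushtool -i {compiled-crush-map-filename} --test --rule 0 --num-rep 3 --show-statistics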

.. _setcrushmap:

Set a CRUSH Map
---------------

To set the CRUSH map for your cluster, execute the following::

   ceph osd setcrushmap -i {compiled-crushmap-filename}

Ceph will input the compiled CRUSH map of the filename you specified as the
CRUSH map for the cluster.

CRUSH Map Parameters
====================

There are four main sections to a CRUSH Map.

#. **Devices:** Devices consist of any object storage device--i.e., the storage
   drive corresponding to a ``ceph-osd`` daemon. You should have a device for
   each OSD daemon in your Ceph configuration file.

#. **Bucket Types:** Bucket ``types`` define the types of buckets used in your
   CRUSH hierarchy. Buckets consist of a hierarchical aggregation of storage
   locations (e.g., rows, racks, chassis, hosts, etc.) and their assigned
   weights.

#. **Bucket Instances:** Once you define bucket types, you must declare bucket
   instances for your hosts, and any other failure domain partitioning
   you wish to use.

#. **Rules:** Rules consist of the manner of selecting buckets.

If you launched Ceph using one of our Quick Start guides, you'll notice
that you didn't need to create a CRUSH map. Ceph's deployment tools generate
a default CRUSH map that lists devices from the OSDs you defined in your
Ceph configuration file, and it declares a bucket for each host you specified
in the ``[osd]`` sections of your Ceph configuration file. You should create
your own CRUSH maps with buckets that reflect your cluster's failure domains
to better ensure data safety and availability.

.. note:: The generated CRUSH map doesn't take your larger grained failure
   domains into account. So you should modify your CRUSH map to account for
   larger grained failure domains such as chassis, racks, rows, data
   centers, etc.

.. _crushmapdevices:

CRUSH Map Devices
-----------------

To map placement groups to OSDs, a CRUSH map requires a list of OSD devices
(i.e., the names of the OSD daemons from the Ceph configuration file). The list
of devices appears first in the CRUSH map. To declare a device in the CRUSH map,
create a new line under your list of devices, enter ``device`` followed by a
unique numeric ID, followed by the corresponding ``ceph-osd`` daemon instance.
A device class can optionally be added to group devices so they can be
conveniently targeted by a CRUSH rule. ::

   # devices
   device {num} {osd.name} [class {class}]

For example::

   # devices
   device 0 osd.0 class ssd
   device 1 osd.1 class hdd

As a general rule, an OSD daemon maps to a single storage drive or to a RAID.

.. _crushmapbuckets:

CRUSH Map Bucket Types
----------------------

The second list in the CRUSH map defines 'bucket' types. Buckets facilitate
a hierarchy of nodes and leaves. Node (or non-leaf) buckets typically represent
physical locations in a hierarchy. Nodes aggregate other nodes or leaves.
Leaf buckets represent ``ceph-osd`` daemons and their corresponding storage
media.

.. tip:: The term "bucket" used in the context of CRUSH means a node in
   the hierarchy, i.e. a location or a piece of physical hardware. It
   is a different concept from the term "bucket" when used in the
   context of RADOS Gateway APIs.

To add a bucket type to the CRUSH map, create a new line under your list of
bucket types. Enter ``type`` followed by a unique numeric ID and a bucket name.
By convention, there is one leaf bucket and it is ``type 0``; however, you may
give it any name you like (e.g., osd, disk, drive, storage, etc.)::

   # types
   type {num} {bucket-name}

For example, the default bucket types are::

   # types
   type 0 osd
   type 1 host
   type 2 chassis
   type 3 rack
   type 4 row
   type 5 pdu
   type 6 pod
   type 7 room
   type 8 datacenter
   type 9 root

CRUSH Map Bucket Hierarchy
--------------------------

The CRUSH algorithm distributes data objects among storage devices according
to a per-device weight value, approximating a uniform probability distribution.
CRUSH distributes objects and their replicas according to the hierarchical
cluster map you define. Your CRUSH map represents the available storage
devices and the logical elements that contain them.

To map placement groups to OSDs across failure domains, a CRUSH map defines a
hierarchical list of bucket types (i.e., under ``#types`` in the generated CRUSH
map). The purpose of creating a bucket hierarchy is to segregate the
leaf nodes by their failure domains, such as hosts, chassis, racks, power
distribution units, pods, rows, rooms, and data centers. With the exception of
the leaf nodes representing OSDs, the rest of the hierarchy is arbitrary, and
you may define it according to your own needs.

We recommend adapting your CRUSH map to your firm's hardware naming conventions
and using instance names that reflect the physical hardware. Your naming
practice can make it easier to administer the cluster and troubleshoot
problems when an OSD and/or other hardware malfunctions and the administrator
needs access to physical hardware.

In the following example, the bucket hierarchy has a leaf bucket named ``osd``,
and two node buckets named ``host`` and ``rack`` respectively. ::

                           +-----------+
                           | {o}rack   |
                           |  Bucket   |
                           +-----+-----+
                                 |
                 +---------------+---------------+
                 |                               |
           +-----+-----+                   +-----+-----+
           | {o}host   |                   | {o}host   |
           |  Bucket   |                   |  Bucket   |
           +-----+-----+                   +-----+-----+
                 |                               |
         +-------+-------+               +-------+-------+
         |               |               |               |
   +-----+-----+   +-----+-----+   +-----+-----+   +-----+-----+
   |    osd    |   |    osd    |   |    osd    |   |    osd    |
   |  Bucket   |   |  Bucket   |   |  Bucket   |   |  Bucket   |
   +-----------+   +-----------+   +-----------+   +-----------+

.. note:: The higher numbered ``rack`` bucket type aggregates the lower
   numbered ``host`` bucket type.

Since leaf nodes reflect storage devices declared under the ``#devices`` list
at the beginning of the CRUSH map, you do not need to declare them as bucket
instances. The second lowest bucket type in your hierarchy usually aggregates
the devices (i.e., it's usually the computer containing the storage media, and
uses whatever term you prefer to describe it, such as "node", "computer",
"server", "host", "machine", etc.). In high density environments, it is
increasingly common to see multiple hosts/nodes per chassis. You should account
for chassis failure too--e.g., the need to pull a chassis if a node fails may
result in bringing down numerous hosts/nodes and their OSDs.

When declaring a bucket instance, you must specify its type, give it a unique
name (string), assign it a unique ID expressed as a negative integer (optional),
specify a weight relative to the total capacity/capability of its item(s),
specify the bucket algorithm (usually ``straw``), and the hash (usually ``0``,
reflecting hash algorithm ``rjenkins1``). A bucket may have one or more items.
The items may consist of node buckets or leaves. Items may have a weight that
reflects the relative weight of the item.

You may declare a node bucket with the following syntax::

   [bucket-type] [bucket-name] {
           id [a unique negative numeric ID]
           weight [the relative capacity/capability of the item(s)]
           alg [the bucket type: uniform | list | tree | straw ]
           hash [the hash type: 0 by default]
           item [item-name] weight [weight]
   }

For example, using the diagram above, we would define two host buckets
and one rack bucket. The OSDs are declared as items within the host buckets::

   host node1 {
           id -1
           alg straw
           hash 0
           item osd.0 weight 1.00
           item osd.1 weight 1.00
   }

   host node2 {
           id -2
           alg straw
           hash 0
           item osd.2 weight 1.00
           item osd.3 weight 1.00
   }

   rack rack1 {
           id -3
           alg straw
           hash 0
           item node1 weight 2.00
           item node2 weight 2.00
   }

.. note:: In the foregoing example, note that the rack bucket does not contain
   any OSDs. Rather, it contains lower level host buckets, and includes the
   sum total of their weight in the item entry.

.. topic:: Bucket Types

   Ceph supports four bucket types, each representing a tradeoff between
   performance and reorganization efficiency. If you are unsure of which bucket
   type to use, we recommend using a ``straw`` bucket. For a detailed
   discussion of bucket types, refer to
   `CRUSH - Controlled, Scalable, Decentralized Placement of Replicated Data`_,
   and more specifically to **Section 3.4**. The bucket types are:

   #. **Uniform:** Uniform buckets aggregate devices with **exactly** the same
      weight. For example, when firms commission or decommission hardware, they
      typically do so with many machines that have exactly the same physical
      configuration (e.g., bulk purchases). When storage devices have exactly
      the same weight, you may use the ``uniform`` bucket type, which allows
      CRUSH to map replicas into uniform buckets in constant time. With
      non-uniform weights, you should use another bucket algorithm.

   #. **List:** List buckets aggregate their content as linked lists. Based on
      the :abbr:`RUSH (Replication Under Scalable Hashing)` :sub:`P` algorithm,
      a list is a natural and intuitive choice for an **expanding cluster**:
      either an object is relocated to the newest device with some appropriate
      probability, or it remains on the older devices as before. The result is
      optimal data migration when items are added to the bucket. Items removed
      from the middle or tail of the list, however, can result in a significant
      amount of unnecessary movement, making list buckets most suitable for
      circumstances in which they **never (or very rarely) shrink**.

   #. **Tree:** Tree buckets use a binary search tree. They are more efficient
      than list buckets when a bucket contains a larger set of items. Based on
      the :abbr:`RUSH (Replication Under Scalable Hashing)` :sub:`R` algorithm,
      tree buckets reduce the placement time to O(log :sub:`n`), making them
      suitable for managing much larger sets of devices or nested buckets.

   #. **Straw:** List and Tree buckets use a divide and conquer strategy
      in a way that either gives certain items precedence (e.g., those
      at the beginning of a list) or obviates the need to consider entire
      subtrees of items at all. That improves the performance of the replica
      placement process, but can also introduce suboptimal reorganization
      behavior when the contents of a bucket change due to an addition, removal,
      or re-weighting of an item. The straw bucket type allows all items to
      fairly "compete" against each other for replica placement through a
      process analogous to a draw of straws.

.. topic:: Hash

   Each bucket uses a hash algorithm. Currently, Ceph supports ``rjenkins1``.
   Enter ``0`` as your hash setting to select ``rjenkins1``.

.. _weightingbucketitems:

.. topic:: Weighting Bucket Items

   Ceph expresses bucket weights as doubles, which allows for fine-grained
   weighting. A weight is the relative difference between device capacities. We
   recommend using ``1.00`` as the relative weight for a 1TB storage device.
   In such a scenario, a weight of ``0.5`` would represent approximately 500GB,
   and a weight of ``3.00`` would represent approximately 3TB. Higher level
   buckets have a weight that is the sum total of the leaf items aggregated by
   the bucket.

   A bucket item weight is one dimensional, but you may also calculate your
   item weights to reflect the performance of the storage drive. For example,
   if you have many 1TB drives where some have a relatively low data transfer
   rate and the others have a relatively high data transfer rate, you may
   weight them differently, even though they have the same capacity (e.g.,
   a weight of 0.80 for the first set of drives with lower total throughput,
   and 1.20 for the second set of drives with higher total throughput).
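
For example, a host bucket mixing drive sizes might weight its items as
follows (a sketch; the IDs and capacities are illustrative)::

   host node3 {
           id -4
           alg straw
           hash 0
           # 1TB drive
           item osd.4 weight 1.00
           # 3TB drive
           item osd.5 weight 3.00
   }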

.. _crushmaprules:

CRUSH Map Rules
---------------

CRUSH maps support the notion of 'CRUSH rules', which are the rules that
determine data placement for a pool. For large clusters, you will likely create
many pools where each pool may have its own CRUSH ruleset and rules. The default
CRUSH map has a rule for each pool, and one ruleset assigned to each of the
default pools.

.. note:: In most cases, you will not need to modify the default rules. When
   you create a new pool, its default ruleset is ``0``.

CRUSH rules define placement and replication strategies or distribution policies
that allow you to specify exactly how CRUSH places object replicas. For
example, you might create a rule selecting a pair of targets for 2-way
mirroring, another rule for selecting three targets in two different data
centers for 3-way mirroring, and yet another rule for erasure coding over six
storage devices. For a detailed discussion of CRUSH rules, refer to
`CRUSH - Controlled, Scalable, Decentralized Placement of Replicated Data`_,
and more specifically to **Section 3.2**.

A rule takes the following form::

   rule <rulename> {

           ruleset <ruleset>
           type [ replicated | erasure ]
           min_size <min-size>
           max_size <max-size>
           step take <bucket-name> [class <device-class>]
           step [choose|chooseleaf] [firstn|indep] <N> <bucket-type>
           step emit
   }

``ruleset``

:Description: A means of classifying a rule as belonging to a set of rules.
              Activated by `setting the ruleset in a pool`_.
:Purpose: A component of the rule mask.
:Type: Integer
:Required: Yes
:Default: 0

.. _setting the ruleset in a pool: ../pools#setpoolvalues

``type``

:Description: Describes a rule for either a storage drive (replicated)
              or a RAID.
:Purpose: A component of the rule mask.
:Type: String
:Required: Yes
:Default: ``replicated``
:Valid Values: Currently only ``replicated`` and ``erasure``

``min_size``

:Description: If a pool makes fewer replicas than this number, CRUSH will
              **NOT** select this rule.
:Type: Integer
:Purpose: A component of the rule mask.
:Required: Yes
:Default: ``1``

``max_size``

:Description: If a pool makes more replicas than this number, CRUSH will
              **NOT** select this rule.
:Type: Integer
:Purpose: A component of the rule mask.
:Required: Yes
:Default: ``10``

``step take <bucket-name> [class <device-class>]``

:Description: Takes a bucket name, and begins iterating down the tree.
              If ``device-class`` is specified, it must match
              a class previously used when defining a device. All
              devices that do not belong to the class are excluded.
:Purpose: A component of the rule.
:Required: Yes
:Example: ``step take data``

``step choose firstn {num} type {bucket-type}``

:Description: Selects the number of buckets of the given type. The number is
              usually the number of replicas in the pool (i.e., pool size).

              - If ``{num} == 0``, choose ``pool-num-replicas`` buckets (all available).
              - If ``{num} > 0 && < pool-num-replicas``, choose that many buckets.
              - If ``{num} < 0``, it means ``pool-num-replicas - {num}``.

:Purpose: A component of the rule.
:Prerequisite: Follows ``step take`` or ``step choose``.
:Example: ``step choose firstn 1 type row``

``step chooseleaf firstn {num} type {bucket-type}``

:Description: Selects a set of buckets of ``{bucket-type}`` and chooses a leaf
              node from the subtree of each bucket in the set of buckets. The
              number of buckets in the set is usually the number of replicas in
              the pool (i.e., pool size).

              - If ``{num} == 0``, choose ``pool-num-replicas`` buckets (all available).
              - If ``{num} > 0 && < pool-num-replicas``, choose that many buckets.
              - If ``{num} < 0``, it means ``pool-num-replicas - {num}``.

:Purpose: A component of the rule. Usage removes the need to select a device using two steps.
:Prerequisite: Follows ``step take`` or ``step choose``.
:Example: ``step chooseleaf firstn 0 type row``

``step emit``

:Description: Outputs the current value and empties the stack. Typically used
              at the end of a rule, but may also be used to pick from different
              trees in the same rule.
:Purpose: A component of the rule.
:Prerequisite: Follows ``step choose``.
:Example: ``step emit``
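
Putting these steps together, a sketch of a simple replicated rule that
spreads replicas across racks (the rule name, ruleset number, and bucket
names are illustrative)::

   rule replicated_racks {
           ruleset 1
           type replicated
           min_size 1
           max_size 10
           step take default
           step chooseleaf firstn 0 type rack
           step emit
   }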

.. important:: To activate one or more rules with a common ruleset number for a
   pool, set the ruleset number of the pool.

Primary Affinity
================

When a Ceph Client reads or writes data, it always contacts the primary OSD in
the acting set. For set ``[2, 3, 4]``, ``osd.2`` is the primary. Sometimes an
OSD isn't well suited to act as a primary compared to other OSDs (e.g., it has
a slow disk or a slow controller). To prevent performance bottlenecks
(especially on read operations) while maximizing utilization of your hardware,
you can set a Ceph OSD's primary affinity so that CRUSH is less likely to use
the OSD as a primary in an acting set. ::

   ceph osd primary-affinity <osd-id> <weight>

Primary affinity is ``1`` by default (*i.e.,* an OSD may act as a primary). You
may set the primary affinity in the range ``0-1``, where ``0`` means that the
OSD may **NOT** be used as a primary and ``1`` means that an OSD may be used as
a primary. When the weight is ``< 1``, it is less likely that CRUSH will select
the Ceph OSD Daemon to act as a primary.
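
For example, to make CRUSH less likely to pick ``osd.2`` as a primary::

   ceph osd primary-affinity osd.2 0.5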

Placing Different Pools on Different OSDs
=========================================

Suppose you want to have most pools default to OSDs backed by large hard drives,
but have some pools mapped to OSDs backed by fast solid-state drives (SSDs).
It's possible to have multiple independent CRUSH hierarchies within the same
CRUSH map. Define two hierarchies with two different root nodes--one for hard
disks (e.g., "root platter") and one for SSDs (e.g., "root ssd") as shown
below::

   device 0 osd.0
   device 1 osd.1
   device 2 osd.2
   device 3 osd.3
   device 4 osd.4
   device 5 osd.5
   device 6 osd.6
   device 7 osd.7

   host ceph-osd-ssd-server-1 {
           id -1
           alg straw
           hash 0
           item osd.0 weight 1.00
           item osd.1 weight 1.00
   }

   host ceph-osd-ssd-server-2 {
           id -2
           alg straw
           hash 0
           item osd.2 weight 1.00
           item osd.3 weight 1.00
   }

   host ceph-osd-platter-server-1 {
           id -3
           alg straw
           hash 0
           item osd.4 weight 1.00
           item osd.5 weight 1.00
   }

   host ceph-osd-platter-server-2 {
           id -4
           alg straw
           hash 0
           item osd.6 weight 1.00
           item osd.7 weight 1.00
   }

   root platter {
           id -5
           alg straw
           hash 0
           item ceph-osd-platter-server-1 weight 2.00
           item ceph-osd-platter-server-2 weight 2.00
   }

   root ssd {
           id -6
           alg straw
           hash 0
           item ceph-osd-ssd-server-1 weight 2.00
           item ceph-osd-ssd-server-2 weight 2.00
   }

   rule data {
           ruleset 0
           type replicated
           min_size 2
           max_size 2
           step take platter
           step chooseleaf firstn 0 type host
           step emit
   }

   rule metadata {
           ruleset 1
           type replicated
           min_size 0
           max_size 10
           step take platter
           step chooseleaf firstn 0 type host
           step emit
   }

   rule rbd {
           ruleset 2
           type replicated
           min_size 0
           max_size 10
           step take platter
           step chooseleaf firstn 0 type host
           step emit
   }

   rule platter {
           ruleset 3
           type replicated
           min_size 0
           max_size 10
           step take platter
           step chooseleaf firstn 0 type host
           step emit
   }

   rule ssd {
           ruleset 4
           type replicated
           min_size 0
           max_size 4
           step take ssd
           step chooseleaf firstn 0 type host
           step emit
   }

   rule ssd-primary {
           ruleset 5
           type replicated
           min_size 5
           max_size 10
           step take ssd
           step chooseleaf firstn 1 type host
           step emit
           step take platter
           step chooseleaf firstn -1 type host
           step emit
   }

You can then set a pool to use the SSD rule with::

   ceph osd pool set <poolname> crush_ruleset 4

Similarly, using the ``ssd-primary`` rule will cause each placement group in the
pool to be placed with an SSD as the primary and platters as the replicas.
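
For example, assuming the ``ssd-primary`` rule kept ruleset ``5`` as in the
map above::

   ceph osd pool set <poolname> crush_ruleset 5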

Add/Move an OSD
===============

To add or move an OSD in the CRUSH map of a running cluster, execute the
``ceph osd crush set`` command. For Argonaut (v 0.48), execute the following::

   ceph osd crush set {id} {name} {weight} pool={pool-name} [{bucket-type}={bucket-name} ...]

For Bobtail (v 0.56), execute the following::

   ceph osd crush set {id-or-name} {weight} root={pool-name} [{bucket-type}={bucket-name} ...]

Where:

``id``

:Description: The numeric ID of the OSD.
:Type: Integer
:Required: Yes
:Example: ``0``

``name``

:Description: The full name of the OSD.
:Type: String
:Required: Yes
:Example: ``osd.0``

``weight``

:Description: The CRUSH weight for the OSD.
:Type: Double
:Required: Yes
:Example: ``2.0``

``root``

:Description: The root of the tree in which the OSD resides.
:Type: Key/value pair.
:Required: Yes
:Example: ``root=default``

``bucket-type``

:Description: You may specify the OSD's location in the CRUSH hierarchy.
:Type: Key/value pairs.
:Required: No
:Example: ``datacenter=dc1 room=room1 row=foo rack=bar host=foo-bar-1``

The following example adds ``osd.0`` to the hierarchy, or moves the OSD from a
previous location. ::

   ceph osd crush set osd.0 1.0 root=default datacenter=dc1 room=room1 row=foo rack=bar host=foo-bar-1

Adjust an OSD's CRUSH Weight
============================

To adjust an OSD's crush weight in the CRUSH map of a running cluster, execute
the following::

   ceph osd crush reweight {name} {weight}

Where:

``name``

:Description: The full name of the OSD.
:Type: String
:Required: Yes
:Example: ``osd.0``

``weight``

:Description: The CRUSH weight for the OSD.
:Type: Double
:Required: Yes
:Example: ``2.0``
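
For example, to set the CRUSH weight of ``osd.0`` to ``2.0``::

   ceph osd crush reweight osd.0 2.0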

Remove an OSD
=============

To remove an OSD from the CRUSH map of a running cluster, execute the following::

   ceph osd crush remove {name}

Where:

``name``

:Description: The full name of the OSD.
:Type: String
:Required: Yes
:Example: ``osd.0``
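
For example, to remove ``osd.0`` from the CRUSH map::

   ceph osd crush remove osd.0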

Add a Bucket
============

To add a bucket in the CRUSH map of a running cluster, execute the ``ceph osd crush add-bucket`` command::

   ceph osd crush add-bucket {bucket-name} {bucket-type}

Where:

``bucket-name``

:Description: The full name of the bucket.
:Type: String
:Required: Yes
:Example: ``rack12``

``bucket-type``

:Description: The type of the bucket. The type must already exist in the hierarchy.
:Type: String
:Required: Yes
:Example: ``rack``

The following example adds the ``rack12`` bucket to the hierarchy::

   ceph osd crush add-bucket rack12 rack

Move a Bucket
=============

To move a bucket to a different location or position in the CRUSH map hierarchy,
execute the following::

   ceph osd crush move {bucket-name} {bucket-type}={bucket-name}, [...]

Where:

``bucket-name``

:Description: The name of the bucket to move/reposition.
:Type: String
:Required: Yes
:Example: ``foo-bar-1``

``bucket-type``

:Description: You may specify the bucket's location in the CRUSH hierarchy.
:Type: Key/value pairs.
:Required: No
:Example: ``datacenter=dc1 room=room1 row=foo rack=bar host=foo-bar-1``
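
For example, to place the ``rack12`` bucket added above under a particular row
(the row name is illustrative)::

   ceph osd crush move rack12 root=default row=foo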

Remove a Bucket
===============

To remove a bucket from the CRUSH map hierarchy, execute the following::

   ceph osd crush remove {bucket-name}

.. note:: A bucket must be empty before removing it from the CRUSH hierarchy.

Where:

``bucket-name``

:Description: The name of the bucket that you'd like to remove.
:Type: String
:Required: Yes
:Example: ``rack12``

The following example removes the ``rack12`` bucket from the hierarchy::

   ceph osd crush remove rack12

Tunables
========

Over time, we have made (and continue to make) improvements to the
CRUSH algorithm used to calculate the placement of data. In order to
support the change in behavior, we have introduced a series of tunable
options that control whether the legacy or improved variation of the
algorithm is used.

In order to use newer tunables, both clients and servers must support
the new version of CRUSH. For this reason, we have created
``profiles`` that are named after the Ceph version in which they were
introduced. For example, the ``firefly`` tunables are first supported
in the firefly release, and will not work with older (e.g., dumpling)
clients. Once a given set of tunables are changed from the legacy
default behavior, the ``ceph-mon`` and ``ceph-osd`` will prevent older
clients who do not support the new CRUSH features from connecting to
the cluster.

argonaut (legacy)
-----------------

The legacy CRUSH behavior used by argonaut and older releases works
fine for most clusters, provided there are not too many OSDs that have
been marked out.

bobtail (CRUSH_TUNABLES2)
-------------------------

The bobtail tunable profile fixes a few key misbehaviors:

* For hierarchies with a small number of devices in the leaf buckets,
  some PGs map to fewer than the desired number of replicas. This
  commonly happens for hierarchies with "host" nodes with a small
  number (1-3) of OSDs nested beneath each one.

* For large clusters, some small percentages of PGs map to fewer than
  the desired number of OSDs. This is more prevalent when there are
  several layers of the hierarchy (e.g., row, rack, host, osd).

* When some OSDs are marked out, the data tends to get redistributed
  to nearby OSDs instead of across the entire hierarchy.

The new tunables are:

* ``choose_local_tries``: Number of local retries. Legacy value is
  2, optimal value is 0.

* ``choose_local_fallback_tries``: Legacy value is 5, optimal value
  is 0.

* ``choose_total_tries``: Total number of attempts to choose an item.
  Legacy value was 19, subsequent testing indicates that a value of
  50 is more appropriate for typical clusters. For extremely large
  clusters, a larger value might be necessary.

* ``chooseleaf_descend_once``: Whether a recursive chooseleaf attempt
  will retry, or only try once and allow the original placement to
  retry. Legacy default is 0, optimal value is 1.

Migration impact:

* Moving from argonaut to bobtail tunables triggers a moderate amount
  of data movement. Use caution on a cluster that is already
  populated with data.

firefly (CRUSH_TUNABLES3)
-------------------------

The firefly tunable profile fixes a problem
with the ``chooseleaf`` CRUSH rule behavior that tends to result in PG
mappings with too few results when too many OSDs have been marked out.

The new tunable is:

* ``chooseleaf_vary_r``: Whether a recursive chooseleaf attempt will
  start with a non-zero value of r, based on how many attempts the
  parent has already made. Legacy default is 0, but with this value
  CRUSH is sometimes unable to find a mapping. The optimal value (in
  terms of computational cost and correctness) is 1.

Migration impact:

* For existing clusters that have lots of existing data, changing
  from 0 to 1 will cause a lot of data to move; a value of 4 or 5
  will allow CRUSH to find a valid mapping but will make less data
  move.

straw_calc_version tunable (introduced with Firefly too)
--------------------------------------------------------

There were some problems with the internal weights calculated and
stored in the CRUSH map for ``straw`` buckets. Specifically, when
there were items with a CRUSH weight of 0, or a mix of different and
duplicated weights, CRUSH would distribute data incorrectly (i.e.,
not in proportion to the weights).

The new tunable is:

* ``straw_calc_version``: A value of 0 preserves the old, broken
  internal weight calculation; a value of 1 fixes the behavior.

Migration impact:

* Moving to straw_calc_version 1 and then adjusting a straw bucket
  (by adding, removing, or reweighting an item, or by using the
  reweight-all command) can trigger a small to moderate amount of
  data movement *if* the cluster has hit one of the problematic
  conditions.

This tunable option is special because it has no impact on the kernel
version required on the client side.

hammer (CRUSH_V4)
-----------------

The hammer tunable profile does not affect the
mapping of existing CRUSH maps simply by changing the profile. However:

* There is a new bucket type (``straw2``) supported. The new
  ``straw2`` bucket type fixes several limitations in the original
  ``straw`` bucket. Specifically, the old ``straw`` buckets would
  change some mappings that should not have changed when a weight was
  adjusted, while ``straw2`` achieves the original goal of only
  changing mappings to or from the bucket item whose weight has
  changed.

* ``straw2`` is the default for any newly created buckets.

Migration impact:

* Changing a bucket type from ``straw`` to ``straw2`` will result in
  a reasonably small amount of data movement, depending on how much
  the bucket item weights vary from each other. When the weights are
  all the same no data will move, and when item weights vary
  significantly there will be more movement.

jewel (CRUSH_TUNABLES5)
-----------------------

The jewel tunable profile improves the
overall behavior of CRUSH such that significantly fewer mappings
change when an OSD is marked out of the cluster.

The new tunable is:

* ``chooseleaf_stable``: Whether a recursive chooseleaf attempt will
  use a better value for an inner loop that greatly reduces the number
  of mapping changes when an OSD is marked out. The legacy value is 0,
  while the new value of 1 uses the new approach.

Migration impact:

* Changing this value on an existing cluster will result in a very
  large amount of data movement as almost every PG mapping is likely
  to change.

Which client versions support CRUSH_TUNABLES
--------------------------------------------

* argonaut series, v0.48.1 or later
* v0.49 or later
* Linux kernel version v3.6 or later (for the file system and RBD kernel clients)

Which client versions support CRUSH_TUNABLES2
---------------------------------------------

* v0.55 or later, including bobtail series (v0.56.x)
* Linux kernel version v3.9 or later (for the file system and RBD kernel clients)

Which client versions support CRUSH_TUNABLES3
---------------------------------------------

* v0.78 (firefly) or later
* Linux kernel version v3.15 or later (for the file system and RBD kernel clients)

Which client versions support CRUSH_V4
--------------------------------------

* v0.94 (hammer) or later
* Linux kernel version v4.1 or later (for the file system and RBD kernel clients)

Which client versions support CRUSH_TUNABLES5
---------------------------------------------

* v10.0.2 (jewel) or later
* Linux kernel version v4.5 or later (for the file system and RBD kernel clients)

Warning when tunables are non-optimal
-------------------------------------

Starting with version v0.74, Ceph will issue a health warning if the
current CRUSH tunables don't include all the optimal values from the
``default`` profile (see below for the meaning of the ``default`` profile).
To make this warning go away, you have two options:

1. Adjust the tunables on the existing cluster. Note that this will
   result in some data movement (possibly as much as 10%). This is the
   preferred route, but should be taken with care on a production cluster
   where the data movement may affect performance. You can enable optimal
   tunables with::

      ceph osd crush tunables optimal

   If things go poorly (e.g., too much load) and not very much
   progress has been made, or there is a client compatibility problem
   (old kernel cephfs or rbd clients, or pre-bobtail librados
   clients), you can switch back with::

      ceph osd crush tunables legacy

2. You can make the warning go away without making any changes to CRUSH by
   adding the following option to your ceph.conf ``[mon]`` section::

      mon warn on legacy crush tunables = false

   For the change to take effect, you will need to restart the monitors, or
   apply the option to running monitors with::

      ceph tell mon.\* injectargs --no-mon-warn-on-legacy-crush-tunables

A few important points
----------------------

* Adjusting these values will result in the shift of some PGs between
  storage nodes. If the Ceph cluster is already storing a lot of
  data, be prepared for some fraction of the data to move.
* The ``ceph-osd`` and ``ceph-mon`` daemons will start requiring the
  feature bits of new connections as soon as they get
  the updated map. However, already-connected clients are
  effectively grandfathered in, and will misbehave if they do not
  support the new feature.
* If the CRUSH tunables are set to non-legacy values and then later
  changed back to the default values, ``ceph-osd`` daemons will not be
  required to support the feature. However, the OSD peering process
  requires examining and understanding old maps. Therefore, you
  should not run old versions of the ``ceph-osd`` daemon
  if the cluster has previously used non-legacy CRUSH values, even if
  the latest version of the map has been switched back to using the
  legacy defaults.

Tuning CRUSH
------------

The simplest way to adjust the crush tunables is by changing to a known
profile. Those are:

* ``legacy``: the legacy behavior from argonaut and earlier.
* ``argonaut``: the legacy values supported by the original argonaut release.
* ``bobtail``: the values supported by the bobtail release.
* ``firefly``: the values supported by the firefly release.
* ``optimal``: the best (i.e., optimal) values of the current version of Ceph.
* ``default``: the default values of a new cluster installed from
  scratch. These values, which depend on the current version of Ceph,
  are hard coded and are generally a mix of optimal and legacy values.
  These values generally match the ``optimal`` profile of the previous
  LTS release, or the most recent release for which we generally expect
  most users to have up-to-date clients.

You can select a profile on a running cluster with the command::

   ceph osd crush tunables {PROFILE}

Note that this may result in some data movement.

Tuning CRUSH, the hard way
--------------------------

If you can ensure that all clients are running recent code, you can
adjust the tunables by extracting the CRUSH map, modifying the values,
and reinjecting it into the cluster.

* Extract the latest CRUSH map::

     ceph osd getcrushmap -o /tmp/crush

* Adjust tunables. These values appear to offer the best behavior
  for both large and small clusters we tested with. You will need to
  additionally specify the ``--enable-unsafe-tunables`` argument to
  ``crushtool`` for this to work. Please use this option with
  extreme care. ::

     crushtool -i /tmp/crush --set-choose-local-tries 0 --set-choose-local-fallback-tries 0 --set-choose-total-tries 50 -o /tmp/crush.new

* Reinject the modified map::

     ceph osd setcrushmap -i /tmp/crush.new

Legacy values
-------------

For reference, the legacy values for the CRUSH tunables can be set
with::

   crushtool -i /tmp/crush --set-choose-local-tries 2 --set-choose-local-fallback-tries 5 --set-choose-total-tries 19 --set-chooseleaf-descend-once 0 --set-chooseleaf-vary-r 0 -o /tmp/crush.legacy

Again, the special ``--enable-unsafe-tunables`` option is required.
Further, as noted above, be careful running old versions of the
``ceph-osd`` daemon after reverting to legacy values as the feature
bit is not perfectly enforced.

.. _CRUSH - Controlled, Scalable, Decentralized Placement of Replicated Data: https://ceph.com/wp-content/uploads/2016/08/weil-crush-sc06.pdf