diff --git a/ceph/doc/rados/operations/crush-map.rst b/ceph/doc/rados/operations/crush-map.rst
index 68c12d1d5..792bbcdf2 100644
--- a/ceph/doc/rados/operations/crush-map.rst
+++ b/ceph/doc/rados/operations/crush-map.rst
@@ -128,6 +128,8 @@
 Since the Luminous release, devices may also have a *device class* assigned (e.g.,
 ``hdd`` or ``ssd`` or ``nvme``), allowing them to be conveniently targeted by CRUSH
 rules. This is especially useful when mixing device types within hosts.
 
+.. _crush_map_default_types:
+
 Types and Buckets
 -----------------
@@ -714,7 +716,7 @@
 The ``bobtail`` tunable profile fixes a few key misbehaviors:
 
 * For large clusters, some small percentages of PGs map to fewer than
   the desired number of OSDs. This is more prevalent when there are
-  mutiple hierarchy layers in use (e.g., ``row``, ``rack``, ``host``, ``osd``).
+  multiple hierarchy layers in use (e.g., ``row``, ``rack``, ``host``, ``osd``).
 * When some OSDs are marked out, the data tends to get redistributed
   to nearby OSDs instead of across the entire hierarchy.
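
A minimal sketch of the device-class targeting mentioned in the first hunk, assuming a
cluster whose OSDs already carry an ``ssd`` device class; the rule name ``fast`` and the
pool name ``fastpool`` are hypothetical, and the commands are the standard CRUSH CLI
documented elsewhere in ``crush-map.rst``::

    # create a replicated rule rooted at "default" with failure domain "host",
    # restricted to devices of class "ssd"
    ceph osd crush rule create-replicated fast default host ssd

    # point an existing pool at the new rule
    ceph osd pool set fastpool crush_rule fast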