========================
SHEC erasure code plugin
========================

The *shec* plugin encapsulates the `multiple SHEC
<http://tracker.ceph.com/projects/ceph/wiki/Shingled_Erasure_Code_(SHEC)>`_
library. It allows Ceph to recover data more efficiently than Reed-Solomon codes.

Create an SHEC profile
======================

To create a new *shec* erasure code profile::

        ceph osd erasure-code-profile set {name} \
             plugin=shec \
             [k={data-chunks}] \
             [m={coding-chunks}] \
             [c={durability-estimator}] \
             [crush-root={root}] \
             [crush-failure-domain={bucket-type}] \
             [crush-device-class={device-class}] \
             [directory={directory}] \
             [--force]

Where:

``k={data-chunks}``

:Description: Each object is split into **data-chunks** parts,
              each stored on a different OSD.

:Type: Integer
:Required: No.
:Default: 4

``m={coding-chunks}``

:Description: Compute **coding-chunks** for each object and store them on
              different OSDs. The number of **coding-chunks** does not necessarily
              equal the number of OSDs that can be down without losing data.

:Type: Integer
:Required: No.
:Default: 3

``c={durability-estimator}``

:Description: The number of parity chunks that include each data chunk in
              their calculation range. This number is used as a **durability
              estimator**. For instance, if c=2, two OSDs can be down without
              losing data.

:Type: Integer
:Required: No.
:Default: 2

``crush-root={root}``

:Description: The name of the CRUSH bucket used for the first step of
              the ruleset. For instance **step take default**.

:Type: String
:Required: No.
:Default: default

``crush-failure-domain={bucket-type}``

:Description: Ensure that no two chunks are in a bucket with the same
              failure domain. For instance, if the failure domain is
              **host**, no two chunks will be stored on the same
              host. It is used to create a ruleset step such as **step
              chooseleaf host**.

:Type: String
:Required: No.
:Default: host

``crush-device-class={device-class}``

:Description: Restrict placement to devices of a specific class (e.g.,
              ``ssd`` or ``hdd``), using the crush device class names
              in the CRUSH map.

:Type: String
:Required: No.
:Default:

``directory={directory}``

:Description: Set the **directory** name from which the erasure code
              plugin is loaded.

:Type: String
:Required: No.
:Default: /usr/lib/ceph/erasure-code

``--force``

:Description: Override an existing profile by the same name.

:Type: String
:Required: No.

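As an illustration of the options above, the following sketch creates a profile
that keeps the default k/m/c layout but restricts placement to SSDs with a
rack-level failure domain, then reads the stored values back. The profile name
``shecssd`` and the choice of ``rack`` are only examples, not defaults::

        ceph osd erasure-code-profile set shecssd \
             plugin=shec \
             k=4 m=3 c=2 \
             crush-device-class=ssd \
             crush-failure-domain=rack
        ceph osd erasure-code-profile get shecssd
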
Brief description of SHEC's layouts
===================================

Space Efficiency
----------------

Space efficiency is the ratio of data chunks to all chunks in an object,
represented as k/(k+m).
To improve space efficiency, increase k or decrease m.

::

        space efficiency of SHEC(4,3,2) = 4/(4+3) = 0.57
        SHEC(5,3,2) or SHEC(4,2,2) improves SHEC(4,3,2)'s space efficiency

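Applying the same formula to the two suggested layouts makes the comparison
explicit::

        space efficiency of SHEC(5,3,2) = 5/(5+3) = 0.63
        space efficiency of SHEC(4,2,2) = 4/(4+2) = 0.67
        both are higher than SHEC(4,3,2)'s 0.57
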
Durability
----------

The third parameter of SHEC (=c) is a durability estimator, which approximates
the number of OSDs that can be down without losing data.

``durability estimator of SHEC(4,3,2) = 2``

Recovery Efficiency
-------------------

Calculating recovery efficiency is beyond the scope of this document, but, at
a minimum, increasing m without increasing c improves recovery efficiency.
(Note, however, that this comes at the cost of space efficiency.)

``SHEC(4,2,2) -> SHEC(4,3,2) : achieves improvement of recovery efficiency``

Erasure code profile examples
=============================

::

        $ ceph osd erasure-code-profile set SHECprofile \
             plugin=shec \
             k=8 m=4 c=3 \
             crush-failure-domain=host
        $ ceph osd pool create shecpool 256 256 erasure SHECprofile
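
To double-check what was stored, the profile and the pool's assigned profile
can be queried; for example::

        $ ceph osd erasure-code-profile get SHECprofile
        $ ceph osd pool get shecpool erasure_code_profile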