========================
SHEC erasure code plugin
========================

The *shec* plugin encapsulates the `multiple SHEC
<http://tracker.ceph.com/projects/ceph/wiki/Shingled_Erasure_Code_(SHEC)>`_
library. It allows Ceph to recover data more efficiently than Reed-Solomon codes.

Create an SHEC profile
======================

To create a new *shec* erasure code profile::

    ceph osd erasure-code-profile set {name} \
         plugin=shec \
         [k={data-chunks}] \
         [m={coding-chunks}] \
         [c={durability-estimator}] \
         [ruleset-root={root}] \
         [ruleset-failure-domain={bucket-type}] \
         [directory={directory}] \
         [--force]

Where:

``k={data-chunks}``

:Description: Each object is split into **data-chunks** parts,
              each stored on a different OSD.

:Type: Integer
:Required: No.
:Default: 4

``m={coding-chunks}``

:Description: Compute **coding-chunks** for each object and store them on
              different OSDs. The number of **coding-chunks** does not
              necessarily equal the number of OSDs that can be down without
              losing data.

:Type: Integer
:Required: No.
:Default: 3

``c={durability-estimator}``

:Description: The number of parity chunks that include each data chunk in
              their calculation range. This number is used as a **durability
              estimator**. For instance, if c=2, 2 OSDs can be down without
              losing data.

:Type: Integer
:Required: No.
:Default: 2

``ruleset-root={root}``

:Description: The name of the crush bucket used for the first step of
              the ruleset. For instance **step take default**.

:Type: String
:Required: No.
:Default: default

``ruleset-failure-domain={bucket-type}``

:Description: Ensure that no two chunks are in a bucket with the same
              failure domain. For instance, if the failure domain is
              **host**, no two chunks will be stored on the same
              host. It is used to create a ruleset step such as **step
              chooseleaf host**.

:Type: String
:Required: No.
:Default: host

``directory={directory}``

:Description: Set the **directory** name from which the erasure code
              plugin is loaded.

:Type: String
:Required: No.
:Default: /usr/lib/ceph/erasure-code

``--force``

:Description: Override an existing profile with the same name.

:Type: String
:Required: No.

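Once a profile exists, it can be listed and inspected with the ``ls`` and
``get`` subcommands. A minimal sketch, assuming a running cluster (the
profile name ``shecdemo`` and the k/m/c values are arbitrary)::

    $ ceph osd erasure-code-profile set shecdemo plugin=shec k=6 m=4 c=2
    $ ceph osd erasure-code-profile get shecdemo
    $ ceph osd erasure-code-profile ls
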
Brief description of SHEC's layouts
===================================

Space Efficiency
----------------

Space efficiency is the ratio of data chunks to all chunks in an object,
expressed as k/(k+m). To improve space efficiency, increase k or decrease m.

::

    space efficiency of SHEC(4,3,2) = 4/(4+3) = 0.57
    SHEC(5,3,2) or SHEC(4,2,2) improves SHEC(4,3,2)'s space efficiency

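Applying the same formula to the two alternatives makes the improvement
explicit::

    space efficiency of SHEC(5,3,2) = 5/(5+3) = 0.63
    space efficiency of SHEC(4,2,2) = 4/(4+2) = 0.67
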
Durability
----------

The third parameter of SHEC (=c) is a durability estimator, which approximates
the number of OSDs that can be down without losing data.

``durability estimator of SHEC(4,3,2) = 2``

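By the same definition, the SHEC(8,4,3) profile used in the example at the
end of this document is expected to tolerate up to 3 OSDs being down:

``durability estimator of SHEC(8,4,3) = 3``
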
Recovery Efficiency
-------------------

A full description of how recovery efficiency is calculated is beyond the
scope of this document, but, at a minimum, increasing m without increasing c
improves recovery efficiency. (Note, however, that this sacrifices space
efficiency: by the formula above, it drops from 4/(4+2) = 0.67 to
4/(4+3) = 0.57 in the example below.)

``SHEC(4,2,2) -> SHEC(4,3,2) : achieves improvement of recovery efficiency``

Erasure code profile examples
=============================

::

    $ ceph osd erasure-code-profile set SHECprofile \
         plugin=shec \
         k=8 m=4 c=3 \
         ruleset-failure-domain=host
    $ ceph osd pool create shecpool 256 256 erasure SHECprofile
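
As a quick check that the new pool stores and retrieves objects, any small
file can be written and read back with the ``rados`` utility (the object
name ``testobj`` and the file paths are arbitrary)::

    $ rados --pool shecpool put testobj /etc/hosts
    $ rados --pool shecpool get testobj /tmp/testobj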