======================================
Locally repairable erasure code plugin
======================================

With the *jerasure* plugin, when an erasure coded object is stored on
multiple OSDs, recovering from the loss of one OSD requires reading
from all the others. For instance if *jerasure* is configured with
*k=8* and *m=4*, losing one OSD requires reading from the eleven
others to repair.

The *lrc* erasure code plugin creates local parity chunks to be able
to recover using fewer OSDs. For instance if *lrc* is configured with
*k=8*, *m=4* and *l=4*, it will create an additional parity chunk for
every four OSDs. When a single OSD is lost, it can be recovered with
only four OSDs instead of eleven.
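
As a simplified illustration (this is not the plugin's actual arithmetic, which uses a configurable erasure code per layer), each local parity chunk can be modeled as the XOR of the data chunks in its group; a lost chunk is then rebuilt from the three survivors of its group plus the group parity, i.e. four reads instead of eleven::

    from functools import reduce

    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    # k=8 data chunks, l=4: only the two all-data groups of four are shown.
    data = [bytes([i]) * 4 for i in range(8)]
    groups = [data[0:4], data[4:8]]
    local_parity = [reduce(xor, g) for g in groups]

    # Chunk 2 is lost: rebuild it from the three surviving chunks of its
    # group plus the group's local parity -- four reads in total.
    survivors = [data[0], data[1], data[3], local_parity[0]]
    recovered = reduce(xor, survivors)
    assert recovered == data[2]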

Erasure code profile examples
=============================

Reduce recovery bandwidth between hosts
---------------------------------------

Although it is probably not an interesting use case when all hosts are
connected to the same switch, reduced bandwidth usage can actually be
observed::

    $ ceph osd erasure-code-profile set LRCprofile \
         plugin=lrc \
         k=4 m=2 l=3 \
         ruleset-failure-domain=host
    $ ceph osd pool create lrcpool 12 12 erasure LRCprofile

Reduce recovery bandwidth between racks
---------------------------------------

In Firefly the reduced bandwidth will only be observed if the primary
OSD is in the same rack as the lost chunk::

    $ ceph osd erasure-code-profile set LRCprofile \
         plugin=lrc \
         k=4 m=2 l=3 \
         ruleset-locality=rack \
         ruleset-failure-domain=host
    $ ceph osd pool create lrcpool 12 12 erasure LRCprofile

Create an lrc profile
=====================

To create a new lrc erasure code profile::

    ceph osd erasure-code-profile set {name} \
         plugin=lrc \
         k={data-chunks} \
         m={coding-chunks} \
         l={locality} \
         [ruleset-root={root}] \
         [ruleset-locality={bucket-type}] \
         [ruleset-failure-domain={bucket-type}] \
         [directory={directory}] \
         [--force]

Where:

``k={data-chunks}``

:Description: Each object is split into **data-chunks** parts,
              each stored on a different OSD.

``m={coding-chunks}``

:Description: Compute **coding chunks** for each object and store them
              on different OSDs. The number of coding chunks is also
              the number of OSDs that can be down without losing data.

``l={locality}``

:Description: Group the coding and data chunks into sets of size
              **locality**. For instance, for **k=4** and **m=2**,
              when **locality=3** two groups of three are created.
              Each set can be recovered without reading chunks
              from another set.

``ruleset-root={root}``

:Description: The name of the crush bucket used for the first step of
              the ruleset. For instance **step take default**.

``ruleset-locality={bucket-type}``

:Description: The type of the crush bucket in which each set of chunks
              defined by **l** will be stored. For instance, if it is
              set to **rack**, each group of **l** chunks will be
              placed in a different rack. It is used to create a
              ruleset step such as **step choose rack**. If it is not
              set, no such grouping is done.

``ruleset-failure-domain={bucket-type}``

:Description: Ensure that no two chunks are in a bucket with the same
              failure domain. For instance, if the failure domain is
              **host** no two chunks will be stored on the same
              host. It is used to create a ruleset step such as **step
              chooseleaf host**.

``directory={directory}``

:Description: Set the **directory** name from which the erasure code
              plugin is loaded.

:Default: /usr/lib/ceph/erasure-code

``--force``

:Description: Override an existing profile by the same name.

Low level plugin configuration
==============================

The sum of **k** and **m** must be a multiple of the **l** parameter.
The low level configuration parameters do not impose such a
restriction and it may be more convenient to use them for specific
purposes. It is for instance possible to define two groups, one with 4
chunks and another with 3 chunks. It is also possible to recursively
define locality sets, for instance datacenters and racks into
datacenters. The **k/m/l** are implemented by generating a low level
configuration.

The *lrc* erasure code plugin recursively applies erasure code
techniques so that recovering from the loss of some chunks only
requires a subset of the available chunks, most of the time.

For instance, when three coding steps are described as::

   chunk nr    01234567

   step 1      _cDD_cDD
   step 2      cDDD____
   step 3      ____cDDD

where *c* are coding chunks calculated from the data chunks *D*, the
loss of chunk *7* can be recovered with the last four chunks. And the
loss of chunk *2* can be recovered with the first four
chunks.
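
Each layer string can be read as a per-chunk role map: *D* marks a data chunk, *c* a coding chunk and *_* a position the layer ignores. A small hypothetical helper (not part of the plugin) makes the positions explicit::

    def layer_roles(layer: str) -> dict:
        """Return the chunk positions each symbol occupies in a layer."""
        return {
            "data":    [i for i, s in enumerate(layer) if s == "D"],
            "coding":  [i for i, s in enumerate(layer) if s == "c"],
            "ignored": [i for i, s in enumerate(layer) if s == "_"],
        }

    # The three coding steps from the example above.
    assert layer_roles("_cDD_cDD")["data"] == [2, 3, 6, 7]
    assert layer_roles("_cDD_cDD")["coding"] == [1, 5]
    assert layer_roles("cDDD____")["coding"] == [0]
    assert layer_roles("____cDDD")["coding"] == [4]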

Erasure code profile examples using low level configuration
===========================================================

Minimal testing
---------------

It is strictly equivalent to using the default erasure code profile. The *DD*
implies *k=2*, the *c* implies *m=1* and the *jerasure* plugin is used
by default::

    $ ceph osd erasure-code-profile set LRCprofile \
         plugin=lrc \
         mapping=DD_ \
         layers='[ [ "DDc", "" ] ]'
    $ ceph osd pool create lrcpool 12 12 erasure LRCprofile

Reduce recovery bandwidth between hosts
---------------------------------------

Although it is probably not an interesting use case when all hosts are
connected to the same switch, reduced bandwidth usage can actually be
observed. It is equivalent to **k=4**, **m=2** and **l=3** although
the layout of the chunks is different::

    $ ceph osd erasure-code-profile set LRCprofile \
         plugin=lrc \
         mapping=__DD__DD \
         layers='[
                   [ "_cDD_cDD", "" ],
                   [ "cDDD____", "" ],
                   [ "____cDDD", "" ],
                 ]'
    $ ceph osd pool create lrcpool 12 12 erasure LRCprofile

Reduce recovery bandwidth between racks
---------------------------------------

In Firefly the reduced bandwidth will only be observed if the primary
OSD is in the same rack as the lost chunk::

    $ ceph osd erasure-code-profile set LRCprofile \
         plugin=lrc \
         mapping=__DD__DD \
         layers='[
                   [ "_cDD_cDD", "" ],
                   [ "cDDD____", "" ],
                   [ "____cDDD", "" ],
                 ]' \
         ruleset-steps='[
                         [ "choose", "rack", 2 ],
                         [ "chooseleaf", "host", 4 ],
                        ]'
    $ ceph osd pool create lrcpool 12 12 erasure LRCprofile

Testing with different Erasure Code backends
--------------------------------------------

LRC now uses jerasure as the default EC backend. It is possible to
specify the EC backend/algorithm on a per layer basis using the low
level configuration. The second argument in layers='[ [ "DDc", "" ] ]'
is actually an erasure code profile to be used for this level. The
example below specifies the ISA backend with the cauchy technique to
be used in the lrcpool::

    $ ceph osd erasure-code-profile set LRCprofile \
         plugin=lrc \
         mapping=DD_ \
         layers='[ [ "DDc", "plugin=isa technique=cauchy" ] ]'
    $ ceph osd pool create lrcpool 12 12 erasure LRCprofile

You could also use a different erasure code profile for each
layer::

    $ ceph osd erasure-code-profile set LRCprofile \
         plugin=lrc \
         mapping=__DD__DD \
         layers='[
                   [ "_cDD_cDD", "plugin=isa technique=cauchy" ],
                   [ "cDDD____", "plugin=isa" ],
                   [ "____cDDD", "plugin=jerasure" ],
                 ]'
    $ ceph osd pool create lrcpool 12 12 erasure LRCprofile

Erasure coding and decoding algorithm
=====================================

The steps found in the layers description::

   chunk nr    01234567

   step 1      _cDD_cDD
   step 2      cDDD____
   step 3      ____cDDD

are applied in order. For instance, if a 4K object is encoded, it will
first go through *step 1* and be divided into four 1K chunks (the four
uppercase D). They are stored in the chunks 2, 3, 6 and 7, in
order. From these, two coding chunks are calculated (the two lowercase
c). The coding chunks are stored in the chunks 1 and 5, respectively.
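
The data placement performed by *step 1* can be sketched with a hypothetical helper that splits an object into equal parts and drops them at the *D* positions of a mapping such as ``__DD__DD`` (data at chunks 2, 3, 6 and 7, as described above; illustration only)::

    def place_data(obj: bytes, mapping: str):
        """Split obj into equal parts and place them at the 'D' positions."""
        dpos = [i for i, s in enumerate(mapping) if s == "D"]
        size = len(obj) // len(dpos)
        chunks = [None] * len(mapping)
        for n, i in enumerate(dpos):
            chunks[i] = obj[n * size:(n + 1) * size]
        return chunks

    # A 4K object becomes four 1K chunks stored at positions 2, 3, 6, 7.
    chunks = place_data(bytes(4096), "__DD__DD")
    assert [i for i, c in enumerate(chunks) if c is not None] == [2, 3, 6, 7]
    assert all(len(c) == 1024 for c in chunks if c is not None)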

The *step 2* re-uses the content created by *step 1* in a similar
fashion and stores a single coding chunk *c* at position 0. The last four
chunks, marked with an underscore (*_*) for readability, are ignored.

The *step 3* stores a single coding chunk *c* at position 4. The three
chunks created by *step 1* are used to compute this coding chunk,
i.e. the coding chunk from *step 1* becomes a data chunk in *step 3*.

If chunk *2* is lost::

   chunk nr    01234567

   step 1      _c D_cDD
   step 2      cD D____
   step 3      __ _cDDD

decoding will attempt to recover it by walking the steps in reverse
order: *step 3* then *step 2* and finally *step 1*.

The *step 3* knows nothing about chunk *2* (i.e. it is an underscore)
and is skipped.

The coding chunk from *step 2*, stored in chunk *0*, allows it to
recover the content of chunk *2*. There are no more chunks to recover
and the process stops, without considering *step 1*.

Recovering chunk *2* requires reading chunks *0, 1, 3* and writing
back chunk *2*.

If chunks *2, 3, 6* are lost::

   chunk nr    01234567

   step 1      _c  _c D
   step 2      cD  __ _
   step 3      __  cD D

The *step 3* can recover the content of chunk *6*::

   chunk nr    01234567

   step 1      _c  _cDD
   step 2      cD  ____
   step 3      __  cDDD

The *step 2* fails to recover and is skipped because there are two
chunks missing (*2, 3*) and it can only recover from one missing
chunk.

The coding chunks from *step 1*, stored in chunks *1* and *5*, allow the
recovery of the content of chunks *2* and *3*::

   chunk nr    01234567

   step 1      _cDD_cDD
   step 2      cDDD____
   step 3      ____cDDD
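
The reverse walk described above boils down to a simple rule: a step can repair the chunks it covers only if their missing count does not exceed the step's number of coding chunks. The following sketch models that decision logic only (not the actual reconstruction arithmetic), assuming each layer behaves as an MDS code, as with the default *jerasure* technique::

    STEPS = ["_cDD_cDD", "cDDD____", "____cDDD"]  # step 1, 2, 3

    def recover(lost, steps=STEPS):
        """Walk the steps in reverse order, repairing what each step can."""
        lost = set(lost)
        for layer in reversed(steps):
            # Chunks this layer knows about (D or c) that are currently lost.
            known = {i for i, s in enumerate(layer) if s != "_"}
            coding = sum(1 for s in layer if s == "c")
            missing = lost & known
            # An MDS code with `coding` parity chunks repairs at most
            # `coding` erasures among the chunks it covers.
            if 0 < len(missing) <= coding:
                lost -= missing
        return lost  # chunks still unrecovered

    # Losing chunk 2: step 3 ignores it, step 2 repairs it.
    assert recover({2}) == set()
    # Losing chunks 2, 3, 6: step 3 repairs 6, step 2 fails (two
    # erasures, one coding chunk), step 1 repairs 2 and 3.
    assert recover({2, 3, 6}) == set()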

Controlling crush placement
===========================

The default crush ruleset provides OSDs that are on different hosts. For instance::

   chunk nr    01234567

   step 1      _cDD_cDD
   step 2      cDDD____
   step 3      ____cDDD

needs exactly *8* OSDs, one for each chunk. If the hosts are in two
adjacent racks, the first four chunks can be placed in the first rack
and the last four in the second rack, so that recovering from the loss
of a single OSD does not require using bandwidth between the two
racks.

For instance::

   ruleset-steps='[ [ "choose", "rack", 2 ], [ "chooseleaf", "host", 4 ] ]'

will create a ruleset that will select two crush buckets of type
*rack* and for each of them choose four OSDs, each of them located in
different buckets of type *host*.

The ruleset can also be manually crafted for finer control.