=======
 Pools
=======

When you first deploy a cluster without creating a pool, Ceph uses the default
pools for storing data. A pool provides you with:

- **Resilience**: You can set how many OSDs are allowed to fail without losing
  data. For replicated pools, this is the desired number of copies/replicas of
  an object. A typical configuration stores an object and one additional copy
  (i.e., ``size = 2``), but you can determine the number of copies/replicas.
  For `erasure coded pools <../erasure-code>`_, it is the number of coding
  chunks (i.e. ``m=2`` in the **erasure code profile**).

- **Placement Groups**: You can set the number of placement groups for the pool.
  A typical configuration uses approximately 100 placement groups per OSD to
  provide optimal balancing without using up too many computing resources. When
  setting up multiple pools, be careful to ensure you set a reasonable number of
  placement groups for both the pool and the cluster as a whole.

- **CRUSH Rules**: When you store data in a pool, a CRUSH ruleset mapped to the
  pool enables CRUSH to identify a rule for the placement of the object
  and its replicas (or chunks for erasure coded pools) in your cluster.
  You can create a custom CRUSH rule for your pool.

- **Snapshots**: When you create snapshots with ``ceph osd pool mksnap``,
  you effectively take a snapshot of a particular pool.

To organize data into pools, you can list, create, and remove pools.
You can also view the utilization statistics for each pool.

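The ~100-placement-groups-per-OSD guideline above can be sketched numerically.
This is an illustrative helper, not a Ceph tool; the power-of-two rounding is a
common convention, and ``osd_count``/``replica_count`` are assumed inputs:

```python
# Illustrative sketch of the ~100-PGs-per-OSD guideline (not a Ceph tool).
def suggested_pg_num(osd_count, replica_count, target_pgs_per_osd=100):
    """Suggest a pg_num so that each OSD carries roughly
    target_pgs_per_osd placement-group copies."""
    raw = (osd_count * target_pgs_per_osd) / replica_count
    # Round up to the next power of two, a common convention for pg_num.
    pg_num = 1
    while pg_num < raw:
        pg_num *= 2
    return pg_num

# e.g. 10 OSDs, 3 replicas: 10 * 100 / 3 ~= 333, rounded up to 512
print(suggested_pg_num(10, 3))
```
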
List Pools
==========

To list your cluster's pools, execute::

    ceph osd lspools

On a freshly installed cluster, only the ``rbd`` pool exists.


.. _createpool:

Create a Pool
=============

Before creating pools, refer to the `Pool, PG and CRUSH Config Reference`_.
Ideally, you should override the default value for the number of placement
groups in your Ceph configuration file, as the default is NOT ideal.
For details on placement group numbers, refer to `setting the number of placement groups`_.

.. note:: Starting with Luminous, all pools need to be associated with the
   application using the pool. See `Associate Pool to Application`_ below for
   more information.

For example::

    osd pool default pg num = 100
    osd pool default pgp num = 100

To create a pool, execute::

    ceph osd pool create {pool-name} {pg-num} [{pgp-num}] [replicated] \
             [crush-rule-name] [expected-num-objects]
    ceph osd pool create {pool-name} {pg-num} {pgp-num} erasure \
             [erasure-code-profile] [crush-rule-name] [expected-num-objects]

Where:

``{pool-name}``

:Description: The name of the pool. It must be unique.
:Type: String
:Required: Yes.

``{pg-num}``

:Description: The total number of placement groups for the pool. See `Placement
              Groups`_ for details on calculating a suitable number. The
              default value ``8`` is NOT suitable for most systems.

:Type: Integer
:Required: Yes.
:Default: 8

``{pgp-num}``

:Description: The total number of placement groups for placement purposes. This
              **should be equal to the total number of placement groups**,
              except for placement group splitting scenarios.

:Type: Integer
:Required: Yes. Picks up the default or Ceph configuration value if not specified.
:Default: 8

``{replicated|erasure}``

:Description: The pool type, which may be either **replicated** (to
              recover from lost OSDs by keeping multiple copies of the
              objects) or **erasure** (to get a kind of
              `generalized RAID5 <../erasure-code>`_ capability).
              Replicated pools require more
              raw storage but implement all Ceph operations.
              Erasure pools require less raw storage but only
              implement a subset of the available operations.

:Type: String
:Required: No.
:Default: replicated

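The raw-storage trade-off between the two pool types can be illustrated with a
small calculation. This is an illustrative sketch; the ``k``/``m`` values used
below are example erasure-code parameters, not defaults:

```python
# Raw-storage overhead: replicated pools store `size` full copies,
# while erasure-coded pools store k data chunks + m coding chunks.
def replicated_overhead(size):
    return float(size)  # e.g. size=3 -> 3x raw storage per byte of data

def erasure_overhead(k, m):
    return (k + m) / k  # e.g. k=2, m=2 -> 2x raw storage

print(replicated_overhead(3))   # 3.0
print(erasure_overhead(2, 2))   # 2.0
```

Both example configurations survive the loss of two OSDs, but the erasure-coded
pool does so with a third less raw storage, at the cost of supporting only a
subset of operations.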
``[crush-rule-name]``

:Description: The name of a CRUSH rule to use for this pool. The specified
              rule must exist.

:Type: String
:Required: No.
:Default: For **replicated** pools, it is the ruleset specified by the ``osd
          pool default crush replicated ruleset`` config variable. This
          ruleset must exist.
          For **erasure** pools, it is ``erasure-code`` if the ``default``
          `erasure code profile`_ is used, or ``{pool-name}`` otherwise. This
          ruleset will be created implicitly if it doesn't exist already.


``[erasure-code-profile=profile]``

.. _erasure code profile: ../erasure-code-profile

:Description: For **erasure** pools only. Use the `erasure code profile`_. It
              must be an existing profile as defined by
              **osd erasure-code-profile set**.

:Type: String
:Required: No.

When you create a pool, set the number of placement groups to a reasonable value
(e.g., ``100``). Consider the total number of placement groups per OSD too.
Placement groups are computationally expensive, so performance will degrade when
you have many pools with many placement groups (e.g., 50 pools with 100
placement groups each). The point of diminishing returns depends upon the power
of the OSD host.

See `Placement Groups`_ for details on calculating an appropriate number of
placement groups for your pool.

.. _Placement Groups: ../placement-groups

``[expected-num-objects]``

:Description: The expected number of objects for this pool. If this value is
              set (together with a negative **filestore merge threshold**),
              PG folder splitting happens at pool creation time, avoiding the
              latency impact of runtime folder splitting.

:Type: Integer
:Required: No.
:Default: 0, no splitting at pool creation time.


Associate Pool to Application
=============================

Pools need to be associated with an application before use. Pools that will be
used with CephFS, or pools that are automatically created by RGW, are
automatically associated. Pools that are intended for use with RBD should be
initialized using the ``rbd`` tool (see `Block Device Commands`_ for more
information).

For other cases, you can manually associate a free-form application name to
a pool::

    ceph osd pool application enable {pool-name} {application-name}

.. note:: CephFS uses the application name ``cephfs``, RBD uses the
   application name ``rbd``, and RGW uses the application name ``rgw``.

Set Pool Quotas
===============

You can set pool quotas for the maximum number of bytes and/or the maximum
number of objects per pool::

    ceph osd pool set-quota {pool-name} [max_objects {obj-count}] [max_bytes {bytes}]

For example::

    ceph osd pool set-quota data max_objects 10000

To remove a quota, set its value to ``0``.

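``max_bytes`` takes a raw byte count. A small conversion helper is sketched
below; it is illustrative only and not part of the Ceph CLI (the unit table is
an assumption for the example):

```python
# Convert a human-readable size such as "10GiB" into the raw byte
# count expected by `ceph osd pool set-quota ... max_bytes`.
_UNITS = {"B": 1, "KiB": 1024, "MiB": 1024**2, "GiB": 1024**3, "TiB": 1024**4}

def to_bytes(size):
    # Try longer suffixes first so "GiB" is not mistaken for "B".
    for unit, factor in sorted(_UNITS.items(), key=lambda kv: -len(kv[0])):
        if size.endswith(unit):
            return int(float(size[:-len(unit)]) * factor)
    return int(size)  # assume a bare byte count

print(to_bytes("10GiB"))  # 10737418240
```
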

Delete a Pool
=============

To delete a pool, execute::

    ceph osd pool delete {pool-name} [{pool-name} --yes-i-really-really-mean-it]


To remove a pool, the ``mon_allow_pool_delete`` flag must be set to ``true`` in
the monitors' configuration. Otherwise, the monitors will refuse to remove the
pool.

See `Monitor Configuration`_ for more information.

.. _Monitor Configuration: ../../configuration/mon-config-ref

If you created your own rulesets and rules for a pool you created, you should
consider removing them when you no longer need your pool::

    ceph osd pool get {pool-name} crush_ruleset

If the ruleset was "123", for example, you can check the other pools like so::

    ceph osd dump | grep "^pool" | grep "crush_ruleset 123"

If no other pools use that custom ruleset, then it's safe to delete that
ruleset from the cluster.
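
The same check can be done programmatically by parsing ``ceph osd dump``
output. A sketch, under the assumption that pool lines follow the usual
``pool <id> '<name>' ... crush_ruleset <n>`` layout (the sample text below is
illustrative, not real cluster output):

```python
import re

# Sample `ceph osd dump` pool lines (illustrative layout, not real output).
dump = """\
pool 0 'rbd' replicated size 3 min_size 2 crush_ruleset 0 pg_num 64
pool 7 'mypool' replicated size 3 min_size 2 crush_ruleset 123 pg_num 128
"""

def pools_using_ruleset(dump_text, ruleset):
    """Return the names of pools whose crush_ruleset matches `ruleset`."""
    names = []
    for line in dump_text.splitlines():
        m = re.match(r"pool \d+ '([^']+)' .*\bcrush_ruleset (\d+)\b", line)
        if m and int(m.group(2)) == ruleset:
            names.append(m.group(1))
    return names

print(pools_using_ruleset(dump, 123))  # ['mypool']
```
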

If you created users with permissions strictly for a pool that no longer
exists, you should consider deleting those users too::

    ceph auth ls | grep -C 5 {pool-name}
    ceph auth del {user}


Rename a Pool
=============

To rename a pool, execute::

    ceph osd pool rename {current-pool-name} {new-pool-name}

If you rename a pool and you have per-pool capabilities for an authenticated
user, you must update the user's capabilities (i.e., caps) with the new pool
name.

.. note:: Version ``0.48`` Argonaut and above.

Show Pool Statistics
====================

To show a pool's utilization statistics, execute::

    rados df


Make a Snapshot of a Pool
=========================

To make a snapshot of a pool, execute::

    ceph osd pool mksnap {pool-name} {snap-name}

.. note:: Version ``0.48`` Argonaut and above.


Remove a Snapshot of a Pool
===========================

To remove a snapshot of a pool, execute::

    ceph osd pool rmsnap {pool-name} {snap-name}

.. note:: Version ``0.48`` Argonaut and above.

.. _setpoolvalues:


Set Pool Values
===============

To set a value for a pool, execute the following::

    ceph osd pool set {pool-name} {key} {value}

You may set values for the following keys:

.. _compression_algorithm:

``compression_algorithm``

:Description: Sets the inline compression algorithm to use with the BlueStore
              backend. This setting overrides the `global setting
              <rados/configuration/bluestore-config-ref/#inline-compression>`_
              of ``bluestore compression algorithm``.

:Type: String
:Valid Settings: ``lz4``, ``snappy``, ``zlib``, ``zstd``

``compression_mode``

:Description: Sets the policy for the inline compression algorithm with the
              BlueStore backend. This setting overrides the `global setting
              <rados/configuration/bluestore-config-ref/#inline-compression>`_
              of ``bluestore compression mode``.

:Type: String
:Valid Settings: ``none``, ``passive``, ``aggressive``, ``force``

``compression_min_blob_size``

:Description: Chunks smaller than this are never compressed.
              This setting overrides the `global setting
              <rados/configuration/bluestore-config-ref/#inline-compression>`_
              of ``bluestore compression min blob *``.

:Type: Unsigned Integer

``compression_max_blob_size``

:Description: Chunks larger than this are broken into smaller blobs of at most
              ``compression_max_blob_size`` before being compressed.

:Type: Unsigned Integer

.. _size:

``size``

:Description: Sets the number of replicas for objects in the pool.
              See `Set the Number of Object Replicas`_ for further details.
              Replicated pools only.

:Type: Integer

.. _min_size:

``min_size``

:Description: Sets the minimum number of replicas required for I/O.
              See `Set the Number of Object Replicas`_ for further details.
              Replicated pools only.

:Type: Integer
:Version: ``0.54`` and above

.. _pg_num:

``pg_num``

:Description: The effective number of placement groups to use when calculating
              data placement.
:Type: Integer
:Valid Range: Greater than the current ``pg_num`` value.

.. _pgp_num:

``pgp_num``

:Description: The effective number of placement groups for placement to use
              when calculating data placement.

:Type: Integer
:Valid Range: Equal to or less than ``pg_num``.

.. _crush_ruleset:

``crush_ruleset``

:Description: The ruleset to use for mapping object placement in the cluster.
:Type: Integer

.. _allow_ec_overwrites:

``allow_ec_overwrites``

:Description: Whether writes to an erasure coded pool can update part
              of an object, so that CephFS and RBD can use it. See
              `Erasure Coding with Overwrites`_ for more details.
:Type: Boolean
:Version: ``12.2.0`` and above

.. _hashpspool:

``hashpspool``

:Description: Set/Unset the HASHPSPOOL flag on a given pool.
:Type: Integer
:Valid Range: 1 sets flag, 0 unsets flag
:Version: Version ``0.48`` Argonaut and above.

.. _nodelete:

``nodelete``

:Description: Set/Unset the NODELETE flag on a given pool.
:Type: Integer
:Valid Range: 1 sets flag, 0 unsets flag
:Version: Version ``FIXME``

.. _nopgchange:

``nopgchange``

:Description: Set/Unset the NOPGCHANGE flag on a given pool.
:Type: Integer
:Valid Range: 1 sets flag, 0 unsets flag
:Version: Version ``FIXME``

.. _nosizechange:

``nosizechange``

:Description: Set/Unset the NOSIZECHANGE flag on a given pool.
:Type: Integer
:Valid Range: 1 sets flag, 0 unsets flag
:Version: Version ``FIXME``

.. _write_fadvise_dontneed:

``write_fadvise_dontneed``

:Description: Set/Unset the WRITE_FADVISE_DONTNEED flag on a given pool.
:Type: Integer
:Valid Range: 1 sets flag, 0 unsets flag

.. _noscrub:

``noscrub``

:Description: Set/Unset the NOSCRUB flag on a given pool.
:Type: Integer
:Valid Range: 1 sets flag, 0 unsets flag

.. _nodeep-scrub:

``nodeep-scrub``

:Description: Set/Unset the NODEEP_SCRUB flag on a given pool.
:Type: Integer
:Valid Range: 1 sets flag, 0 unsets flag

.. _hit_set_type:

``hit_set_type``

:Description: Enables hit set tracking for cache pools.
              See `Bloom Filter`_ for additional information.

:Type: String
:Valid Settings: ``bloom``, ``explicit_hash``, ``explicit_object``
:Default: ``bloom``. Other values are for testing.

.. _hit_set_count:

``hit_set_count``

:Description: The number of hit sets to store for cache pools. The higher
              the number, the more RAM consumed by the ``ceph-osd`` daemon.

:Type: Integer
:Valid Range: ``1``. The agent doesn't handle values greater than 1 yet.

.. _hit_set_period:

``hit_set_period``

:Description: The duration of a hit set period in seconds for cache pools.
              The higher the number, the more RAM consumed by the
              ``ceph-osd`` daemon.

:Type: Integer
:Example: ``3600`` (1 hour)

.. _hit_set_fpp:

``hit_set_fpp``

:Description: The false positive probability for the ``bloom`` hit set type.
              See `Bloom Filter`_ for additional information.

:Type: Double
:Valid Range: 0.0 - 1.0
:Default: ``0.05``

.. _cache_target_dirty_ratio:

``cache_target_dirty_ratio``

:Description: The percentage of the cache pool containing modified (dirty)
              objects before the cache tiering agent will flush them to the
              backing storage pool.

:Type: Double
:Default: ``.4``

.. _cache_target_dirty_high_ratio:

``cache_target_dirty_high_ratio``

:Description: The percentage of the cache pool containing modified (dirty)
              objects before the cache tiering agent will flush them to the
              backing storage pool at a higher speed.

:Type: Double
:Default: ``.6``

.. _cache_target_full_ratio:

``cache_target_full_ratio``

:Description: The percentage of the cache pool containing unmodified (clean)
              objects before the cache tiering agent will evict them from the
              cache pool.

:Type: Double
:Default: ``.8``

.. _target_max_bytes:

``target_max_bytes``

:Description: Ceph will begin flushing or evicting objects when the
              ``max_bytes`` threshold is triggered.

:Type: Integer
:Example: ``1000000000000`` (1 TB)

.. _target_max_objects:

``target_max_objects``

:Description: Ceph will begin flushing or evicting objects when the
              ``max_objects`` threshold is triggered.

:Type: Integer
:Example: ``1000000`` (1 million objects)


``hit_set_grade_decay_rate``

:Description: Temperature decay rate between two successive hit sets.
:Type: Integer
:Valid Range: 0 - 100
:Default: ``20``


``hit_set_search_last_n``

:Description: Count at most N appearances in hit sets for the temperature
              calculation.
:Type: Integer
:Valid Range: 0 - hit_set_count
:Default: ``1``


.. _cache_min_flush_age:

``cache_min_flush_age``

:Description: The time (in seconds) before the cache tiering agent will flush
              an object from the cache pool to the storage pool.

:Type: Integer
:Example: ``600`` (10 minutes)

.. _cache_min_evict_age:

``cache_min_evict_age``

:Description: The time (in seconds) before the cache tiering agent will evict
              an object from the cache pool.

:Type: Integer
:Example: ``1800`` (30 minutes)

.. _fast_read:

``fast_read``

:Description: On an erasure-coded pool, if this flag is turned on, read
              requests issue sub-reads to all shards and wait until enough
              shards are received to decode and serve the client. With the
              jerasure and isa erasure plugins, once the first K replies
              return, the client's request is served immediately using the
              data decoded from those replies. This trades some extra
              resources for better performance. Currently this flag is only
              supported for erasure-coded pools.

:Type: Boolean
:Default: ``0``

.. _scrub_min_interval:

``scrub_min_interval``

:Description: The minimum interval in seconds for pool scrubbing when the
              load is low. If it is 0, the config value
              ``osd_scrub_min_interval`` is used.

:Type: Double
:Default: ``0``

.. _scrub_max_interval:

``scrub_max_interval``

:Description: The maximum interval in seconds for pool scrubbing,
              irrespective of cluster load. If it is 0, the config value
              ``osd_scrub_max_interval`` is used.

:Type: Double
:Default: ``0``

.. _deep_scrub_interval:

``deep_scrub_interval``

:Description: The interval in seconds for pool "deep" scrubbing. If it
              is 0, the config value ``osd_deep_scrub_interval`` is used.

:Type: Double
:Default: ``0``

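The cache-tiering ratios above (``cache_target_dirty_ratio`` and friends) are
fractions of the pool's ``target_max_bytes``/``target_max_objects`` targets. A
quick sketch of the absolute thresholds they imply, using the default ratios
and the 1 TB ``target_max_bytes`` example (illustrative arithmetic only):

```python
# Absolute cache-tiering thresholds implied by the ratio settings
# (defaults: dirty=.4, dirty_high=.6, full=.8) and a byte target.
def cache_thresholds(target_max_bytes,
                     dirty_ratio=0.4, dirty_high_ratio=0.6, full_ratio=0.8):
    return {
        "flush_at_bytes": int(target_max_bytes * dirty_ratio),
        "fast_flush_at_bytes": int(target_max_bytes * dirty_high_ratio),
        "evict_at_bytes": int(target_max_bytes * full_ratio),
    }

# With the 1 TB example from `target_max_bytes`: flushing of dirty objects
# starts at 400 GB, high-speed flushing at 600 GB, and eviction of clean
# objects at 800 GB.
print(cache_thresholds(1000000000000))
```
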

Get Pool Values
===============

To get a value from a pool, execute the following::

    ceph osd pool get {pool-name} {key}

You may get values for the following keys:

``size``

:Description: See size_.

:Type: Integer

``min_size``

:Description: See min_size_.

:Type: Integer
:Version: ``0.54`` and above

``pg_num``

:Description: See pg_num_.

:Type: Integer


``pgp_num``

:Description: See pgp_num_.

:Type: Integer
:Valid Range: Equal to or less than ``pg_num``.


``crush_ruleset``

:Description: See crush_ruleset_.


``hit_set_type``

:Description: See hit_set_type_.

:Type: String
:Valid Settings: ``bloom``, ``explicit_hash``, ``explicit_object``

``hit_set_count``

:Description: See hit_set_count_.

:Type: Integer


``hit_set_period``

:Description: See hit_set_period_.

:Type: Integer


``hit_set_fpp``

:Description: See hit_set_fpp_.

:Type: Double


``cache_target_dirty_ratio``

:Description: See cache_target_dirty_ratio_.

:Type: Double


``cache_target_dirty_high_ratio``

:Description: See cache_target_dirty_high_ratio_.

:Type: Double


``cache_target_full_ratio``

:Description: See cache_target_full_ratio_.

:Type: Double


``target_max_bytes``

:Description: See target_max_bytes_.

:Type: Integer


``target_max_objects``

:Description: See target_max_objects_.

:Type: Integer


``cache_min_flush_age``

:Description: See cache_min_flush_age_.

:Type: Integer


``cache_min_evict_age``

:Description: See cache_min_evict_age_.

:Type: Integer


``fast_read``

:Description: See fast_read_.

:Type: Boolean


``scrub_min_interval``

:Description: See scrub_min_interval_.

:Type: Double


``scrub_max_interval``

:Description: See scrub_max_interval_.

:Type: Double


``deep_scrub_interval``

:Description: See deep_scrub_interval_.

:Type: Double


Set the Number of Object Replicas
=================================

To set the number of object replicas on a replicated pool, execute the
following::

    ceph osd pool set {poolname} size {num-replicas}

.. important:: The ``{num-replicas}`` includes the object itself.
   If you want the object and two copies of the object for a total of
   three instances of the object, specify ``3``.

For example::

    ceph osd pool set data size 3

You may execute this command for each pool. **Note:** An object might accept
I/Os in degraded mode with fewer than ``pool size`` replicas. To set a minimum
number of required replicas for I/O, you should use the ``min_size`` setting.
For example::

    ceph osd pool set data min_size 2

This ensures that no object in the data pool will receive I/O with fewer than
``min_size`` replicas.

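The interaction between ``size`` and ``min_size`` can be summarized in a tiny
predicate (an illustrative sketch, not Ceph code):

```python
# Whether a placement group accepts client I/O, given how many of its
# `size` replicas are currently active (illustrative, not Ceph code).
def io_allowed(active_replicas, min_size):
    return active_replicas >= min_size

# With size=3 and min_size=2: I/O continues (degraded) with one replica
# down, but stops with two down until recovery restores min_size replicas.
print(io_allowed(2, 2))  # True
print(io_allowed(1, 2))  # False
```
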

Get the Number of Object Replicas
=================================

To get the number of object replicas, execute the following::

    ceph osd dump | grep 'replicated size'

Ceph will list the pools, with the ``replicated size`` attribute highlighted.
By default, Ceph creates two replicas of an object (a total of three copies, or
a size of ``3``).



.. _Pool, PG and CRUSH Config Reference: ../../configuration/pool-pg-config-ref
.. _Bloom Filter: http://en.wikipedia.org/wiki/Bloom_filter
.. _setting the number of placement groups: ../placement-groups#set-the-number-of-placement-groups
.. _Erasure Coding with Overwrites: ../erasure-code#erasure-coding-with-overwrites
.. _Block Device Commands: ../../../rbd/rados-rbd-cmds/#create-a-block-device-pool