.. _rados_config_pool_pg_crush_ref:

======================================
 Pool, PG and CRUSH Config Reference
======================================

.. index:: pools; configuration

The number of placement groups that the CRUSH algorithm assigns to each pool is
determined by the values of variables in the centralized configuration database
in the monitor cluster.

Both containerized deployments of Ceph (deployments made using ``cephadm`` or
Rook) and non-containerized deployments of Ceph rely on the values in the
central configuration database in the monitor cluster to assign placement
groups to pools.

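The commands in the next section read and write individual options in that
database. To inspect everything that has been explicitly set there, ``ceph
config dump`` can be used; the ``grep`` filter below is only an illustrative
way to narrow the output to the pool defaults:

.. prompt:: bash

   ceph config dump | grep osd_pool_default
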
Example Commands
----------------

To see the value of the variable that governs the default number of placement
groups assigned to newly created pools, run a command of the following form:

.. prompt:: bash

   ceph config get osd osd_pool_default_pg_num

To set the default number of placement groups assigned to newly created pools,
run a command of the following form, supplying the desired value:

.. prompt:: bash

   ceph config set osd osd_pool_default_pg_num <pg_num>

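For example, to change the default to 128 placement groups for pools created
afterward and then confirm the new value (``128`` here is purely illustrative;
pools that already exist are not resized by this setting):

.. prompt:: bash

   ceph config set osd osd_pool_default_pg_num 128
   ceph config get osd osd_pool_default_pg_num
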
Manual Tuning
-------------

In some cases, it might be advisable to override some of the defaults. For
example, you might determine that it is wise to set a pool's replica size and
to override the default number of placement groups in the pool. You can set
these values when running `pool`_ commands.

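A minimal sketch of that kind of per-pool override, assuming a hypothetical
pool named ``mypool`` and purely illustrative values (128 placement groups and
a replica size of 3):

.. prompt:: bash

   ceph osd pool create mypool 128 128   # mypool and 128/128 are illustrative
   ceph osd pool set mypool size 3       # per-pool replica size override

Values set this way apply only to the named pool and take precedence over the
cluster-wide defaults described above.
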
See Also
--------

See :ref:`pg-autoscaler`.


.. literalinclude:: pool-pg.conf
   :language: ini

.. confval:: mon_max_pool_pg_num
.. confval:: mon_pg_stuck_threshold
.. confval:: mon_pg_warn_min_per_osd
.. confval:: mon_pg_warn_min_objects
.. confval:: mon_pg_warn_min_pool_objects
.. confval:: mon_pg_check_down_all_threshold
.. confval:: mon_pg_warn_max_object_skew
.. confval:: mon_delta_reset_interval
.. confval:: osd_crush_chooseleaf_type
.. confval:: osd_crush_initial_weight
.. confval:: osd_pool_default_crush_rule
.. confval:: osd_pool_erasure_code_stripe_unit
.. confval:: osd_pool_default_size
.. confval:: osd_pool_default_min_size
.. confval:: osd_pool_default_pg_num
.. confval:: osd_pool_default_pgp_num
.. confval:: osd_pool_default_pg_autoscale_mode
.. confval:: osd_pool_default_flags
.. confval:: osd_max_pgls
.. confval:: osd_min_pg_log_entries
.. confval:: osd_max_pg_log_entries
.. confval:: osd_default_data_pool_replay_window
.. confval:: osd_max_pg_per_osd_hard_ratio

.. _pool: ../../operations/pools
.. _Monitoring OSDs and PGs: ../../operations/monitoring-osd-pg#peering
.. _Weighting Bucket Items: ../../operations/crush-map#weightingbucketitems