======================================
 Pool, PG and CRUSH Config Reference
======================================

.. index:: pools; configuration

When you create pools and set the number of placement groups for the pool, Ceph
uses default values when you don't specifically override the defaults. **We
recommend** overriding some of the defaults. Specifically, we recommend setting
a pool's replica size and overriding the default number of placement groups. You
can specifically set these values when running `pool`_ commands. You can also
override the defaults by adding new ones in the ``[global]`` section of your
Ceph configuration file.


.. literalinclude:: pool-pg.conf
   :language: ini



``mon max pool pg num``

:Description: The maximum number of placement groups per pool.
:Type: Integer
:Default: ``65536``


``mon pg create interval``

:Description: Number of seconds to wait between creating placement groups
              in the same Ceph OSD Daemon.

:Type: Float
:Default: ``30.0``


``mon pg stuck threshold``

:Description: Number of seconds after which placement groups can be
              considered stuck.

:Type: 32-bit Integer
:Default: ``300``

``mon pg min inactive``

:Description: Issue a ``HEALTH_ERR`` in cluster log if the number of PGs that
              stay inactive longer than ``mon_pg_stuck_threshold`` exceeds this
              setting. A non-positive number disables this check, so the cluster
              never goes into ``HEALTH_ERR`` for this reason.
:Type: Integer
:Default: ``1``


``mon pg warn min per osd``

:Description: Issue a ``HEALTH_WARN`` in cluster log if the average number
              of PGs per (in) OSD is under this number. (a non-positive number
              disables this)
:Type: Integer
:Default: ``30``


``mon pg warn max per osd``

:Description: Issue a ``HEALTH_WARN`` in cluster log if the average number
              of PGs per (in) OSD is above this number. (a non-positive number
              disables this)
:Type: Integer
:Default: ``300``

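For example, a cluster intentionally designed to run with more PGs per OSD than
the stock warning band allows might raise both thresholds. This is only an
illustrative sketch; the values shown are arbitrary, not recommendations:

.. code-block:: ini

    [global]
    # hypothetical example: widen the acceptable PGs-per-OSD band
    mon pg warn min per osd = 20
    mon pg warn max per osd = 400
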

``mon pg warn min objects``

:Description: Do not warn if the total number of objects in the cluster is below
              this number
:Type: Integer
:Default: ``1000``


``mon pg warn min pool objects``

:Description: Do not warn on pools whose object number is below this number
:Type: Integer
:Default: ``1000``


``mon pg check down all threshold``

:Description: Threshold of the percentage of ``down`` OSDs above which we check
              all PGs for stale ones.
:Type: Float
:Default: ``0.5``

``mon pg warn max object skew``

:Description: Issue a ``HEALTH_WARN`` in cluster log if the average object number
              per placement group of a certain pool is greater than
              ``mon pg warn max object skew`` times the cluster-wide average.
              (a non-positive number disables this)
:Type: Float
:Default: ``10``

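As a rough worked illustration (assuming the comparison is made on objects per
placement group): if the cluster as a whole averages 100 objects per PG, the
default skew of ``10`` means a pool averaging more than 1,000 objects per PG
will raise the warning.
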

``mon delta reset interval``

:Description: Seconds of inactivity before we reset the PG delta to 0. We keep
              track of the delta of the used space of each pool so that, for
              example, it is easier to understand the progress of recovery or
              the performance of a cache tier. If no activity is reported for a
              pool for this interval, its history of deltas is reset.
:Type: Integer
:Default: ``10``


``mon osd max op age``

:Description: Maximum op age before we get concerned (make it a power of 2).
              A ``HEALTH_WARN`` will be issued if a request has been blocked longer
              than this limit.
:Type: Float
:Default: ``32.0``


``osd pg bits``

:Description: Placement group bits per Ceph OSD Daemon.
:Type: 32-bit Integer
:Default: ``6``


``osd pgp bits``

:Description: The number of bits per Ceph OSD Daemon for PGPs.
:Type: 32-bit Integer
:Default: ``6``


``osd crush chooseleaf type``

:Description: The bucket type to use for ``chooseleaf`` in a CRUSH rule. Uses
              ordinal rank rather than name.

:Type: 32-bit Integer
:Default: ``1``. Typically a host containing one or more Ceph OSD Daemons.


``osd crush initial weight``

:Description: The initial CRUSH weight for OSDs newly added to the CRUSH map.

:Type: Double
:Default: ``the size of a newly added OSD in TB``. By default, the initial CRUSH
          weight of a newly added OSD is set to its volume size in TB.
          See `Weighting Bucket Items`_ for details.


``osd pool default crush replicated ruleset``

:Description: The default CRUSH ruleset to use when creating a replicated pool.
:Type: 8-bit Integer
:Default: ``CEPH_DEFAULT_CRUSH_REPLICATED_RULESET``, which means "pick
          a ruleset with the lowest numerical ID and use that". This is to
          make pool creation work in the absence of ruleset 0.


``osd pool erasure code stripe unit``

:Description: Sets the default size, in bytes, of a chunk of an object
              stripe for erasure coded pools. Every object of size S
              will be stored as N stripes, with each data chunk
              receiving ``stripe unit`` bytes. Each stripe of ``N *
              stripe unit`` bytes will be encoded/decoded
              individually. This option can be overridden by the
              ``stripe_unit`` setting in an erasure code profile.

:Type: Unsigned 32-bit Integer
:Default: ``4096``

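As a rough worked example (the erasure code profile here is an assumption, not a
default): with a profile using ``k=4`` data chunks and the default stripe unit of
4096 bytes, each stripe carries ``4 * 4096 = 16384`` bytes of object data, so a
64 KiB object is written as four stripes, each of which is encoded and decoded
independently.
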

``osd pool default size``

:Description: Sets the number of replicas for objects in the pool. The default
              value is the same as
              ``ceph osd pool set {pool-name} size {size}``.

:Type: 32-bit Integer
:Default: ``3``


``osd pool default min size``

:Description: Sets the minimum number of written replicas for objects in the
              pool in order to acknowledge a write operation to the client.
              If the minimum is not met, Ceph will not acknowledge the write to
              the client. This setting ensures a minimum number of replicas when
              operating in ``degraded`` mode.

:Type: 32-bit Integer
:Default: ``0``, which means no particular minimum. If ``0``,
          minimum is ``size - (size / 2)``.

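For example, with the default ``size`` of ``3`` and ``min size`` left at ``0``,
the effective minimum is ``3 - (3 / 2) = 2`` using integer division, so a write
is acknowledged once two replicas are written. The snippet below is an
illustrative sketch of pinning both values explicitly, not a recommendation:

.. code-block:: ini

    [global]
    # default replica count for new replicated pools
    osd pool default size = 3
    # acknowledge writes once at least two replicas are written, even when degraded
    osd pool default min size = 2
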

``osd pool default pg num``

:Description: The default number of placement groups for a pool. The default
              value is the same as ``pg_num`` with ``mkpool``.

:Type: 32-bit Integer
:Default: ``8``


``osd pool default pgp num``

:Description: The default number of placement groups for placement for a pool.
              The default value is the same as ``pgp_num`` with ``mkpool``.
              PG and PGP should be equal (for now).

:Type: 32-bit Integer
:Default: ``8``

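Because ``pg_num`` and ``pgp_num`` should stay equal, it is common to override
both defaults together. The value below is only an illustrative sketch; an
appropriate PG count depends on the size of the cluster:

.. code-block:: ini

    [global]
    # hypothetical example: give new pools 128 placement groups unless specified otherwise
    osd pool default pg num = 128
    osd pool default pgp num = 128
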

``osd pool default flags``

:Description: The default flags for new pools.
:Type: 32-bit Integer
:Default: ``0``


``osd max pgls``

:Description: The maximum number of placement groups to list. A client
              requesting a large number can tie up the Ceph OSD Daemon.

:Type: Unsigned 64-bit Integer
:Default: ``1024``
:Note: Default should be fine.


``osd min pg log entries``

:Description: The minimum number of placement group logs to maintain
              when trimming log files.

:Type: 32-bit Unsigned Integer
:Default: ``1000``


``osd default data pool replay window``

:Description: The time (in seconds) for an OSD to wait for a client to replay
              a request.

:Type: 32-bit Integer
:Default: ``45``

``osd max pg per osd hard ratio``

:Description: The ratio of the number of PGs per OSD allowed by the cluster
              before an OSD refuses to create new PGs. An OSD stops creating
              new PGs if the number of PGs it serves exceeds
              ``osd max pg per osd hard ratio`` \* ``mon max pg per osd``.

:Type: Float
:Default: ``2``
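
As a worked example (the ``mon max pg per osd`` value below is an assumption for
illustration, not a statement of its default): if ``mon max pg per osd`` is set
to ``200``, a hard ratio of ``2`` means an OSD refuses to create new PGs once it
already serves ``2 * 200 = 400`` placement groups.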


.. _pool: ../../operations/pools
.. _Monitoring OSDs and PGs: ../../operations/monitoring-osd-pg#peering
.. _Weighting Bucket Items: ../../operations/crush-map#weightingbucketitems