======================================
 Pool, PG and CRUSH Config Reference
======================================

.. index:: pools; configuration

When you create pools and set the number of placement groups for a pool,
Ceph uses default values unless you specifically override them. **We
recommend** overriding some of the defaults. Specifically, we recommend
setting a pool's replica size and overriding the default number of placement
groups. You can set these values when running `pool`_ commands. You can also
override the defaults by adding new ones to the ``[global]`` section of your
Ceph configuration file.

.. literalinclude:: pool-pg.conf
   :language: ini


``mon max pool pg num``

:Description: The maximum number of placement groups per pool.
:Type: Integer
:Default: ``65536``

``mon pg create interval``

:Description: Number of seconds between PG creation in the same
              Ceph OSD Daemon.

:Type: Float
:Default: ``30.0``

``mon pg stuck threshold``

:Description: Number of seconds after which PGs can be considered as
              being stuck.

:Type: 32-bit Integer
:Default: ``300``

``mon pg min inactive``

:Description: Issue a ``HEALTH_ERR`` in the cluster log if the number of PGs
              that have stayed inactive longer than ``mon_pg_stuck_threshold``
              exceeds this setting. A non-positive number disables the check,
              so that ``HEALTH_ERR`` is never raised.
:Type: Integer
:Default: ``1``

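A hedged sketch of how this setting combines with ``mon pg stuck threshold``
(both values below are the shipped defaults documented above):

.. code-block:: ini

    [global]
    # With these defaults, HEALTH_ERR is raised once the number of PGs
    # that have been inactive for more than 300 seconds exceeds 1.
    mon pg stuck threshold = 300
    mon pg min inactive = 1
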
``mon pg warn min per osd``

:Description: Issue a ``HEALTH_WARN`` in the cluster log if the average number
              of PGs per ``in`` OSD is under this number. A non-positive
              number disables the check.
:Type: Integer
:Default: ``30``

``mon pg warn min objects``

:Description: Do not warn if the total number of objects in the cluster is
              below this number.
:Type: Integer
:Default: ``1000``

``mon pg warn min pool objects``

:Description: Do not warn on pools whose object count is below this number.
:Type: Integer
:Default: ``1000``

``mon pg check down all threshold``

:Description: Percentage of ``down`` OSDs above which we check all PGs
              for stale ones.
:Type: Float
:Default: ``0.5``

``mon pg warn max object skew``

:Description: Issue a ``HEALTH_WARN`` in the cluster log if the average number
              of objects per PG in a certain pool is greater than
              ``mon pg warn max object skew`` times the average number of
              objects per PG across the whole cluster. A zero or non-positive
              number disables the check. Note that this option is read by the
              ``ceph-mgr`` daemons.
:Type: Float
:Default: ``10``

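A hedged reading of the skew check; the per-PG object counts below are
invented for illustration, not values from the source:

.. code-block:: ini

    [global]
    # With the default skew of 10, a pool averaging 1100 objects per PG
    # would be flagged once the cluster-wide average is 100 objects per
    # PG, because 1100 > 10 * 100.
    mon pg warn max object skew = 10
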
``mon delta reset interval``

:Description: Seconds of inactivity before we reset the PG delta to 0. We
              keep track of the delta of the used space of each pool so that,
              for example, it is easier to understand the progress of recovery
              or the performance of a cache tier. If no activity is reported
              for a certain pool, we simply reset the history of deltas for
              that pool.
:Type: Integer
:Default: ``10``

``mon osd max op age``

:Description: Maximum op age before we get concerned (make it a power of 2).
              A ``HEALTH_WARN`` will be issued if a request has been blocked
              longer than this limit.
:Type: Float
:Default: ``32.0``

``osd pg bits``

:Description: Placement group bits per Ceph OSD Daemon.
:Type: 32-bit Integer
:Default: ``6``


``osd pgp bits``

:Description: The number of bits per Ceph OSD Daemon for PGPs.
:Type: 32-bit Integer
:Default: ``6``

``osd crush chooseleaf type``

:Description: The bucket type to use for ``chooseleaf`` in a CRUSH rule. Uses
              ordinal rank rather than name.

:Type: 32-bit Integer
:Default: ``1``. Typically a host containing one or more Ceph OSD Daemons.

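A minimal sketch, assuming the default CRUSH type hierarchy in which rank
``0`` is ``osd`` and rank ``1`` is ``host``:

.. code-block:: ini

    [global]
    # Place replicas on distinct OSDs rather than distinct hosts --
    # a common choice for single-node test clusters.
    osd crush chooseleaf type = 0
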
``osd crush initial weight``

:Description: The initial CRUSH weight for newly added OSDs in the CRUSH map.

:Type: Double
:Default: ``the size of the newly added OSD in TB``. By default, the initial
          CRUSH weight for a newly added OSD is set to its device size in TB.
          See `Weighting Bucket Items`_ for details.

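A hedged example of a common use of this setting; the value ``0`` is an
operational choice for illustration, not the shipped default:

.. code-block:: ini

    [osd]
    # Add new OSDs with zero CRUSH weight so that no data migrates to
    # them until an operator reweights them deliberately.
    osd crush initial weight = 0
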
``osd pool default crush rule``

:Description: The default CRUSH rule to use when creating a replicated pool.
:Type: 8-bit Integer
:Default: ``-1``, which means "pick the rule with the lowest numerical ID and
          use that". This is to make pool creation work in the absence of
          rule 0.

``osd pool erasure code stripe unit``

:Description: Sets the default size, in bytes, of a chunk of an object
              stripe for erasure coded pools. Every object of size S
              will be stored as N stripes, with each data chunk
              receiving ``stripe unit`` bytes. Each stripe of ``N *
              stripe unit`` bytes will be encoded/decoded
              individually. This option is overridden by the
              ``stripe_unit`` setting in an erasure code profile.

:Type: Unsigned 32-bit Integer
:Default: ``4096``

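A worked illustration of the stripe arithmetic, assuming an erasure code
profile with two data chunks (the profile values and object size are
assumptions for the example):

.. code-block:: ini

    [global]
    # With 2 data chunks and a stripe unit of 4096 bytes, each stripe
    # carries 2 * 4096 = 8192 bytes of data, so a 16384-byte object is
    # stored as two stripes, each encoded/decoded independently.
    osd pool erasure code stripe unit = 4096
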
``osd pool default size``

:Description: Sets the number of replicas for objects in the pool. The default
              value is the same as
              ``ceph osd pool set {pool-name} size {size}``.

:Type: 32-bit Integer
:Default: ``3``

``osd pool default min size``

:Description: Sets the minimum number of written replicas for objects in the
              pool in order to acknowledge a write operation to the client. If
              the minimum is not met, Ceph will not acknowledge the write to
              the client, **which may result in data loss**. This setting
              ensures a minimum number of replicas when operating in
              ``degraded`` mode.

:Type: 32-bit Integer
:Default: ``0``, which means no particular minimum. If ``0``, the
          minimum is ``size - (size / 2)``.

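A minimal sketch of the fallback arithmetic, with values chosen for
illustration:

.. code-block:: ini

    [global]
    # With size = 3 and min size left at 0, the effective minimum is
    # size - (size / 2) = 3 - 1 = 2 (integer division), so a write is
    # acknowledged once two replicas are written.
    osd pool default size = 3
    osd pool default min size = 0
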
``osd pool default pg num``

:Description: The default number of placement groups for a pool. The default
              value is the same as ``pg_num`` with ``mkpool``.

:Type: 32-bit Integer
:Default: ``32``


``osd pool default pgp num``

:Description: The default number of placement groups for placement for a pool.
              The default value is the same as ``pgp_num`` with ``mkpool``.
              PG and PGP should be equal (for now).

:Type: 32-bit Integer
:Default: ``8``

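A hedged example of overriding both defaults together, since PG and PGP
should be kept equal; ``128`` is an illustrative value, not a recommendation:

.. code-block:: ini

    [global]
    # Keep pg_num and pgp_num in step when overriding the defaults.
    osd pool default pg num = 128
    osd pool default pgp num = 128
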
``osd pool default flags``

:Description: The default flags for new pools.
:Type: 32-bit Integer
:Default: ``0``


``osd max pgls``

:Description: The maximum number of placement groups to list. A client
              requesting a large number can tie up the Ceph OSD Daemon.

:Type: Unsigned 64-bit Integer
:Default: ``1024``
:Note: Default should be fine.

``osd min pg log entries``

:Description: The minimum number of placement group logs to maintain
              when trimming log files.

:Type: 32-bit Unsigned Integer
:Default: ``250``


``osd max pg log entries``

:Description: The maximum number of placement group logs to maintain
              when trimming log files.

:Type: 32-bit Unsigned Integer
:Default: ``10000``

``osd default data pool replay window``

:Description: The time (in seconds) for an OSD to wait for a client to replay
              a request.

:Type: 32-bit Integer
:Default: ``45``

``osd max pg per osd hard ratio``

:Description: The ratio of the number of PGs per OSD allowed by the cluster
              before the OSD refuses to create new PGs. An OSD stops creating
              new PGs if the number of PGs it serves exceeds
              ``osd max pg per osd hard ratio`` \* ``mon max pg per osd``.

:Type: Float
:Default: ``2``

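A worked illustration of the hard limit; the ``mon max pg per osd`` value is
an assumption for the example, not necessarily your cluster's setting:

.. code-block:: ini

    [global]
    # If mon max pg per osd were 250, a hard ratio of 2 would make an
    # OSD refuse to create new PGs once it serves 2 * 250 = 500 PGs.
    osd max pg per osd hard ratio = 2
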
``osd recovery priority``

:Description: Priority of recovery in the work queue.

:Type: Integer
:Default: ``5``


``osd recovery op priority``

:Description: Default priority used for recovery operations if the pool
              doesn't override it.

:Type: Integer
:Default: ``3``

.. _pool: ../../operations/pools
.. _Monitoring OSDs and PGs: ../../operations/monitoring-osd-pg#peering
.. _Weighting Bucket Items: ../../operations/crush-map#weightingbucketitems