======================================
 Pool, PG and CRUSH Config Reference
======================================

.. index:: pools; configuration

When you create pools and set the number of placement groups for a pool, Ceph
uses default values for anything you do not specifically override. **We
recommend** overriding some of the defaults. Specifically, we recommend setting
a pool's replica size and overriding the default number of placement groups.
You can set these values when running `pool`_ commands. You can also override
the defaults by adding new values to the ``[global]`` section of your Ceph
configuration file.


.. literalinclude:: pool-pg.conf
   :language: ini

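For example, a minimal sketch of such overrides in ``ceph.conf`` (the values
shown are illustrative, not universal recommendations):

.. code-block:: ini

   [global]
   # Keep three replicas of each object (osd pool default size) and
   # require two written replicas before acknowledging a degraded write.
   osd pool default size = 3
   osd pool default min size = 2
   # Create new pools with 128 placement groups by default; keep
   # pg num and pgp num equal.
   osd pool default pg num = 128
   osd pool default pgp num = 128
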
``mon max pool pg num``

:Description: The maximum number of placement groups per pool.
:Type: Integer
:Default: ``65536``


``mon pg create interval``

:Description: Number of seconds between PG creation in the same
              Ceph OSD Daemon.

:Type: Float
:Default: ``30.0``


``mon pg stuck threshold``

:Description: Number of seconds after which PGs can be considered as
              being stuck.

:Type: 32-bit Integer
:Default: ``300``

``mon pg min inactive``

:Description: Issue a ``HEALTH_ERR`` in the cluster log if the number of PGs
              that stay inactive longer than ``mon_pg_stuck_threshold``
              exceeds this setting. A non-positive number disables the check,
              so the cluster never goes to ``HEALTH_ERR`` on this account.
:Type: Integer
:Default: ``1``


``mon pg warn min per osd``

:Description: Issue a ``HEALTH_WARN`` in the cluster log if the average number
              of PGs per ``in`` OSD is under this number. A non-positive
              number disables this check.
:Type: Integer
:Default: ``30``


``mon pg warn max per osd``

:Description: Issue a ``HEALTH_WARN`` in the cluster log if the average number
              of PGs per ``in`` OSD is above this number. A non-positive
              number disables this check.
:Type: Integer
:Default: ``300``
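
For instance, a minimal sketch that silences both per-OSD PG-count warnings,
relying on the non-positive-disables behaviour described above (whether these
warnings should be off at all is a local judgment call):

.. code-block:: ini

   [global]
   # A non-positive value disables each warning check.
   mon pg warn min per osd = 0
   mon pg warn max per osd = 0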


``mon pg warn min objects``

:Description: Do not warn if the total number of objects in the cluster is
              below this number.
:Type: Integer
:Default: ``1000``


``mon pg warn min pool objects``

:Description: Do not warn on pools whose number of objects is below this
              number.
:Type: Integer
:Default: ``1000``


``mon pg check down all threshold``

:Description: Percentage threshold of ``down`` OSDs after which we check all
              PGs for stale ones.
:Type: Float
:Default: ``0.5``


``mon pg warn max object skew``

:Description: Issue a ``HEALTH_WARN`` in the cluster log if the average number
              of objects in a certain pool is greater than
              ``mon pg warn max object skew`` times the average number of
              objects across the whole cluster. A non-positive number
              disables this check.
:Type: Float
:Default: ``10``


``mon delta reset interval``

:Description: Seconds of inactivity before we reset the PG delta to 0. We keep
              track of the delta of the used space of each pool so that, for
              example, it is easier to understand the progress of recovery or
              the performance of a cache tier. If no activity is reported for
              a pool within this interval, its history of deltas is reset.
:Type: Integer
:Default: ``10``


``mon osd max op age``

:Description: Maximum op age before we get concerned (should be a power of 2).
              A ``HEALTH_WARN`` will be issued if a request has been blocked
              longer than this limit.
:Type: Float
:Default: ``32.0``


``osd pg bits``

:Description: Placement group bits per Ceph OSD Daemon.
:Type: 32-bit Integer
:Default: ``6``


``osd pgp bits``

:Description: The number of bits per Ceph OSD Daemon for PGPs.
:Type: 32-bit Integer
:Default: ``6``


``osd crush chooseleaf type``

:Description: The bucket type to use for ``chooseleaf`` in a CRUSH rule. Uses
              ordinal rank rather than name.

:Type: 32-bit Integer
:Default: ``1``. Typically a host containing one or more Ceph OSD Daemons.
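
For example, on a single-node test cluster you might relax this so that CRUSH
can place replicas on different OSDs of the same host. A minimal sketch, not a
production recommendation:

.. code-block:: ini

   [global]
   # Type 0 is the osd bucket type in the default CRUSH hierarchy,
   # so replicas may land on separate OSDs rather than separate hosts
   # (the default, type 1).
   osd crush chooseleaf type = 0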


``osd crush initial weight``

:Description: The initial CRUSH weight for newly added OSDs in the CRUSH map.

:Type: Double
:Default: ``the size of a newly added OSD, in TB``. By default, the initial
          CRUSH weight of a newly added OSD is set to its volume size in TB.
          See `Weighting Bucket Items`_ for details.


``osd pool default crush rule``

:Description: The default CRUSH rule to use when creating a replicated pool.
:Type: 8-bit Integer
:Default: ``-1``, which means "pick the rule with the lowest numerical ID and
          use that". This is to make pool creation work in the absence of
          rule 0.


``osd pool erasure code stripe unit``

:Description: Sets the default size, in bytes, of a chunk of an object
              stripe for erasure coded pools. Every object of size S
              will be stored as N stripes, with each data chunk
              receiving ``stripe unit`` bytes. Each stripe of ``N *
              stripe unit`` bytes will be encoded/decoded
              individually. This option is overridden by the
              ``stripe_unit`` setting in an erasure code profile.

:Type: Unsigned 32-bit Integer
:Default: ``4096``
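
As a sketch of how this interacts with a profile, assume a pool whose erasure
code profile has two data chunks (k=2 is an illustrative value, not part of
this option's defaults):

.. code-block:: ini

   [global]
   # With two data chunks per stripe, each stripe is
   # 2 * 4096 = 8192 bytes and is encoded/decoded individually.
   # A profile's own stripe_unit setting overrides this default.
   osd pool erasure code stripe unit = 4096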


``osd pool default size``

:Description: Sets the default number of replicas for objects in the pool. The
              value can be overridden per pool with
              ``ceph osd pool set {pool-name} size {size}``.

:Type: 32-bit Integer
:Default: ``3``


``osd pool default min size``

:Description: Sets the minimum number of written replicas for objects in the
              pool required to acknowledge a write operation to the client.
              If the minimum is not met, Ceph will not acknowledge the write
              to the client. This setting ensures a minimum number of replicas
              when operating in ``degraded`` mode.

:Type: 32-bit Integer
:Default: ``0``, which means no particular minimum. If ``0``, the effective
          minimum is ``size - (size / 2)`` (integer division); for example,
          with ``size = 3`` the minimum is ``3 - 1 = 2``.


``osd pool default pg num``

:Description: The default number of placement groups for a pool. The default
              value is the same as ``pg_num`` with ``mkpool``.

:Type: 32-bit Integer
:Default: ``8``


``osd pool default pgp num``

:Description: The default number of placement groups for placement for a pool.
              The default value is the same as ``pgp_num`` with ``mkpool``.
              PG and PGP should be equal (for now).

:Type: 32-bit Integer
:Default: ``8``


``osd pool default flags``

:Description: The default flags for new pools.
:Type: 32-bit Integer
:Default: ``0``


``osd max pgls``

:Description: The maximum number of placement groups to list. A client
              requesting a large number can tie up the Ceph OSD Daemon.

:Type: Unsigned 64-bit Integer
:Default: ``1024``
:Note: Default should be fine.


``osd min pg log entries``

:Description: The minimum number of placement group log entries to maintain
              when trimming the PG log.

:Type: 32-bit Unsigned Integer
:Default: ``1000``


``osd default data pool replay window``

:Description: The time (in seconds) for an OSD to wait for a client to replay
              a request.

:Type: 32-bit Integer
:Default: ``45``

``osd max pg per osd hard ratio``

:Description: The ratio of the number of PGs per OSD allowed by the cluster
              before an OSD refuses to create new PGs. An OSD stops creating
              new PGs when the number of PGs it serves exceeds
              ``osd max pg per osd hard ratio`` \* ``mon max pg per osd``.

:Type: Float
:Default: ``2``
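
As a concrete sketch, assuming ``mon max pg per osd`` is at its Luminous
default of ``200`` (check your own cluster's value):

.. code-block:: ini

   [global]
   # With mon max pg per osd = 200 and the hard ratio at 2, an OSD
   # refuses to create new PGs once it serves more than
   # 2 * 200 = 400 PGs.
   osd max pg per osd hard ratio = 2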

.. _pool: ../../operations/pools
.. _Monitoring OSDs and PGs: ../../operations/monitoring-osd-pg#peering
.. _Weighting Bucket Items: ../../operations/crush-map#weightingbucketitems