.\"
.\" CDDL HEADER START
.\"
.\" The contents of this file are subject to the terms of the
.\" Common Development and Distribution License (the "License").
.\" You may not use this file except in compliance with the License.
.\"
.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
.\" or http://www.opensolaris.org/os/licensing.
.\" See the License for the specific language governing permissions
.\" and limitations under the License.
.\"
.\" When distributing Covered Code, include this CDDL HEADER in each
.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
.\" If applicable, add the following below this CDDL HEADER, with the
.\" fields enclosed by brackets "[]" replaced with your own identifying
.\" information: Portions Copyright [yyyy] [name of copyright owner]
.\"
.\" CDDL HEADER END
.\"
.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved.
.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
.\" Copyright (c) 2017 Datto Inc.
.\" Copyright (c) 2018 George Melikov. All Rights Reserved.
.\" Copyright 2017 Nexenta Systems, Inc.
.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved.
.\" Copyright (c) 2021, Colm Buckley <colm@tuatha.org>
.\"
.Dd May 27, 2021
.Dt ZPOOLPROPS 7
.Os
.
.Sh NAME
.Nm zpoolprops
.Nd properties of ZFS storage pools
.
.Sh DESCRIPTION
Each pool has several properties associated with it.
Some properties are read-only statistics while others are configurable and
change the behavior of the pool.
.Pp
The following are read-only properties:
.Bl -tag -width "unsupported@guid"
.It Sy allocated
Amount of storage used within the pool.
See
.Sy fragmentation
and
.Sy free
for more information.
.It Sy capacity
Percentage of pool space used.
This property can also be referred to by its shortened column name,
.Sy cap .
.It Sy expandsize
Amount of uninitialized space within the pool or device that can be used to
increase the total capacity of the pool.
On whole-disk vdevs, this is the space beyond the end of the GPT –
typically occurring when a LUN is dynamically expanded
or a disk replaced with a larger one.
On partition vdevs, this is the space appended to the partition after it was
added to the pool – most likely by resizing it in-place.
The space can be claimed for the pool by bringing it online with
.Sy autoexpand=on
or using
.Nm zpool Cm online Fl e .
.It Sy fragmentation
The amount of fragmentation in the pool.
As the amount of space
.Sy allocated
increases, it becomes more difficult to locate
.Sy free
space.
This may result in lower write performance compared to pools with more
unfragmented free space.
.It Sy free
The amount of free space available in the pool.
By contrast, the
.Xr zfs 8
.Sy available
property describes how much new data can be written to ZFS filesystems/volumes.
The zpool
.Sy free
property is not generally useful for this purpose, and can be substantially more than the zfs
.Sy available
space.
This discrepancy is due to several factors, including raidz parity;
zfs reservation, quota, refreservation, and refquota properties; and space set aside by
.Sy spa_slop_shift
(see
.Xr zfs 4
for more information).
.It Sy freeing
After a file system or snapshot is destroyed, the space it was using is
returned to the pool asynchronously.
.Sy freeing
is the amount of space remaining to be reclaimed.
Over time
.Sy freeing
will decrease while
.Sy free
increases.
.It Sy health
The current health of the pool.
Health can be one of
.Sy ONLINE , DEGRADED , FAULTED , OFFLINE , REMOVED , UNAVAIL .
.It Sy guid
A unique identifier for the pool.
.It Sy load_guid
A unique identifier for the pool.
Unlike the
.Sy guid
property, this identifier is generated every time we load the pool (i.e. does
not persist across imports/exports) and never changes while the pool is loaded
(even if a
.Sy reguid
operation takes place).
.It Sy size
Total size of the storage pool.
.It Sy unsupported@ Ns Em guid
Information about unsupported features that are enabled on the pool.
See
.Xr zpool-features 7
for details.
.El
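.Pp
These read-only properties can be inspected with
.Nm zpool Cm get
or
.Nm zpool Cm list .
For example, assuming a hypothetical pool named
.Ar tank :
.Bd -literal -compact
# "tank" is a placeholder pool name
zpool get allocated,capacity,free,fragmentation,health tank
zpool list tank
.Ed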
.Pp
The space usage properties report actual physical space available to the
storage pool.
The physical space can be different from the total amount of space that any
contained datasets can actually use.
The amount of space used in a raidz configuration depends on the characteristics
of the data being written.
In addition, ZFS reserves some space for internal accounting that the
.Xr zfs 8
command takes into account, but the
.Nm zpool
command does not.
For non-full pools of a reasonable size, these effects should be invisible.
For small pools, or pools that are close to being completely full, these
discrepancies may become more noticeable.
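.Pp
The difference can be observed by comparing pool-level and dataset-level
output; for example, for a hypothetical pool named
.Ar tank :
.Bd -literal -compact
# pool space, before parity, reservations, and slop are accounted for
zpool list -o name,size,allocated,free tank
# space actually usable by datasets ("tank" is a placeholder name)
zfs list -o name,used,available tank
.Ed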
.Pp
The following property can be set at creation time and import time:
.Bl -tag -width Ds
.It Sy altroot
Alternate root directory.
If set, this directory is prepended to any mount points within the pool.
This can be used when examining an unknown pool where the mount points cannot be
trusted, or in an alternate boot environment, where the typical paths are not
valid.
.Sy altroot
is not a persistent property.
It is valid only while the system is up.
Setting
.Sy altroot
defaults to using
.Sy cachefile Ns = Ns Sy none ,
though this may be overridden using an explicit setting.
.El
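.Pp
For example, a pool can be imported for inspection under a temporary root
directory:
.Bd -literal -compact
# "tank" and /mnt/recovery are placeholders
zpool import -R /mnt/recovery tank
.Ed
.Pp
The
.Fl R
option of
.Nm zpool Cm import
sets
.Sy altroot
to the given directory and
.Sy cachefile
to
.Sy none .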
.Pp
The following property can be set only at import time:
.Bl -tag -width Ds
.It Sy readonly Ns = Ns Sy on Ns | Ns Sy off
If set to
.Sy on ,
the pool will be imported in read-only mode.
This property can also be referred to by its shortened column name,
.Sy rdonly .
.El
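.Pp
For example, to import a pool without allowing any writes:
.Bd -literal -compact
# "tank" is a placeholder pool name
zpool import -o readonly=on tank
.Ed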
.Pp
The following properties can be set at creation time and import time, and later
changed with the
.Nm zpool Cm set
command:
.Bl -tag -width Ds
.It Sy ashift Ns = Ns Sy ashift
Pool sector size exponent; the pool's sector size is
.Sy 2
raised to this power (internally referred to as
.Sy ashift ) .
Values from 9 to 16, inclusive, are valid; also, the
value 0 (the default) means to auto-detect using the kernel's block
layer and a ZFS internal exception list.
I/O operations will be aligned to the specified size boundaries.
Additionally, the minimum (disk)
write size will be set to the specified size, so this represents a
space vs. performance trade-off.
For optimal performance, the pool sector size should be greater than
or equal to the sector size of the underlying disks.
The typical case for setting this property is when
performance is important and the underlying disks use 4KiB sectors but
report 512B sectors to the OS (for compatibility reasons); in that
case, set
.Sy ashift Ns = Ns Sy 12
(which is
.Sy 1<<12 No = Sy 4096 ) .
When set, this property is
used as the default hint value in subsequent vdev operations (add,
attach, and replace).
Changing this value will not modify any existing
vdev, not even on disk replacement; however, it can be used, for
instance, to replace a dying 512B-sector disk with a newer 4KiB-sector
device: this will probably result in bad performance but at the
same time could prevent loss of data.
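An example of setting this property at pool creation time appears at the end
of this section.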
.It Sy autoexpand Ns = Ns Sy on Ns | Ns Sy off
Controls automatic pool expansion when the underlying LUN is grown.
If set to
.Sy on ,
the pool will be resized according to the size of the expanded device.
If the device is part of a mirror or raidz then all devices within that
mirror/raidz group must be expanded before the new space is made available to
the pool.
The default behavior is
.Sy off .
This property can also be referred to by its shortened column name,
.Sy expand .
.It Sy autoreplace Ns = Ns Sy on Ns | Ns Sy off
Controls automatic device replacement.
If set to
.Sy off ,
device replacement must be initiated by the administrator by using the
.Nm zpool Cm replace
command.
If set to
.Sy on ,
any new device, found in the same physical location as a device that previously
belonged to the pool, is automatically formatted and replaced.
The default behavior is
.Sy off .
This property can also be referred to by its shortened column name,
.Sy replace .
Autoreplace can also be used with virtual disks (like device
mapper) provided that you use the /dev/disk/by-vdev paths set up by
vdev_id.conf.
See the
.Xr vdev_id 8
manual page for more details.
Autoreplace and autoonline require the ZFS Event Daemon be configured and
running.
See the
.Xr zed 8
manual page for more details.
.It Sy autotrim Ns = Ns Sy on Ns | Ns Sy off
When set to
.Sy on ,
space which has been recently freed, and is no longer allocated by the pool,
will be periodically trimmed.
This allows block device vdevs which support
BLKDISCARD, such as SSDs, or file vdevs on which the underlying file system
supports hole-punching, to reclaim unused blocks.
The default value for this property is
.Sy off .
.Pp
Automatic TRIM does not immediately reclaim blocks after a free.
Instead, it will optimistically delay allowing smaller ranges to be aggregated
into a few larger ones.
These can then be issued more efficiently to the storage.
TRIM on L2ARC devices is enabled by setting
.Sy l2arc_trim_ahead > 0 .
.Pp
Be aware that automatic trimming of recently freed data blocks can put
significant stress on the underlying storage devices.
This will vary depending on how well the specific device handles these commands.
For lower-end devices it is often possible to achieve most of the benefits
of automatic trimming by running an on-demand (manual) TRIM periodically
using the
.Nm zpool Cm trim
command.
.It Sy bootfs Ns = Ns Sy (unset) Ns | Ns Ar pool Ns Op / Ns Ar dataset
Identifies the default bootable dataset for the root pool.
This property is expected to be set mainly by the installation and upgrade programs.
Not all Linux distribution boot processes use the bootfs property.
.It Sy cachefile Ns = Ns Ar path Ns | Ns Sy none
Controls the location where the pool configuration is cached.
Discovering all pools on system startup requires a cached copy of the
configuration data that is stored on the root file system.
All pools in this cache are automatically imported when the system boots.
Some environments, such as install and clustering, need to cache this
information in a different location so that pools are not automatically
imported.
Setting this property caches the pool configuration in a different location that
can later be imported with
.Nm zpool Cm import Fl c .
Setting it to the value
.Sy none
creates a temporary pool that is never cached, and the
.Qq
.Pq empty string
uses the default location.
.Pp
Multiple pools can share the same cache file.
Because the kernel destroys and recreates this file when pools are added and
removed, care should be taken when attempting to access this file.
When the last pool using a
.Sy cachefile
is exported or destroyed, the file will be empty.
.It Sy comment Ns = Ns Ar text
A text string consisting of printable ASCII characters that will be stored
such that it is available even if the pool becomes faulted.
An administrator can provide additional information about a pool using this
property.
.It Sy compatibility Ns = Ns Sy off Ns | Ns Sy legacy Ns | Ns Ar file Ns Oo , Ns Ar file Oc Ns …
Specifies that the pool maintain compatibility with specific feature sets.
When set to
.Sy off
(or unset) compatibility is disabled (all features may be enabled); when set to
.Sy legacy ,
no features may be enabled.
When set to a comma-separated list of filenames
(each filename may either be an absolute path, or relative to
.Pa /etc/zfs/compatibility.d
or
.Pa /usr/share/zfs/compatibility.d )
the lists of requested features are read from those files, separated by
whitespace and/or commas.
Only features present in all files may be enabled.
.Pp
See
.Xr zpool-features 7 ,
.Xr zpool-create 8
and
.Xr zpool-upgrade 8
for more information on the operation of compatibility feature sets.
.It Sy dedupditto Ns = Ns Ar number
This property is deprecated and no longer has any effect.
.It Sy delegation Ns = Ns Sy on Ns | Ns Sy off
Controls whether a non-privileged user is granted access based on the dataset
permissions defined on the dataset.
See
.Xr zfs 8
for more information on ZFS delegated administration.
.It Sy failmode Ns = Ns Sy wait Ns | Ns Sy continue Ns | Ns Sy panic
Controls the system behavior in the event of catastrophic pool failure.
This condition is typically a result of a loss of connectivity to the underlying
storage device(s) or a failure of all devices within the pool.
The behavior of such an event is determined as follows:
.Bl -tag -width "continue"
.It Sy wait
Blocks all I/O access until the device connectivity is recovered and the errors
are cleared.
This is the default behavior.
.It Sy continue
Returns
.Er EIO
to any new write I/O requests but allows reads to any of the remaining healthy
devices.
Any write requests that have yet to be committed to disk would be blocked.
.It Sy panic
Prints out a message to the console and generates a system crash dump.
.El
.It Sy feature@ Ns Ar feature_name Ns = Ns Sy enabled
The value of this property is the current state of
.Ar feature_name .
The only valid value when setting this property is
.Sy enabled
which moves
.Ar feature_name
to the enabled state.
See
.Xr zpool-features 7
for details on feature states.
.It Sy listsnapshots Ns = Ns Sy on Ns | Ns Sy off
Controls whether information about snapshots associated with this pool is
output when
.Nm zfs Cm list
is run without the
.Fl t
option.
The default value is
.Sy off .
This property can also be referred to by its shortened name,
.Sy listsnaps .
.It Sy multihost Ns = Ns Sy on Ns | Ns Sy off
Controls whether a pool activity check should be performed during
.Nm zpool Cm import .
When a pool is determined to be active it cannot be imported, even with the
.Fl f
option.
This property is intended to be used in failover configurations
where multiple hosts have access to a pool on shared storage.
.Pp
Multihost provides protection on import only.
It does not protect against an
individual device being used in multiple pools, regardless of the type of vdev.
See the discussion under
.Nm zpool Cm create .
.Pp
When this property is on, periodic writes to storage occur to show the pool is
in use.
See
.Sy zfs_multihost_interval
in the
.Xr zfs 4
manual page.
In order to enable this property, each host must set a unique hostid.
See
.Xr genhostid 1 ,
.Xr zgenhostid 8 ,
and
.Xr spl 4
for additional details.
The default value is
.Sy off .
.It Sy version Ns = Ns Ar version
The current on-disk version of the pool.
This can be increased, but never decreased.
The preferred method of updating pools is with the
.Nm zpool Cm upgrade
command, though this property can be used when a specific version is needed for
backwards compatibility.
Once feature flags are enabled on a pool this property will no longer have a
value.
.El
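.Pp
These properties are typically supplied at pool creation time with
.Nm zpool Cm create Fl o
or changed later with
.Nm zpool Cm set .
For example, assuming hypothetical pool and device names:
.Bd -literal -compact
# "tank", "sda", and "sdb" are placeholders
zpool create -o ashift=12 -o autotrim=on tank mirror sda sdb
# change a property on an existing pool
zpool set autoexpand=on tank
# read back a single property
zpool get autoexpand tank
.Ed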