1 .\"
2 .\" CDDL HEADER START
3 .\"
4 .\" The contents of this file are subject to the terms of the
5 .\" Common Development and Distribution License (the "License").
6 .\" You may not use this file except in compliance with the License.
7 .\"
8 .\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
9 .\" or http://www.opensolaris.org/os/licensing.
10 .\" See the License for the specific language governing permissions
11 .\" and limitations under the License.
12 .\"
13 .\" When distributing Covered Code, include this CDDL HEADER in each
14 .\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
15 .\" If applicable, add the following below this CDDL HEADER, with the
16 .\" fields enclosed by brackets "[]" replaced with your own identifying
17 .\" information: Portions Copyright [yyyy] [name of copyright owner]
18 .\"
19 .\" CDDL HEADER END
20 .\"
21 .\"
22 .\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
23 .\" Copyright (c) 2012, 2018 by Delphix. All rights reserved.
24 .\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
25 .\" Copyright (c) 2017 Datto Inc.
26 .\" Copyright (c) 2018 George Melikov. All Rights Reserved.
27 .\" Copyright 2017 Nexenta Systems, Inc.
28 .\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved.
29 .\"
30 .Dd November 29, 2018
31 .Dt ZPOOL 8 SMM
32 .Os Linux
33 .Sh NAME
34 .Nm zpool
35 .Nd configure ZFS storage pools
36 .Sh SYNOPSIS
37 .Nm
38 .Fl ?
39 .Nm
40 .Cm add
41 .Op Fl fgLnP
42 .Oo Fl o Ar property Ns = Ns Ar value Oc
43 .Ar pool vdev Ns ...
44 .Nm
45 .Cm attach
46 .Op Fl f
47 .Oo Fl o Ar property Ns = Ns Ar value Oc
48 .Ar pool device new_device
49 .Nm
50 .Cm checkpoint
51 .Op Fl d, -discard
52 .Ar pool
53 .Nm
54 .Cm clear
55 .Ar pool
56 .Op Ar device
57 .Nm
58 .Cm create
59 .Op Fl dfn
60 .Op Fl m Ar mountpoint
61 .Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
62 .Oo Fl o Ar feature@feature Ns = Ns Ar value Oc
63 .Oo Fl O Ar file-system-property Ns = Ns Ar value Oc Ns ...
64 .Op Fl R Ar root
65 .Ar pool vdev Ns ...
66 .Nm
67 .Cm destroy
68 .Op Fl f
69 .Ar pool
70 .Nm
71 .Cm detach
72 .Ar pool device
73 .Nm
74 .Cm events
75 .Op Fl vHf Oo Ar pool Oc | Fl c
76 .Nm
77 .Cm export
78 .Op Fl a
79 .Op Fl f
80 .Ar pool Ns ...
81 .Nm
82 .Cm get
83 .Op Fl Hp
84 .Op Fl o Ar field Ns Oo , Ns Ar field Oc Ns ...
85 .Sy all Ns | Ns Ar property Ns Oo , Ns Ar property Oc Ns ...
86 .Oo Ar pool Oc Ns ...
87 .Nm
88 .Cm history
89 .Op Fl il
90 .Oo Ar pool Oc Ns ...
91 .Nm
92 .Cm import
93 .Op Fl D
94 .Op Fl d Ar dir Ns | Ns device
95 .Nm
96 .Cm import
97 .Fl a
98 .Op Fl DflmN
99 .Op Fl F Oo Fl n Oc Oo Fl T Oc Oo Fl X Oc
100 .Op Fl -rewind-to-checkpoint
101 .Op Fl c Ar cachefile Ns | Ns Fl d Ar dir Ns | Ns device
102 .Op Fl o Ar mntopts
103 .Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
104 .Op Fl R Ar root
105 .Nm
106 .Cm import
107 .Op Fl Dflm
108 .Op Fl F Oo Fl n Oc Oo Fl T Oc Oo Fl X Oc
109 .Op Fl -rewind-to-checkpoint
110 .Op Fl c Ar cachefile Ns | Ns Fl d Ar dir Ns | Ns device
111 .Op Fl o Ar mntopts
112 .Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
113 .Op Fl R Ar root
114 .Op Fl s
115 .Ar pool Ns | Ns Ar id
116 .Op Ar newpool Oo Fl t Oc
117 .Nm
118 .Cm initialize
119 .Op Fl c | Fl s
120 .Ar pool
121 .Op Ar device Ns ...
122 .Nm
123 .Cm iostat
124 .Op Oo Oo Fl c Ar SCRIPT Oc Oo Fl lq Oc Oc Ns | Ns Fl rw
125 .Op Fl T Sy u Ns | Ns Sy d
126 .Op Fl ghHLnpPvy
127 .Oo Oo Ar pool Ns ... Oc Ns | Ns Oo Ar pool vdev Ns ... Oc Ns | Ns Oo Ar vdev Ns ... Oc Oc
128 .Op Ar interval Op Ar count
129 .Nm
130 .Cm labelclear
131 .Op Fl f
132 .Ar device
133 .Nm
134 .Cm list
135 .Op Fl HgLpPv
136 .Op Fl o Ar property Ns Oo , Ns Ar property Oc Ns ...
137 .Op Fl T Sy u Ns | Ns Sy d
138 .Oo Ar pool Oc Ns ...
139 .Op Ar interval Op Ar count
140 .Nm
141 .Cm offline
142 .Op Fl f
143 .Op Fl t
144 .Ar pool Ar device Ns ...
145 .Nm
146 .Cm online
147 .Op Fl e
148 .Ar pool Ar device Ns ...
149 .Nm
150 .Cm reguid
151 .Ar pool
152 .Nm
153 .Cm reopen
154 .Op Fl n
155 .Ar pool
156 .Nm
157 .Cm remove
158 .Op Fl np
159 .Ar pool Ar device Ns ...
160 .Nm
161 .Cm remove
162 .Fl s
163 .Ar pool
164 .Nm
165 .Cm replace
166 .Op Fl f
167 .Oo Fl o Ar property Ns = Ns Ar value Oc
168 .Ar pool Ar device Op Ar new_device
169 .Nm
170 .Cm resilver
171 .Ar pool Ns ...
172 .Nm
173 .Cm scrub
174 .Op Fl s | Fl p
175 .Ar pool Ns ...
176 .Nm
177 .Cm trim
178 .Op Fl d
179 .Op Fl r Ar rate
180 .Op Fl c | Fl s
181 .Ar pool
182 .Op Ar device Ns ...
183 .Nm
184 .Cm set
185 .Ar property Ns = Ns Ar value
186 .Ar pool
187 .Nm
188 .Cm split
189 .Op Fl gLlnP
190 .Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
191 .Op Fl R Ar root
192 .Ar pool newpool
193 .Oo Ar device Oc Ns ...
194 .Nm
195 .Cm status
196 .Oo Fl c Ar SCRIPT Oc
197 .Op Fl DigLpPstvx
198 .Op Fl T Sy u Ns | Ns Sy d
199 .Oo Ar pool Oc Ns ...
200 .Op Ar interval Op Ar count
201 .Nm
202 .Cm sync
203 .Oo Ar pool Oc Ns ...
204 .Nm
205 .Cm upgrade
206 .Nm
207 .Cm upgrade
208 .Fl v
209 .Nm
210 .Cm upgrade
211 .Op Fl V Ar version
212 .Fl a Ns | Ns Ar pool Ns ...
213 .Sh DESCRIPTION
214 The
215 .Nm
216 command configures ZFS storage pools.
217 A storage pool is a collection of devices that provides physical storage and
218 data replication for ZFS datasets.
219 All datasets within a storage pool share the same space.
220 See
221 .Xr zfs 8
222 for information on managing datasets.
223 .Ss Virtual Devices (vdevs)
224 A "virtual device" describes a single device or a collection of devices
225 organized according to certain performance and fault characteristics.
226 The following virtual devices are supported:
227 .Bl -tag -width Ds
228 .It Sy disk
229 A block device, typically located under
230 .Pa /dev .
231 ZFS can use individual slices or partitions, though the recommended mode of
232 operation is to use whole disks.
233 A disk can be specified by a full path, or it can be a shorthand name
234 .Po the relative portion of the path under
235 .Pa /dev
236 .Pc .
237 A whole disk can be specified by omitting the slice or partition designation.
238 For example,
239 .Pa sda
240 is equivalent to
241 .Pa /dev/sda .
242 When given a whole disk, ZFS automatically labels the disk, if necessary.
243 .It Sy file
244 A regular file.
245 The use of files as a backing store is strongly discouraged.
246 It is designed primarily for experimental purposes, as the fault tolerance of a
247 file is only as good as the file system of which it is a part.
248 A file must be specified by a full path.
249 .It Sy mirror
250 A mirror of two or more devices.
251 Data is replicated in an identical fashion across all components of a mirror.
252 A mirror with N disks of size X can hold X bytes and can withstand (N-1) devices
253 failing before data integrity is compromised.
254 .It Sy raidz , raidz1 , raidz2 , raidz3
255 A variation on RAID-5 that allows for better distribution of parity and
256 eliminates the RAID-5
257 .Qq write hole
258 .Pq in which data and parity become inconsistent after a power loss .
259 Data and parity are striped across all disks within a raidz group.
260 .Pp
261 A raidz group can have single-, double-, or triple-parity, meaning that the
262 raidz group can sustain one, two, or three failures, respectively, without
263 losing any data.
264 The
265 .Sy raidz1
266 vdev type specifies a single-parity raidz group; the
267 .Sy raidz2
268 vdev type specifies a double-parity raidz group; and the
269 .Sy raidz3
270 vdev type specifies a triple-parity raidz group.
271 The
272 .Sy raidz
273 vdev type is an alias for
274 .Sy raidz1 .
275 .Pp
276 A raidz group with N disks of size X with P parity disks can hold approximately
277 (N-P)*X bytes and can withstand P device(s) failing before data integrity is
278 compromised.
279 The minimum number of devices in a raidz group is one more than the number of
280 parity disks.
281 The recommended number is between 3 and 9 to help increase performance.
282 .It Sy spare
283 A special pseudo-vdev which keeps track of available hot spares for a pool.
284 For more information, see the
285 .Sx Hot Spares
286 section.
287 .It Sy log
288 A separate intent log device.
289 If more than one log device is specified, then writes are load-balanced between
290 devices.
291 Log devices can be mirrored.
292 However, raidz vdev types are not supported for the intent log.
293 For more information, see the
294 .Sx Intent Log
295 section.
296 .It Sy dedup
297 A device dedicated solely for allocating dedup data.
298 The redundancy of this device should match the redundancy of the other normal
299 devices in the pool. If more than one dedup device is specified, then
300 allocations are load-balanced between devices.
301 .It Sy special
302 A device dedicated solely for allocating various kinds of internal metadata,
303 and optionally small file data.
304 The redundancy of this device should match the redundancy of the other normal
305 devices in the pool. If more than one special device is specified, then
306 allocations are load-balanced between devices.
307 .Pp
308 For more information on special allocations, see the
309 .Sx Special Allocation Class
310 section.
311 .It Sy cache
312 A device used to cache storage pool data.
313 A cache device cannot be configured as a mirror or raidz group.
314 For more information, see the
315 .Sx Cache Devices
316 section.
317 .El
318 .Pp
319 Virtual devices cannot be nested, so a mirror or raidz virtual device can only
320 contain files or disks.
321 Mirrors of mirrors
322 .Pq or other combinations
323 are not allowed.
324 .Pp
325 A pool can have any number of virtual devices at the top of the configuration
326 .Po known as
327 .Qq root vdevs
328 .Pc .
329 Data is dynamically distributed across all top-level devices to balance data
330 among devices.
331 As new virtual devices are added, ZFS automatically places data on the newly
332 available devices.
333 .Pp
334 Virtual devices are specified one at a time on the command line, separated by
335 whitespace.
336 The keywords
337 .Sy mirror
338 and
339 .Sy raidz
340 are used to distinguish where a group ends and another begins.
341 For example, the following creates two root vdevs, each a mirror of two disks:
342 .Bd -literal
343 # zpool create mypool mirror sda sdb mirror sdc sdd
344 .Ed
345 .Ss Device Failure and Recovery
346 ZFS supports a rich set of mechanisms for handling device failure and data
347 corruption.
348 All metadata and data is checksummed, and ZFS automatically repairs bad data
349 from a good copy when corruption is detected.
350 .Pp
351 In order to take advantage of these features, a pool must make use of some form
352 of redundancy, using either mirrored or raidz groups.
353 While ZFS supports running in a non-redundant configuration, where each root
354 vdev is simply a disk or file, this is strongly discouraged.
355 A single case of bit corruption can render some or all of your data unavailable.
356 .Pp
357 A pool's health status is described by one of three states: online, degraded,
358 or faulted.
359 An online pool has all devices operating normally.
360 A degraded pool is one in which one or more devices have failed, but the data is
361 still available due to a redundant configuration.
362 A faulted pool has corrupted metadata, or one or more faulted devices, and
363 insufficient replicas to continue functioning.
364 .Pp
365 The health of a top-level vdev, such as a mirror or raidz device, is
366 potentially impacted by the state of its associated vdevs, or component
367 devices.
368 A top-level vdev or component device is in one of the following states:
369 .Bl -tag -width "DEGRADED"
370 .It Sy DEGRADED
371 One or more top-level vdevs are in the degraded state because one or more
372 component devices are offline.
373 Sufficient replicas exist to continue functioning.
374 .Pp
375 One or more component devices are in the degraded or faulted state, but
376 sufficient replicas exist to continue functioning.
377 The underlying conditions are as follows:
378 .Bl -bullet
379 .It
380 The number of checksum errors exceeds acceptable levels and the device is
381 degraded as an indication that something may be wrong.
382 ZFS continues to use the device as necessary.
383 .It
384 The number of I/O errors exceeds acceptable levels.
385 The device could not be marked as faulted because there are insufficient
386 replicas to continue functioning.
387 .El
388 .It Sy FAULTED
389 One or more top-level vdevs are in the faulted state because one or more
390 component devices are offline.
391 Insufficient replicas exist to continue functioning.
392 .Pp
393 One or more component devices are in the faulted state, and insufficient
394 replicas exist to continue functioning.
395 The underlying conditions are as follows:
396 .Bl -bullet
397 .It
398 The device could be opened, but the contents did not match expected values.
399 .It
400 The number of I/O errors exceeds acceptable levels and the device is faulted to
401 prevent further use of the device.
402 .El
403 .It Sy OFFLINE
404 The device was explicitly taken offline by the
405 .Nm zpool Cm offline
406 command.
407 .It Sy ONLINE
408 The device is online and functioning.
409 .It Sy REMOVED
410 The device was physically removed while the system was running.
411 Device removal detection is hardware-dependent and may not be supported on all
412 platforms.
413 .It Sy UNAVAIL
414 The device could not be opened.
415 If a pool is imported when a device was unavailable, then the device will be
416 identified by a unique identifier instead of its path since the path was never
417 correct in the first place.
418 .El
419 .Pp
420 If a device is removed and later re-attached to the system, ZFS attempts
421 to put the device online automatically.
422 Device attach detection is hardware-dependent and might not be supported on all
423 platforms.
424 .Ss Hot Spares
425 ZFS allows devices to be associated with pools as
426 .Qq hot spares .
427 These devices are not actively used in the pool, but when an active device
428 fails, it is automatically replaced by a hot spare.
429 To create a pool with hot spares, specify a
430 .Sy spare
431 vdev with any number of devices.
432 For example,
433 .Bd -literal
434 # zpool create pool mirror sda sdb spare sdc sdd
435 .Ed
436 .Pp
437 Spares can be shared across multiple pools, and can be added with the
438 .Nm zpool Cm add
439 command and removed with the
440 .Nm zpool Cm remove
441 command.
442 Once a spare replacement is initiated, a new
443 .Sy spare
444 vdev is created within the configuration that will remain there until the
445 original device is replaced.
446 At this point, the hot spare becomes available again if another device fails.
447 .Pp
448 If a pool has a shared spare that is currently being used, the pool cannot be
449 exported, since other pools may use this shared spare, which may lead to
450 data corruption.
451 .Pp
452 Shared spares add some risk. If the pools are imported on different hosts, and
453 both pools suffer a device failure at the same time, both could attempt to use
454 the spare simultaneously. This may not be detected, resulting in data
455 corruption.
456 .Pp
457 An in-progress spare replacement can be cancelled by detaching the hot spare.
458 If the original faulted device is detached, then the hot spare assumes its
459 place in the configuration, and is removed from the spare list of all active
460 pools.
461 .Pp
462 Spares cannot replace log devices.
463 .Ss Intent Log
464 The ZFS Intent Log (ZIL) satisfies POSIX requirements for synchronous
465 transactions.
466 For instance, databases often require their transactions to be on stable storage
467 devices when returning from a system call.
468 NFS and other applications can also use
469 .Xr fsync 2
470 to ensure data stability.
471 By default, the intent log is allocated from blocks within the main pool.
472 However, it might be possible to get better performance using separate intent
473 log devices such as NVRAM or a dedicated disk.
474 For example:
475 .Bd -literal
476 # zpool create pool sda sdb log sdc
477 .Ed
478 .Pp
479 Multiple log devices can also be specified, and they can be mirrored.
480 See the
481 .Sx EXAMPLES
482 section for an example of mirroring multiple log devices.
483 .Pp
484 Log devices can be added, replaced, attached, detached and removed. In
485 addition, log devices are imported and exported as part of the pool
486 that contains them.
487 Mirrored devices can be removed by specifying the top-level mirror vdev.
488 .Ss Cache Devices
489 Devices can be added to a storage pool as
490 .Qq cache devices .
491 These devices provide an additional layer of caching between main memory and
492 disk.
493 For read-heavy workloads, where the working set size is much larger than what
494 can be cached in main memory, using cache devices allows much more of this
495 working set to be served from low-latency media.
496 Using cache devices provides the greatest performance improvement for random
497 read workloads of mostly static content.
498 .Pp
499 To create a pool with cache devices, specify a
500 .Sy cache
501 vdev with any number of devices.
502 For example:
503 .Bd -literal
504 # zpool create pool sda sdb cache sdc sdd
505 .Ed
506 .Pp
507 Cache devices cannot be mirrored or part of a raidz configuration.
508 If a read error is encountered on a cache device, that read I/O is reissued to
509 the original storage pool device, which might be part of a mirrored or raidz
510 configuration.
511 .Pp
512 The content of the cache devices is considered volatile, as is the case with
513 other system caches.
514 .Ss Pool checkpoint
515 Before starting critical procedures that include destructive actions (e.g.
516 .Nm zfs Cm destroy
517 ), an administrator can checkpoint the pool's state and, in the case of a
518 mistake or failure, rewind the entire pool back to the checkpoint.
519 Otherwise, the checkpoint can be discarded when the procedure has completed
520 successfully.
521 .Pp
522 A pool checkpoint can be thought of as a pool-wide snapshot and should be used
523 with care as it contains every part of the pool's state, from properties to vdev
524 configuration.
525 Thus, while a pool has a checkpoint, certain operations are not allowed:
526 specifically, vdev removal/attach/detach, mirror splitting, and
527 changing the pool's guid.
528 Adding a new vdev is supported but in the case of a rewind it will have to be
529 added again.
530 Finally, users of this feature should keep in mind that scrubs in a pool that
531 has a checkpoint do not repair checkpointed data.
532 .Pp
533 To create a checkpoint for a pool:
534 .Bd -literal
535 # zpool checkpoint pool
536 .Ed
537 .Pp
538 To later rewind to its checkpointed state, you need to first export it and
539 then rewind it during import:
540 .Bd -literal
541 # zpool export pool
542 # zpool import --rewind-to-checkpoint pool
543 .Ed
544 .Pp
545 To discard the checkpoint from a pool:
546 .Bd -literal
547 # zpool checkpoint -d pool
548 .Ed
549 .Pp
550 Dataset reservations (controlled by the
551 .Nm reservation
552 or
553 .Nm refreservation
554 zfs properties) may be unenforceable while a checkpoint exists, because the
555 checkpoint is allowed to consume the dataset's reservation.
556 Finally, data that is part of the checkpoint but has been freed in the
557 current state of the pool won't be scanned during a scrub.
558 .Ss Special Allocation Class
559 The allocations in the special class are dedicated to specific block types.
560 By default this includes all metadata, the indirect blocks of user data, and
561 any dedup data. The class can also be provisioned to accept a limited
562 percentage of small file data blocks.
563 .Pp
564 A pool must always have at least one general (non-special) vdev before
565 other devices can be assigned to the special class. If the special class
566 becomes full, then allocations intended for it will spill back into the
567 normal class.
568 .Pp
569 Dedup data can be excluded from the special class by setting the
570 .Sy zfs_ddt_data_is_special
571 zfs module parameter to false (0).
572 .Pp
573 Inclusion of small file blocks in the special class is opt-in. Each dataset
574 can control the size of small file blocks allowed in the special class by
575 setting the
576 .Sy special_small_blocks
577 dataset property. It defaults to zero, so you must opt in by setting it to a
578 non-zero value. See
579 .Xr zfs 8
580 for more info on setting this property.
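.Pp
For example, a pool with a mirrored special device for metadata and small
blocks might be created as follows (the device names and the 32K threshold
are illustrative):
.Bd -literal
# zpool create pool raidz sda sdb sdc special mirror sdd sde
# zfs set special_small_blocks=32K pool
.Ed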
581 .Ss Properties
582 Each pool has several properties associated with it.
583 Some properties are read-only statistics while others are configurable and
584 change the behavior of the pool.
585 .Pp
586 The following are read-only properties:
587 .Bl -tag -width Ds
588 .It Sy allocated
589 Amount of storage used within the pool.
590 See
591 .Sy fragmentation
592 and
593 .Sy free
594 for more information.
595 .It Sy capacity
596 Percentage of pool space used.
597 This property can also be referred to by its shortened column name,
598 .Sy cap .
599 .It Sy expandsize
600 Amount of uninitialized space within the pool or device that can be used to
601 increase the total capacity of the pool.
602 Uninitialized space consists of any space on an EFI labeled vdev which has not
603 been brought online
604 .Po e.g., using
605 .Nm zpool Cm online Fl e
606 .Pc .
607 This space occurs when a LUN is dynamically expanded.
608 .It Sy fragmentation
609 The amount of fragmentation in the pool. As the amount of space
610 .Sy allocated
611 increases, it becomes more difficult to locate
612 .Sy free
613 space. This may result in lower write performance compared to pools with more
614 unfragmented free space.
615 .It Sy free
616 The amount of free space available in the pool.
617 By contrast, the
618 .Xr zfs 8
619 .Sy available
620 property describes how much new data can be written to ZFS filesystems/volumes.
621 The zpool
622 .Sy free
623 property is not generally useful for this purpose, and can be substantially more than the zfs
624 .Sy available
625 space. This discrepancy is due to several factors, including raidz parity; zfs
626 reservation, quota, refreservation, and refquota properties; and space set aside by
627 .Sy spa_slop_shift
628 (see
629 .Xr zfs-module-parameters 5
630 for more information).
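.Pp
The difference can be observed by comparing the two properties directly,
for example (the pool and dataset names are illustrative):
.Bd -literal
# zpool list -o name,size,allocated,free pool
# zfs list -o name,used,available pool
.Ed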
631 .It Sy freeing
632 After a file system or snapshot is destroyed, the space it was using is
633 returned to the pool asynchronously.
634 .Sy freeing
635 is the amount of space remaining to be reclaimed.
636 Over time
637 .Sy freeing
638 will decrease while
639 .Sy free
640 increases.
641 .It Sy health
642 The current health of the pool.
643 Health can be one of
644 .Sy ONLINE , DEGRADED , FAULTED , OFFLINE , REMOVED , UNAVAIL .
645 .It Sy guid
646 A unique identifier for the pool.
647 .It Sy load_guid
648 A unique identifier for the pool.
649 Unlike the
650 .Sy guid
651 property, this identifier is generated every time the pool is loaded (i.e. it does
652 not persist across imports/exports) and never changes while the pool is loaded
653 (even if a
654 .Sy reguid
655 operation takes place).
656 .It Sy size
657 Total size of the storage pool.
658 .It Sy unsupported@ Ns Em feature_guid
659 Information about unsupported features that are enabled on the pool.
660 See
661 .Xr zpool-features 5
662 for details.
663 .El
664 .Pp
665 The space usage properties report actual physical space available to the
666 storage pool.
667 The physical space can be different from the total amount of space that any
668 contained datasets can actually use.
669 The amount of space used in a raidz configuration depends on the characteristics
670 of the data being written.
671 In addition, ZFS reserves some space for internal accounting that the
672 .Xr zfs 8
673 command takes into account, but the
674 .Nm
675 command does not.
676 For non-full pools of a reasonable size, these effects should be invisible.
677 For small pools, or pools that are close to being completely full, these
678 discrepancies may become more noticeable.
679 .Pp
680 The following property can be set at creation time and import time:
681 .Bl -tag -width Ds
682 .It Sy altroot
683 Alternate root directory.
684 If set, this directory is prepended to any mount points within the pool.
685 This can be used when examining an unknown pool where the mount points cannot be
686 trusted, or in an alternate boot environment, where the typical paths are not
687 valid.
688 .Sy altroot
689 is not a persistent property.
690 It is valid only while the system is up.
691 Setting
692 .Sy altroot
693 defaults to using
694 .Sy cachefile Ns = Ns Sy none ,
695 though this may be overridden using an explicit setting.
696 .El
697 .Pp
698 The following property can be set only at import time:
699 .Bl -tag -width Ds
700 .It Sy readonly Ns = Ns Sy on Ns | Ns Sy off
701 If set to
702 .Sy on ,
703 the pool will be imported in read-only mode.
704 This property can also be referred to by its shortened column name,
705 .Sy rdonly .
706 .El
707 .Pp
708 The following properties can be set at creation time and import time, and later
709 changed with the
710 .Nm zpool Cm set
711 command:
712 .Bl -tag -width Ds
713 .It Sy ashift Ns = Ns Sy ashift
714 Pool sector size exponent, to the power of
715 .Sy 2
716 (internally referred to as
717 .Sy ashift
718 ). Values from 9 to 16, inclusive, are valid; also, the special
719 value 0 (the default) means to auto-detect using the kernel's block
720 layer and a ZFS internal exception list. I/O operations will be aligned
721 to the specified size boundaries. Additionally, the minimum (disk)
722 write size will be set to the specified size, so this represents a
723 space vs. performance trade-off. For optimal performance, the pool
724 sector size should be greater than or equal to the sector size of the
725 underlying disks. The typical case for setting this property is when
726 performance is important and the underlying disks use 4KiB sectors but
727 report 512B sectors to the OS (for compatibility reasons); in that
728 case, set
729 .Sy ashift=12
730 (which is 1<<12 = 4096). When set, this property is
731 used as the default hint value in subsequent vdev operations (add,
732 attach and replace). Changing this value will not modify any existing
733 vdev, not even on disk replacement; however, it can be used, for
734 instance, to replace a dying 512B-sector disk with a newer 4KiB-sector
735 device: this will probably result in bad performance but at the
736 same time could prevent loss of data.
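.Pp
For example, to force 4KiB alignment when creating a pool on drives that
report 512B sectors (the pool and device names are illustrative):
.Bd -literal
# zpool create -o ashift=12 pool mirror sda sdb
.Ed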
737 .It Sy autoexpand Ns = Ns Sy on Ns | Ns Sy off
738 Controls automatic pool expansion when the underlying LUN is grown.
739 If set to
740 .Sy on ,
741 the pool will be resized according to the size of the expanded device.
742 If the device is part of a mirror or raidz then all devices within that
743 mirror/raidz group must be expanded before the new space is made available to
744 the pool.
745 The default behavior is
746 .Sy off .
747 This property can also be referred to by its shortened column name,
748 .Sy expand .
749 .It Sy autoreplace Ns = Ns Sy on Ns | Ns Sy off
750 Controls automatic device replacement.
751 If set to
752 .Sy off ,
753 device replacement must be initiated by the administrator by using the
754 .Nm zpool Cm replace
755 command.
756 If set to
757 .Sy on ,
758 any new device, found in the same physical location as a device that previously
759 belonged to the pool, is automatically formatted and replaced.
760 The default behavior is
761 .Sy off .
762 This property can also be referred to by its shortened column name,
763 .Sy replace .
764 Autoreplace can also be used with virtual disks (like device
765 mapper) provided that you use the /dev/disk/by-vdev paths set up by
766 vdev_id.conf. See the
767 .Xr vdev_id 8
768 man page for more details.
769 Autoreplace and autoonline require the ZFS Event Daemon be configured and
770 running. See the
771 .Xr zed 8
772 man page for more details.
773 .It Sy bootfs Ns = Ns Sy (unset) Ns | Ns Ar pool Ns / Ns Ar dataset
774 Identifies the default bootable dataset for the root pool. This property is
775 expected to be set mainly by the installation and upgrade programs.
776 Not all Linux distribution boot processes use the bootfs property.
777 .It Sy cachefile Ns = Ns Ar path Ns | Ns Sy none
778 Controls the location of where the pool configuration is cached.
779 Discovering all pools on system startup requires a cached copy of the
780 configuration data that is stored on the root file system.
781 All pools in this cache are automatically imported when the system boots.
782 Some environments, such as install and clustering, need to cache this
783 information in a different location so that pools are not automatically
784 imported.
785 Setting this property caches the pool configuration in a different location that
786 can later be imported with
787 .Nm zpool Cm import Fl c .
788 Setting it to the special value
789 .Sy none
790 creates a temporary pool that is never cached, and the special value
791 .Qq
792 .Pq empty string
793 uses the default location.
794 .Pp
795 Multiple pools can share the same cache file.
796 Because the kernel destroys and recreates this file when pools are added and
797 removed, care should be taken when attempting to access this file.
798 When the last pool using a
799 .Sy cachefile
800 is exported or destroyed, the file will be empty.
801 .It Sy comment Ns = Ns Ar text
802 A text string consisting of printable ASCII characters that will be stored
803 such that it is available even if the pool becomes faulted.
804 An administrator can provide additional information about a pool using this
805 property.
806 .It Sy dedupditto Ns = Ns Ar number
807 Threshold for the number of block ditto copies.
808 If the reference count for a deduplicated block increases above this number, a
809 new ditto copy of this block is automatically stored.
810 The default setting is
811 .Sy 0
812 which causes no ditto copies to be created for deduplicated blocks.
813 The minimum legal nonzero setting is
814 .Sy 100 .
815 .It Sy delegation Ns = Ns Sy on Ns | Ns Sy off
816 Controls whether a non-privileged user is granted access based on the dataset
817 permissions defined on the dataset.
818 See
819 .Xr zfs 8
820 for more information on ZFS delegated administration.
821 .It Sy failmode Ns = Ns Sy wait Ns | Ns Sy continue Ns | Ns Sy panic
822 Controls the system behavior in the event of catastrophic pool failure.
823 This condition is typically a result of a loss of connectivity to the underlying
824 storage device(s) or a failure of all devices within the pool.
825 The behavior of such an event is determined as follows:
826 .Bl -tag -width "continue"
827 .It Sy wait
828 Blocks all I/O access until the device connectivity is recovered and the errors
829 are cleared.
830 This is the default behavior.
831 .It Sy continue
832 Returns
833 .Er EIO
834 to any new write I/O requests but allows reads to any of the remaining healthy
835 devices.
836 Any write requests that have yet to be committed to disk would be blocked.
837 .It Sy panic
838 Prints out a message to the console and generates a system crash dump.
839 .El
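.Pp
For example, to make a pool return errors to new writes rather than block
indefinitely when its devices become unavailable (the pool name is
illustrative):
.Bd -literal
# zpool set failmode=continue pool
.Ed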
840 .It Sy autotrim Ns = Ns Sy on Ns | Ns Sy off
841 When set to
842 .Sy on
843 space which has been recently freed, and is no longer allocated by the pool,
844 will be periodically trimmed. This allows block device vdevs which support
845 BLKDISCARD, such as SSDs, or file vdevs on which the underlying file system
846 supports hole-punching, to reclaim unused blocks. The default setting for
847 this property is
848 .Sy off .
849 .Pp
850 Automatic TRIM does not immediately reclaim blocks after a free. Instead,
851 it will optimistically delay, allowing smaller ranges to be aggregated into
852 a few larger ones. These can then be issued more efficiently to the storage.
853 .Pp
854 Be aware that automatic trimming of recently freed data blocks can put
855 significant stress on the underlying storage devices. This will vary
856 depending on how well the specific device handles these commands. For
857 lower-end devices it is often possible to achieve most of the benefits
858 of automatic trimming by running an on-demand (manual) TRIM periodically
859 using the
860 .Nm zpool Cm trim
861 command.
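.Pp
For example, to enable automatic trimming on an existing pool, or to issue a
one-off manual TRIM instead (the pool name is illustrative):
.Bd -literal
# zpool set autotrim=on pool
# zpool trim pool
.Ed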
862 .It Sy feature@ Ns Ar feature_name Ns = Ns Sy enabled
863 The value of this property is the current state of
864 .Ar feature_name .
865 The only valid value when setting this property is
866 .Sy enabled
867 which moves
868 .Ar feature_name
869 to the enabled state.
870 See
871 .Xr zpool-features 5
872 for details on feature states.
873 .It Sy listsnapshots Ns = Ns Sy on Ns | Ns Sy off
874 Controls whether information about snapshots associated with this pool is
875 output when
876 .Nm zfs Cm list
877 is run without the
878 .Fl t
879 option.
880 The default value is
881 .Sy off .
882 This property can also be referred to by its shortened name,
883 .Sy listsnaps .
884 .It Sy multihost Ns = Ns Sy on Ns | Ns Sy off
885 Controls whether a pool activity check should be performed during
886 .Nm zpool Cm import .
887 When a pool is determined to be active it cannot be imported, even with the
888 .Fl f
889 option. This property is intended to be used in failover configurations
890 where multiple hosts have access to a pool on shared storage.
891 .Pp
892 Multihost provides protection on import only. It does not protect against an
893 individual device being used in multiple pools, regardless of the type of vdev.
894 See the discussion under
895 .Sy zpool create .
896 .Pp
897 When this property is on, periodic writes to storage occur to show the pool is
898 in use. See
899 .Sy zfs_multihost_interval
900 in the
901 .Xr zfs-module-parameters 5
902 man page. In order to enable this property, each host must set a unique hostid.
903 See
904 .Xr genhostid 1 ,
905 .Xr zgenhostid 8 ,
906 .Xr spl-module-parameters 5
907 for additional details. The default value is
908 .Sy off .
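.Pp
For example, a typical failover configuration first assigns each host a
unique hostid and then enables the activity check (the pool name is
illustrative):
.Bd -literal
# zgenhostid
# zpool set multihost=on pool
.Ed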
909 .It Sy version Ns = Ns Ar version
910 The current on-disk version of the pool.
911 This can be increased, but never decreased.
912 The preferred method of updating pools is with the
913 .Nm zpool Cm upgrade
914 command, though this property can be used when a specific version is needed for
915 backwards compatibility.
916 Once feature flags are enabled on a pool this property will no longer have a
917 value.
918 .El
919 .Ss Subcommands
920 All subcommands that modify state are logged persistently to the pool in their
921 original form.
922 .Pp
923 The
924 .Nm
925 command provides subcommands to create and destroy storage pools, add capacity
926 to storage pools, and provide information about the storage pools.
927 The following subcommands are supported:
928 .Bl -tag -width Ds
929 .It Xo
930 .Nm
931 .Fl ?
932 .Xc
933 Displays a help message.
934 .It Xo
935 .Nm
936 .Cm add
937 .Op Fl fgLnP
938 .Oo Fl o Ar property Ns = Ns Ar value Oc
939 .Ar pool vdev Ns ...
940 .Xc
941 Adds the specified virtual devices to the given pool.
942 The
943 .Ar vdev
944 specification is described in the
945 .Sx Virtual Devices
946 section.
947 The behavior of the
948 .Fl f
949 option, and the device checks performed are described in the
950 .Nm zpool Cm create
951 subcommand.
952 .Bl -tag -width Ds
953 .It Fl f
954 Forces use of
955 .Ar vdev Ns s ,
956 even if they appear in use or specify a conflicting replication level.
957 Not all devices can be overridden in this manner.
958 .It Fl g
959 Display
960 .Ar vdev
961 GUIDs instead of the normal device names. These GUIDs can be used in place of
962 device names for the zpool detach/offline/remove/replace commands.
963 .It Fl L
964 Display real paths for
965 .Ar vdev Ns s
966 resolving all symbolic links. This can be used to look up the current block
967 device name regardless of the /dev/disk/ path used to open it.
968 .It Fl n
969 Displays the configuration that would be used without actually adding the
970 .Ar vdev Ns s .
971 The actual addition of devices can still fail due to insufficient privileges or
972 device sharing.
973 .It Fl P
974 Display real paths for
975 .Ar vdev Ns s
976 instead of only the last component of the path. This can be used in
977 conjunction with the
978 .Fl L
979 flag.
980 .It Fl o Ar property Ns = Ns Ar value
981 Sets the given pool properties. See the
982 .Sx Properties
983 section for a list of valid properties that can be set. The only property
984 supported at the moment is ashift.
985 .El
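.Pp
For example, to grow a pool by adding another mirrored top-level vdev (the
pool and device names are illustrative):
.Bd -literal
# zpool add pool mirror sde sdf
.Ed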
986 .It Xo
987 .Nm
988 .Cm attach
989 .Op Fl f
990 .Oo Fl o Ar property Ns = Ns Ar value Oc
991 .Ar pool device new_device
992 .Xc
993 Attaches
994 .Ar new_device
995 to the existing
996 .Ar device .
997 The existing device cannot be part of a raidz configuration.
998 If
999 .Ar device
1000 is not currently part of a mirrored configuration,
1001 .Ar device
1002 automatically transforms into a two-way mirror of
1003 .Ar device
1004 and
1005 .Ar new_device .
1006 If
1007 .Ar device
1008 is part of a two-way mirror, attaching
1009 .Ar new_device
1010 creates a three-way mirror, and so on.
1011 In either case,
1012 .Ar new_device
1013 begins to resilver immediately.
1014 .Bl -tag -width Ds
1015 .It Fl f
1016 Forces use of
1017 .Ar new_device ,
1018 even if it appears to be in use.
1019 Not all devices can be overridden in this manner.
1020 .It Fl o Ar property Ns = Ns Ar value
1021 Sets the given pool properties. See the
1022 .Sx Properties
1023 section for a list of valid properties that can be set. The only property
1024 supported at the moment is ashift.
1025 .El
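.Pp
For example, to convert a single-disk vdev into a two-way mirror by attaching
a second disk (the pool and device names are illustrative):
.Bd -literal
# zpool attach pool sda sdb
.Ed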
1026 .It Xo
1027 .Nm
1028 .Cm checkpoint
1029 .Op Fl d, -discard
1030 .Ar pool
1031 .Xc
1032 Checkpoints the current state of
1033 .Ar pool ,
1034 which can later be restored by
1035 .Nm zpool Cm import --rewind-to-checkpoint .
1036 The existence of a checkpoint in a pool prohibits the following
1037 .Nm zpool
1038 commands:
1039 .Cm remove ,
1040 .Cm attach ,
1041 .Cm detach ,
1042 .Cm split ,
1043 and
1044 .Cm reguid .
1045 In addition, it may break reservation boundaries if the pool lacks free
1046 space.
1047 The
1048 .Nm zpool Cm status
1049 command indicates the existence of a checkpoint or the progress of discarding a
1050 checkpoint from a pool.
1051 The
1052 .Nm zpool Cm list
1053 command reports how much space the checkpoint takes from the pool.
1054 .Bl -tag -width Ds
1055 .It Fl d, -discard
1056 Discards an existing checkpoint from
1057 .Ar pool .
1058 .El
1059 .It Xo
1060 .Nm
1061 .Cm clear
1062 .Ar pool
1063 .Op Ar device
1064 .Xc
1065 Clears device errors in a pool.
1066 If no arguments are specified, all device errors within the pool are cleared.
1067 If one or more devices is specified, only those errors associated with the
1068 specified device or devices are cleared.
1069 If multihost is enabled, and the pool has been suspended, this will not
1070 resume I/O. While the pool was suspended, it may have been imported on
1071 another host, and resuming I/O could result in pool damage.
1072 .It Xo
1073 .Nm
1074 .Cm create
1075 .Op Fl dfn
1076 .Op Fl m Ar mountpoint
1077 .Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
1078 .Oo Fl o Ar feature@feature Ns = Ns Ar value Oc Ns ...
1079 .Oo Fl O Ar file-system-property Ns = Ns Ar value Oc Ns ...
1080 .Op Fl R Ar root
1081 .Op Fl t Ar tname
1082 .Ar pool vdev Ns ...
1083 .Xc
1084 Creates a new storage pool containing the virtual devices specified on the
1085 command line.
1086 The pool name must begin with a letter, and can only contain
1087 alphanumeric characters as well as underscore
1088 .Pq Qq Sy _ ,
1089 dash
1090 .Pq Qq Sy \&- ,
1091 colon
1092 .Pq Qq Sy \&: ,
1093 space
1094 .Pq Qq Sy \&\ ,
1095 and period
1096 .Pq Qq Sy \&. .
1097 The pool names
1098 .Sy mirror ,
1099 .Sy raidz ,
1100 .Sy spare
1101 and
1102 .Sy log
1103 are reserved, as are names beginning with
1104 .Sy mirror ,
1105 .Sy raidz ,
1106 .Sy spare ,
1107 and the pattern
1108 .Sy c[0-9] .
1109 The
1110 .Ar vdev
1111 specification is described in the
1112 .Sx Virtual Devices
1113 section.
1114 .Pp
1115 The command attempts to verify that each device specified is accessible and not
1116 currently in use by another subsystem. However this check is not robust enough
1117 to detect simultaneous attempts to use a new device in different pools, even if
1118 .Sy multihost
1119 is
1120 .Sy enabled .
1121 The
1122 administrator must ensure that simultaneous invocations of any combination of
1123 .Sy zpool replace ,
1124 .Sy zpool create ,
1125 .Sy zpool add ,
1126 or
1127 .Sy zpool labelclear ,
1128 do not refer to the same device. Using the same device in two pools will
1129 result in pool corruption.
1130 .Pp
1131 There are some uses, such as being currently mounted, or specified as the
1132 dedicated dump device, that prevent a device from ever being used by ZFS.
1133 Other uses, such as having a preexisting UFS file system, can be overridden with
1134 the
1135 .Fl f
1136 option.
1137 .Pp
1138 The command also checks that the replication strategy for the pool is
1139 consistent.
1140 An attempt to combine redundant and non-redundant storage in a single pool, or
1141 to mix disks and files, results in an error unless
1142 .Fl f
1143 is specified.
1144 The use of differently sized devices within a single raidz or mirror group is
1145 also flagged as an error unless
1146 .Fl f
1147 is specified.
1148 .Pp
1149 Unless the
1150 .Fl R
1151 option is specified, the default mount point is
1152 .Pa / Ns Ar pool .
1153 The mount point must not exist or must be empty, or else the root dataset
1154 cannot be mounted.
1155 This can be overridden with the
1156 .Fl m
1157 option.
1158 .Pp
1159 By default all supported features are enabled on the new pool unless the
1160 .Fl d
1161 option is specified.
1162 .Bl -tag -width Ds
1163 .It Fl d
1164 Do not enable any features on the new pool.
1165 Individual features can be enabled by setting their corresponding properties to
1166 .Sy enabled
1167 with the
1168 .Fl o
1169 option.
1170 See
1171 .Xr zpool-features 5
1172 for details about feature properties.
1173 .It Fl f
1174 Forces use of
1175 .Ar vdev Ns s ,
1176 even if they appear in use or specify a conflicting replication level.
1177 Not all devices can be overridden in this manner.
1178 .It Fl m Ar mountpoint
1179 Sets the mount point for the root dataset.
1180 The default mount point is
1181 .Pa /pool
1182 or
1183 .Pa altroot/pool
1184 if
1185 .Ar altroot
1186 is specified.
1187 The mount point must be an absolute path,
1188 .Sy legacy ,
1189 or
1190 .Sy none .
1191 For more information on dataset mount points, see
1192 .Xr zfs 8 .
1193 .It Fl n
1194 Displays the configuration that would be used without actually creating the
1195 pool.
1196 The actual pool creation can still fail due to insufficient privileges or
1197 device sharing.
1198 .It Fl o Ar property Ns = Ns Ar value
1199 Sets the given pool properties.
1200 See the
1201 .Sx Properties
1202 section for a list of valid properties that can be set.
1203 .It Fl o Ar feature@feature Ns = Ns Ar value
1204 Sets the given pool feature. See
1205 .Xr zpool-features 5
1206 for a list of valid features that can be set.
1207 The value can be either disabled or enabled.
1208 .It Fl O Ar file-system-property Ns = Ns Ar value
1209 Sets the given file system properties in the root file system of the pool.
1210 See the
1211 .Sx Properties
1212 section of
1213 .Xr zfs 8
1214 for a list of valid properties that can be set.
1215 .It Fl R Ar root
1216 Equivalent to
1217 .Fl o Sy cachefile Ns = Ns Sy none Fl o Sy altroot Ns = Ns Ar root
1218 .It Fl t Ar tname
1219 Sets the in-core pool name to
1220 .Ar tname
1221 while the on-disk name will be the name specified as the pool name
1222 .Ar pool .
1223 This will set the default cachefile property to none. This is intended
1224 to handle name space collisions when creating pools for other systems,
1225 such as virtual machines or physical machines whose pools live on network
1226 block devices.
1227 .El
1228 .It Xo
1229 .Nm
1230 .Cm destroy
1231 .Op Fl f
1232 .Ar pool
1233 .Xc
1234 Destroys the given pool, freeing up any devices for other use.
1235 This command tries to unmount any active datasets before destroying the pool.
1236 .Bl -tag -width Ds
1237 .It Fl f
1238 Forces any active datasets contained within the pool to be unmounted.
1239 .El
1240 .It Xo
1241 .Nm
1242 .Cm detach
1243 .Ar pool device
1244 .Xc
1245 Detaches
1246 .Ar device
1247 from a mirror.
1248 The operation is refused if there are no other valid replicas of the data.
1249 If the device may be re-added to the pool later on, then consider the
1250 .Nm zpool Cm offline
1251 command instead.
1252 .It Xo
1253 .Nm
1254 .Cm events
1255 .Op Fl vHf Oo Ar pool Oc | Fl c
1256 .Xc
1257 Lists all recent events generated by the ZFS kernel modules. These events
1258 are consumed by
1259 .Xr zed 8
1260 and used to automate administrative tasks such as replacing a failed device
1261 with a hot spare. For more information about the subclasses and event payloads
1262 that can be generated see the
1263 .Xr zfs-events 5
1264 man page.
1265 .Bl -tag -width Ds
1266 .It Fl c
1267 Clear all previous events.
1268 .It Fl f
1269 Follow mode.
1270 .It Fl H
1271 Scripted mode. Do not display headers, and separate fields by a
1272 single tab instead of arbitrary space.
1273 .It Fl v
1274 Print the entire payload for each event.
1275 .El
1276 .It Xo
1277 .Nm
1278 .Cm export
1279 .Op Fl a
1280 .Op Fl f
1281 .Ar pool Ns ...
1282 .Xc
1283 Exports the given pools from the system.
1284 All devices are marked as exported, but are still considered in use by other
1285 subsystems.
1286 The devices can be moved between systems
1287 .Pq even those of different endianness
1288 and imported as long as a sufficient number of devices are present.
1289 .Pp
1290 Before exporting the pool, all datasets within the pool are unmounted.
1291 A pool cannot be exported if it has a shared spare that is currently being
1292 used.
1293 .Pp
1294 For pools to be portable, you must give the
1295 .Nm
1296 command whole disks, not just partitions, so that ZFS can label the disks with
1297 portable EFI labels.
1298 Otherwise, disk drivers on platforms of different endianness will not recognize
1299 the disks.
1300 .Bl -tag -width Ds
1301 .It Fl a
1302 Exports all pools imported on the system.
1303 .It Fl f
1304 Forcefully unmount all datasets, using the
1305 .Nm zfs Cm unmount Fl f
1306 command.
1307 .Pp
1308 This command will forcefully export the pool even if it has a shared spare that
1309 is currently being used.
1310 This may lead to potential data corruption.
1311 .El
1312 .It Xo
1313 .Nm
1314 .Cm get
1315 .Op Fl Hp
1316 .Op Fl o Ar field Ns Oo , Ns Ar field Oc Ns ...
1317 .Sy all Ns | Ns Ar property Ns Oo , Ns Ar property Oc Ns ...
1318 .Oo Ar pool Oc Ns ...
1319 .Xc
1320 Retrieves the given list of properties
1321 .Po
1322 or all properties if
1323 .Sy all
1324 is used
1325 .Pc
1326 for the specified storage pool(s).
1327 These properties are displayed with the following fields:
1328 .Bd -literal
1329 name Name of storage pool
1330 property Property name
1331 value Property value
1332 source Property source, either 'default' or 'local'.
1333 .Ed
1334 .Pp
1335 See the
1336 .Sx Properties
1337 section for more information on the available pool properties.
1338 .Bl -tag -width Ds
1339 .It Fl H
1340 Scripted mode.
1341 Do not display headers, and separate fields by a single tab instead of arbitrary
1342 space.
1343 .It Fl o Ar field
1344 A comma-separated list of columns to display.
1345 .Sy name Ns \&, Ns Sy property Ns \&, Ns Sy value Ns \&, Ns Sy source
1346 is the default value.
1347 .It Fl p
1348 Display numbers in parsable (exact) values.
1349 .El
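.Pp
For example, to display all properties of a pool, or only selected ones in a
form suitable for scripts (the pool name is illustrative):
.Bd -literal
# zpool get all pool
# zpool get -Hp free,allocated pool
.Ed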
1350 .It Xo
1351 .Nm
1352 .Cm history
1353 .Op Fl il
1354 .Oo Ar pool Oc Ns ...
1355 .Xc
1356 Displays the command history of the specified pool(s) or all pools if no pool is
1357 specified.
1358 .Bl -tag -width Ds
1359 .It Fl i
1360 Displays internally logged ZFS events in addition to user initiated events.
1361 .It Fl l
1362 Displays log records in long format, which in addition to the standard format
1363 includes the user name, the hostname, and the zone in which the operation was
1364 performed.
1365 .El
1366 .It Xo
1367 .Nm
1368 .Cm import
1369 .Op Fl D
1370 .Op Fl d Ar dir Ns | Ns device
1371 .Xc
1372 Lists pools available to import.
1373 If the
1374 .Fl d
1375 option is not specified, this command searches for devices in
1376 .Pa /dev .
1377 The
1378 .Fl d
1379 option can be specified multiple times, and all directories are searched.
1380 If the device appears to be part of an exported pool, this command displays a
1381 summary of the pool with the name of the pool, a numeric identifier, the vdev
1382 layout, and the current health of each device or file.
1383 Destroyed pools, pools that were previously destroyed with the
1384 .Nm zpool Cm destroy
1385 command, are not listed unless the
1386 .Fl D
1387 option is specified.
1388 .Pp
1389 The numeric identifier is unique, and can be used instead of the pool name when
1390 multiple exported pools of the same name are available.
1391 .Bl -tag -width Ds
1392 .It Fl c Ar cachefile
1393 Reads configuration from the given
1394 .Ar cachefile
1395 that was created with the
1396 .Sy cachefile
1397 pool property.
1398 This
1399 .Ar cachefile
1400 is used instead of searching for devices.
1401 .It Fl d Ar dir Ns | Ns Ar device
1402 Uses
1403 .Ar device
1404 or searches for devices or files in
1405 .Ar dir .
1406 The
1407 .Fl d
1408 option can be specified multiple times.
1409 .It Fl D
1410 Lists destroyed pools only.
1411 .El
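.Pp
For example, to search a specific directory for importable pools (the
directory is illustrative):
.Bd -literal
# zpool import -d /dev/disk/by-id
.Ed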
1412 .It Xo
1413 .Nm
1414 .Cm import
1415 .Fl a
1416 .Op Fl DflmN
1417 .Op Fl F Oo Fl n Oc Oo Fl T Oc Oo Fl X Oc
1418 .Op Fl c Ar cachefile Ns | Ns Fl d Ar dir Ns | Ns device
1419 .Op Fl o Ar mntopts
1420 .Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
1421 .Op Fl R Ar root
1422 .Op Fl s
1423 .Xc
1424 Imports all pools found in the search directories.
1425 Identical to the previous command, except that all pools with a sufficient
1426 number of devices available are imported.
1427 Destroyed pools, pools that were previously destroyed with the
1428 .Nm zpool Cm destroy
1429 command, will not be imported unless the
1430 .Fl D
1431 option is specified.
1432 .Bl -tag -width Ds
1433 .It Fl a
1434 Searches for and imports all pools found.
1435 .It Fl c Ar cachefile
1436 Reads configuration from the given
1437 .Ar cachefile
1438 that was created with the
1439 .Sy cachefile
1440 pool property.
1441 This
1442 .Ar cachefile
1443 is used instead of searching for devices.
1444 .It Fl d Ar dir Ns | Ns Ar device
1445 Uses
1446 .Ar device
1447 or searches for devices or files in
1448 .Ar dir .
1449 The
1450 .Fl d
1451 option can be specified multiple times.
1452 This option is incompatible with the
1453 .Fl c
1454 option.
1455 .It Fl D
1456 Imports destroyed pools only.
1457 The
1458 .Fl f
1459 option is also required.
1460 .It Fl f
1461 Forces import, even if the pool appears to be potentially active.
1462 .It Fl F
1463 Recovery mode for a non-importable pool.
1464 Attempt to return the pool to an importable state by discarding the last few
1465 transactions.
1466 Not all damaged pools can be recovered by using this option.
1467 If successful, the data from the discarded transactions is irretrievably lost.
1468 This option is ignored if the pool is importable or already imported.
1469 .It Fl l
1470 Indicates that this command will request encryption keys for all encrypted
1471 datasets it attempts to mount as it is bringing the pool online. Note that if
1472 any datasets have a
1473 .Sy keylocation
1474 of
1475 .Sy prompt
1476 this command will block waiting for the keys to be entered. Without this flag
1477 encrypted datasets will be left unavailable until the keys are loaded.
1478 .It Fl m
1479 Allows a pool to import when there is a missing log device.
1480 Recent transactions can be lost because the log device will be discarded.
1481 .It Fl n
1482 Used with the
1483 .Fl F
1484 recovery option.
1485 Determines whether a non-importable pool can be made importable again, but does
1486 not actually perform the pool recovery.
1487 For more details about pool recovery mode, see the
1488 .Fl F
1489 option, above.
1490 .It Fl N
1491 Import the pool without mounting any file systems.
1492 .It Fl o Ar mntopts
1493 Comma-separated list of mount options to use when mounting datasets within the
1494 pool.
1495 See
1496 .Xr zfs 8
1497 for a description of dataset properties and mount options.
1498 .It Fl o Ar property Ns = Ns Ar value
1499 Sets the specified property on the imported pool.
1500 See the
1501 .Sx Properties
1502 section for more information on the available pool properties.
1503 .It Fl R Ar root
1504 Sets the
1505 .Sy cachefile
1506 property to
1507 .Sy none
1508 and the
1509 .Sy altroot
1510 property to
1511 .Ar root .
1512 .It Fl -rewind-to-checkpoint
1513 Rewinds the pool to the checkpointed state.
1514 Once the pool is imported with this flag there is no way to undo the rewind.
1515 All changes and data that were written after the checkpoint are lost!
1516 The only exception is when the
1517 .Sy readonly
1518 mounting option is enabled.
1519 In this case, the checkpointed state of the pool is opened and an
1520 administrator can see what the pool would look like if they were
1521 to fully rewind.
1522 .It Fl s
1523 Scan using the default search path; the libblkid cache will not be
1524 consulted. A custom search path may be specified by setting the
1525 ZPOOL_IMPORT_PATH environment variable.
1526 .It Fl X
1527 Used with the
1528 .Fl F
1529 recovery option. Determines whether extreme
1530 measures to find a valid txg should take place. This allows the pool to
1531 be rolled back to a txg which is no longer guaranteed to be consistent.
1532 Pools imported at an inconsistent txg may contain uncorrectable
1533 checksum errors. For more details about pool recovery mode, see the
1534 .Fl F
1535 option, above. WARNING: This option can be extremely hazardous to the
1536 health of your pool and should only be used as a last resort.
1537 .It Fl T
1538 Specify the txg to use for rollback. Implies
1539 .Fl FX .
1540 For more details
1541 about pool recovery mode, see the
1542 .Fl X
1543 option, above. WARNING: This option can be extremely hazardous to the
1544 health of your pool and should only be used as a last resort.
1545 .El
1546 .It Xo
1547 .Nm
1548 .Cm import
1549 .Op Fl Dflm
1550 .Op Fl F Oo Fl n Oc Oo Fl T Oc Oo Fl X Oc
1551 .Op Fl c Ar cachefile Ns | Ns Fl d Ar dir Ns | Ns device
1552 .Op Fl o Ar mntopts
1553 .Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
1554 .Op Fl R Ar root
1555 .Op Fl s
1556 .Ar pool Ns | Ns Ar id
1557 .Op Ar newpool Oo Fl t Oc
1558 .Xc
1559 Imports a specific pool.
1560 A pool can be identified by its name or the numeric identifier.
1561 If
1562 .Ar newpool
1563 is specified, the pool is imported using the name
1564 .Ar newpool .
1565 Otherwise, it is imported with the same name as its exported name.
1566 .Pp
1567 If a device is removed from a system without running
1568 .Nm zpool Cm export
1569 first, the device appears as potentially active.
1570 It cannot be determined if this was a failed export, or whether the device is
1571 really in use from another host.
1572 To import a pool in this state, the
1573 .Fl f
1574 option is required.
1575 .Bl -tag -width Ds
1576 .It Fl c Ar cachefile
1577 Reads configuration from the given
1578 .Ar cachefile
1579 that was created with the
1580 .Sy cachefile
1581 pool property.
1582 This
1583 .Ar cachefile
1584 is used instead of searching for devices.
1585 .It Fl d Ar dir Ns | Ns Ar device
1586 Uses
1587 .Ar device
1588 or searches for devices or files in
1589 .Ar dir .
1590 The
1591 .Fl d
1592 option can be specified multiple times.
1593 This option is incompatible with the
1594 .Fl c
1595 option.
1596 .It Fl D
1597 Imports a destroyed pool.
1598 The
1599 .Fl f
1600 option is also required.
1601 .It Fl f
1602 Forces import, even if the pool appears to be potentially active.
1603 .It Fl F
1604 Recovery mode for a non-importable pool.
1605 Attempt to return the pool to an importable state by discarding the last few
1606 transactions.
1607 Not all damaged pools can be recovered by using this option.
1608 If successful, the data from the discarded transactions is irretrievably lost.
1609 This option is ignored if the pool is importable or already imported.
1610 .It Fl l
1611 Indicates that this command will request encryption keys for all encrypted
1612 datasets it attempts to mount as it is bringing the pool online. Note that if
1613 any datasets have a
1614 .Sy keylocation
1615 of
1616 .Sy prompt
1617 this command will block waiting for the keys to be entered. Without this flag
1618 encrypted datasets will be left unavailable until the keys are loaded.
1619 .It Fl m
1620 Allows a pool to import when there is a missing log device.
1621 Recent transactions can be lost because the log device will be discarded.
1622 .It Fl n
1623 Used with the
1624 .Fl F
1625 recovery option.
1626 Determines whether a non-importable pool can be made importable again, but does
1627 not actually perform the pool recovery.
1628 For more details about pool recovery mode, see the
1629 .Fl F
1630 option, above.
1631 .It Fl o Ar mntopts
1632 Comma-separated list of mount options to use when mounting datasets within the
1633 pool.
1634 See
1635 .Xr zfs 8
1636 for a description of dataset properties and mount options.
1637 .It Fl o Ar property Ns = Ns Ar value
1638 Sets the specified property on the imported pool.
1639 See the
1640 .Sx Properties
1641 section for more information on the available pool properties.
1642 .It Fl R Ar root
1643 Sets the
1644 .Sy cachefile
1645 property to
1646 .Sy none
1647 and the
1648 .Sy altroot
1649 property to
1650 .Ar root .
1651 .It Fl s
1652 Scan using the default search path; the libblkid cache will not be
1653 consulted. A custom search path may be specified by setting the
1654 ZPOOL_IMPORT_PATH environment variable.
1655 .It Fl X
1656 Used with the
1657 .Fl F
1658 recovery option. Determines whether extreme
1659 measures to find a valid txg should take place. This allows the pool to
1660 be rolled back to a txg which is no longer guaranteed to be consistent.
1661 Pools imported at an inconsistent txg may contain uncorrectable
1662 checksum errors. For more details about pool recovery mode, see the
1663 .Fl F
1664 option, above. WARNING: This option can be extremely hazardous to the
1665 health of your pool and should only be used as a last resort.
1666 .It Fl T
1667 Specify the txg to use for rollback. Implies
1668 .Fl FX .
1669 For more details
1670 about pool recovery mode, see the
1671 .Fl X
1672 option, above. WARNING: This option can be extremely hazardous to the
1673 health of your pool and should only be used as a last resort.
1674 .It Fl t
1675 Used with
1676 .Sy newpool .
1677 Specifies that
1678 .Sy newpool
1679 is temporary. Temporary pool names last until export. Ensures that
1680 the original pool name will be used in all label updates and therefore
1681 is retained upon export.
1682 This option also sets -o cachefile=none unless the cachefile property is explicitly specified.
1683 .El
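.Pp
For example, a pool named
.Em tank
that was last used on another host (a hypothetical name) could be
force-imported under the temporary name
.Em tanktmp
with:
.Bd -literal
# zpool import -f -t tank tanktmp
.Ed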
1684 .It Xo
1685 .Nm
1686 .Cm initialize
1687 .Op Fl c | Fl s
1688 .Ar pool
1689 .Op Ar device Ns ...
1690 .Xc
1691 Begins initializing by writing to all unallocated regions on the specified
1692 devices, or all eligible devices in the pool if no individual devices are
1693 specified.
1694 Only leaf data or log devices may be initialized.
1695 .Bl -tag -width Ds
1696 .It Fl c, -cancel
1697 Cancel initializing on the specified devices, or all eligible devices if none
1698 are specified.
1699 If one or more target devices are invalid or are not currently being
1700 initialized, the command will fail and no cancellation will occur on any device.
1701 .It Fl s, -suspend
1702 Suspend initializing on the specified devices, or all eligible devices if none
1703 are specified.
1704 If one or more target devices are invalid or are not currently being
1705 initialized, the command will fail and no suspension will occur on any device.
1706 Initializing can then be resumed by running
1707 .Nm zpool Cm initialize
1708 with no flags on the relevant target devices.
1709 .El
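.Pp
For example, initialization of every eligible device in a hypothetical pool
.Em tank
could be started, and later suspended on the single disk
.Em sda ,
with:
.Bd -literal
# zpool initialize tank
# zpool initialize -s tank sda
.Ed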
1710 .It Xo
1711 .Nm
1712 .Cm iostat
1713 .Op Oo Oo Fl c Ar SCRIPT Oc Oo Fl lq Oc Oc Ns | Ns Fl rw
1714 .Op Fl T Sy u Ns | Ns Sy d
1715 .Op Fl ghHLnpPvy
1716 .Oo Oo Ar pool Ns ... Oc Ns | Ns Oo Ar pool vdev Ns ... Oc Ns | Ns Oo Ar vdev Ns ... Oc Oc
1717 .Op Ar interval Op Ar count
1718 .Xc
1719 Displays logical I/O statistics for the given pools/vdevs. Physical I/Os may
1720 be observed via
1721 .Xr iostat 1 .
1722 If writes are located nearby, they may be merged into a single
1723 larger operation. Additional I/O may be generated depending on the level of
1724 vdev redundancy.
1725 To filter output, you may pass in a list of pools, a pool and list of vdevs
1726 in that pool, or a list of any vdevs from any pool. If no items are specified,
1727 statistics for every pool in the system are shown.
1728 When given an
1729 .Ar interval ,
1730 the statistics are printed every
1731 .Ar interval
1732 seconds until ^C is pressed. If the
1733 .Fl n
1734 flag is specified, the headers are displayed only once; otherwise they are
1735 displayed periodically. If count is specified, the command exits
1736 after count reports are printed. The first report printed is always
1737 the statistics since boot regardless of whether
1738 .Ar interval
1739 and
1740 .Ar count
1741 are passed. However, this behavior can be suppressed with the
1742 .Fl y
1743 flag. Also note that the units of
1744 .Sy K ,
1745 .Sy M ,
1746 .Sy G ...
1747 that are printed in the report are in base 1024. To get the raw
1748 values, use the
1749 .Fl p
1750 flag.
1751 .Bl -tag -width Ds
1752 .It Fl c Op Ar SCRIPT1 Ns Oo , Ns Ar SCRIPT2 Oc Ns ...
1753 Run a script (or scripts) on each vdev and include the output as a new column
1754 in the
1755 .Nm zpool Cm iostat
1756 output. Users can run any script found in their
1757 .Pa ~/.zpool.d
1758 directory or from the system
1759 .Pa /etc/zfs/zpool.d
1760 directory. Script names containing the slash (/) character are not allowed.
1761 The default search path can be overridden by setting the
1762 ZPOOL_SCRIPTS_PATH environment variable. A privileged user can run
1763 .Fl c
1764 if they have the ZPOOL_SCRIPTS_AS_ROOT
1765 environment variable set. If a script requires the use of a privileged
1766 command, like
1767 .Xr smartctl 8 ,
1768 then it is recommended that you allow the user access to it in
1769 .Pa /etc/sudoers
1770 or add the user to the
1771 .Pa /etc/sudoers.d/zfs
1772 file.
1773 .Pp
1774 If
1775 .Fl c
1776 is passed without a script name, it prints a list of all scripts.
1777 .Fl c
1778 also sets verbose mode
1779 .No \&( Ns Fl v Ns No \&).
1780 .Pp
1781 Script output should be in the form of "name=value". The column name is
1782 set to "name" and the value is set to "value". Multiple lines can be
1783 used to output multiple columns. The first line of output not in the
1784 "name=value" format is displayed without a column title, and no more
1785 output after that is displayed. This can be useful for printing error
1786 messages. Blank or NULL values are printed as a '-' to make output
1787 awk-able.
1788 .Pp
1789 The following environment variables are set before running each script:
1790 .Bl -tag -width "VDEV_PATH"
1791 .It Sy VDEV_PATH
1792 Full path to the vdev
1793 .El
1794 .Bl -tag -width "VDEV_UPATH"
1795 .It Sy VDEV_UPATH
1796 Underlying path to the vdev (/dev/sd*). For use with device mapper,
1797 multipath, or partitioned vdevs.
1798 .El
1799 .Bl -tag -width "VDEV_ENC_SYSFS_PATH"
1800 .It Sy VDEV_ENC_SYSFS_PATH
1801 The sysfs path to the enclosure for the vdev (if any).
1802 .El
1803 .It Fl T Sy u Ns | Ns Sy d
1804 Display a time stamp.
1805 Specify
1806 .Sy u
1807 for a printed representation of the internal representation of time.
1808 See
1809 .Xr time 2 .
1810 Specify
1811 .Sy d
1812 for standard date format.
1813 See
1814 .Xr date 1 .
1815 .It Fl g
1816 Display vdev GUIDs instead of the normal device names. These GUIDs
1817 can be used in place of device names for the zpool
1818 detach/offline/remove/replace commands.
1819 .It Fl H
1820 Scripted mode. Do not display headers, and separate fields by a
1821 single tab instead of arbitrary space.
1822 .It Fl L
1823 Display real paths for vdevs resolving all symbolic links. This can
1824 be used to look up the current block device name regardless of the
1825 .Pa /dev/disk/
1826 path used to open it.
1827 .It Fl n
1828 Print headers only once when passed, instead of repeating them periodically.
1829 .It Fl p
1830 Display numbers in parsable (exact) values. Time values are in
1831 nanoseconds.
1832 .It Fl P
1833 Display full paths for vdevs instead of only the last component of
1834 the path. This can be used in conjunction with the
1835 .Fl L
1836 flag.
1837 .It Fl r
1838 Print request size histograms for the leaf vdev's IO. This includes
1839 histograms of individual IOs (ind) and aggregate IOs (agg). These stats
1840 can be useful for observing how well IO aggregation is working. Note
1841 that TRIM IOs may exceed 16M, but will be counted as 16M.
1842 .It Fl v
1843 Verbose statistics. Reports usage statistics for individual vdevs within the
1844 pool, in addition to the pool-wide statistics.
1845 .It Fl y
1846 Omit statistics since boot.
1847 Normally the first line of output reports the statistics since boot.
1848 This option suppresses that first line of output.
1850 .It Fl w
1851 Display latency histograms:
1852 .Pp
1853 .Ar total_wait :
1854 Total IO time (queuing + disk IO time).
1855 .Ar disk_wait :
1856 Disk IO time (time reading/writing the disk).
1857 .Ar syncq_wait :
1858 Amount of time IO spent in synchronous priority queues. Does not include
1859 disk time.
1860 .Ar asyncq_wait :
1861 Amount of time IO spent in asynchronous priority queues. Does not include
1862 disk time.
1863 .Ar scrub :
1864 Amount of time IO spent in scrub queue. Does not include disk time.
1865 .It Fl l
1866 Include average latency statistics:
1867 .Pp
1868 .Ar total_wait :
1869 Average total IO time (queuing + disk IO time).
1870 .Ar disk_wait :
1871 Average disk IO time (time reading/writing the disk).
1872 .Ar syncq_wait :
1873 Average amount of time IO spent in synchronous priority queues. Does
1874 not include disk time.
1875 .Ar asyncq_wait :
1876 Average amount of time IO spent in asynchronous priority queues.
1877 Does not include disk time.
1878 .Ar scrub :
1879 Average queuing time in scrub queue. Does not include disk time.
1880 .Ar trim :
1881 Average queuing time in trim queue. Does not include disk time.
1882 .It Fl q
1883 Include active queue statistics. Each priority queue has both
1884 pending (
1885 .Ar pend )
1886 and active (
1887 .Ar activ )
1888 IOs. Pending IOs are waiting to
1889 be issued to the disk, and active IOs have been issued to disk and are
1890 waiting for completion. These stats are broken out by priority queue:
1891 .Pp
1892 .Ar syncq_read/write :
1893 Current number of entries in synchronous priority
1894 queues.
1895 .Ar asyncq_read/write :
1896 Current number of entries in asynchronous priority queues.
1897 .Ar scrubq_read :
1898 Current number of entries in scrub queue.
1899 .Ar trimq_write :
1900 Current number of entries in trim queue.
1901 .Pp
1902 All queue statistics are instantaneous measurements of the number of
1903 entries in the queues. If you specify an interval, the measurements
1904 will be sampled from the end of the interval.
1905 .El
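.Pp
For example, average latency and queue statistics for a hypothetical pool
.Em tank
could be sampled every 5 seconds, with per-vdev detail, using:
.Bd -literal
# zpool iostat -vlq tank 5
.Ed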
1906 .It Xo
1907 .Nm
1908 .Cm labelclear
1909 .Op Fl f
1910 .Ar device
1911 .Xc
1912 Removes ZFS label information from the specified
1913 .Ar device .
1914 The
1915 .Ar device
1916 must not be part of an active pool configuration.
1917 .Bl -tag -width Ds
1918 .It Fl f
1919 Treat exported or foreign devices as inactive.
1920 .El
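.Pp
For example, stale label information could be cleared from a hypothetical
disk that is no longer part of any active pool with:
.Bd -literal
# zpool labelclear -f /dev/sdc
.Ed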
1921 .It Xo
1922 .Nm
1923 .Cm list
1924 .Op Fl HgLpPv
1925 .Op Fl o Ar property Ns Oo , Ns Ar property Oc Ns ...
1926 .Op Fl T Sy u Ns | Ns Sy d
1927 .Oo Ar pool Oc Ns ...
1928 .Op Ar interval Op Ar count
1929 .Xc
1930 Lists the given pools along with a health status and space usage.
1931 If no
1932 .Ar pool Ns s
1933 are specified, all pools in the system are listed.
1934 When given an
1935 .Ar interval ,
1936 the information is printed every
1937 .Ar interval
1938 seconds until ^C is pressed.
1939 If
1940 .Ar count
1941 is specified, the command exits after
1942 .Ar count
1943 reports are printed.
1944 .Bl -tag -width Ds
1945 .It Fl g
1946 Display vdev GUIDs instead of the normal device names. These GUIDs
1947 can be used in place of device names for the zpool
1948 detach/offline/remove/replace commands.
1949 .It Fl H
1950 Scripted mode.
1951 Do not display headers, and separate fields by a single tab instead of arbitrary
1952 space.
1953 .It Fl o Ar property
1954 Comma-separated list of properties to display.
1955 See the
1956 .Sx Properties
1957 section for a list of valid properties.
1958 The default list is
1959 .Cm name , size , allocated , free , checkpoint , expandsize , fragmentation ,
1960 .Cm capacity , dedupratio , health , altroot .
1961 .It Fl L
1962 Display real paths for vdevs resolving all symbolic links. This can
1963 be used to look up the current block device name regardless of the
1964 /dev/disk/ path used to open it.
1965 .It Fl p
1966 Display numbers in parsable
1967 .Pq exact
1968 values.
1969 .It Fl P
1970 Display full paths for vdevs instead of only the last component of
1971 the path. This can be used in conjunction with the
1972 .Fl L
1973 flag.
1974 .It Fl T Sy u Ns | Ns Sy d
1975 Display a time stamp.
1976 Specify
1977 .Sy u
1978 for a printed representation of the internal representation of time.
1979 See
1980 .Xr time 2 .
1981 Specify
1982 .Sy d
1983 for standard date format.
1984 See
1985 .Xr date 1 .
1986 .It Fl v
1987 Verbose statistics.
1988 Reports usage statistics for individual vdevs within the pool, in addition to
1989 the pool-wide statistics.
1990 .El
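.Pp
For example, a scripted, parsable listing of only the name, size, and
capacity of each pool could be produced with:
.Bd -literal
# zpool list -Hp -o name,size,capacity
.Ed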
1991 .It Xo
1992 .Nm
1993 .Cm offline
1994 .Op Fl f
1995 .Op Fl t
1996 .Ar pool Ar device Ns ...
1997 .Xc
1998 Takes the specified physical device offline.
1999 While the
2000 .Ar device
2001 is offline, no attempt is made to read or write to the device.
2002 This command is not applicable to spares.
2003 .Bl -tag -width Ds
2004 .It Fl f
2005 Force fault. Instead of offlining the disk, put it into a faulted
2006 state. The fault will persist across imports unless the
2007 .Fl t
2008 flag was specified.
2009 .It Fl t
2010 Temporary.
2011 Upon reboot, the specified physical device reverts to its previous state.
2012 .El
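.Pp
For example, a hypothetical disk
.Em sda
in pool
.Em tank
could be taken offline only until the next reboot with:
.Bd -literal
# zpool offline -t tank sda
.Ed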
2013 .It Xo
2014 .Nm
2015 .Cm online
2016 .Op Fl e
2017 .Ar pool Ar device Ns ...
2018 .Xc
2019 Brings the specified physical device online.
2020 This command is not applicable to spares.
2021 .Bl -tag -width Ds
2022 .It Fl e
2023 Expand the device to use all available space.
2024 If the device is part of a mirror or raidz then all devices must be expanded
2025 before the new space will become available to the pool.
2026 .El
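.Pp
For example, the hypothetical disks
.Em sda
and
.Em sdb
in pool
.Em tank
could be brought back online and expanded to use all available space with:
.Bd -literal
# zpool online -e tank sda sdb
.Ed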
2027 .It Xo
2028 .Nm
2029 .Cm reguid
2030 .Ar pool
2031 .Xc
2032 Generates a new unique identifier for the pool.
2033 You must ensure that all devices in this pool are online and healthy before
2034 performing this action.
2035 .It Xo
2036 .Nm
2037 .Cm reopen
2038 .Op Fl n
2039 .Ar pool
2040 .Xc
2041 Reopen all the vdevs associated with the pool.
2042 .Bl -tag -width Ds
2043 .It Fl n
2044 Do not restart an in-progress scrub operation. This is not recommended and can
2045 result in partially resilvered devices unless a second scrub is performed.
2046 .El
2047 .It Xo
2048 .Nm
2049 .Cm remove
2050 .Op Fl np
2051 .Ar pool Ar device Ns ...
2052 .Xc
2053 Removes the specified device from the pool.
2054 This command supports removing hot spare, cache, log, and both mirrored and
2055 non-redundant primary top-level vdevs, including dedup and special vdevs.
2056 When the primary pool storage includes a top-level raidz vdev, only hot spare,
2057 cache, and log devices can be removed.
2058 .Pp
2059 Removing a top-level vdev reduces the total amount of space in the storage pool.
2060 The specified device will be evacuated by copying all allocated space from it to
2061 the other devices in the pool.
2062 In this case, the
2063 .Nm zpool Cm remove
2064 command initiates the removal and returns, while the evacuation continues in
2065 the background.
2066 The removal progress can be monitored with
2067 .Nm zpool Cm status .
2068 If an IO error is encountered during the removal process, it will be
2069 cancelled. The
2070 .Sy device_removal
2071 feature flag must be enabled to remove a top-level vdev, see
2072 .Xr zpool-features 5 .
2073 .Pp
2074 A mirrored top-level device (log or data) can be removed by specifying the
2075 top-level mirror itself.
2076 Non-log devices or data devices that are part of a mirrored configuration can be removed using
2077 the
2078 .Nm zpool Cm detach
2079 command.
2080 .Bl -tag -width Ds
2081 .It Fl n
2082 Do not actually perform the removal ("no-op").
2083 Instead, print the estimated amount of memory that will be used by the
2084 mapping table after the removal completes.
2085 This is nonzero only for top-level vdevs.
2088 .It Fl p
2089 Used in conjunction with the
2090 .Fl n
2091 flag, displays numbers as parsable (exact) values.
2092 .El
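.Pp
For example, the memory cost of removing a hypothetical top-level vdev
.Em mirror-1
from pool
.Em tank
could be estimated, and the removal then started, with:
.Bd -literal
# zpool remove -np tank mirror-1
# zpool remove tank mirror-1
.Ed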
2093 .It Xo
2094 .Nm
2095 .Cm remove
2096 .Fl s
2097 .Ar pool
2098 .Xc
2099 Stops and cancels an in-progress removal of a top-level vdev.
2100 .It Xo
2101 .Nm
2102 .Cm replace
2103 .Op Fl f
2104 .Op Fl o Ar property Ns = Ns Ar value
2105 .Ar pool Ar device Op Ar new_device
2106 .Xc
2107 Replaces
2108 .Ar old_device
2109 with
2110 .Ar new_device .
2111 This is equivalent to attaching
2112 .Ar new_device ,
2113 waiting for it to resilver, and then detaching
2114 .Ar old_device .
2115 .Pp
2116 The size of
2117 .Ar new_device
2118 must be greater than or equal to the minimum size of all the devices in a mirror
2119 or raidz configuration.
2120 .Pp
2121 .Ar new_device
2122 is required if the pool is not redundant.
2123 If
2124 .Ar new_device
2125 is not specified, it defaults to
2126 .Ar old_device .
2127 This form of replacement is useful after an existing disk has failed and has
2128 been physically replaced.
2129 In this case, the new disk may have the same
2130 .Pa /dev
2131 path as the old device, even though it is actually a different disk.
2132 ZFS recognizes this.
2133 .Bl -tag -width Ds
2134 .It Fl f
2135 Forces use of
2136 .Ar new_device ,
2137 even if it appears to be in use.
2138 Not all devices can be overridden in this manner.
2139 .It Fl o Ar property Ns = Ns Ar value
2140 Sets the given pool properties. See the
2141 .Sx Properties
2142 section for a list of valid properties that can be set.
2143 The only property supported at the moment is
2144 .Sy ashift .
2145 .El
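.Pp
For example, a failed hypothetical disk
.Em sda
in pool
.Em tank
could be replaced by a new disk
.Em sdb ,
forcing a 4096-byte sector size, with:
.Bd -literal
# zpool replace -o ashift=12 tank sda sdb
.Ed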
2146 .It Xo
2147 .Nm
2148 .Cm scrub
2149 .Op Fl s | Fl p
2150 .Ar pool Ns ...
2151 .Xc
2152 Begins a scrub or resumes a paused scrub.
2153 The scrub examines all data in the specified pools to verify that it checksums
2154 correctly.
2155 For replicated
2156 .Pq mirror or raidz
2157 devices, ZFS automatically repairs any damage discovered during the scrub.
2158 The
2159 .Nm zpool Cm status
2160 command reports the progress of the scrub and summarizes the results of the
2161 scrub upon completion.
2162 .Pp
2163 Scrubbing and resilvering are very similar operations.
2164 The difference is that resilvering only examines data that ZFS knows to be out
2165 of date
2166 .Po
2167 for example, when attaching a new device to a mirror or replacing an existing
2168 device
2169 .Pc ,
2170 whereas scrubbing examines all data to discover silent errors due to hardware
2171 faults or disk failure.
2172 .Pp
2173 Because scrubbing and resilvering are I/O-intensive operations, ZFS only allows
2174 one at a time.
2175 If a scrub is paused, the
2176 .Nm zpool Cm scrub
2177 command resumes it.
2178 If a resilver is in progress, ZFS does not allow a scrub to be started until the
2179 resilver completes.
2180 .Bl -tag -width Ds
2181 .It Fl s
2182 Stop scrubbing.
2185 .It Fl p
2186 Pause scrubbing.
2187 Scrub pause state and progress are periodically synced to disk.
2188 If the system is restarted or the pool is exported during a paused scrub,
2189 the scrub will remain paused, even after import, until it is resumed.
2190 Once resumed, the scrub will pick up from the place where it was last
2191 checkpointed to disk.
2192 To resume a paused scrub, issue
2193 .Nm zpool Cm scrub
2194 again.
2195 .El
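.Pp
For example, a scrub of a hypothetical pool
.Em tank
could be started, paused, resumed, and finally stopped with:
.Bd -literal
# zpool scrub tank
# zpool scrub -p tank
# zpool scrub tank
# zpool scrub -s tank
.Ed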
2196 .It Xo
2197 .Nm
2198 .Cm resilver
2199 .Ar pool Ns ...
2200 .Xc
2201 Starts a resilver. If an existing resilver is already running, it will be
2202 restarted from the beginning. Any drives that were scheduled for a deferred
2203 resilver will be added to the new one.
2204 .It Xo
2205 .Nm
2206 .Cm trim
2207 .Op Fl d
.Op Fl r Ar rate
2208 .Op Fl c | Fl s
2209 .Ar pool
2210 .Op Ar device Ns ...
2211 .Xc
2212 Initiates an immediate on-demand TRIM operation for all of the free space in
2213 a pool. This operation informs the underlying storage devices of all blocks
2214 in the pool which are no longer allocated and allows thinly provisioned
2215 devices to reclaim the space.
2216 .Pp
2217 A manual on-demand TRIM operation can be initiated irrespective of the
2218 .Sy autotrim
2219 pool property setting. See the documentation for the
2220 .Sy autotrim
2221 property above for the types of vdev devices which can be trimmed.
2222 .Bl -tag -width Ds
2223 .It Fl d, -secure
2224 Causes a secure TRIM to be initiated. When performing a secure TRIM, the
2225 device guarantees that data stored on the trimmed blocks has been erased.
2226 This requires support from the device and is not supported by all SSDs.
2227 .It Fl r, -rate Ar rate
2228 Controls the rate at which the TRIM operation progresses. Without this
2229 option TRIM is executed as quickly as possible. The rate, expressed in bytes
2230 per second, is applied on a per-vdev basis and may be set differently for
2231 each leaf vdev.
2232 .It Fl c, -cancel
2233 Cancel trimming on the specified devices, or all eligible devices if none
2234 are specified.
2235 If one or more target devices are invalid or are not currently being
2236 trimmed, the command will fail and no cancellation will occur on any device.
2237 .It Fl s, -suspend
2238 Suspend trimming on the specified devices, or all eligible devices if none
2239 are specified.
2240 If one or more target devices are invalid or are not currently being
2241 trimmed, the command will fail and no suspension will occur on any device.
2242 Trimming can then be resumed by running
2243 .Nm zpool Cm trim
2244 with no flags on the relevant target devices.
2245 .El
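.Pp
For example, a secure TRIM of all free space in a hypothetical pool
.Em tank
could be started, and later cancelled on the single device
.Em sda ,
with:
.Bd -literal
# zpool trim -d tank
# zpool trim -c tank sda
.Ed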
2246 .It Xo
2247 .Nm
2248 .Cm set
2249 .Ar property Ns = Ns Ar value
2250 .Ar pool
2251 .Xc
2252 Sets the given property on the specified pool.
2253 See the
2254 .Sx Properties
2255 section for more information on what properties can be set and acceptable
2256 values.
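.Pp
For example, automatic TRIM could be enabled on a hypothetical pool
.Em tank
with:
.Bd -literal
# zpool set autotrim=on tank
.Ed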
2257 .It Xo
2258 .Nm
2259 .Cm split
2260 .Op Fl gLlnP
2261 .Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
2262 .Op Fl R Ar root
2263 .Ar pool newpool
2264 .Op Ar device ...
2265 .Xc
2266 Splits devices off
2267 .Ar pool
2268 creating
2269 .Ar newpool .
2270 All vdevs in
2271 .Ar pool
2272 must be mirrors and the pool must not be in the process of resilvering.
2273 At the time of the split,
2274 .Ar newpool
2275 will be a replica of
2276 .Ar pool .
2277 By default, the
2278 last device in each mirror is split from
2279 .Ar pool
2280 to create
2281 .Ar newpool .
2282 .Pp
2283 The optional device specification causes the specified device(s) to be
2284 included in the new
2285 .Ar newpool
2286 and, should any devices remain unspecified,
2287 the last device in each mirror is used, as it would be by default.
2288 .Bl -tag -width Ds
2289 .It Fl g
2290 Display vdev GUIDs instead of the normal device names. These GUIDs
2291 can be used in place of device names for the zpool
2292 detach/offline/remove/replace commands.
2293 .It Fl L
2294 Display real paths for vdevs resolving all symbolic links. This can
2295 be used to look up the current block device name regardless of the
2296 .Pa /dev/disk/
2297 path used to open it.
2298 .It Fl l
2299 Indicates that this command will request encryption keys for all encrypted
2300 datasets it attempts to mount as it is bringing the new pool online. Note that
2301 if any datasets have a
2302 .Sy keylocation
2303 of
2304 .Sy prompt ,
2305 this command will block waiting for the keys to be entered. Without this flag,
2306 encrypted datasets will be left unavailable until the keys are loaded.
2307 .It Fl n
2308 Do a dry run; do not actually perform the split.
2309 Print out the expected configuration of
2310 .Ar newpool .
2311 .It Fl P
2312 Display full paths for vdevs instead of only the last component of
2313 the path. This can be used in conjunction with the
2314 .Fl L
2315 flag.
2316 .It Fl o Ar property Ns = Ns Ar value
2317 Sets the specified property for
2318 .Ar newpool .
2319 See the
2320 .Sx Properties
2321 section for more information on the available pool properties.
2322 .It Fl R Ar root
2323 Set
2324 .Sy altroot
2325 for
2326 .Ar newpool
2327 to
2328 .Ar root
2329 and automatically import it.
2330 .El
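.Pp
For example, the expected layout of splitting a hypothetical mirrored pool
.Em tank
into a new pool
.Em tank2
could be previewed, and the split then performed, with:
.Bd -literal
# zpool split -n tank tank2
# zpool split tank tank2
.Ed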
2331 .It Xo
2332 .Nm
2333 .Cm status
2334 .Op Fl c Op Ar SCRIPT1 Ns Oo , Ns Ar SCRIPT2 Oc Ns ...
2335 .Op Fl DigLpPstvx
2336 .Op Fl T Sy u Ns | Ns Sy d
2337 .Oo Ar pool Oc Ns ...
2338 .Op Ar interval Op Ar count
2339 .Xc
2340 Displays the detailed health status for the given pools.
2341 If no
2342 .Ar pool
2343 is specified, then the status of each pool in the system is displayed.
2344 For more information on pool and device health, see the
2345 .Sx Device Failure and Recovery
2346 section.
2347 .Pp
2348 If a scrub or resilver is in progress, this command reports the percentage done
2349 and the estimated time to completion.
2350 Both of these are only approximate, because the amount of data in the pool and
2351 the other workloads on the system can change.
2352 .Bl -tag -width Ds
2353 .It Fl c Op Ar SCRIPT1 Ns Oo , Ns Ar SCRIPT2 Oc Ns ...
2354 Run a script (or scripts) on each vdev and include the output as a new column
2355 in the
2356 .Nm zpool Cm status
2357 output. See the
2358 .Fl c
2359 option of
2360 .Nm zpool Cm iostat
2361 for complete details.
2362 .It Fl i
2363 Display vdev initialization status.
2364 .It Fl g
2365 Display vdev GUIDs instead of the normal device names. These GUIDs
2366 can be used in place of device names for the zpool
2367 detach/offline/remove/replace commands.
2368 .It Fl L
2369 Display real paths for vdevs resolving all symbolic links. This can
2370 be used to look up the current block device name regardless of the
2371 .Pa /dev/disk/
2372 path used to open it.
2373 .It Fl p
2374 Display numbers in parsable (exact) values.
2375 .It Fl P
2376 Display full paths for vdevs instead of only the last component of
2377 the path. This can be used in conjunction with the
2378 .Fl L
2379 flag.
2380 .It Fl D
2381 Display a histogram of deduplication statistics, showing the allocated
2382 .Pq physically present on disk
2383 and referenced
2384 .Pq logically referenced in the pool
2385 block counts and sizes by reference count.
2386 .It Fl s
2387 Display the number of leaf vdev slow IOs. This is the number of IOs that
2388 didn't complete in \fBzio_slow_io_ms\fR milliseconds (default 30 seconds).
2389 This does not necessarily mean the IOs failed to complete, just that they took an
2390 unreasonably long amount of time. This may indicate a problem with the
2391 underlying storage.
2392 .It Fl t
2393 Display vdev TRIM status.
2394 .It Fl T Sy u Ns | Ns Sy d
2395 Display a time stamp.
2396 Specify
2397 .Sy u
2398 for a printed representation of the internal representation of time.
2399 See
2400 .Xr time 2 .
2401 Specify
2402 .Sy d
2403 for standard date format.
2404 See
2405 .Xr date 1 .
2406 .It Fl v
2407 Displays verbose data error information, printing out a complete list of all
2408 data errors since the last complete pool scrub.
2409 .It Fl x
2410 Only display status for pools that are exhibiting errors or are otherwise
2411 unavailable.
2412 Warnings about pools not using the latest on-disk format will not be included.
2413 .El
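.Pp
For example, a verbose report limited to pools exhibiting errors, including
slow IO and TRIM status, could be requested with:
.Bd -literal
# zpool status -xvst
.Ed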
2414 .It Xo
2415 .Nm
2416 .Cm sync
2417 .Op Ar pool ...
2418 .Xc
2419 This command forces all in-core dirty data to be written to the primary
2420 pool storage and not the ZIL. It will also update administrative
2421 information including quota reporting. Without arguments,
2422 .Sy zpool sync
2423 will sync all pools on the system. Otherwise, it will sync only the
2424 specified pool(s).
2425 .It Xo
2426 .Nm
2427 .Cm upgrade
2428 .Xc
2429 Displays pools which do not have all supported features enabled and pools
2430 formatted using a legacy ZFS version number.
2431 These pools can continue to be used, but some features may not be available.
2432 Use
2433 .Nm zpool Cm upgrade Fl a
2434 to enable all features on all pools.
2435 .It Xo
2436 .Nm
2437 .Cm upgrade
2438 .Fl v
2439 .Xc
2440 Displays legacy ZFS versions supported by the current software.
2441 See
2442 .Xr zpool-features 5
2443 for a description of the feature flags supported by the current software.
2444 .It Xo
2445 .Nm
2446 .Cm upgrade
2447 .Op Fl V Ar version
2448 .Fl a Ns | Ns Ar pool Ns ...
2449 .Xc
2450 Enables all supported features on the given pool.
2451 Once this is done, the pool will no longer be accessible on systems that do not
2452 support feature flags.
2453 See
2454 .Xr zpool-features 5
2455 for details on compatibility with systems that support feature flags, but do not
2456 support all features enabled on the pool.
2457 .Bl -tag -width Ds
2458 .It Fl a
2459 Enables all supported features on all pools.
2460 .It Fl V Ar version
2461 Upgrade to the specified legacy version.
2462 If the
2463 .Fl V
2464 flag is specified, no features will be enabled on the pool.
2465 This option can only be used to increase the version number up to the last
2466 supported legacy version number.
2467 .El
2468 .El
2469 .Sh EXIT STATUS
2470 The following exit values are returned:
2471 .Bl -tag -width Ds
2472 .It Sy 0
2473 Successful completion.
2474 .It Sy 1
2475 An error occurred.
2476 .It Sy 2
2477 Invalid command line options were specified.
2478 .El
2479 .Sh EXAMPLES
2480 .Bl -tag -width Ds
2481 .It Sy Example 1 No Creating a RAID-Z Storage Pool
2482 The following command creates a pool with a single raidz root vdev that
2483 consists of six disks.
2484 .Bd -literal
2485 # zpool create tank raidz sda sdb sdc sdd sde sdf
2486 .Ed
2487 .It Sy Example 2 No Creating a Mirrored Storage Pool
2488 The following command creates a pool with two mirrors, where each mirror
2489 contains two disks.
2490 .Bd -literal
2491 # zpool create tank mirror sda sdb mirror sdc sdd
2492 .Ed
2493 .It Sy Example 3 No Creating a ZFS Storage Pool by Using Partitions
2494 The following command creates an unmirrored pool using two disk partitions.
2495 .Bd -literal
2496 # zpool create tank sda1 sdb2
2497 .Ed
2498 .It Sy Example 4 No Creating a ZFS Storage Pool by Using Files
2499 The following command creates an unmirrored pool using files.
2500 While not recommended, a pool based on files can be useful for experimental
2501 purposes.
2502 .Bd -literal
2503 # zpool create tank /path/to/file/a /path/to/file/b
2504 .Ed
2505 .It Sy Example 5 No Adding a Mirror to a ZFS Storage Pool
2506 The following command adds two mirrored disks to the pool
2507 .Em tank ,
2508 assuming the pool is already made up of two-way mirrors.
2509 The additional space is immediately available to any datasets within the pool.
2510 .Bd -literal
2511 # zpool add tank mirror sda sdb
2512 .Ed
2513 .It Sy Example 6 No Listing Available ZFS Storage Pools
2514 The following command lists all available pools on the system.
2515 In this case, the pool
2516 .Em zion
2517 is faulted due to a missing device.
2518 The results from this command are similar to the following:
2519 .Bd -literal
2520 # zpool list
2521 NAME    SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
2522 rpool  19.9G  8.43G  11.4G         -    33%    42%  1.00x  ONLINE  -
2523 tank   61.5G  20.0G  41.5G         -    48%    32%  1.00x  ONLINE  -
2524 zion       -      -      -         -      -      -      -  FAULTED -
2525 .Ed
2526 .It Sy Example 7 No Destroying a ZFS Storage Pool
2527 The following command destroys the pool
2528 .Em tank
2529 and any datasets contained within.
2530 .Bd -literal
2531 # zpool destroy -f tank
2532 .Ed
2533 .It Sy Example 8 No Exporting a ZFS Storage Pool
2534 The following command exports the devices in pool
2535 .Em tank
2536 so that they can be relocated or later imported.
2537 .Bd -literal
2538 # zpool export tank
2539 .Ed
2540 .It Sy Example 9 No Importing a ZFS Storage Pool
2541 The following command displays available pools, and then imports the pool
2542 .Em tank
2543 for use on the system.
2544 The results from this command are similar to the following:
2545 .Bd -literal
2546 # zpool import
2547   pool: tank
2548     id: 15451357997522795478
2549  state: ONLINE
2550 action: The pool can be imported using its name or numeric identifier.
2551 config:
2552
2553         tank        ONLINE
2554           mirror    ONLINE
2555             sda     ONLINE
2556             sdb     ONLINE
2557
2558 # zpool import tank
2559 .Ed
2560 .It Sy Example 10 No Upgrading All ZFS Storage Pools to the Current Version
2561 The following command upgrades all ZFS storage pools to the current version of
2562 the software.
2563 .Bd -literal
2564 # zpool upgrade -a
2565 This system is currently running ZFS version 2.
2566 .Ed
2567 .It Sy Example 11 No Managing Hot Spares
2568 The following command creates a new pool with an available hot spare:
2569 .Bd -literal
2570 # zpool create tank mirror sda sdb spare sdc
2571 .Ed
2572 .Pp
2573 If one of the disks were to fail, the pool would be reduced to the degraded
2574 state.
2575 The failed device can be replaced using the following command:
2576 .Bd -literal
2577 # zpool replace tank sda sdd
2578 .Ed
2579 .Pp
2580 Once the data has been resilvered, the spare is automatically removed and is
2581 made available for use should another device fail.
2582 The hot spare can be permanently removed from the pool using the following
2583 command:
2584 .Bd -literal
2585 # zpool remove tank sdc
2586 .Ed
2587 .It Sy Example 12 No Creating a ZFS Pool with Mirrored Separate Intent Logs
2588 The following command creates a ZFS storage pool consisting of two, two-way
2589 mirrors and mirrored log devices:
2590 .Bd -literal
2591 # zpool create pool mirror sda sdb mirror sdc sdd log mirror \\
2592 sde sdf
2593 .Ed
2594 .It Sy Example 13 No Adding Cache Devices to a ZFS Pool
2595 The following command adds two disks for use as cache devices to a ZFS storage
2596 pool:
2597 .Bd -literal
2598 # zpool add pool cache sdc sdd
2599 .Ed
2600 .Pp
2601 Once added, the cache devices gradually fill with content from main memory.
2602 Depending on the size of your cache devices, it could take over an hour for
2603 them to fill.
2604 Capacity and reads can be monitored using the
2605 .Cm iostat
2606 option as follows:
2607 .Bd -literal
2608 # zpool iostat -v pool 5
2609 .Ed
2610 .It Sy Example 14 No Removing a Mirrored top-level (Log or Data) Device
2611 The following commands remove the mirrored log device
2612 .Sy mirror-2
2613 and mirrored top-level data device
2614 .Sy mirror-1 .
2615 .Pp
2616 Given this configuration:
2617 .Bd -literal
2618   pool: tank
2619  state: ONLINE
2620  scrub: none requested
2621 config:
2622
2623         NAME        STATE     READ WRITE CKSUM
2624         tank        ONLINE       0     0     0
2625           mirror-0  ONLINE       0     0     0
2626             sda     ONLINE       0     0     0
2627             sdb     ONLINE       0     0     0
2628           mirror-1  ONLINE       0     0     0
2629             sdc     ONLINE       0     0     0
2630             sdd     ONLINE       0     0     0
2631         logs
2632           mirror-2  ONLINE       0     0     0
2633             sde     ONLINE       0     0     0
2634             sdf     ONLINE       0     0     0
2635 .Ed
2636 .Pp
2637 The command to remove the mirrored log
2638 .Sy mirror-2
2639 is:
2640 .Bd -literal
2641 # zpool remove tank mirror-2
2642 .Ed
2643 .Pp
2644 The command to remove the mirrored data
2645 .Sy mirror-1
2646 is:
2647 .Bd -literal
2648 # zpool remove tank mirror-1
2649 .Ed
2650 .It Sy Example 15 No Displaying expanded space on a device
2651 The following command displays the detailed information for the pool
2652 .Em data .
2653 This pool is comprised of a single raidz vdev where one of its devices
2654 increased its capacity by 10GB.
2655 In this example, the pool will not be able to utilize this extra capacity until
2656 all the devices under the raidz vdev have been expanded.
2657 .Bd -literal
2658 # zpool list -v data
2659 NAME         SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
2660 data        23.9G  14.6G  9.30G         -    48%    61%  1.00x  ONLINE  -
2661   raidz1    23.9G  14.6G  9.30G         -    48%
2662     sda         -      -      -         -      -
2663     sdb         -      -      -       10G      -
2664     sdc         -      -      -         -      -
2665 .Ed
2666 .It Sy Example 16 No Adding output columns
2667 Additional columns can be added to the
2668 .Nm zpool Cm status
2669 and
2670 .Nm zpool Cm iostat
2671 output with the
2672 .Fl c
2673 option.
2674 .Bd -literal
2675 # zpool status -c vendor,model,size
2676     NAME     STATE  READ WRITE CKSUM vendor  model        size
2677     tank     ONLINE 0    0     0
2678     mirror-0 ONLINE 0    0     0
2679     U1       ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
2680     U10      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
2681     U11      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
2682     U12      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
2683     U13      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
2684     U14      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
2685
2686 # zpool iostat -vc slaves
2687               capacity     operations     bandwidth
2688 pool        alloc   free   read  write   read  write  slaves
2689 ----------  -----  -----  -----  -----  -----  -----  ---------
2690 tank        20.4G  7.23T     26    152  20.7M  21.6M
2691   mirror    20.4G  7.23T     26    152  20.7M  21.6M
2692     U1          -      -      0     31  1.46K  20.6M  sdb sdff
2693     U10         -      -      0      1  3.77K  13.3K  sdas sdgw
2694     U11         -      -      0      1   288K  13.3K  sdat sdgx
2695     U12         -      -      0      1  78.4K  13.3K  sdau sdgy
2696     U13         -      -      0      1   128K  13.3K  sdav sdgz
2697     U14         -      -      0      1  63.2K  13.3K  sdfk sdg
2698 .Ed
2699 .El
2700 .Sh ENVIRONMENT VARIABLES
2701 .Bl -tag -width "ZFS_ABORT"
2702 .It Ev ZFS_ABORT
2703 Cause
2704 .Nm zpool
2705 to dump core on exit for the purposes of running
2706 .Sy ::findleaks .
2707 .El
2708 .Bl -tag -width "ZPOOL_IMPORT_PATH"
2709 .It Ev ZPOOL_IMPORT_PATH
2710 The search path for devices or files to use with the pool. This is a colon-separated list of directories in which
2711 .Nm zpool
2712 looks for device nodes and files.
2713 Similar to the
2714 .Fl d
2715 option in
2716 .Nm zpool import .
2717 .El
2718 .Bl -tag -width "ZPOOL_VDEV_NAME_GUID"
2719 .It Ev ZPOOL_VDEV_NAME_GUID
2720 Cause
2721 .Nm zpool
subcommands to output vdev guids by default. This behavior
2722 is identical to the
2723 .Nm zpool status -g
2724 command line option.
2725 .El
2726 .Bl -tag -width "ZPOOL_VDEV_NAME_FOLLOW_LINKS"
2727 .It Ev ZPOOL_VDEV_NAME_FOLLOW_LINKS
2728 Cause
2729 .Nm zpool
2730 subcommands to follow links for vdev names by default. This behavior is identical to the
2731 .Nm zpool status -L
2732 command line option.
2733 .El
2734 .Bl -tag -width "ZPOOL_VDEV_NAME_PATH"
2735 .It Ev ZPOOL_VDEV_NAME_PATH
2736 Cause
2737 .Nm zpool
2738 subcommands to output full vdev path names by default. This
2739 behavior is identical to the
2740 .Nm zpool status -P
2741 command line option.
2742 .El
2743 .Bl -tag -width "ZFS_VDEV_DEVID_OPT_OUT"
2744 .It Ev ZFS_VDEV_DEVID_OPT_OUT
2745 Older ZFS on Linux implementations had issues when attempting to display pool
2746 config VDEV names if a
2747 .Sy devid
2748 NVP value was present in the pool's config.
2749 .Pp
2750 For example, a pool that originated on the illumos platform would have a devid
2751 value in the config and
2752 .Nm zpool status
2753 would fail when listing the config.
2754 This would also be true for future Linux-based pools.
2755 .Pp
2756 A pool can be stripped of any
2757 .Sy devid
2758 values on import or prevented from adding
2759 them on
2760 .Nm zpool create
2761 or
2762 .Nm zpool add
2763 by setting
2764 .Sy ZFS_VDEV_DEVID_OPT_OUT .
2765 .El
2766 .Bl -tag -width "ZPOOL_SCRIPTS_AS_ROOT"
2767 .It Ev ZPOOL_SCRIPTS_AS_ROOT
2768 Allow a privileged user to run
2769 .Nm zpool status/iostat
2770 with the
2771 .Fl c
2772 option. Normally, only unprivileged users are allowed to run
2773 .Fl c .
2774 .El
2775 .Bl -tag -width "ZPOOL_SCRIPTS_PATH"
2776 .It Ev ZPOOL_SCRIPTS_PATH
2777 The search path for scripts when running
2778 .Nm zpool status/iostat
2779 with the
2780 .Fl c
2781 option. This is a colon-separated list of directories and overrides the default
2782 .Pa ~/.zpool.d
2783 and
2784 .Pa /etc/zfs/zpool.d
2785 search paths.
2786 .El
2787 .Bl -tag -width "ZPOOL_SCRIPTS_ENABLED"
2788 .It Ev ZPOOL_SCRIPTS_ENABLED
2789 Allow a user to run
2790 .Nm zpool status/iostat
2791 with the
2792 .Fl c
2793 option. If
2794 .Sy ZPOOL_SCRIPTS_ENABLED
2795 is not set, it is assumed that the user is allowed to run
2796 .Nm zpool status/iostat -c .
2797 .El
2798 .Sh INTERFACE STABILITY
2799 .Sy Evolving
2800 .Sh SEE ALSO
2801 .Xr zfs-events 5 ,
2802 .Xr zfs-module-parameters 5 ,
2803 .Xr zpool-features 5 ,
2804 .Xr zed 8 ,
2805 .Xr zfs 8