1 .\"
2 .\" CDDL HEADER START
3 .\"
4 .\" The contents of this file are subject to the terms of the
5 .\" Common Development and Distribution License (the "License").
6 .\" You may not use this file except in compliance with the License.
7 .\"
8 .\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
9 .\" or http://www.opensolaris.org/os/licensing.
10 .\" See the License for the specific language governing permissions
11 .\" and limitations under the License.
12 .\"
13 .\" When distributing Covered Code, include this CDDL HEADER in each
14 .\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
15 .\" If applicable, add the following below this CDDL HEADER, with the
16 .\" fields enclosed by brackets "[]" replaced with your own identifying
17 .\" information: Portions Copyright [yyyy] [name of copyright owner]
18 .\"
19 .\" CDDL HEADER END
20 .\"
21 .\"
22 .\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
23 .\" Copyright (c) 2012, 2018 by Delphix. All rights reserved.
24 .\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
25 .\" Copyright (c) 2017 Datto Inc.
26 .\" Copyright (c) 2018 George Melikov. All Rights Reserved.
27 .\" Copyright 2017 Nexenta Systems, Inc.
28 .\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved.
29 .\"
30 .Dd November 29, 2018
31 .Dt ZPOOL 8 SMM
32 .Os Linux
33 .Sh NAME
34 .Nm zpool
35 .Nd configure ZFS storage pools
36 .Sh SYNOPSIS
37 .Nm
38 .Fl ?V
39 .Nm
40 .Cm add
41 .Op Fl fgLnP
42 .Oo Fl o Ar property Ns = Ns Ar value Oc
43 .Ar pool vdev Ns ...
44 .Nm
45 .Cm attach
46 .Op Fl f
47 .Oo Fl o Ar property Ns = Ns Ar value Oc
48 .Ar pool device new_device
49 .Nm
50 .Cm checkpoint
51 .Op Fl d, -discard
52 .Ar pool
53 .Nm
54 .Cm clear
55 .Ar pool
56 .Op Ar device
57 .Nm
58 .Cm create
59 .Op Fl dfn
60 .Op Fl m Ar mountpoint
61 .Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
62 .Oo Fl o Ar feature@feature Ns = Ns Ar value Oc
63 .Oo Fl O Ar file-system-property Ns = Ns Ar value Oc Ns ...
64 .Op Fl R Ar root
65 .Ar pool vdev Ns ...
66 .Nm
67 .Cm destroy
68 .Op Fl f
69 .Ar pool
70 .Nm
71 .Cm detach
72 .Ar pool device
73 .Nm
74 .Cm events
75 .Op Fl vHf Oo Ar pool Oc | Fl c
76 .Nm
77 .Cm export
78 .Op Fl a
79 .Op Fl f
80 .Ar pool Ns ...
81 .Nm
82 .Cm get
83 .Op Fl Hp
84 .Op Fl o Ar field Ns Oo , Ns Ar field Oc Ns ...
85 .Sy all Ns | Ns Ar property Ns Oo , Ns Ar property Oc Ns ...
86 .Oo Ar pool Oc Ns ...
87 .Nm
88 .Cm history
89 .Op Fl il
90 .Oo Ar pool Oc Ns ...
91 .Nm
92 .Cm import
93 .Op Fl D
94 .Op Fl d Ar dir Ns | Ns device
95 .Nm
96 .Cm import
97 .Fl a
98 .Op Fl DflmN
99 .Op Fl F Oo Fl n Oc Oo Fl T Oc Oo Fl X Oc
100 .Op Fl -rewind-to-checkpoint
101 .Op Fl c Ar cachefile Ns | Ns Fl d Ar dir Ns | Ns device
102 .Op Fl o Ar mntopts
103 .Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
104 .Op Fl R Ar root
105 .Nm
106 .Cm import
107 .Op Fl Dflm
108 .Op Fl F Oo Fl n Oc Oo Fl T Oc Oo Fl X Oc
109 .Op Fl -rewind-to-checkpoint
110 .Op Fl c Ar cachefile Ns | Ns Fl d Ar dir Ns | Ns device
111 .Op Fl o Ar mntopts
112 .Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
113 .Op Fl R Ar root
114 .Op Fl s
115 .Ar pool Ns | Ns Ar id
116 .Op Ar newpool Oo Fl t Oc
117 .Nm
118 .Cm initialize
119 .Op Fl c | Fl s
120 .Ar pool
121 .Op Ar device Ns ...
122 .Nm
123 .Cm iostat
124 .Op Oo Oo Fl c Ar SCRIPT Oc Oo Fl lq Oc Oc Ns | Ns Fl rw
125 .Op Fl T Sy u Ns | Ns Sy d
126 .Op Fl ghHLnpPvy
127 .Oo Oo Ar pool Ns ... Oc Ns | Ns Oo Ar pool vdev Ns ... Oc Ns | Ns Oo Ar vdev Ns ... Oc Oc
128 .Op Ar interval Op Ar count
129 .Nm
130 .Cm labelclear
131 .Op Fl f
132 .Ar device
133 .Nm
134 .Cm list
135 .Op Fl HgLpPv
136 .Op Fl o Ar property Ns Oo , Ns Ar property Oc Ns ...
137 .Op Fl T Sy u Ns | Ns Sy d
138 .Oo Ar pool Oc Ns ...
139 .Op Ar interval Op Ar count
140 .Nm
141 .Cm offline
142 .Op Fl f
143 .Op Fl t
144 .Ar pool Ar device Ns ...
145 .Nm
146 .Cm online
147 .Op Fl e
148 .Ar pool Ar device Ns ...
149 .Nm
150 .Cm reguid
151 .Ar pool
152 .Nm
153 .Cm reopen
154 .Op Fl n
155 .Ar pool
156 .Nm
157 .Cm remove
158 .Op Fl np
159 .Ar pool Ar device Ns ...
160 .Nm
161 .Cm remove
162 .Fl s
163 .Ar pool
164 .Nm
165 .Cm replace
166 .Op Fl f
167 .Oo Fl o Ar property Ns = Ns Ar value Oc
168 .Ar pool Ar device Op Ar new_device
169 .Nm
170 .Cm resilver
171 .Ar pool Ns ...
172 .Nm
173 .Cm scrub
174 .Op Fl s | Fl p
175 .Ar pool Ns ...
176 .Nm
177 .Cm trim
178 .Op Fl d
179 .Op Fl r Ar rate
180 .Op Fl c | Fl s
181 .Ar pool
182 .Op Ar device Ns ...
183 .Nm
184 .Cm set
185 .Ar property Ns = Ns Ar value
186 .Ar pool
187 .Nm
188 .Cm split
189 .Op Fl gLlnP
190 .Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
191 .Op Fl R Ar root
192 .Ar pool newpool
193 .Oo Ar device Oc Ns ...
194 .Nm
195 .Cm status
196 .Oo Fl c Ar SCRIPT Oc
197 .Op Fl DigLpPstvx
198 .Op Fl T Sy u Ns | Ns Sy d
199 .Oo Ar pool Oc Ns ...
200 .Op Ar interval Op Ar count
201 .Nm
202 .Cm sync
203 .Oo Ar pool Oc Ns ...
204 .Nm
205 .Cm upgrade
206 .Nm
207 .Cm upgrade
208 .Fl v
209 .Nm
210 .Cm upgrade
211 .Op Fl V Ar version
212 .Fl a Ns | Ns Ar pool Ns ...
213 .Nm
214 .Cm version
215 .Sh DESCRIPTION
216 The
217 .Nm
218 command configures ZFS storage pools.
219 A storage pool is a collection of devices that provides physical storage and
220 data replication for ZFS datasets.
221 All datasets within a storage pool share the same space.
222 See
223 .Xr zfs 8
224 for information on managing datasets.
225 .Ss Virtual Devices (vdevs)
226 A "virtual device" describes a single device or a collection of devices
227 organized according to certain performance and fault characteristics.
228 The following virtual devices are supported:
229 .Bl -tag -width Ds
230 .It Sy disk
231 A block device, typically located under
232 .Pa /dev .
233 ZFS can use individual slices or partitions, though the recommended mode of
234 operation is to use whole disks.
235 A disk can be specified by a full path, or it can be a shorthand name
236 .Po the relative portion of the path under
237 .Pa /dev
238 .Pc .
239 A whole disk can be specified by omitting the slice or partition designation.
240 For example,
241 .Pa sda
242 is equivalent to
243 .Pa /dev/sda .
244 When given a whole disk, ZFS automatically labels the disk, if necessary.
245 .It Sy file
246 A regular file.
247 The use of files as a backing store is strongly discouraged.
248 It is designed primarily for experimental purposes, as the fault tolerance of a
249 file is only as good as the file system of which it is a part.
250 A file must be specified by a full path.
251 .It Sy mirror
252 A mirror of two or more devices.
253 Data is replicated in an identical fashion across all components of a mirror.
254 A mirror with N disks of size X can hold X bytes and can withstand (N-1) devices
255 failing before data integrity is compromised.
256 .It Sy raidz , raidz1 , raidz2 , raidz3
257 A variation on RAID-5 that allows for better distribution of parity and
258 eliminates the RAID-5
259 .Qq write hole
260 .Pq in which data and parity become inconsistent after a power loss .
261 Data and parity is striped across all disks within a raidz group.
262 .Pp
263 A raidz group can have single-, double-, or triple-parity, meaning that the
264 raidz group can sustain one, two, or three failures, respectively, without
265 losing any data.
266 The
267 .Sy raidz1
268 vdev type specifies a single-parity raidz group; the
269 .Sy raidz2
270 vdev type specifies a double-parity raidz group; and the
271 .Sy raidz3
272 vdev type specifies a triple-parity raidz group.
273 The
274 .Sy raidz
275 vdev type is an alias for
276 .Sy raidz1 .
277 .Pp
278 A raidz group with N disks of size X with P parity disks can hold approximately
279 (N-P)*X bytes and can withstand P device(s) failing before data integrity is
280 compromised.
281 The minimum number of devices in a raidz group is one more than the number of
282 parity disks.
283 The recommended number is between 3 and 9 to help increase performance.
284 .It Sy spare
285 A pseudo-vdev which keeps track of available hot spares for a pool.
286 For more information, see the
287 .Sx Hot Spares
288 section.
289 .It Sy log
290 A separate intent log device.
291 If more than one log device is specified, then writes are load-balanced between
292 devices.
293 Log devices can be mirrored.
294 However, raidz vdev types are not supported for the intent log.
295 For more information, see the
296 .Sx Intent Log
297 section.
298 .It Sy dedup
299 A device dedicated solely for dedup data.
300 The redundancy of this device should match the redundancy of the other normal
301 devices in the pool. If more than one dedup device is specified, then
302 allocations are load-balanced between those devices.
303 .It Sy special
304 A device dedicated solely for allocating various kinds of internal metadata,
305 and optionally small file data.
306 The redundancy of this device should match the redundancy of the other normal
307 devices in the pool. If more than one special device is specified, then
308 allocations are load-balanced between those devices.
309 .Pp
310 For more information on special allocations, see the
311 .Sx Special Allocation Class
312 section.
313 .It Sy cache
314 A device used to cache storage pool data.
315 A cache device cannot be configured as a mirror or raidz group.
316 For more information, see the
317 .Sx Cache Devices
318 section.
319 .El
320 .Pp
321 Virtual devices cannot be nested, so a mirror or raidz virtual device can only
322 contain files or disks.
323 Mirrors of mirrors
324 .Pq or other combinations
325 are not allowed.
326 .Pp
327 A pool can have any number of virtual devices at the top of the configuration
328 .Po known as
329 .Qq root vdevs
330 .Pc .
331 Data is dynamically distributed across all top-level devices to balance data
332 among devices.
333 As new virtual devices are added, ZFS automatically places data on the newly
334 available devices.
335 .Pp
336 Virtual devices are specified one at a time on the command line, separated by
337 whitespace.
338 The keywords
339 .Sy mirror
340 and
341 .Sy raidz
342 are used to distinguish where a group ends and another begins.
343 For example, the following creates two root vdevs, each a mirror of two disks:
344 .Bd -literal
345 # zpool create mypool mirror sda sdb mirror sdc sdd
346 .Ed
347 .Ss Device Failure and Recovery
348 ZFS supports a rich set of mechanisms for handling device failure and data
349 corruption.
350 All metadata and data is checksummed, and ZFS automatically repairs bad data
351 from a good copy when corruption is detected.
352 .Pp
353 In order to take advantage of these features, a pool must make use of some form
354 of redundancy, using either mirrored or raidz groups.
355 While ZFS supports running in a non-redundant configuration, where each root
356 vdev is simply a disk or file, this is strongly discouraged.
357 A single case of bit corruption can render some or all of your data unavailable.
358 .Pp
359 A pool's health status is described by one of three states: online, degraded,
360 or faulted.
361 An online pool has all devices operating normally.
362 A degraded pool is one in which one or more devices have failed, but the data is
363 still available due to a redundant configuration.
364 A faulted pool has corrupted metadata, or one or more faulted devices, and
365 insufficient replicas to continue functioning.
366 .Pp
367 The health of the top-level vdev, such as mirror or raidz device, is
368 potentially impacted by the state of its associated vdevs, or component
369 devices.
370 A top-level vdev or component device is in one of the following states:
371 .Bl -tag -width "DEGRADED"
372 .It Sy DEGRADED
373 One or more top-level vdevs is in the degraded state because one or more
374 component devices are offline.
375 Sufficient replicas exist to continue functioning.
376 .Pp
377 One or more component devices is in the degraded or faulted state, but
378 sufficient replicas exist to continue functioning.
379 The underlying conditions are as follows:
380 .Bl -bullet
381 .It
382 The number of checksum errors exceeds acceptable levels and the device is
383 degraded as an indication that something may be wrong.
384 ZFS continues to use the device as necessary.
385 .It
386 The number of I/O errors exceeds acceptable levels.
387 The device could not be marked as faulted because there are insufficient
388 replicas to continue functioning.
389 .El
390 .It Sy FAULTED
391 One or more top-level vdevs is in the faulted state because one or more
392 component devices are offline.
393 Insufficient replicas exist to continue functioning.
394 .Pp
395 One or more component devices is in the faulted state, and insufficient
396 replicas exist to continue functioning.
397 The underlying conditions are as follows:
398 .Bl -bullet
399 .It
400 The device could be opened, but the contents did not match expected values.
401 .It
402 The number of I/O errors exceeds acceptable levels and the device is faulted to
403 prevent further use of the device.
404 .El
405 .It Sy OFFLINE
406 The device was explicitly taken offline by the
407 .Nm zpool Cm offline
408 command.
409 .It Sy ONLINE
410 The device is online and functioning.
411 .It Sy REMOVED
412 The device was physically removed while the system was running.
413 Device removal detection is hardware-dependent and may not be supported on all
414 platforms.
415 .It Sy UNAVAIL
416 The device could not be opened.
417 If a pool is imported when a device was unavailable, then the device will be
418 identified by a unique identifier instead of its path since the path was never
419 correct in the first place.
420 .El
421 .Pp
422 If a device is removed and later re-attached to the system, ZFS attempts
423 to put the device online automatically.
424 Device attach detection is hardware-dependent and might not be supported on all
425 platforms.
426 .Ss Hot Spares
427 ZFS allows devices to be associated with pools as
428 .Qq hot spares .
429 These devices are not actively used in the pool, but when an active device
430 fails, it is automatically replaced by a hot spare.
431 To create a pool with hot spares, specify a
432 .Sy spare
433 vdev with any number of devices.
434 For example,
435 .Bd -literal
436 # zpool create pool mirror sda sdb spare sdc sdd
437 .Ed
438 .Pp
439 Spares can be shared across multiple pools, and can be added with the
440 .Nm zpool Cm add
441 command and removed with the
442 .Nm zpool Cm remove
443 command.
444 Once a spare replacement is initiated, a new
445 .Sy spare
446 vdev is created within the configuration that will remain there until the
447 original device is replaced.
448 At this point, the hot spare becomes available again if another device fails.
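.Pp
For example, a hot spare can be added to or removed from an existing pool as
follows
.Pq using illustrative pool and device names :
.Bd -literal
# zpool add tank spare sde
# zpool remove tank sde
.Ed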
449 .Pp
450 If a pool has a shared spare that is currently being used, the pool cannot be
451 exported since other pools may use this shared spare, which may lead to
452 potential data corruption.
453 .Pp
454 Shared spares add some risk. If the pools are imported on different hosts, and
455 both pools suffer a device failure at the same time, both could attempt to use
456 the same spare. This conflict may not be detected, resulting in data
457 corruption.
458 .Pp
459 An in-progress spare replacement can be cancelled by detaching the hot spare.
460 If the original faulted device is detached, then the hot spare assumes its
461 place in the configuration, and is removed from the spare list of all active
462 pools.
463 .Pp
464 Spares cannot replace log devices.
465 .Ss Intent Log
466 The ZFS Intent Log (ZIL) satisfies POSIX requirements for synchronous
467 transactions.
468 For instance, databases often require their transactions to be on stable storage
469 devices when returning from a system call.
470 NFS and other applications can also use
471 .Xr fsync 2
472 to ensure data stability.
473 By default, the intent log is allocated from blocks within the main pool.
474 However, it might be possible to get better performance using separate intent
475 log devices such as NVRAM or a dedicated disk.
476 For example:
477 .Bd -literal
478 # zpool create pool sda sdb log sdc
479 .Ed
480 .Pp
481 Multiple log devices can also be specified, and they can be mirrored.
482 See the
483 .Sx EXAMPLES
484 section for an example of mirroring multiple log devices.
485 .Pp
486 Log devices can be added, replaced, attached, detached and removed. In
487 addition, log devices are imported and exported as part of the pool
488 that contains them.
489 Mirrored devices can be removed by specifying the top-level mirror vdev.
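.Pp
As a sketch, a pool with a mirrored log device could be created, or a mirrored
log added to an existing pool, as follows
.Pq pool and device names are illustrative :
.Bd -literal
# zpool create pool sda sdb log mirror sdc sdd
# zpool add pool log mirror sde sdf
.Ed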
490 .Ss Cache Devices
491 Devices can be added to a storage pool as
492 .Qq cache devices .
493 These devices provide an additional layer of caching between main memory and
494 disk.
495 For read-heavy workloads, where the working set size is much larger than what
496 can be cached in main memory, using cache devices allows much more of this
497 working set to be served from low latency media.
498 Using cache devices provides the greatest performance improvement for random
499 read-workloads of mostly static content.
500 .Pp
501 To create a pool with cache devices, specify a
502 .Sy cache
503 vdev with any number of devices.
504 For example:
505 .Bd -literal
506 # zpool create pool sda sdb cache sdc sdd
507 .Ed
508 .Pp
509 Cache devices cannot be mirrored or part of a raidz configuration.
510 If a read error is encountered on a cache device, that read I/O is reissued to
511 the original storage pool device, which might be part of a mirrored or raidz
512 configuration.
513 .Pp
514 The content of the cache devices is considered volatile, as is the case with
515 other system caches.
516 .Ss Pool checkpoint
517 Before starting critical procedures that include destructive actions (e.g.
518 .Nm zfs Cm destroy
519 ), an administrator can checkpoint the pool's state and in the case of a
520 mistake or failure, rewind the entire pool back to the checkpoint.
521 Otherwise, the checkpoint can be discarded when the procedure has completed
522 successfully.
523 .Pp
524 A pool checkpoint can be thought of as a pool-wide snapshot and should be used
525 with care as it contains every part of the pool's state, from properties to vdev
526 configuration.
527 Thus, while a pool has a checkpoint, certain operations are not allowed.
528 Specifically, vdev removal/attach/detach, mirror splitting, and
529 changing the pool's guid.
530 Adding a new vdev is supported but in the case of a rewind it will have to be
531 added again.
532 Finally, users of this feature should keep in mind that scrubs in a pool that
533 has a checkpoint do not repair checkpointed data.
534 .Pp
535 To create a checkpoint for a pool:
536 .Bd -literal
537 # zpool checkpoint pool
538 .Ed
539 .Pp
540 To later rewind to its checkpointed state, you need to first export it and
541 then rewind it during import:
542 .Bd -literal
543 # zpool export pool
544 # zpool import --rewind-to-checkpoint pool
545 .Ed
546 .Pp
547 To discard the checkpoint from a pool:
548 .Bd -literal
549 # zpool checkpoint -d pool
550 .Ed
551 .Pp
552 Dataset reservations (controlled by the
553 .Sy reservation
554 or
555 .Sy refreservation
556 zfs properties) may be unenforceable while a checkpoint exists, because the
557 checkpoint is allowed to consume the dataset's reservation.
558 Finally, data that is part of the checkpoint but has been freed in the
559 current state of the pool won't be scanned during a scrub.
560 .Ss Special Allocation Class
561 The allocations in the special class are dedicated to specific block types.
562 By default this includes all metadata, the indirect blocks of user data, and
563 any dedup data. The class can also be provisioned to accept a limited
564 percentage of small file data blocks.
565 .Pp
566 A pool must always have at least one general (non-specified) vdev before
567 other devices can be assigned to the special class. If the special class
568 becomes full, then allocations intended for it will spill back into the
569 normal class.
570 .Pp
571 Dedup data can be excluded from the special class by setting the
572 .Sy zfs_ddt_data_is_special
573 zfs module parameter to false (0).
574 .Pp
575 Inclusion of small file blocks in the special class is opt-in. Each dataset
576 can control the size of small file blocks allowed in the special class by
577 setting the
578 .Sy special_small_blocks
579 dataset property. It defaults to zero, so you must opt in by setting it to a
580 non-zero value. See
581 .Xr zfs 8
582 for more info on setting this property.
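.Pp
As an illustrative sketch, a pool with a mirrored special vdev can be created
and its root dataset opted in to storing small file blocks as follows
.Pq device names and the 32K threshold are examples only :
.Bd -literal
# zpool create pool raidz sda sdb sdc special mirror sdd sde
# zfs set special_small_blocks=32K pool
.Ed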
583 .Ss Properties
584 Each pool has several properties associated with it.
585 Some properties are read-only statistics while others are configurable and
586 change the behavior of the pool.
587 .Pp
588 The following are read-only properties:
589 .Bl -tag -width Ds
590 .It Sy allocated
591 Amount of storage used within the pool.
592 See
593 .Sy fragmentation
594 and
595 .Sy free
596 for more information.
597 .It Sy capacity
598 Percentage of pool space used.
599 This property can also be referred to by its shortened column name,
600 .Sy cap .
601 .It Sy expandsize
602 Amount of uninitialized space within the pool or device that can be used to
603 increase the total capacity of the pool.
604 Uninitialized space consists of any space on an EFI labeled vdev which has not
605 been brought online
606 .Po e.g., using
607 .Nm zpool Cm online Fl e
608 .Pc .
609 This space occurs when a LUN is dynamically expanded.
610 .It Sy fragmentation
611 The amount of fragmentation in the pool. As the amount of space
612 .Sy allocated
613 increases, it becomes more difficult to locate
614 .Sy free
615 space. This may result in lower write performance compared to pools with more
616 unfragmented free space.
617 .It Sy free
618 The amount of free space available in the pool.
619 By contrast, the
620 .Xr zfs 8
621 .Sy available
622 property describes how much new data can be written to ZFS filesystems/volumes.
623 The zpool
624 .Sy free
625 property is not generally useful for this purpose, and can be substantially more than the zfs
626 .Sy available
627 space. This discrepancy is due to several factors, including raidz parity; zfs
628 reservation, quota, refreservation, and refquota properties; and space set aside by
629 .Sy spa_slop_shift
630 (see
631 .Xr zfs-module-parameters 5
632 for more information).
633 .It Sy freeing
634 After a file system or snapshot is destroyed, the space it was using is
635 returned to the pool asynchronously.
636 .Sy freeing
637 is the amount of space remaining to be reclaimed.
638 Over time
639 .Sy freeing
640 will decrease while
641 .Sy free
642 increases.
643 .It Sy health
644 The current health of the pool.
645 Health can be one of
646 .Sy ONLINE , DEGRADED , FAULTED , OFFLINE , REMOVED , UNAVAIL .
647 .It Sy guid
648 A unique identifier for the pool.
649 .It Sy load_guid
650 A unique identifier for the pool.
651 Unlike the
652 .Sy guid
653 property, this identifier is generated every time the pool is loaded (i.e. it does
654 not persist across imports/exports) and never changes while the pool is loaded
655 (even if a
656 .Sy reguid
657 operation takes place).
658 .It Sy size
659 Total size of the storage pool.
660 .It Sy unsupported@ Ns Em feature_guid
661 Information about unsupported features that are enabled on the pool.
662 See
663 .Xr zpool-features 5
664 for details.
665 .El
666 .Pp
667 The space usage properties report actual physical space available to the
668 storage pool.
669 The physical space can be different from the total amount of space that any
670 contained datasets can actually use.
671 The amount of space used in a raidz configuration depends on the characteristics
672 of the data being written.
673 In addition, ZFS reserves some space for internal accounting that the
674 .Xr zfs 8
675 command takes into account, but the
676 .Nm
677 command does not.
678 For non-full pools of a reasonable size, these effects should be invisible.
679 For small pools, or pools that are close to being completely full, these
680 discrepancies may become more noticeable.
681 .Pp
682 The following property can be set at creation time and import time:
683 .Bl -tag -width Ds
684 .It Sy altroot
685 Alternate root directory.
686 If set, this directory is prepended to any mount points within the pool.
687 This can be used when examining an unknown pool where the mount points cannot be
688 trusted, or in an alternate boot environment, where the typical paths are not
689 valid.
690 .Sy altroot
691 is not a persistent property.
692 It is valid only while the system is up.
693 Setting
694 .Sy altroot
695 defaults to using
696 .Sy cachefile Ns = Ns Sy none ,
697 though this may be overridden using an explicit setting.
698 .El
699 .Pp
700 The following property can be set only at import time:
701 .Bl -tag -width Ds
702 .It Sy readonly Ns = Ns Sy on Ns | Ns Sy off
703 If set to
704 .Sy on ,
705 the pool will be imported in read-only mode.
706 This property can also be referred to by its shortened column name,
707 .Sy rdonly .
708 .El
709 .Pp
710 The following properties can be set at creation time and import time, and later
711 changed with the
712 .Nm zpool Cm set
713 command:
714 .Bl -tag -width Ds
715 .It Sy ashift Ns = Ns Sy ashift
716 Pool sector size exponent, to the power of
717 .Sy 2
718 (internally referred to as
719 .Sy ashift
720 ). Values from 9 to 16, inclusive, are valid; also, the
721 value 0 (the default) means to auto-detect using the kernel's block
722 layer and a ZFS internal exception list. I/O operations will be aligned
723 to the specified size boundaries. Additionally, the minimum (disk)
724 write size will be set to the specified size, so this represents a
725 space vs. performance trade-off. For optimal performance, the pool
726 sector size should be greater than or equal to the sector size of the
727 underlying disks. The typical case for setting this property is when
728 performance is important and the underlying disks use 4KiB sectors but
729 report 512B sectors to the OS (for compatibility reasons); in that
730 case, set
731 .Sy ashift=12
732 (which is 1<<12 = 4096). When set, this property is
733 used as the default hint value in subsequent vdev operations (add,
734 attach and replace). Changing this value will not modify any existing
735 vdev, not even on disk replacement; however, it can be used, for
736 instance, to replace a dying 512B-sector disk with a newer 4KiB-sector
737 device: this will probably result in bad performance but at the
738 same time could prevent loss of data.
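.Pp
For example, to create a pool aligned to 4KiB sectors
.Pq an illustrative sketch; the pool and device names are examples :
.Bd -literal
# zpool create -o ashift=12 tank mirror sda sdb
.Ed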
739 .It Sy autoexpand Ns = Ns Sy on Ns | Ns Sy off
740 Controls automatic pool expansion when the underlying LUN is grown.
741 If set to
742 .Sy on ,
743 the pool will be resized according to the size of the expanded device.
744 If the device is part of a mirror or raidz then all devices within that
745 mirror/raidz group must be expanded before the new space is made available to
746 the pool.
747 The default behavior is
748 .Sy off .
749 This property can also be referred to by its shortened column name,
750 .Sy expand .
751 .It Sy autoreplace Ns = Ns Sy on Ns | Ns Sy off
752 Controls automatic device replacement.
753 If set to
754 .Sy off ,
755 device replacement must be initiated by the administrator by using the
756 .Nm zpool Cm replace
757 command.
758 If set to
759 .Sy on ,
760 any new device, found in the same physical location as a device that previously
761 belonged to the pool, is automatically formatted and replaced.
762 The default behavior is
763 .Sy off .
764 This property can also be referred to by its shortened column name,
765 .Sy replace .
766 Autoreplace can also be used with virtual disks (like device
767 mapper) provided that you use the /dev/disk/by-vdev paths setup by
768 vdev_id.conf. See the
769 .Xr vdev_id 8
770 man page for more details.
771 Autoreplace and autoonline require the ZFS Event Daemon be configured and
772 running. See the
773 .Xr zed 8
774 man page for more details.
775 .It Sy bootfs Ns = Ns Sy (unset) Ns | Ns Ar pool Ns / Ns Ar dataset
776 Identifies the default bootable dataset for the root pool. This property is
777 expected to be set mainly by the installation and upgrade programs.
778 Not all Linux distribution boot processes use the bootfs property.
779 .It Sy cachefile Ns = Ns Ar path Ns | Ns Sy none
780 Controls the location of where the pool configuration is cached.
781 Discovering all pools on system startup requires a cached copy of the
782 configuration data that is stored on the root file system.
783 All pools in this cache are automatically imported when the system boots.
784 Some environments, such as install and clustering, need to cache this
785 information in a different location so that pools are not automatically
786 imported.
787 Setting this property caches the pool configuration in a different location that
788 can later be imported with
789 .Nm zpool Cm import Fl c .
790 Setting it to the value
791 .Sy none
792 creates a temporary pool that is never cached, and the
793 .Qq
794 .Pq empty string
795 uses the default location.
796 .Pp
797 Multiple pools can share the same cache file.
798 Because the kernel destroys and recreates this file when pools are added and
799 removed, care should be taken when attempting to access this file.
800 When the last pool using a
801 .Sy cachefile
802 is exported or destroyed, the file will be empty.
803 .It Sy comment Ns = Ns Ar text
804 A text string consisting of printable ASCII characters that will be stored
805 such that it is available even if the pool becomes faulted.
806 An administrator can provide additional information about a pool using this
807 property.
808 .It Sy dedupditto Ns = Ns Ar number
809 Threshold for the number of block ditto copies.
810 If the reference count for a deduplicated block increases above this number, a
811 new ditto copy of this block is automatically stored.
812 The default setting is
813 .Sy 0
814 which causes no ditto copies to be created for deduplicated blocks.
815 The minimum legal nonzero setting is
816 .Sy 100 .
817 .It Sy delegation Ns = Ns Sy on Ns | Ns Sy off
818 Controls whether a non-privileged user is granted access based on the dataset
819 permissions defined on the dataset.
820 See
821 .Xr zfs 8
822 for more information on ZFS delegated administration.
823 .It Sy failmode Ns = Ns Sy wait Ns | Ns Sy continue Ns | Ns Sy panic
824 Controls the system behavior in the event of catastrophic pool failure.
825 This condition is typically a result of a loss of connectivity to the underlying
826 storage device(s) or a failure of all devices within the pool.
827 The behavior of such an event is determined as follows:
828 .Bl -tag -width "continue"
829 .It Sy wait
830 Blocks all I/O access until the device connectivity is recovered and the errors
831 are cleared.
832 This is the default behavior.
833 .It Sy continue
834 Returns
835 .Er EIO
836 to any new write I/O requests but allows reads to any of the remaining healthy
837 devices.
838 Any write requests that have yet to be committed to disk would be blocked.
839 .It Sy panic
840 Prints out a message to the console and generates a system crash dump.
841 .El
842 .It Sy autotrim Ns = Ns Sy on Ns | Ns Sy off
843 When set to
844 .Sy on
845 space which has been recently freed, and is no longer allocated by the pool,
846 will be periodically trimmed. This allows block device vdevs which support
847 BLKDISCARD, such as SSDs, or file vdevs on which the underlying file system
848 supports hole-punching, to reclaim unused blocks. The default setting for
849 this property is
850 .Sy off .
851 .Pp
852 Automatic TRIM does not immediately reclaim blocks after a free. Instead,
852 it will optimistically delay allowing smaller ranges to be aggregated into
854 a few larger ones. These can then be issued more efficiently to the storage.
855 .Pp
856 Be aware that automatic trimming of recently freed data blocks can put
857 significant stress on the underlying storage devices. This will vary
858 depending of how well the specific device handles these commands. For
859 lower end devices it is often possible to achieve most of the benefits
860 of automatic trimming by running an on-demand (manual) TRIM periodically
861 using the
862 .Nm zpool Cm trim
863 command.
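.Pp
For example
.Pq using an illustrative pool name :
.Bd -literal
# zpool set autotrim=on tank
# zpool trim tank
.Ed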
864 .It Sy feature@ Ns Ar feature_name Ns = Ns Sy enabled
865 The value of this property is the current state of
866 .Ar feature_name .
867 The only valid value when setting this property is
868 .Sy enabled
869 which moves
870 .Ar feature_name
871 to the enabled state.
872 See
873 .Xr zpool-features 5
874 for details on feature states.
875 .It Sy listsnapshots Ns = Ns Sy on Ns | Ns Sy off
876 Controls whether information about snapshots associated with this pool is
877 output when
878 .Nm zfs Cm list
879 is run without the
880 .Fl t
881 option.
882 The default value is
883 .Sy off .
884 This property can also be referred to by its shortened name,
885 .Sy listsnaps .
886 .It Sy multihost Ns = Ns Sy on Ns | Ns Sy off
887 Controls whether a pool activity check should be performed during
888 .Nm zpool Cm import .
889 When a pool is determined to be active it cannot be imported, even with the
890 .Fl f
891 option. This property is intended to be used in failover configurations
892 where multiple hosts have access to a pool on shared storage.
893 .Pp
894 Multihost provides protection on import only. It does not protect against an
895 individual device being used in multiple pools, regardless of the type of vdev.
896 See the discussion under
897 .Sy zpool create .
898 .Pp
899 When this property is on, periodic writes to storage occur to show the pool is
900 in use. See
901 .Sy zfs_multihost_interval
902 in the
903 .Xr zfs-module-parameters 5
904 man page. In order to enable this property, each host must set a unique hostid.
905 See
906 .Xr genhostid 1 ,
907 .Xr zgenhostid 8 , and
908 .Xr spl-module-parameters 5
909 for additional details. The default value is
910 .Sy off .
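.Pp
As a sketch, on a host that has not yet set a hostid, the property could be
enabled as follows
.Pq the pool name is illustrative :
.Bd -literal
# zgenhostid
# zpool set multihost=on tank
.Ed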
911 .It Sy version Ns = Ns Ar version
912 The current on-disk version of the pool.
913 This can be increased, but never decreased.
914 The preferred method of updating pools is with the
915 .Nm zpool Cm upgrade
916 command, though this property can be used when a specific version is needed for
917 backwards compatibility.
918 Once feature flags are enabled on a pool this property will no longer have a
919 value.
920 .El
921 .Ss Subcommands
922 All subcommands that modify state are logged persistently to the pool in their
923 original form.
924 .Pp
925 The
926 .Nm
927 command provides subcommands to create and destroy storage pools, add capacity
928 to storage pools, and provide information about the storage pools.
929 The following subcommands are supported:
930 .Bl -tag -width Ds
931 .It Xo
932 .Nm
933 .Fl ?
934 .Xc
935 Displays a help message.
936 .It Xo
937 .Nm
938 .Fl V, -version
939 .Xc
940 An alias for the
941 .Nm zpool Cm version
942 subcommand.
943 .It Xo
944 .Nm
945 .Cm add
946 .Op Fl fgLnP
947 .Oo Fl o Ar property Ns = Ns Ar value Oc
948 .Ar pool vdev Ns ...
949 .Xc
950 Adds the specified virtual devices to the given pool.
951 The
952 .Ar vdev
953 specification is described in the
954 .Sx Virtual Devices
955 section.
956 The behavior of the
957 .Fl f
958 option, and the device checks performed are described in the
959 .Nm zpool Cm create
960 subcommand.
961 .Bl -tag -width Ds
962 .It Fl f
963 Forces use of
964 .Ar vdev Ns s ,
965 even if they appear in use or specify a conflicting replication level.
966 Not all devices can be overridden in this manner.
967 .It Fl g
968 Display
969 .Ar vdev
970 GUIDs instead of the normal device names. These GUIDs can be used in place of
971 device names for the zpool detach/offline/remove/replace commands.
972 .It Fl L
973 Display real paths for
974 .Ar vdev Ns s
975 resolving all symbolic links. This can be used to look up the current block
976 device name regardless of the /dev/disk/ path used to open it.
977 .It Fl n
978 Displays the configuration that would be used without actually adding the
979 .Ar vdev Ns s .
980 The actual pool creation can still fail due to insufficient privileges or
981 device sharing.
982 .It Fl P
983 Display real paths for
984 .Ar vdev Ns s
985 instead of only the last component of the path. This can be used in
986 conjunction with the
987 .Fl L
988 flag.
989 .It Fl o Ar property Ns = Ns Ar value
990 Sets the given pool properties. See the
991 .Sx Properties
992 section for a list of valid properties that can be set. The only property
993 supported at the moment is ashift.
994 .El
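.Pp
For example, a mirrored pair can be added as a new top-level vdev
.Pq using illustrative pool and device names :
.Bd -literal
# zpool add tank mirror sde sdf
.Ed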
995 .It Xo
996 .Nm
997 .Cm attach
998 .Op Fl f
999 .Oo Fl o Ar property Ns = Ns Ar value Oc
1000 .Ar pool device new_device
1001 .Xc
1002 Attaches
1003 .Ar new_device
1004 to the existing
1005 .Ar device .
1006 The existing device cannot be part of a raidz configuration.
1007 If
1008 .Ar device
1009 is not currently part of a mirrored configuration,
1010 .Ar device
1011 automatically transforms into a two-way mirror of
1012 .Ar device
1013 and
1014 .Ar new_device .
1015 If
1016 .Ar device
1017 is part of a two-way mirror, attaching
1018 .Ar new_device
1019 creates a three-way mirror, and so on.
1020 In either case,
1021 .Ar new_device
1022 begins to resilver immediately.
1023 .Bl -tag -width Ds
1024 .It Fl f
1025 Forces use of
1026 .Ar new_device ,
1027 even if it appears to be in use.
1028 Not all devices can be overridden in this manner.
1029 .It Fl o Ar property Ns = Ns Ar value
1030 Sets the given pool properties. See the
1031 .Sx Properties
1032 section for a list of valid properties that can be set. The only property
1033 supported at the moment is ashift.
1034 .El
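.Pp
For example, an existing single-disk vdev can be converted into a two-way
mirror
.Pq using illustrative names :
.Bd -literal
# zpool attach tank sda sdb
.Ed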
1035 .It Xo
1036 .Nm
1037 .Cm checkpoint
1038 .Op Fl d, -discard
1039 .Ar pool
1040 .Xc
1041 Checkpoints the current state of
1042 .Ar pool
1043 , which can be later restored by
1044 .Nm zpool Cm import --rewind-to-checkpoint .
1045 The existence of a checkpoint in a pool prohibits the following
1046 .Nm zpool
1047 commands:
1048 .Cm remove ,
1049 .Cm attach ,
1050 .Cm detach ,
1051 .Cm split ,
1052 and
1053 .Cm reguid .
1054 In addition, it may break reservation boundaries if the pool lacks free
1055 space.
1056 The
1057 .Nm zpool Cm status
1058 command indicates the existence of a checkpoint or the progress of discarding a
1059 checkpoint from a pool.
1060 The
1061 .Nm zpool Cm list
1062 command reports how much space the checkpoint takes from the pool.
1063 .Bl -tag -width Ds
1064 .It Fl d, -discard
1065 Discards an existing checkpoint from
1066 .Ar pool .
1067 .El
1068 .It Xo
1069 .Nm
1070 .Cm clear
1071 .Ar pool
1072 .Op Ar device
1073 .Xc
1074 Clears device errors in a pool.
1075 If no arguments are specified, all device errors within the pool are cleared.
1076 If one or more devices is specified, only those errors associated with the
1077 specified device or devices are cleared.
1078 If multihost is enabled, and the pool has been suspended, this will not
1079 resume I/O. While the pool was suspended, it may have been imported on
1080 another host, and resuming I/O could result in pool damage.
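.Pp
For example, to clear the errors associated with a single device
.Pq using illustrative names :
.Bd -literal
# zpool clear tank sda
.Ed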
1081 .It Xo
1082 .Nm
1083 .Cm create
1084 .Op Fl dfn
1085 .Op Fl m Ar mountpoint
1086 .Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
1087 .Oo Fl o Ar feature@feature Ns = Ns Ar value Oc Ns ...
1088 .Oo Fl O Ar file-system-property Ns = Ns Ar value Oc Ns ...
1089 .Op Fl R Ar root
1090 .Op Fl t Ar tname
1091 .Ar pool vdev Ns ...
1092 .Xc
1093 Creates a new storage pool containing the virtual devices specified on the
1094 command line.
1095 The pool name must begin with a letter, and can only contain
1096 alphanumeric characters as well as underscore
1097 .Pq Qq Sy _ ,
1098 dash
1099 .Pq Qq Sy \&- ,
1100 colon
1101 .Pq Qq Sy \&: ,
1102 space
1103 .Pq Qq Sy \&\ ,
1104 and period
1105 .Pq Qq Sy \&. .
1106 The pool names
1107 .Sy mirror ,
1108 .Sy raidz ,
1109 .Sy spare
1110 and
1111 .Sy log
1112 are reserved, as are names beginning with
1113 .Sy mirror ,
1114 .Sy raidz ,
1115 .Sy spare ,
1116 and the pattern
1117 .Sy c[0-9] .
1118 The
1119 .Ar vdev
1120 specification is described in the
1121 .Sx Virtual Devices
1122 section.
1123 .Pp
1124 The command attempts to verify that each device specified is accessible and not
1125 currently in use by another subsystem. However, this check is not robust enough
1126 to detect simultaneous attempts to use a new device in different pools, even if
1127 .Sy multihost
1128 is
1129 .Sy enabled .
1130 The
1131 administrator must ensure that simultaneous invocations of any combination of
1132 .Sy zpool replace ,
1133 .Sy zpool create ,
1134 .Sy zpool add ,
1135 or
1136 .Sy zpool labelclear ,
1137 do not refer to the same device. Using the same device in two pools will
1138 result in pool corruption.
1139 .Pp
1140 There are some uses, such as being currently mounted, or specified as the
1141 dedicated dump device, that prevent a device from ever being used by ZFS.
1142 Other uses, such as having a preexisting UFS file system, can be overridden with
1143 the
1144 .Fl f
1145 option.
1146 .Pp
1147 The command also checks that the replication strategy for the pool is
1148 consistent.
1149 An attempt to combine redundant and non-redundant storage in a single pool, or
1150 to mix disks and files, results in an error unless
1151 .Fl f
1152 is specified.
1153 The use of differently sized devices within a single raidz or mirror group is
1154 also flagged as an error unless
1155 .Fl f
1156 is specified.
1157 .Pp
1158 Unless the
1159 .Fl R
1160 option is specified, the default mount point is
1161 .Pa / Ns Ar pool .
1162 The mount point must not exist or must be empty, or else the root dataset
1163 cannot be mounted.
1164 This can be overridden with the
1165 .Fl m
1166 option.
1167 .Pp
1168 By default all supported features are enabled on the new pool unless the
1169 .Fl d
1170 option is specified.
1171 .Bl -tag -width Ds
1172 .It Fl d
1173 Do not enable any features on the new pool.
1174 Individual features can be enabled by setting their corresponding properties to
1175 .Sy enabled
1176 with the
1177 .Fl o
1178 option.
1179 See
1180 .Xr zpool-features 5
1181 for details about feature properties.
1182 .It Fl f
1183 Forces use of
1184 .Ar vdev Ns s ,
1185 even if they appear in use or specify a conflicting replication level.
1186 Not all devices can be overridden in this manner.
1187 .It Fl m Ar mountpoint
1188 Sets the mount point for the root dataset.
1189 The default mount point is
1190 .Pa /pool
1191 or
1192 .Pa altroot/pool
1193 if
1194 .Ar altroot
1195 is specified.
1196 The mount point must be an absolute path,
1197 .Sy legacy ,
1198 or
1199 .Sy none .
1200 For more information on dataset mount points, see
1201 .Xr zfs 8 .
1202 .It Fl n
1203 Displays the configuration that would be used without actually creating the
1204 pool.
1205 The actual pool creation can still fail due to insufficient privileges or
1206 device sharing.
1207 .It Fl o Ar property Ns = Ns Ar value
1208 Sets the given pool properties.
1209 See the
1210 .Sx Properties
1211 section for a list of valid properties that can be set.
1212 .It Fl o Ar feature@feature Ns = Ns Ar value
1213 Sets the given pool feature. See the
1214 .Xr zpool-features 5
1215 section for a list of valid features that can be set.
1216 The value can be either disabled or enabled.
1217 .It Fl O Ar file-system-property Ns = Ns Ar value
1218 Sets the given file system properties in the root file system of the pool.
1219 See the
1220 .Sx Properties
1221 section of
1222 .Xr zfs 8
1223 for a list of valid properties that can be set.
1224 .It Fl R Ar root
1225 Equivalent to
1226 .Fl o Sy cachefile Ns = Ns Sy none Fl o Sy altroot Ns = Ns Ar root
1227 .It Fl t Ar tname
1228 Sets the in-core pool name to
1229 .Sy tname
1230 while the on-disk name will be the name specified as the pool name
1231 .Sy pool .
1232 This will set the default cachefile property to none. This is intended
1233 to handle name space collisions when creating pools for other systems,
1234 such as virtual machines or physical machines whose pools live on network
1235 block devices.
1236 .El
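.Pp
For example, the following sketch creates a mirrored pool with a custom mount
point and compression enabled on the root file system
.Pq names and values are illustrative :
.Bd -literal
# zpool create -m /export/tank -O compression=on tank mirror sda sdb
.Ed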
1237 .It Xo
1238 .Nm
1239 .Cm destroy
1240 .Op Fl f
1241 .Ar pool
1242 .Xc
1243 Destroys the given pool, freeing up any devices for other use.
1244 This command tries to unmount any active datasets before destroying the pool.
1245 .Bl -tag -width Ds
1246 .It Fl f
1247 Forces any active datasets contained within the pool to be unmounted.
1248 .El
1249 .It Xo
1250 .Nm
1251 .Cm detach
1252 .Ar pool device
1253 .Xc
1254 Detaches
1255 .Ar device
1256 from a mirror.
1257 The operation is refused if there are no other valid replicas of the data.
1258 If the device may be re-added to the pool later on, consider using the
1259 .Sy zpool offline
1260 command instead.
1261 .It Xo
1262 .Nm
1263 .Cm events
1264 .Op Fl vHf Oo Ar pool Oc | Fl c
1265 .Xc
1266 Lists all recent events generated by the ZFS kernel modules. These events
1267 are consumed by the
1268 .Xr zed 8
1269 and used to automate administrative tasks such as replacing a failed device
1270 with a hot spare. For more information about the subclasses and event payloads
1271 that can be generated see the
1272 .Xr zfs-events 5
1273 man page.
1274 .Bl -tag -width Ds
1275 .It Fl c
1276 Clear all previous events.
1277 .It Fl f
1278 Follow mode.
1279 .It Fl H
1280 Scripted mode. Do not display headers, and separate fields by a
1281 single tab instead of arbitrary space.
1282 .It Fl v
1283 Print the entire payload for each event.
1284 .El
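.Pp
For example, to follow new events as they are generated and print their full
payloads
.Pq a sketch :
.Bd -literal
# zpool events -vf
.Ed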
1285 .It Xo
1286 .Nm
1287 .Cm export
1288 .Op Fl a
1289 .Op Fl f
1290 .Ar pool Ns ...
1291 .Xc
1292 Exports the given pools from the system.
1293 All devices are marked as exported, but are still considered in use by other
1294 subsystems.
1295 The devices can be moved between systems
1296 .Pq even those of different endianness
1297 and imported as long as a sufficient number of devices are present.
1298 .Pp
1299 Before exporting the pool, all datasets within the pool are unmounted.
1300 A pool cannot be exported if it has a shared spare that is currently being
1301 used.
1302 .Pp
1303 For pools to be portable, you must give the
1304 .Nm
1305 command whole disks, not just partitions, so that ZFS can label the disks with
1306 portable EFI labels.
1307 Otherwise, disk drivers on platforms of different endianness will not recognize
1308 the disks.
1309 .Bl -tag -width Ds
1310 .It Fl a
1311 Exports all pools imported on the system.
1312 .It Fl f
1313 Forcefully unmount all datasets, using the
1314 .Nm unmount Fl f
1315 command.
1316 .Pp
1317 This command will forcefully export the pool even if it has a shared spare that
1318 is currently being used.
1319 This may lead to potential data corruption.
1320 .El
1321 .It Xo
1322 .Nm
1323 .Cm get
1324 .Op Fl Hp
1325 .Op Fl o Ar field Ns Oo , Ns Ar field Oc Ns ...
1326 .Sy all Ns | Ns Ar property Ns Oo , Ns Ar property Oc Ns ...
1327 .Oo Ar pool Oc Ns ...
1328 .Xc
1329 Retrieves the given list of properties
1330 .Po
1331 or all properties if
1332 .Sy all
1333 is used
1334 .Pc
1335 for the specified storage pool(s).
1336 These properties are displayed with the following fields:
1337 .Bd -literal
1338 name Name of storage pool
1339 property Property name
1340 value Property value
1341 source Property source, either 'default' or 'local'.
1342 .Ed
1343 .Pp
1344 See the
1345 .Sx Properties
1346 section for more information on the available pool properties.
1347 .Bl -tag -width Ds
1348 .It Fl H
1349 Scripted mode.
1350 Do not display headers, and separate fields by a single tab instead of arbitrary
1351 space.
1352 .It Fl o Ar field
1353 A comma-separated list of columns to display.
1354 .Sy name Ns \&, Ns Sy property Ns \&, Ns Sy value Ns \&, Ns Sy source
1355 is the default value.
1356 .It Fl p
1357 Display numbers in parsable (exact) values.
1358 .El
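.Pp
For example, to print selected properties in a script-friendly form
.Pq using an illustrative pool name :
.Bd -literal
# zpool get -Hp -o property,value capacity,free tank
.Ed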
1359 .It Xo
1360 .Nm
1361 .Cm history
1362 .Op Fl il
1363 .Oo Ar pool Oc Ns ...
1364 .Xc
1365 Displays the command history of the specified pool(s) or all pools if no pool is
1366 specified.
1367 .Bl -tag -width Ds
1368 .It Fl i
1369 Displays internally logged ZFS events in addition to user initiated events.
1370 .It Fl l
1371 Displays log records in long format, which in addition to standard format
1372 includes the user name, the hostname, and the zone in which the operation was
1373 performed.
1374 .El
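.Pp
For example
.Pq using an illustrative pool name :
.Bd -literal
# zpool history -il tank
.Ed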
1375 .It Xo
1376 .Nm
1377 .Cm import
1378 .Op Fl D
1379 .Op Fl d Ar dir Ns | Ns device
1380 .Xc
1381 Lists pools available to import.
1382 If the
1383 .Fl d
1384 option is not specified, this command searches for devices in
1385 .Pa /dev .
1386 The
1387 .Fl d
1388 option can be specified multiple times, and all directories are searched.
1389 If the device appears to be part of an exported pool, this command displays a
1390 summary of the pool with the name of the pool, a numeric identifier, as well as
1391 the vdev layout and current health of the device for each device or file.
1392 Destroyed pools, pools that were previously destroyed with the
1393 .Nm zpool Cm destroy
1394 command, are not listed unless the
1395 .Fl D
1396 option is specified.
1397 .Pp
1398 The numeric identifier is unique, and can be used instead of the pool name when
1399 multiple exported pools of the same name are available.
1400 .Bl -tag -width Ds
1401 .It Fl c Ar cachefile
1402 Reads configuration from the given
1403 .Ar cachefile
1404 that was created with the
1405 .Sy cachefile
1406 pool property.
1407 This
1408 .Ar cachefile
1409 is used instead of searching for devices.
1410 .It Fl d Ar dir Ns | Ns Ar device
1411 Uses
1412 .Ar device
1413 or searches for devices or files in
1414 .Ar dir .
1415 The
1416 .Fl d
1417 option can be specified multiple times.
1418 .It Fl D
1419 Lists destroyed pools only.
1420 .El
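.Pp
For example, to list importable pools using devices found in a specific
directory
.Pq the path is illustrative :
.Bd -literal
# zpool import -d /dev/disk/by-id
.Ed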
1421 .It Xo
1422 .Nm
1423 .Cm import
1424 .Fl a
1425 .Op Fl DflmN
1426 .Op Fl F Oo Fl n Oc Oo Fl T Oc Oo Fl X Oc
1427 .Op Fl c Ar cachefile Ns | Ns Fl d Ar dir Ns | Ns device
1428 .Op Fl o Ar mntopts
1429 .Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
1430 .Op Fl R Ar root
1431 .Op Fl s
1432 .Xc
1433 Imports all pools found in the search directories.
1434 Identical to the previous command, except that all pools with a sufficient
1435 number of devices available are imported.
1436 Destroyed pools, pools that were previously destroyed with the
1437 .Nm zpool Cm destroy
1438 command, will not be imported unless the
1439 .Fl D
1440 option is specified.
1441 .Bl -tag -width Ds
1442 .It Fl a
1443 Searches for and imports all pools found.
1444 .It Fl c Ar cachefile
1445 Reads configuration from the given
1446 .Ar cachefile
1447 that was created with the
1448 .Sy cachefile
1449 pool property.
1450 This
1451 .Ar cachefile
1452 is used instead of searching for devices.
1453 .It Fl d Ar dir Ns | Ns Ar device
1454 Uses
1455 .Ar device
1456 or searches for devices or files in
1457 .Ar dir .
1458 The
1459 .Fl d
1460 option can be specified multiple times.
1461 This option is incompatible with the
1462 .Fl c
1463 option.
1464 .It Fl D
1465 Imports destroyed pools only.
1466 The
1467 .Fl f
1468 option is also required.
1469 .It Fl f
1470 Forces import, even if the pool appears to be potentially active.
1471 .It Fl F
1472 Recovery mode for a non-importable pool.
1473 Attempt to return the pool to an importable state by discarding the last few
1474 transactions.
1475 Not all damaged pools can be recovered by using this option.
1476 If successful, the data from the discarded transactions is irretrievably lost.
1477 This option is ignored if the pool is importable or already imported.
1478 .It Fl l
1479 Indicates that this command will request encryption keys for all encrypted
1480 datasets it attempts to mount as it is bringing the pool online. Note that if
1481 any datasets have a
1482 .Sy keylocation
1483 of
1484 .Sy prompt
1485 this command will block waiting for the keys to be entered. Without this flag
1486 encrypted datasets will be left unavailable until the keys are loaded.
1487 .It Fl m
1488 Allows a pool to import when there is a missing log device.
1489 Recent transactions can be lost because the log device will be discarded.
1490 .It Fl n
1491 Used with the
1492 .Fl F
1493 recovery option.
1494 Determines whether a non-importable pool can be made importable again, but does
1495 not actually perform the pool recovery.
1496 For more details about pool recovery mode, see the
1497 .Fl F
1498 option, above.
1499 .It Fl N
1500 Import the pool without mounting any file systems.
1501 .It Fl o Ar mntopts
1502 Comma-separated list of mount options to use when mounting datasets within the
1503 pool.
1504 See
1505 .Xr zfs 8
1506 for a description of dataset properties and mount options.
1507 .It Fl o Ar property Ns = Ns Ar value
1508 Sets the specified property on the imported pool.
1509 See the
1510 .Sx Properties
1511 section for more information on the available pool properties.
1512 .It Fl R Ar root
1513 Sets the
1514 .Sy cachefile
1515 property to
1516 .Sy none
1517 and the
1518 .Sy altroot
1519 property to
1520 .Ar root .
1521 .It Fl -rewind-to-checkpoint
1522 Rewinds pool to the checkpointed state.
1523 Once the pool is imported with this flag there is no way to undo the rewind.
1524 All changes and data that were written after the checkpoint are lost!
1525 The only exception is when the
1526 .Sy readonly
1527 mounting option is enabled.
1528 In this case, the checkpointed state of the pool is opened and an
1529 administrator can see what the pool would look like if they were
1530 to fully rewind.
1531 .It Fl s
1532 Scan using the default search path; the libblkid cache will not be
1533 consulted. A custom search path may be specified by setting the
1534 ZPOOL_IMPORT_PATH environment variable.
1535 .It Fl X
1536 Used with the
1537 .Fl F
1538 recovery option. Determines whether extreme
1539 measures to find a valid txg should take place. This allows the pool to
1540 be rolled back to a txg which is no longer guaranteed to be consistent.
1541 Pools imported at an inconsistent txg may contain uncorrectable
1542 checksum errors. For more details about pool recovery mode, see the
1543 .Fl F
1544 option, above. WARNING: This option can be extremely hazardous to the
1545 health of your pool and should only be used as a last resort.
1546 .It Fl T
1547 Specify the txg to use for rollback. Implies
1548 .Fl FX .
1549 For more details
1550 about pool recovery mode, see the
1551 .Fl X
1552 option, above. WARNING: This option can be extremely hazardous to the
1553 health of your pool and should only be used as a last resort.
1554 .El
1555 .It Xo
1556 .Nm
1557 .Cm import
1558 .Op Fl Dflm
1559 .Op Fl F Oo Fl n Oc Oo Fl t Oc Oo Fl T Oc Oo Fl X Oc
1560 .Op Fl c Ar cachefile Ns | Ns Fl d Ar dir Ns | Ns device
1561 .Op Fl o Ar mntopts
1562 .Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
1563 .Op Fl R Ar root
1564 .Op Fl s
1565 .Ar pool Ns | Ns Ar id
1566 .Op Ar newpool
1567 .Xc
1568 Imports a specific pool.
1569 A pool can be identified by its name or the numeric identifier.
1570 If
1571 .Ar newpool
1572 is specified, the pool is imported using the name
1573 .Ar newpool .
1574 Otherwise, it is imported with the same name as its exported name.
1575 .Pp
1576 If a device is removed from a system without running
1577 .Nm zpool Cm export
1578 first, the device appears as potentially active.
1579 It cannot be determined if this was a failed export, or whether the device is
1580 really in use from another host.
1581 To import a pool in this state, the
1582 .Fl f
1583 option is required.
1584 .Bl -tag -width Ds
1585 .It Fl c Ar cachefile
1586 Reads configuration from the given
1587 .Ar cachefile
1588 that was created with the
1589 .Sy cachefile
1590 pool property.
1591 This
1592 .Ar cachefile
1593 is used instead of searching for devices.
1594 .It Fl d Ar dir Ns | Ns Ar device
1595 Uses
1596 .Ar device
1597 or searches for devices or files in
1598 .Ar dir .
1599 The
1600 .Fl d
1601 option can be specified multiple times.
1602 This option is incompatible with the
1603 .Fl c
1604 option.
1605 .It Fl D
1606 Imports destroyed pool.
1607 The
1608 .Fl f
1609 option is also required.
1610 .It Fl f
1611 Forces import, even if the pool appears to be potentially active.
1612 .It Fl F
1613 Recovery mode for a non-importable pool.
1614 Attempt to return the pool to an importable state by discarding the last few
1615 transactions.
1616 Not all damaged pools can be recovered by using this option.
1617 If successful, the data from the discarded transactions is irretrievably lost.
1618 This option is ignored if the pool is importable or already imported.
1619 .It Fl l
Indicates that this command will request encryption keys for all encrypted
datasets it attempts to mount as it is bringing the pool online.
Note that if any datasets have a
.Sy keylocation
of
.Sy prompt ,
this command will block waiting for the keys to be entered.
Without this flag, encrypted datasets will be left unavailable until the keys
are loaded.
1628 .It Fl m
1629 Allows a pool to import when there is a missing log device.
1630 Recent transactions can be lost because the log device will be discarded.
1631 .It Fl n
1632 Used with the
1633 .Fl F
1634 recovery option.
1635 Determines whether a non-importable pool can be made importable again, but does
1636 not actually perform the pool recovery.
1637 For more details about pool recovery mode, see the
1638 .Fl F
1639 option, above.
1640 .It Fl o Ar mntopts
1641 Comma-separated list of mount options to use when mounting datasets within the
1642 pool.
1643 See
1644 .Xr zfs 8
1645 for a description of dataset properties and mount options.
1646 .It Fl o Ar property Ns = Ns Ar value
1647 Sets the specified property on the imported pool.
1648 See the
1649 .Sx Properties
1650 section for more information on the available pool properties.
1651 .It Fl R Ar root
1652 Sets the
1653 .Sy cachefile
1654 property to
1655 .Sy none
1656 and the
1657 .Sy altroot
1658 property to
1659 .Ar root .
1660 .It Fl s
Scan using the default search path; the libblkid cache will not be
consulted.
A custom search path may be specified by setting the
.Ev ZPOOL_IMPORT_PATH
environment variable.
1664 .It Fl X
1665 Used with the
1666 .Fl F
1667 recovery option. Determines whether extreme
1668 measures to find a valid txg should take place. This allows the pool to
1669 be rolled back to a txg which is no longer guaranteed to be consistent.
1670 Pools imported at an inconsistent txg may contain uncorrectable
1671 checksum errors. For more details about pool recovery mode, see the
1672 .Fl F
1673 option, above. WARNING: This option can be extremely hazardous to the
1674 health of your pool and should only be used as a last resort.
1675 .It Fl T
1676 Specify the txg to use for rollback. Implies
1677 .Fl FX .
1678 For more details
1679 about pool recovery mode, see the
1680 .Fl X
1681 option, above. WARNING: This option can be extremely hazardous to the
1682 health of your pool and should only be used as a last resort.
1683 .It Fl t
Used with
.Ar newpool .
Specifies that
.Ar newpool
is temporary.
Temporary pool names last until export.
Ensures that the original pool name will be used in all label updates and
therefore is retained upon export.
Will also set
.Fl o Sy cachefile Ns = Ns Sy none
when not explicitly specified.
1692 .El
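.Pp
For example, assuming an exported pool named
.Em tank
(the name, paths, and identifier below are illustrative), it could be imported
by name under an alternate root, or by its numeric identifier under a new
name:
.Bd -literal
# zpool import -d /dev/disk/by-id -R /mnt tank
# zpool import 15451357997522795478 tank2
.Ed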
1693 .It Xo
1694 .Nm
1695 .Cm initialize
1696 .Op Fl c | Fl s
1697 .Ar pool
1698 .Op Ar device Ns ...
1699 .Xc
1700 Begins initializing by writing to all unallocated regions on the specified
1701 devices, or all eligible devices in the pool if no individual devices are
1702 specified.
1703 Only leaf data or log devices may be initialized.
1704 .Bl -tag -width Ds
1705 .It Fl c, -cancel
1706 Cancel initializing on the specified devices, or all eligible devices if none
1707 are specified.
1708 If one or more target devices are invalid or are not currently being
1709 initialized, the command will fail and no cancellation will occur on any device.
.It Fl s, -suspend
1711 Suspend initializing on the specified devices, or all eligible devices if none
1712 are specified.
1713 If one or more target devices are invalid or are not currently being
1714 initialized, the command will fail and no suspension will occur on any device.
1715 Initializing can then be resumed by running
1716 .Nm zpool Cm initialize
1717 with no flags on the relevant target devices.
1718 .El
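.Pp
For example, assuming a pool named
.Em tank
containing disks
.Em sda
and
.Em sdb
(names are illustrative), initialization could be started, suspended, and
later resumed with:
.Bd -literal
# zpool initialize tank sda sdb
# zpool initialize -s tank
# zpool initialize tank
.Ed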
1719 .It Xo
1720 .Nm
1721 .Cm iostat
1722 .Op Oo Oo Fl c Ar SCRIPT Oc Oo Fl lq Oc Oc Ns | Ns Fl rw
1723 .Op Fl T Sy u Ns | Ns Sy d
1724 .Op Fl ghHLnpPvy
1725 .Oo Oo Ar pool Ns ... Oc Ns | Ns Oo Ar pool vdev Ns ... Oc Ns | Ns Oo Ar vdev Ns ... Oc Oc
1726 .Op Ar interval Op Ar count
1727 .Xc
1728 Displays logical I/O statistics for the given pools/vdevs. Physical I/Os may
1729 be observed via
1730 .Xr iostat 1 .
1731 If writes are located nearby, they may be merged into a single
1732 larger operation. Additional I/O may be generated depending on the level of
1733 vdev redundancy.
1734 To filter output, you may pass in a list of pools, a pool and list of vdevs
1735 in that pool, or a list of any vdevs from any pool. If no items are specified,
1736 statistics for every pool in the system are shown.
1737 When given an
1738 .Ar interval ,
1739 the statistics are printed every
1740 .Ar interval
seconds until ^C is pressed.
If the
.Fl n
flag is specified, the headers are displayed only once, otherwise they are
displayed periodically.
If
.Ar count
is specified, the command exits after
.Ar count
reports are printed.
The first report printed is always the statistics since boot, regardless of
whether
.Ar interval
and
.Ar count
are passed.
However, this behavior can be suppressed with the
1751 .Fl y
1752 flag. Also note that the units of
1753 .Sy K ,
1754 .Sy M ,
1755 .Sy G ...
1756 that are printed in the report are in base 1024. To get the raw
1757 values, use the
1758 .Fl p
1759 flag.
1760 .Bl -tag -width Ds
1761 .It Fl c Op Ar SCRIPT1 Ns Oo , Ns Ar SCRIPT2 Oc Ns ...
1762 Run a script (or scripts) on each vdev and include the output as a new column
1763 in the
1764 .Nm zpool Cm iostat
1765 output. Users can run any script found in their
1766 .Pa ~/.zpool.d
1767 directory or from the system
1768 .Pa /etc/zfs/zpool.d
1769 directory. Script names containing the slash (/) character are not allowed.
1770 The default search path can be overridden by setting the
.Ev ZPOOL_SCRIPTS_PATH
environment variable.
A privileged user can run
.Fl c
if they have the
.Ev ZPOOL_SCRIPTS_AS_ROOT
environment variable set.
If a script requires the use of a privileged
1775 command, like
1776 .Xr smartctl 8 ,
1777 then it's recommended you allow the user access to it in
1778 .Pa /etc/sudoers
1779 or add the user to the
1780 .Pa /etc/sudoers.d/zfs
1781 file.
1782 .Pp
1783 If
1784 .Fl c
1785 is passed without a script name, it prints a list of all scripts.
1786 .Fl c
1787 also sets verbose mode
1788 .No \&( Ns Fl v Ns No \&).
1789 .Pp
1790 Script output should be in the form of "name=value". The column name is
1791 set to "name" and the value is set to "value". Multiple lines can be
1792 used to output multiple columns. The first line of output not in the
1793 "name=value" format is displayed without a column title, and no more
1794 output after that is displayed. This can be useful for printing error
1795 messages. Blank or NULL values are printed as a '-' to make output
1796 awk-able.
1797 .Pp
1798 The following environment variables are set before running each script:
.Bl -tag -width "VDEV_ENC_SYSFS_PATH"
.It Sy VDEV_PATH
Full path to the vdev.
.It Sy VDEV_UPATH
Underlying path to the vdev (/dev/sd*).
For use with device mapper, multipath, or partitioned vdevs.
.It Sy VDEV_ENC_SYSFS_PATH
The sysfs path to the enclosure for the vdev (if any).
.El
1812 .It Fl T Sy u Ns | Ns Sy d
1813 Display a time stamp.
1814 Specify
1815 .Sy u
1816 for a printed representation of the internal representation of time.
1817 See
1818 .Xr time 2 .
1819 Specify
1820 .Sy d
1821 for standard date format.
1822 See
1823 .Xr date 1 .
1824 .It Fl g
1825 Display vdev GUIDs instead of the normal device names. These GUIDs
1826 can be used in place of device names for the zpool
1827 detach/offline/remove/replace commands.
1828 .It Fl H
1829 Scripted mode. Do not display headers, and separate fields by a
1830 single tab instead of arbitrary space.
1831 .It Fl L
1832 Display real paths for vdevs resolving all symbolic links. This can
1833 be used to look up the current block device name regardless of the
1834 .Pa /dev/disk/
1835 path used to open it.
1836 .It Fl n
Print headers only once instead of repeating them periodically when an
.Ar interval
is specified.
1838 .It Fl p
1839 Display numbers in parsable (exact) values. Time values are in
1840 nanoseconds.
1841 .It Fl P
1842 Display full paths for vdevs instead of only the last component of
1843 the path. This can be used in conjunction with the
1844 .Fl L
1845 flag.
1846 .It Fl r
Print request size histograms for each leaf vdev's IO.
This includes
1848 histograms of individual IOs (ind) and aggregate IOs (agg). These stats
1849 can be useful for observing how well IO aggregation is working. Note
1850 that TRIM IOs may exceed 16M, but will be counted as 16M.
1851 .It Fl v
Verbose statistics.
Reports usage statistics for individual vdevs within the pool, in addition to
the pool-wide statistics.
1854 .It Fl y
Omit statistics since boot.
Normally the first line of output reports the statistics since boot.
This option suppresses that first line of output, so that only the statistics
for each
.Ar interval
are shown.
1859 .It Fl w
1860 Display latency histograms:
1861 .Pp
1862 .Ar total_wait :
1863 Total IO time (queuing + disk IO time).
1864 .Ar disk_wait :
1865 Disk IO time (time reading/writing the disk).
1866 .Ar syncq_wait :
1867 Amount of time IO spent in synchronous priority queues. Does not include
1868 disk time.
1869 .Ar asyncq_wait :
1870 Amount of time IO spent in asynchronous priority queues. Does not include
1871 disk time.
1872 .Ar scrub :
1873 Amount of time IO spent in scrub queue. Does not include disk time.
1874 .It Fl l
1875 Include average latency statistics:
1876 .Pp
1877 .Ar total_wait :
1878 Average total IO time (queuing + disk IO time).
1879 .Ar disk_wait :
1880 Average disk IO time (time reading/writing the disk).
1881 .Ar syncq_wait :
1882 Average amount of time IO spent in synchronous priority queues. Does
1883 not include disk time.
1884 .Ar asyncq_wait :
1885 Average amount of time IO spent in asynchronous priority queues.
1886 Does not include disk time.
1887 .Ar scrub :
1888 Average queuing time in scrub queue. Does not include disk time.
1889 .Ar trim :
1890 Average queuing time in trim queue. Does not include disk time.
1891 .It Fl q
Include active queue statistics.
Each priority queue has both pending
.Pq Ar pend
and active
.Pq Ar activ
IOs.
Pending IOs are waiting to
1898 be issued to the disk, and active IOs have been issued to disk and are
1899 waiting for completion. These stats are broken out by priority queue:
1900 .Pp
1901 .Ar syncq_read/write :
1902 Current number of entries in synchronous priority
1903 queues.
1904 .Ar asyncq_read/write :
1905 Current number of entries in asynchronous priority queues.
1906 .Ar scrubq_read :
1907 Current number of entries in scrub queue.
1908 .Ar trimq_write :
1909 Current number of entries in trim queue.
1910 .Pp
1911 All queue statistics are instantaneous measurements of the number of
1912 entries in the queues. If you specify an interval, the measurements
1913 will be sampled from the end of the interval.
1914 .El
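.Pp
As an illustrative example for a hypothetical pool named
.Em tank ,
the following prints per-vdev statistics with average latency and queue
information every 5 seconds, 12 times, with a date-formatted time stamp:
.Bd -literal
# zpool iostat -v -l -q -T d tank 5 12
.Ed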
1915 .It Xo
1916 .Nm
1917 .Cm labelclear
1918 .Op Fl f
1919 .Ar device
1920 .Xc
1921 Removes ZFS label information from the specified
1922 .Ar device .
1923 The
1924 .Ar device
1925 must not be part of an active pool configuration.
1926 .Bl -tag -width Ds
1927 .It Fl f
1928 Treat exported or foreign devices as inactive.
1929 .El
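.Pp
For example, the label on a disk that previously belonged to an exported pool
(the device name below is illustrative) could be cleared with:
.Bd -literal
# zpool labelclear -f /dev/sdc
.Ed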
1930 .It Xo
1931 .Nm
1932 .Cm list
1933 .Op Fl HgLpPv
1934 .Op Fl o Ar property Ns Oo , Ns Ar property Oc Ns ...
1935 .Op Fl T Sy u Ns | Ns Sy d
1936 .Oo Ar pool Oc Ns ...
1937 .Op Ar interval Op Ar count
1938 .Xc
1939 Lists the given pools along with a health status and space usage.
1940 If no
1941 .Ar pool Ns s
1942 are specified, all pools in the system are listed.
1943 When given an
1944 .Ar interval ,
1945 the information is printed every
1946 .Ar interval
1947 seconds until ^C is pressed.
1948 If
1949 .Ar count
1950 is specified, the command exits after
1951 .Ar count
1952 reports are printed.
1953 .Bl -tag -width Ds
1954 .It Fl g
1955 Display vdev GUIDs instead of the normal device names. These GUIDs
1956 can be used in place of device names for the zpool
1957 detach/offline/remove/replace commands.
1958 .It Fl H
1959 Scripted mode.
1960 Do not display headers, and separate fields by a single tab instead of arbitrary
1961 space.
1962 .It Fl o Ar property
1963 Comma-separated list of properties to display.
1964 See the
1965 .Sx Properties
1966 section for a list of valid properties.
1967 The default list is
.Cm name , size , allocated , free , checkpoint , expandsize , fragmentation ,
1969 .Cm capacity , dedupratio , health , altroot .
1970 .It Fl L
1971 Display real paths for vdevs resolving all symbolic links. This can
1972 be used to look up the current block device name regardless of the
1973 /dev/disk/ path used to open it.
1974 .It Fl p
1975 Display numbers in parsable
1976 .Pq exact
1977 values.
1978 .It Fl P
1979 Display full paths for vdevs instead of only the last component of
1980 the path. This can be used in conjunction with the
1981 .Fl L
1982 flag.
1983 .It Fl T Sy u Ns | Ns Sy d
1984 Display a time stamp.
1985 Specify
1986 .Sy u
1987 for a printed representation of the internal representation of time.
1988 See
1989 .Xr time 2 .
1990 Specify
1991 .Sy d
1992 for standard date format.
1993 See
1994 .Xr date 1 .
1995 .It Fl v
1996 Verbose statistics.
1997 Reports usage statistics for individual vdevs within the pool, in addition to
the pool-wide statistics.
1999 .El
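.Pp
For example, a scripted, parsable listing of selected properties (taken from
the default list above) for all pools could be produced with:
.Bd -literal
# zpool list -Hp -o name,size,allocated,free,health
.Ed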
2000 .It Xo
2001 .Nm
2002 .Cm offline
2003 .Op Fl f
2004 .Op Fl t
2005 .Ar pool Ar device Ns ...
2006 .Xc
2007 Takes the specified physical device offline.
2008 While the
2009 .Ar device
2010 is offline, no attempt is made to read or write to the device.
2011 This command is not applicable to spares.
2012 .Bl -tag -width Ds
2013 .It Fl f
2014 Force fault. Instead of offlining the disk, put it into a faulted
2015 state. The fault will persist across imports unless the
2016 .Fl t
2017 flag was specified.
2018 .It Fl t
2019 Temporary.
2020 Upon reboot, the specified physical device reverts to its previous state.
2021 .El
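.Pp
For example, assuming a pool named
.Em tank
containing the disk
.Em sda
(names are illustrative), the disk could be taken offline until the next
reboot with:
.Bd -literal
# zpool offline -t tank sda
.Ed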
2022 .It Xo
2023 .Nm
2024 .Cm online
2025 .Op Fl e
2026 .Ar pool Ar device Ns ...
2027 .Xc
2028 Brings the specified physical device online.
2029 This command is not applicable to spares.
2030 .Bl -tag -width Ds
2031 .It Fl e
2032 Expand the device to use all available space.
2033 If the device is part of a mirror or raidz then all devices must be expanded
2034 before the new space will become available to the pool.
2035 .El
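.Pp
Continuing the illustrative example above, the same disk could be brought back
online and expanded to use all available space with:
.Bd -literal
# zpool online -e tank sda
.Ed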
2036 .It Xo
2037 .Nm
2038 .Cm reguid
2039 .Ar pool
2040 .Xc
2041 Generates a new unique identifier for the pool.
2042 You must ensure that all devices in this pool are online and healthy before
2043 performing this action.
2044 .It Xo
2045 .Nm
2046 .Cm reopen
2047 .Op Fl n
2048 .Ar pool
2049 .Xc
2050 Reopen all the vdevs associated with the pool.
2051 .Bl -tag -width Ds
2052 .It Fl n
2053 Do not restart an in-progress scrub operation. This is not recommended and can
2054 result in partially resilvered devices unless a second scrub is performed.
2055 .El
2056 .It Xo
2057 .Nm
2058 .Cm remove
2059 .Op Fl np
2060 .Ar pool Ar device Ns ...
2061 .Xc
2062 Removes the specified device from the pool.
2063 This command supports removing hot spare, cache, log, and both mirrored and
2064 non-redundant primary top-level vdevs, including dedup and special vdevs.
When the primary pool storage includes a top-level raidz vdev, only hot spare,
cache, and log devices can be removed.
.Pp
2068 Removing a top-level vdev reduces the total amount of space in the storage pool.
2069 The specified device will be evacuated by copying all allocated space from it to
2070 the other devices in the pool.
2071 In this case, the
2072 .Nm zpool Cm remove
2073 command initiates the removal and returns, while the evacuation continues in
2074 the background.
2075 The removal progress can be monitored with
2076 .Nm zpool Cm status .
If an IO error is encountered during the removal process, it will be
cancelled.
The
2079 .Sy device_removal
2080 feature flag must be enabled to remove a top-level vdev, see
2081 .Xr zpool-features 5 .
2082 .Pp
A mirrored top-level device (log or data) can be removed by specifying its
top-level mirror vdev.
Individual devices that are part of such a mirrored configuration can instead
be removed using the
.Nm zpool Cm detach
command.
2089 .Bl -tag -width Ds
2090 .It Fl n
2091 Do not actually perform the removal ("no-op").
2092 Instead, print the estimated amount of memory that will be used by the
2093 mapping table after the removal completes.
2094 This is nonzero only for top-level vdevs.
2097 .It Fl p
2098 Used in conjunction with the
2099 .Fl n
2100 flag, displays numbers as parsable (exact) values.
2101 .El
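.Pp
For example, using the hypothetical configuration shown in Example 14 below,
the memory cost of removing the top-level mirror
.Sy mirror-1
could be estimated, without performing the removal, with:
.Bd -literal
# zpool remove -np tank mirror-1
.Ed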
2102 .It Xo
2103 .Nm
2104 .Cm remove
2105 .Fl s
2106 .Ar pool
2107 .Xc
2108 Stops and cancels an in-progress removal of a top-level vdev.
2109 .It Xo
2110 .Nm
2111 .Cm replace
2112 .Op Fl f
2113 .Op Fl o Ar property Ns = Ns Ar value
2114 .Ar pool Ar device Op Ar new_device
2115 .Xc
2116 Replaces
.Ar device
2118 with
2119 .Ar new_device .
2120 This is equivalent to attaching
2121 .Ar new_device ,
2122 waiting for it to resilver, and then detaching
.Ar device .
2124 .Pp
2125 The size of
2126 .Ar new_device
2127 must be greater than or equal to the minimum size of all the devices in a mirror
2128 or raidz configuration.
2129 .Pp
2130 .Ar new_device
2131 is required if the pool is not redundant.
2132 If
2133 .Ar new_device
2134 is not specified, it defaults to
.Ar device .
2136 This form of replacement is useful after an existing disk has failed and has
2137 been physically replaced.
2138 In this case, the new disk may have the same
2139 .Pa /dev
2140 path as the old device, even though it is actually a different disk.
2141 ZFS recognizes this.
2142 .Bl -tag -width Ds
2143 .It Fl f
2144 Forces use of
2145 .Ar new_device ,
2146 even if it appears to be in use.
2147 Not all devices can be overridden in this manner.
2148 .It Fl o Ar property Ns = Ns Ar value
2149 Sets the given pool properties. See the
2150 .Sx Properties
2151 section for a list of valid properties that can be set.
2152 The only property supported at the moment is
2153 .Sy ashift .
2154 .El
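.Pp
For example, assuming a failed disk
.Em sda
in a pool named
.Em tank
is being replaced by a new disk
.Em sdd
(the names and the ashift value are purely illustrative), the replacement
could be started with:
.Bd -literal
# zpool replace -o ashift=12 tank sda sdd
.Ed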
2155 .It Xo
2156 .Nm
2157 .Cm scrub
2158 .Op Fl s | Fl p
2159 .Ar pool Ns ...
2160 .Xc
2161 Begins a scrub or resumes a paused scrub.
2162 The scrub examines all data in the specified pools to verify that it checksums
2163 correctly.
2164 For replicated
2165 .Pq mirror or raidz
2166 devices, ZFS automatically repairs any damage discovered during the scrub.
2167 The
2168 .Nm zpool Cm status
2169 command reports the progress of the scrub and summarizes the results of the
2170 scrub upon completion.
2171 .Pp
2172 Scrubbing and resilvering are very similar operations.
2173 The difference is that resilvering only examines data that ZFS knows to be out
2174 of date
2175 .Po
2176 for example, when attaching a new device to a mirror or replacing an existing
2177 device
2178 .Pc ,
2179 whereas scrubbing examines all data to discover silent errors due to hardware
2180 faults or disk failure.
2181 .Pp
2182 Because scrubbing and resilvering are I/O-intensive operations, ZFS only allows
2183 one at a time.
2184 If a scrub is paused, the
2185 .Nm zpool Cm scrub
command resumes it.
2187 If a resilver is in progress, ZFS does not allow a scrub to be started until the
2188 resilver completes.
2189 .Bl -tag -width Ds
2190 .It Fl s
2191 Stop scrubbing.
2194 .It Fl p
2195 Pause scrubbing.
2196 Scrub pause state and progress are periodically synced to disk.
If the system is restarted or the pool is exported during a paused scrub,
the scrub remains paused, even after import, until it is resumed.
Once resumed, the scrub picks up from the place where it was last
checkpointed to disk.
2201 To resume a paused scrub issue
2202 .Nm zpool Cm scrub
2203 again.
2204 .El
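.Pp
For example, a scrub of a hypothetical pool named
.Em tank
could be started, paused, and later resumed with:
.Bd -literal
# zpool scrub tank
# zpool scrub -p tank
# zpool scrub tank
.Ed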
2205 .It Xo
2206 .Nm
2207 .Cm resilver
2208 .Ar pool Ns ...
2209 .Xc
Starts a resilver.
If an existing resilver is already running, it will be restarted from the
beginning.
Any drives that were scheduled for a deferred resilver will be added to the
new one.
2213 .It Xo
2214 .Nm
2215 .Cm trim
.Op Fl d
.Op Fl r Ar rate
2217 .Op Fl c | Fl s
2218 .Ar pool
2219 .Op Ar device Ns ...
2220 .Xc
2221 Initiates an immediate on-demand TRIM operation for all of the free space in
2222 a pool. This operation informs the underlying storage devices of all blocks
2223 in the pool which are no longer allocated and allows thinly provisioned
2224 devices to reclaim the space.
2225 .Pp
2226 A manual on-demand TRIM operation can be initiated irrespective of the
2227 .Sy autotrim
2228 pool property setting. See the documentation for the
2229 .Sy autotrim
2230 property above for the types of vdev devices which can be trimmed.
2231 .Bl -tag -width Ds
.It Fl d, -secure
2233 Causes a secure TRIM to be initiated. When performing a secure TRIM, the
2234 device guarantees that data stored on the trimmed blocks has been erased.
2235 This requires support from the device and is not supported by all SSDs.
.It Fl r, -rate Ar rate
2237 Controls the rate at which the TRIM operation progresses. Without this
2238 option TRIM is executed as quickly as possible. The rate, expressed in bytes
2239 per second, is applied on a per-vdev basis and may be set differently for
2240 each leaf vdev.
2241 .It Fl c, -cancel
2242 Cancel trimming on the specified devices, or all eligible devices if none
2243 are specified.
2244 If one or more target devices are invalid or are not currently being
2245 trimmed, the command will fail and no cancellation will occur on any device.
.It Fl s, -suspend
2247 Suspend trimming on the specified devices, or all eligible devices if none
2248 are specified.
2249 If one or more target devices are invalid or are not currently being
2250 trimmed, the command will fail and no suspension will occur on any device.
2251 Trimming can then be resumed by running
2252 .Nm zpool Cm trim
2253 with no flags on the relevant target devices.
2254 .El
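.Pp
For example, assuming a pool named
.Em tank
backed by devices that support TRIM (the name is illustrative), a manual TRIM
could be started, suspended, and resumed with:
.Bd -literal
# zpool trim tank
# zpool trim -s tank
# zpool trim tank
.Ed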
2255 .It Xo
2256 .Nm
2257 .Cm set
2258 .Ar property Ns = Ns Ar value
2259 .Ar pool
2260 .Xc
2261 Sets the given property on the specified pool.
2262 See the
2263 .Sx Properties
2264 section for more information on what properties can be set and acceptable
2265 values.
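.Pp
For example, the
.Sy autotrim
property mentioned above could be enabled on a hypothetical pool named
.Em tank
with:
.Bd -literal
# zpool set autotrim=on tank
.Ed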
2266 .It Xo
2267 .Nm
2268 .Cm split
2269 .Op Fl gLlnP
2270 .Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
2271 .Op Fl R Ar root
2272 .Ar pool newpool
2273 .Op Ar device ...
2274 .Xc
2275 Splits devices off
2276 .Ar pool
2277 creating
2278 .Ar newpool .
2279 All vdevs in
2280 .Ar pool
2281 must be mirrors and the pool must not be in the process of resilvering.
2282 At the time of the split,
2283 .Ar newpool
2284 will be a replica of
2285 .Ar pool .
2286 By default, the
2287 last device in each mirror is split from
2288 .Ar pool
2289 to create
2290 .Ar newpool .
2291 .Pp
The optional device specification causes the specified device(s) to be
included in the new pool
.Ar newpool
and, for any mirror whose device is left unspecified,
the last device in that mirror is used, as in the default behavior.
2297 .Bl -tag -width Ds
2298 .It Fl g
2299 Display vdev GUIDs instead of the normal device names. These GUIDs
2300 can be used in place of device names for the zpool
2301 detach/offline/remove/replace commands.
2302 .It Fl L
2303 Display real paths for vdevs resolving all symbolic links. This can
2304 be used to look up the current block device name regardless of the
2305 .Pa /dev/disk/
2306 path used to open it.
2307 .It Fl l
Indicates that this command will request encryption keys for all encrypted
datasets it attempts to mount as it is bringing the new pool online.
Note that if any datasets have a
.Sy keylocation
of
.Sy prompt ,
this command will block waiting for the keys to be entered.
Without this flag, encrypted datasets will be left unavailable until the keys
are loaded.
2316 .It Fl n
Do a dry run; do not actually perform the split.
2318 Print out the expected configuration of
2319 .Ar newpool .
2320 .It Fl P
2321 Display full paths for vdevs instead of only the last component of
2322 the path. This can be used in conjunction with the
2323 .Fl L
2324 flag.
2325 .It Fl o Ar property Ns = Ns Ar value
2326 Sets the specified property for
2327 .Ar newpool .
2328 See the
2329 .Sx Properties
2330 section for more information on the available pool properties.
2331 .It Fl R Ar root
2332 Set
2333 .Sy altroot
2334 for
2335 .Ar newpool
2336 to
2337 .Ar root
2338 and automatically import it.
2339 .El
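.Pp
For example, assuming a pool named
.Em tank
made up of two-way mirrors (the names are illustrative), the expected layout
of a new pool
.Em tank2
could be previewed before performing the actual split:
.Bd -literal
# zpool split -n tank tank2
# zpool split tank tank2
.Ed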
2340 .It Xo
2341 .Nm
2342 .Cm status
2343 .Op Fl c Op Ar SCRIPT1 Ns Oo , Ns Ar SCRIPT2 Oc Ns ...
2344 .Op Fl DigLpPstvx
2345 .Op Fl T Sy u Ns | Ns Sy d
2346 .Oo Ar pool Oc Ns ...
2347 .Op Ar interval Op Ar count
2348 .Xc
2349 Displays the detailed health status for the given pools.
2350 If no
2351 .Ar pool
2352 is specified, then the status of each pool in the system is displayed.
2353 For more information on pool and device health, see the
2354 .Sx Device Failure and Recovery
2355 section.
2356 .Pp
2357 If a scrub or resilver is in progress, this command reports the percentage done
2358 and the estimated time to completion.
2359 Both of these are only approximate, because the amount of data in the pool and
2360 the other workloads on the system can change.
2361 .Bl -tag -width Ds
2362 .It Fl c Op Ar SCRIPT1 Ns Oo , Ns Ar SCRIPT2 Oc Ns ...
2363 Run a script (or scripts) on each vdev and include the output as a new column
2364 in the
2365 .Nm zpool Cm status
2366 output. See the
2367 .Fl c
2368 option of
2369 .Nm zpool Cm iostat
2370 for complete details.
2371 .It Fl i
2372 Display vdev initialization status.
2373 .It Fl g
2374 Display vdev GUIDs instead of the normal device names. These GUIDs
2375 can be used in place of device names for the zpool
2376 detach/offline/remove/replace commands.
2377 .It Fl L
2378 Display real paths for vdevs resolving all symbolic links. This can
2379 be used to look up the current block device name regardless of the
2380 .Pa /dev/disk/
2381 path used to open it.
2382 .It Fl p
2383 Display numbers in parsable (exact) values.
2384 .It Fl P
2385 Display full paths for vdevs instead of only the last component of
2386 the path. This can be used in conjunction with the
2387 .Fl L
2388 flag.
2389 .It Fl D
2390 Display a histogram of deduplication statistics, showing the allocated
2391 .Pq physically present on disk
2392 and referenced
2393 .Pq logically referenced in the pool
2394 block counts and sizes by reference count.
2395 .It Fl s
Display the number of leaf vdev slow IOs.
This is the number of IOs that did not complete within
.Sy zio_slow_io_ms
milliseconds (default 30 seconds).
This does not necessarily mean the IOs failed to complete, just that they took
an unreasonably long amount of time.
This may indicate a problem with the underlying storage.
2401 .It Fl t
2402 Display vdev TRIM status.
2403 .It Fl T Sy u Ns | Ns Sy d
2404 Display a time stamp.
2405 Specify
2406 .Sy u
2407 for a printed representation of the internal representation of time.
2408 See
2409 .Xr time 2 .
2410 Specify
2411 .Sy d
2412 for standard date format.
2413 See
2414 .Xr date 1 .
2415 .It Fl v
2416 Displays verbose data error information, printing out a complete list of all
2417 data errors since the last complete pool scrub.
2418 .It Fl x
2419 Only display status for pools that are exhibiting errors or are otherwise
2420 unavailable.
2421 Warnings about pools not using the latest on-disk format will not be included.
2422 .El
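.Pp
For example, a quick check that only reports pools with problems, or a
detailed view of a hypothetical pool named
.Em tank
including initialization and TRIM status, could be obtained with:
.Bd -literal
# zpool status -x
# zpool status -i -t tank
.Ed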
2423 .It Xo
2424 .Nm
2425 .Cm sync
2426 .Op Ar pool ...
2427 .Xc
This command forces all in-core dirty data to be written to the primary
pool storage and not the ZIL.
It will also update administrative information, including quota reporting.
Without arguments,
.Nm zpool Cm sync
will sync all pools on the system.
Otherwise, it will sync only the specified pool(s).
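.Pp
For example, all pools, or only a hypothetical pool named
.Em tank ,
could be synced with:
.Bd -literal
# zpool sync
# zpool sync tank
.Ed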
2434 .It Xo
2435 .Nm
2436 .Cm upgrade
2437 .Xc
2438 Displays pools which do not have all supported features enabled and pools
2439 formatted using a legacy ZFS version number.
2440 These pools can continue to be used, but some features may not be available.
2441 Use
2442 .Nm zpool Cm upgrade Fl a
2443 to enable all features on all pools.
2444 .It Xo
2445 .Nm
2446 .Cm upgrade
2447 .Fl v
2448 .Xc
2449 Displays legacy ZFS versions supported by the current software.
2450 See
2451 .Xr zpool-features 5
for a description of the feature flags supported by the current software.
2453 .It Xo
2454 .Nm
2455 .Cm upgrade
2456 .Op Fl V Ar version
2457 .Fl a Ns | Ns Ar pool Ns ...
2458 .Xc
2459 Enables all supported features on the given pool.
2460 Once this is done, the pool will no longer be accessible on systems that do not
2461 support feature flags.
2462 See
2463 .Xr zpool-features 5
2464 for details on compatibility with systems that support feature flags, but do not
2465 support all features enabled on the pool.
2466 .Bl -tag -width Ds
2467 .It Fl a
2468 Enables all supported features on all pools.
2469 .It Fl V Ar version
2470 Upgrade to the specified legacy version.
2471 If the
2472 .Fl V
2473 flag is specified, no features will be enabled on the pool.
2474 This option can only be used to increase the version number up to the last
2475 supported legacy version number.
2476 .El
2477 .It Xo
2478 .Nm
2479 .Cm version
2480 .Xc
2481 Displays the software version of the
2482 .Nm
2483 userland utility and the zfs kernel module.
2484 .El
2485 .Sh EXIT STATUS
2486 The following exit values are returned:
2487 .Bl -tag -width Ds
2488 .It Sy 0
2489 Successful completion.
2490 .It Sy 1
2491 An error occurred.
2492 .It Sy 2
2493 Invalid command line options were specified.
2494 .El
2495 .Sh EXAMPLES
2496 .Bl -tag -width Ds
2497 .It Sy Example 1 No Creating a RAID-Z Storage Pool
2498 The following command creates a pool with a single raidz root vdev that
2499 consists of six disks.
2500 .Bd -literal
2501 # zpool create tank raidz sda sdb sdc sdd sde sdf
2502 .Ed
2503 .It Sy Example 2 No Creating a Mirrored Storage Pool
2504 The following command creates a pool with two mirrors, where each mirror
2505 contains two disks.
2506 .Bd -literal
2507 # zpool create tank mirror sda sdb mirror sdc sdd
2508 .Ed
2509 .It Sy Example 3 No Creating a ZFS Storage Pool by Using Partitions
2510 The following command creates an unmirrored pool using two disk partitions.
2511 .Bd -literal
2512 # zpool create tank sda1 sdb2
2513 .Ed
2514 .It Sy Example 4 No Creating a ZFS Storage Pool by Using Files
2515 The following command creates an unmirrored pool using files.
2516 While not recommended, a pool based on files can be useful for experimental
2517 purposes.
2518 .Bd -literal
2519 # zpool create tank /path/to/file/a /path/to/file/b
2520 .Ed
2521 .It Sy Example 5 No Adding a Mirror to a ZFS Storage Pool
2522 The following command adds two mirrored disks to the pool
2523 .Em tank ,
2524 assuming the pool is already made up of two-way mirrors.
2525 The additional space is immediately available to any datasets within the pool.
2526 .Bd -literal
2527 # zpool add tank mirror sda sdb
2528 .Ed
2529 .It Sy Example 6 No Listing Available ZFS Storage Pools
2530 The following command lists all available pools on the system.
2531 In this case, the pool
2532 .Em zion
2533 is faulted due to a missing device.
2534 The results from this command are similar to the following:
2535 .Bd -literal
2536 # zpool list
NAME    SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
rpool  19.9G  8.43G  11.4G         -    33%    42%  1.00x  ONLINE  -
tank   61.5G  20.0G  41.5G         -    48%    32%  1.00x  ONLINE  -
zion       -      -      -         -      -      -      -  FAULTED -
2541 .Ed
2542 .It Sy Example 7 No Destroying a ZFS Storage Pool
2543 The following command destroys the pool
2544 .Em tank
2545 and any datasets contained within.
2546 .Bd -literal
2547 # zpool destroy -f tank
2548 .Ed
2549 .It Sy Example 8 No Exporting a ZFS Storage Pool
2550 The following command exports the devices in pool
2551 .Em tank
2552 so that they can be relocated or later imported.
2553 .Bd -literal
2554 # zpool export tank
2555 .Ed
2556 .It Sy Example 9 No Importing a ZFS Storage Pool
2557 The following command displays available pools, and then imports the pool
2558 .Em tank
2559 for use on the system.
2560 The results from this command are similar to the following:
2561 .Bd -literal
2562 # zpool import
   pool: tank
     id: 15451357997522795478
  state: ONLINE
 action: The pool can be imported using its name or numeric identifier.
 config:

        tank        ONLINE
          mirror    ONLINE
            sda     ONLINE
            sdb     ONLINE
2573
2574 # zpool import tank
2575 .Ed
2576 .It Sy Example 10 No Upgrading All ZFS Storage Pools to the Current Version
The following command upgrades all ZFS storage pools to the current version of
2578 the software.
2579 .Bd -literal
2580 # zpool upgrade -a
2581 This system is currently running ZFS version 2.
2582 .Ed
2583 .It Sy Example 11 No Managing Hot Spares
2584 The following command creates a new pool with an available hot spare:
2585 .Bd -literal
2586 # zpool create tank mirror sda sdb spare sdc
2587 .Ed
2588 .Pp
2589 If one of the disks were to fail, the pool would be reduced to the degraded
2590 state.
2591 The failed device can be replaced using the following command:
2592 .Bd -literal
2593 # zpool replace tank sda sdd
2594 .Ed
2595 .Pp
2596 Once the data has been resilvered, the spare is automatically removed and is
2597 made available for use should another device fail.
2598 The hot spare can be permanently removed from the pool using the following
2599 command:
2600 .Bd -literal
2601 # zpool remove tank sdc
2602 .Ed
2603 .It Sy Example 12 No Creating a ZFS Pool with Mirrored Separate Intent Logs
The following command creates a ZFS storage pool consisting of two two-way
mirrors and mirrored log devices:
2606 .Bd -literal
2607 # zpool create pool mirror sda sdb mirror sdc sdd log mirror \\
2608 sde sdf
2609 .Ed
2610 .It Sy Example 13 No Adding Cache Devices to a ZFS Pool
2611 The following command adds two disks for use as cache devices to a ZFS storage
2612 pool:
2613 .Bd -literal
2614 # zpool add pool cache sdc sdd
2615 .Ed
2616 .Pp
2617 Once added, the cache devices gradually fill with content from main memory.
2618 Depending on the size of your cache devices, it could take over an hour for
2619 them to fill.
2620 Capacity and reads can be monitored using the
2621 .Cm iostat
subcommand as follows:
2623 .Bd -literal
2624 # zpool iostat -v pool 5
2625 .Ed
2626 .It Sy Example 14 No Removing a Mirrored top-level (Log or Data) Device
2627 The following commands remove the mirrored log device
2628 .Sy mirror-2
2629 and mirrored top-level data device
2630 .Sy mirror-1 .
2631 .Pp
2632 Given this configuration:
2633 .Bd -literal
  pool: tank
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            sda     ONLINE       0     0     0
            sdb     ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            sdc     ONLINE       0     0     0
            sdd     ONLINE       0     0     0
        logs
          mirror-2  ONLINE       0     0     0
            sde     ONLINE       0     0     0
            sdf     ONLINE       0     0     0
2651 .Ed
2652 .Pp
2653 The command to remove the mirrored log
2654 .Sy mirror-2
2655 is:
2656 .Bd -literal
2657 # zpool remove tank mirror-2
2658 .Ed
2659 .Pp
2660 The command to remove the mirrored data
2661 .Sy mirror-1
2662 is:
2663 .Bd -literal
2664 # zpool remove tank mirror-1
2665 .Ed
2666 .It Sy Example 15 No Displaying expanded space on a device
2667 The following command displays the detailed information for the pool
2668 .Em data .
This pool is composed of a single raidz vdev, where one of its devices has
increased its capacity by 10GB.
2671 In this example, the pool will not be able to utilize this extra capacity until
2672 all the devices under the raidz vdev have been expanded.
2673 .Bd -literal
2674 # zpool list -v data
NAME         SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
data        23.9G  14.6G  9.30G         -    48%    61%  1.00x  ONLINE  -
  raidz1    23.9G  14.6G  9.30G         -    48%
    sda         -      -      -         -      -
    sdb         -      -      -       10G      -
    sdc         -      -      -         -      -
2681 .Ed
2682 .It Sy Example 16 No Adding output columns
2683 Additional columns can be added to the
2684 .Nm zpool Cm status
2685 and
2686 .Nm zpool Cm iostat
output with the
2688 .Fl c
2689 option.
2690 .Bd -literal
2691 # zpool status -c vendor,model,size
NAME      STATE  READ WRITE CKSUM vendor  model        size
tank      ONLINE    0     0     0
mirror-0  ONLINE    0     0     0
  U1      ONLINE    0     0     0 SEAGATE ST8000NM0075 7.3T
  U10     ONLINE    0     0     0 SEAGATE ST8000NM0075 7.3T
  U11     ONLINE    0     0     0 SEAGATE ST8000NM0075 7.3T
  U12     ONLINE    0     0     0 SEAGATE ST8000NM0075 7.3T
  U13     ONLINE    0     0     0 SEAGATE ST8000NM0075 7.3T
  U14     ONLINE    0     0     0 SEAGATE ST8000NM0075 7.3T
2701
2702 # zpool iostat -vc slaves
              capacity     operations     bandwidth
pool        alloc   free   read  write   read  write  slaves
----------  -----  -----  -----  -----  -----  -----  ---------
tank        20.4G  7.23T     26    152  20.7M  21.6M
  mirror    20.4G  7.23T     26    152  20.7M  21.6M
    U1          -      -      0     31  1.46K  20.6M  sdb sdff
    U10         -      -      0      1  3.77K  13.3K  sdas sdgw
    U11         -      -      0      1   288K  13.3K  sdat sdgx
    U12         -      -      0      1  78.4K  13.3K  sdau sdgy
    U13         -      -      0      1   128K  13.3K  sdav sdgz
    U14         -      -      0      1  63.2K  13.3K  sdfk sdg
2714 .Ed
2715 .El
2716 .Sh ENVIRONMENT VARIABLES
2717 .Bl -tag -width "ZFS_ABORT"
2718 .It Ev ZFS_ABORT
2719 Cause
2720 .Nm zpool
2721 to dump core on exit for the purposes of running
2722 .Sy ::findleaks .
2723 .El
2724 .Bl -tag -width "ZPOOL_IMPORT_PATH"
2725 .It Ev ZPOOL_IMPORT_PATH
2726 The search path for devices or files to use with the pool. This is a colon-separated list of directories in which
2727 .Nm zpool
2728 looks for device nodes and files.
2729 Similar to the
2730 .Fl d
2731 option in
2732 .Nm zpool import .
2733 .El
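.Pp
For example (the directories shown are illustrative), the search path could be
restricted to stable device names for a single invocation:
.Bd -literal
# ZPOOL_IMPORT_PATH=/dev/disk/by-vdev:/dev/disk/by-id zpool import -a
.Ed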
2734 .Bl -tag -width "ZPOOL_VDEV_NAME_GUID"
2735 .It Ev ZPOOL_VDEV_NAME_GUID
2736 Cause
2737 .Nm zpool
2738 subcommands to output vdev guids by default. This behavior is identical to the
.Nm zpool Cm status Fl g
2740 command line option.
2741 .El
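.Pp
For example (the pool name and the value 1 are illustrative), vdev GUIDs could
be shown for a single invocation by setting the variable only for that
command:
.Bd -literal
# ZPOOL_VDEV_NAME_GUID=1 zpool status tank
.Ed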
2742 .Bl -tag -width "ZPOOL_VDEV_NAME_FOLLOW_LINKS"
2743 .It Ev ZPOOL_VDEV_NAME_FOLLOW_LINKS
2744 Cause
2745 .Nm zpool
2746 subcommands to follow links for vdev names by default. This behavior is identical to the
.Nm zpool Cm status Fl L
2748 command line option.
2749 .El
2750 .Bl -tag -width "ZPOOL_VDEV_NAME_PATH"
2751 .It Ev ZPOOL_VDEV_NAME_PATH
2752 Cause
2753 .Nm zpool
2754 subcommands to output full vdev path names by default. This
2755 behavior is identical to the
.Nm zpool Cm status Fl P
2757 command line option.
2758 .El
2759 .Bl -tag -width "ZFS_VDEV_DEVID_OPT_OUT"
2760 .It Ev ZFS_VDEV_DEVID_OPT_OUT
Older ZFS on Linux implementations had issues when attempting to display pool
config VDEV names if a
.Sy devid
NVP value was present in the pool's config.
.Pp
For example, a pool that originated on the illumos platform would have a devid
value in the config and
.Nm zpool status
would fail when listing the config.
This would also be true for future Linux-based pools.
2771 .Pp
2772 A pool can be stripped of any
2773 .Sy devid
2774 values on import or prevented from adding
2775 them on
2776 .Nm zpool create
2777 or
2778 .Nm zpool add
2779 by setting
2780 .Sy ZFS_VDEV_DEVID_OPT_OUT .
2781 .El
2782 .Bl -tag -width "ZPOOL_SCRIPTS_AS_ROOT"
2783 .It Ev ZPOOL_SCRIPTS_AS_ROOT
Allow a privileged user to run
2785 .Nm zpool status/iostat
2786 with the
2787 .Fl c
2788 option. Normally, only unprivileged users are allowed to run
2789 .Fl c .
2790 .El
2791 .Bl -tag -width "ZPOOL_SCRIPTS_PATH"
2792 .It Ev ZPOOL_SCRIPTS_PATH
2793 The search path for scripts when running
2794 .Nm zpool status/iostat
2795 with the
2796 .Fl c
2797 option. This is a colon-separated list of directories and overrides the default
2798 .Pa ~/.zpool.d
2799 and
2800 .Pa /etc/zfs/zpool.d
2801 search paths.
2802 .El
2803 .Bl -tag -width "ZPOOL_SCRIPTS_ENABLED"
2804 .It Ev ZPOOL_SCRIPTS_ENABLED
2805 Allow a user to run
2806 .Nm zpool status/iostat
2807 with the
2808 .Fl c
2809 option. If
2810 .Sy ZPOOL_SCRIPTS_ENABLED
2811 is not set, it is assumed that the user is allowed to run
2812 .Nm zpool status/iostat -c .
2813 .El
2814 .Sh INTERFACE STABILITY
2815 .Sy Evolving
2816 .Sh SEE ALSO
2817 .Xr zfs-events 5 ,
2818 .Xr zfs-module-parameters 5 ,
2819 .Xr zpool-features 5 ,
2820 .Xr zed 8 ,
2821 .Xr zfs 8