.\"
.\" CDDL HEADER START
.\"
.\" The contents of this file are subject to the terms of the
.\" Common Development and Distribution License (the "License").
.\" You may not use this file except in compliance with the License.
.\"
.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
.\" or http://www.opensolaris.org/os/licensing.
.\" See the License for the specific language governing permissions
.\" and limitations under the License.
.\"
.\" When distributing Covered Code, include this CDDL HEADER in each
.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
.\" If applicable, add the following below this CDDL HEADER, with the
.\" fields enclosed by brackets "[]" replaced with your own identifying
.\" information: Portions Copyright [yyyy] [name of copyright owner]
.\"
.\" CDDL HEADER END
.\"
.\"
.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved.
.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
.\" Copyright (c) 2017 Datto Inc.
.\" Copyright (c) 2018 George Melikov. All Rights Reserved.
.\" Copyright 2017 Nexenta Systems, Inc.
.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved.
.\"
.Dd November 29, 2018
.Dt ZPOOL 8 SMM
.Os Linux
.Sh NAME
.Nm zpool
.Nd configure ZFS storage pools
.Sh SYNOPSIS
37.Nm
38.Fl ?
39.Nm
40.Cm add
41.Op Fl fgLnP
42.Oo Fl o Ar property Ns = Ns Ar value Oc
43.Ar pool vdev Ns ...
44.Nm
45.Cm attach
46.Op Fl f
47.Oo Fl o Ar property Ns = Ns Ar value Oc
48.Ar pool device new_device
49.Nm
50.Cm checkpoint
51.Op Fl d, -discard
52.Ar pool
53.Nm
54.Cm clear
55.Ar pool
56.Op Ar device
57.Nm
58.Cm create
59.Op Fl dfn
60.Op Fl m Ar mountpoint
61.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
62.Oo Fl o Ar feature@feature Ns = Ns Ar value Oc
63.Oo Fl O Ar file-system-property Ns = Ns Ar value Oc Ns ...
64.Op Fl R Ar root
65.Ar pool vdev Ns ...
66.Nm
67.Cm destroy
68.Op Fl f
69.Ar pool
70.Nm
71.Cm detach
72.Ar pool device
73.Nm
74.Cm events
.Op Fl vHf Oo Ar pool Oc | Fl c
76.Nm
77.Cm export
78.Op Fl a
79.Op Fl f
80.Ar pool Ns ...
81.Nm
82.Cm get
83.Op Fl Hp
84.Op Fl o Ar field Ns Oo , Ns Ar field Oc Ns ...
85.Sy all Ns | Ns Ar property Ns Oo , Ns Ar property Oc Ns ...
.Oo Ar pool Oc Ns ...
87.Nm
88.Cm history
89.Op Fl il
90.Oo Ar pool Oc Ns ...
91.Nm
92.Cm import
93.Op Fl D
.Op Fl d Ar dir Ns | Ns device
95.Nm
96.Cm import
97.Fl a
.Op Fl DflmN
.Op Fl F Oo Fl n Oc Oo Fl T Oc Oo Fl X Oc
.Op Fl -rewind-to-checkpoint
.Op Fl c Ar cachefile Ns | Ns Fl d Ar dir Ns | Ns device
102.Op Fl o Ar mntopts
103.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
104.Op Fl R Ar root
105.Nm
106.Cm import
.Op Fl Dflm
.Op Fl F Oo Fl n Oc Oo Fl T Oc Oo Fl X Oc
.Op Fl -rewind-to-checkpoint
.Op Fl c Ar cachefile Ns | Ns Fl d Ar dir Ns | Ns device
111.Op Fl o Ar mntopts
112.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
113.Op Fl R Ar root
114.Op Fl s
115.Ar pool Ns | Ns Ar id
116.Op Ar newpool Oo Fl t Oc
117.Nm
.Cm initialize
.Op Fl c | Fl s
120.Ar pool
121.Op Ar device Ns ...
122.Nm
123.Cm iostat
124.Op Oo Oo Fl c Ar SCRIPT Oc Oo Fl lq Oc Oc Ns | Ns Fl rw
125.Op Fl T Sy u Ns | Ns Sy d
.Op Fl ghHLnpPvy
127.Oo Oo Ar pool Ns ... Oc Ns | Ns Oo Ar pool vdev Ns ... Oc Ns | Ns Oo Ar vdev Ns ... Oc Oc
128.Op Ar interval Op Ar count
129.Nm
130.Cm labelclear
131.Op Fl f
132.Ar device
133.Nm
134.Cm list
135.Op Fl HgLpPv
136.Op Fl o Ar property Ns Oo , Ns Ar property Oc Ns ...
137.Op Fl T Sy u Ns | Ns Sy d
138.Oo Ar pool Oc Ns ...
139.Op Ar interval Op Ar count
140.Nm
141.Cm offline
142.Op Fl f
143.Op Fl t
144.Ar pool Ar device Ns ...
145.Nm
146.Cm online
147.Op Fl e
148.Ar pool Ar device Ns ...
149.Nm
150.Cm reguid
151.Ar pool
152.Nm
153.Cm reopen
.Op Fl n
155.Ar pool
156.Nm
157.Cm remove
.Op Fl np
159.Ar pool Ar device Ns ...
160.Nm
161.Cm remove
162.Fl s
163.Ar pool
164.Nm
165.Cm replace
166.Op Fl f
167.Oo Fl o Ar property Ns = Ns Ar value Oc
168.Ar pool Ar device Op Ar new_device
169.Nm
170.Cm resilver
171.Ar pool Ns ...
172.Nm
.Cm scrub
.Op Fl s | Fl p
175.Ar pool Ns ...
176.Nm
177.Cm trim
178.Op Fl d
179.Op Fl r Ar rate
180.Op Fl c | Fl s
181.Ar pool
182.Op Ar device Ns ...
183.Nm
184.Cm set
185.Ar property Ns = Ns Ar value
186.Ar pool
187.Nm
188.Cm split
.Op Fl gLlnP
190.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
191.Op Fl R Ar root
192.Ar pool newpool
193.Oo Ar device Oc Ns ...
194.Nm
195.Cm status
196.Oo Fl c Ar SCRIPT Oc
.Op Fl DigLpPstvx
198.Op Fl T Sy u Ns | Ns Sy d
199.Oo Ar pool Oc Ns ...
200.Op Ar interval Op Ar count
201.Nm
202.Cm sync
203.Oo Ar pool Oc Ns ...
204.Nm
205.Cm upgrade
206.Nm
207.Cm upgrade
208.Fl v
209.Nm
210.Cm upgrade
211.Op Fl V Ar version
212.Fl a Ns | Ns Ar pool Ns ...
213.Sh DESCRIPTION
214The
215.Nm
216command configures ZFS storage pools.
217A storage pool is a collection of devices that provides physical storage and
218data replication for ZFS datasets.
219All datasets within a storage pool share the same space.
220See
221.Xr zfs 8
222for information on managing datasets.
223.Ss Virtual Devices (vdevs)
224A "virtual device" describes a single device or a collection of devices
225organized according to certain performance and fault characteristics.
226The following virtual devices are supported:
227.Bl -tag -width Ds
228.It Sy disk
229A block device, typically located under
230.Pa /dev .
231ZFS can use individual slices or partitions, though the recommended mode of
232operation is to use whole disks.
233A disk can be specified by a full path, or it can be a shorthand name
234.Po the relative portion of the path under
235.Pa /dev
236.Pc .
237A whole disk can be specified by omitting the slice or partition designation.
238For example,
239.Pa sda
240is equivalent to
241.Pa /dev/sda .
242When given a whole disk, ZFS automatically labels the disk, if necessary.
243.It Sy file
244A regular file.
245The use of files as a backing store is strongly discouraged.
246It is designed primarily for experimental purposes, as the fault tolerance of a
247file is only as good as the file system of which it is a part.
248A file must be specified by a full path.
249.It Sy mirror
250A mirror of two or more devices.
251Data is replicated in an identical fashion across all components of a mirror.
252A mirror with N disks of size X can hold X bytes and can withstand (N-1) devices
253failing before data integrity is compromised.
254.It Sy raidz , raidz1 , raidz2 , raidz3
255A variation on RAID-5 that allows for better distribution of parity and
256eliminates the RAID-5
257.Qq write hole
258.Pq in which data and parity become inconsistent after a power loss .
259Data and parity is striped across all disks within a raidz group.
260.Pp
261A raidz group can have single-, double-, or triple-parity, meaning that the
262raidz group can sustain one, two, or three failures, respectively, without
263losing any data.
264The
265.Sy raidz1
266vdev type specifies a single-parity raidz group; the
267.Sy raidz2
268vdev type specifies a double-parity raidz group; and the
269.Sy raidz3
270vdev type specifies a triple-parity raidz group.
271The
272.Sy raidz
273vdev type is an alias for
274.Sy raidz1 .
275.Pp
276A raidz group with N disks of size X with P parity disks can hold approximately
277(N-P)*X bytes and can withstand P device(s) failing before data integrity is
278compromised.
279The minimum number of devices in a raidz group is one more than the number of
280parity disks.
281The recommended number is between 3 and 9 to help increase performance.
282.It Sy spare
A pseudo-vdev which keeps track of available hot spares for a pool.
284For more information, see the
285.Sx Hot Spares
286section.
287.It Sy log
288A separate intent log device.
289If more than one log device is specified, then writes are load-balanced between
290devices.
291Log devices can be mirrored.
292However, raidz vdev types are not supported for the intent log.
293For more information, see the
294.Sx Intent Log
295section.
.It Sy dedup
A device dedicated solely for dedup data.
The redundancy of this device should match the redundancy of the other normal
devices in the pool. If more than one dedup device is specified, then
allocations are load-balanced between those devices.
.It Sy special
A device dedicated solely for allocating various kinds of internal metadata,
and optionally small file data.
The redundancy of this device should match the redundancy of the other normal
devices in the pool. If more than one special device is specified, then
allocations are load-balanced between those devices.
.Pp
For more information on special allocations, see the
.Sx Special Allocation Class
section.
311.It Sy cache
312A device used to cache storage pool data.
313A cache device cannot be configured as a mirror or raidz group.
314For more information, see the
315.Sx Cache Devices
316section.
317.El
318.Pp
319Virtual devices cannot be nested, so a mirror or raidz virtual device can only
320contain files or disks.
321Mirrors of mirrors
322.Pq or other combinations
323are not allowed.
324.Pp
325A pool can have any number of virtual devices at the top of the configuration
326.Po known as
327.Qq root vdevs
328.Pc .
329Data is dynamically distributed across all top-level devices to balance data
330among devices.
331As new virtual devices are added, ZFS automatically places data on the newly
332available devices.
333.Pp
334Virtual devices are specified one at a time on the command line, separated by
335whitespace.
336The keywords
337.Sy mirror
338and
339.Sy raidz
340are used to distinguish where a group ends and another begins.
341For example, the following creates two root vdevs, each a mirror of two disks:
342.Bd -literal
343# zpool create mypool mirror sda sdb mirror sdc sdd
344.Ed
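.Pp
Similarly, the following sketch (the disk names are only placeholders) would
create a single root vdev consisting of a double-parity raidz group:
.Bd -literal
# zpool create mypool raidz2 sda sdb sdc sdd sde sdf
.Ed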
345.Ss Device Failure and Recovery
346ZFS supports a rich set of mechanisms for handling device failure and data
347corruption.
348All metadata and data is checksummed, and ZFS automatically repairs bad data
349from a good copy when corruption is detected.
350.Pp
351In order to take advantage of these features, a pool must make use of some form
352of redundancy, using either mirrored or raidz groups.
353While ZFS supports running in a non-redundant configuration, where each root
354vdev is simply a disk or file, this is strongly discouraged.
355A single case of bit corruption can render some or all of your data unavailable.
356.Pp
357A pool's health status is described by one of three states: online, degraded,
358or faulted.
359An online pool has all devices operating normally.
360A degraded pool is one in which one or more devices have failed, but the data is
361still available due to a redundant configuration.
362A faulted pool has corrupted metadata, or one or more faulted devices, and
363insufficient replicas to continue functioning.
364.Pp
The health of a top-level vdev, such as a mirror or raidz device, is
366potentially impacted by the state of its associated vdevs, or component
367devices.
368A top-level vdev or component device is in one of the following states:
369.Bl -tag -width "DEGRADED"
370.It Sy DEGRADED
371One or more top-level vdevs is in the degraded state because one or more
372component devices are offline.
373Sufficient replicas exist to continue functioning.
374.Pp
375One or more component devices is in the degraded or faulted state, but
376sufficient replicas exist to continue functioning.
377The underlying conditions are as follows:
378.Bl -bullet
379.It
380The number of checksum errors exceeds acceptable levels and the device is
381degraded as an indication that something may be wrong.
382ZFS continues to use the device as necessary.
383.It
384The number of I/O errors exceeds acceptable levels.
385The device could not be marked as faulted because there are insufficient
386replicas to continue functioning.
387.El
388.It Sy FAULTED
389One or more top-level vdevs is in the faulted state because one or more
390component devices are offline.
391Insufficient replicas exist to continue functioning.
392.Pp
393One or more component devices is in the faulted state, and insufficient
394replicas exist to continue functioning.
395The underlying conditions are as follows:
396.Bl -bullet
397.It
The device could be opened, but the contents did not match expected values.
399.It
400The number of I/O errors exceeds acceptable levels and the device is faulted to
401prevent further use of the device.
402.El
403.It Sy OFFLINE
404The device was explicitly taken offline by the
405.Nm zpool Cm offline
406command.
407.It Sy ONLINE
The device is online and functioning.
409.It Sy REMOVED
410The device was physically removed while the system was running.
411Device removal detection is hardware-dependent and may not be supported on all
412platforms.
413.It Sy UNAVAIL
414The device could not be opened.
415If a pool is imported when a device was unavailable, then the device will be
416identified by a unique identifier instead of its path since the path was never
417correct in the first place.
418.El
419.Pp
420If a device is removed and later re-attached to the system, ZFS attempts
421to put the device online automatically.
422Device attach detection is hardware-dependent and might not be supported on all
423platforms.
424.Ss Hot Spares
425ZFS allows devices to be associated with pools as
426.Qq hot spares .
427These devices are not actively used in the pool, but when an active device
428fails, it is automatically replaced by a hot spare.
429To create a pool with hot spares, specify a
430.Sy spare
431vdev with any number of devices.
432For example,
433.Bd -literal
# zpool create pool mirror sda sdb spare sdc sdd
435.Ed
436.Pp
437Spares can be shared across multiple pools, and can be added with the
438.Nm zpool Cm add
439command and removed with the
440.Nm zpool Cm remove
441command.
442Once a spare replacement is initiated, a new
443.Sy spare
444vdev is created within the configuration that will remain there until the
445original device is replaced.
446At this point, the hot spare becomes available again if another device fails.
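.Pp
For example, a spare could later be added to or removed from an existing pool
(the device name here is only a placeholder):
.Bd -literal
# zpool add pool spare sde
# zpool remove pool sde
.Ed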
447.Pp
If a pool has a shared spare that is currently being used, the pool cannot be
449exported since other pools may use this shared spare, which may lead to
450potential data corruption.
451.Pp
452Shared spares add some risk. If the pools are imported on different hosts, and
453both pools suffer a device failure at the same time, both could attempt to use
454the spare at the same time. This may not be detected, resulting in data
455corruption.
456.Pp
An in-progress spare replacement can be cancelled by detaching the hot spare.
458If the original faulted device is detached, then the hot spare assumes its
459place in the configuration, and is removed from the spare list of all active
460pools.
461.Pp
Spares cannot replace log devices.
463.Ss Intent Log
464The ZFS Intent Log (ZIL) satisfies POSIX requirements for synchronous
465transactions.
466For instance, databases often require their transactions to be on stable storage
467devices when returning from a system call.
468NFS and other applications can also use
469.Xr fsync 2
470to ensure data stability.
471By default, the intent log is allocated from blocks within the main pool.
472However, it might be possible to get better performance using separate intent
473log devices such as NVRAM or a dedicated disk.
474For example:
475.Bd -literal
476# zpool create pool sda sdb log sdc
477.Ed
478.Pp
479Multiple log devices can also be specified, and they can be mirrored.
480See the
481.Sx EXAMPLES
482section for an example of mirroring multiple log devices.
483.Pp
484Log devices can be added, replaced, attached, detached and removed. In
485addition, log devices are imported and exported as part of the pool
486that contains them.
Mirrored devices can be removed by specifying the top-level mirror vdev.
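.Pp
As a sketch (device names are placeholders), a mirrored log could be added to
an existing pool with:
.Bd -literal
# zpool add pool log mirror sdd sde
.Ed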
488.Ss Cache Devices
489Devices can be added to a storage pool as
490.Qq cache devices .
491These devices provide an additional layer of caching between main memory and
492disk.
493For read-heavy workloads, where the working set size is much larger than what
can be cached in main memory, using cache devices allows much more of this
495working set to be served from low latency media.
496Using cache devices provides the greatest performance improvement for random
497read-workloads of mostly static content.
498.Pp
499To create a pool with cache devices, specify a
500.Sy cache
501vdev with any number of devices.
502For example:
503.Bd -literal
504# zpool create pool sda sdb cache sdc sdd
505.Ed
506.Pp
507Cache devices cannot be mirrored or part of a raidz configuration.
508If a read error is encountered on a cache device, that read I/O is reissued to
509the original storage pool device, which might be part of a mirrored or raidz
510configuration.
511.Pp
512The content of the cache devices is considered volatile, as is the case with
513other system caches.
514.Ss Pool checkpoint
Before starting critical procedures that include destructive actions (e.g.
516.Nm zfs Cm destroy
517), an administrator can checkpoint the pool's state and in the case of a
518mistake or failure, rewind the entire pool back to the checkpoint.
519Otherwise, the checkpoint can be discarded when the procedure has completed
520successfully.
521.Pp
522A pool checkpoint can be thought of as a pool-wide snapshot and should be used
523with care as it contains every part of the pool's state, from properties to vdev
524configuration.
Thus, while a pool has a checkpoint, certain operations are not allowed;
specifically, vdev removal/attach/detach, mirror splitting, and
changing the pool's guid.
528Adding a new vdev is supported but in the case of a rewind it will have to be
529added again.
530Finally, users of this feature should keep in mind that scrubs in a pool that
531has a checkpoint do not repair checkpointed data.
532.Pp
533To create a checkpoint for a pool:
534.Bd -literal
535# zpool checkpoint pool
536.Ed
537.Pp
538To later rewind to its checkpointed state, you need to first export it and
539then rewind it during import:
540.Bd -literal
541# zpool export pool
542# zpool import --rewind-to-checkpoint pool
543.Ed
544.Pp
545To discard the checkpoint from a pool:
546.Bd -literal
547# zpool checkpoint -d pool
548.Ed
549.Pp
550Dataset reservations (controlled by the
551.Nm reservation
552or
553.Nm refreservation
554zfs properties) may be unenforceable while a checkpoint exists, because the
555checkpoint is allowed to consume the dataset's reservation.
556Finally, data that is part of the checkpoint but has been freed in the
557current state of the pool won't be scanned during a scrub.
558.Ss Special Allocation Class
559The allocations in the special class are dedicated to specific block types.
560By default this includes all metadata, the indirect blocks of user data, and
561any dedup data. The class can also be provisioned to accept a limited
562percentage of small file data blocks.
563.Pp
564A pool must always have at least one general (non-specified) vdev before
565other devices can be assigned to the special class. If the special class
566becomes full, then allocations intended for it will spill back into the
567normal class.
568.Pp
569Dedup data can be excluded from the special class by setting the
570.Sy zfs_ddt_data_is_special
571zfs module parameter to false (0).
572.Pp
573Inclusion of small file blocks in the special class is opt-in. Each dataset
574can control the size of small file blocks allowed in the special class by
575setting the
576.Sy special_small_blocks
dataset property. It defaults to zero, so you must opt in by setting it to a
578non-zero value. See
579.Xr zfs 8
580for more info on setting this property.
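.Pp
As an illustrative sketch (the device names and the 32K threshold are
placeholders), a pool could be created with a mirrored special vdev, and small
file blocks could then be opted in for the root dataset:
.Bd -literal
# zpool create pool raidz sda sdb sdc special mirror sdd sde
# zfs set special_small_blocks=32K pool
.Ed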
581.Ss Properties
582Each pool has several properties associated with it.
583Some properties are read-only statistics while others are configurable and
584change the behavior of the pool.
585.Pp
586The following are read-only properties:
587.Bl -tag -width Ds
588.It Cm allocated
589Amount of storage used within the pool.
590See
591.Sy fragmentation
592and
593.Sy free
594for more information.
595.It Sy capacity
596Percentage of pool space used.
597This property can also be referred to by its shortened column name,
598.Sy cap .
599.It Sy expandsize
Amount of uninitialized space within the pool or device that can be used to
601increase the total capacity of the pool.
602Uninitialized space consists of any space on an EFI labeled vdev which has not
603been brought online
604.Po e.g, using
605.Nm zpool Cm online Fl e
606.Pc .
607This space occurs when a LUN is dynamically expanded.
608.It Sy fragmentation
609The amount of fragmentation in the pool. As the amount of space
610.Sy allocated
611increases, it becomes more difficult to locate
612.Sy free
613space. This may result in lower write performance compared to pools with more
614unfragmented free space.
.It Sy free
The amount of free space available in the pool.
617By contrast, the
618.Xr zfs 8
619.Sy available
620property describes how much new data can be written to ZFS filesystems/volumes.
621The zpool
622.Sy free
623property is not generally useful for this purpose, and can be substantially more than the zfs
624.Sy available
space. This discrepancy is due to several factors, including raidz parity; zfs
626reservation, quota, refreservation, and refquota properties; and space set aside by
627.Sy spa_slop_shift
628(see
629.Xr zfs-module-parameters 5
630for more information).
.It Sy freeing
After a file system or snapshot is destroyed, the space it was using is
633returned to the pool asynchronously.
634.Sy freeing
635is the amount of space remaining to be reclaimed.
636Over time
637.Sy freeing
638will decrease while
639.Sy free
640increases.
641.It Sy health
642The current health of the pool.
643Health can be one of
.Sy ONLINE , DEGRADED , FAULTED , OFFLINE , REMOVED , UNAVAIL .
645.It Sy guid
A unique identifier for the pool.
647.It Sy load_guid
648A unique identifier for the pool.
649Unlike the
650.Sy guid
property, this identifier is generated every time we load the pool (i.e. it does
652not persist across imports/exports) and never changes while the pool is loaded
653(even if a
654.Sy reguid
655operation takes place).
.It Sy size
Total size of the storage pool.
658.It Sy unsupported@ Ns Em feature_guid
659Information about unsupported features that are enabled on the pool.
660See
661.Xr zpool-features 5
662for details.
663.El
664.Pp
665The space usage properties report actual physical space available to the
666storage pool.
667The physical space can be different from the total amount of space that any
668contained datasets can actually use.
669The amount of space used in a raidz configuration depends on the characteristics
670of the data being written.
671In addition, ZFS reserves some space for internal accounting that the
672.Xr zfs 8
673command takes into account, but the
674.Nm
675command does not.
676For non-full pools of a reasonable size, these effects should be invisible.
677For small pools, or pools that are close to being completely full, these
678discrepancies may become more noticeable.
679.Pp
The following property can be set at creation time and import time:
681.Bl -tag -width Ds
682.It Sy altroot
683Alternate root directory.
684If set, this directory is prepended to any mount points within the pool.
685This can be used when examining an unknown pool where the mount points cannot be
686trusted, or in an alternate boot environment, where the typical paths are not
687valid.
688.Sy altroot
689is not a persistent property.
690It is valid only while the system is up.
691Setting
692.Sy altroot
693defaults to using
694.Sy cachefile Ns = Ns Sy none ,
695though this may be overridden using an explicit setting.
696.El
697.Pp
698The following property can be set only at import time:
699.Bl -tag -width Ds
700.It Sy readonly Ns = Ns Sy on Ns | Ns Sy off
701If set to
702.Sy on ,
703the pool will be imported in read-only mode.
704This property can also be referred to by its shortened column name,
705.Sy rdonly .
706.El
707.Pp
708The following properties can be set at creation time and import time, and later
709changed with the
710.Nm zpool Cm set
711command:
712.Bl -tag -width Ds
713.It Sy ashift Ns = Ns Sy ashift
714Pool sector size exponent, to the power of
715.Sy 2
716(internally referred to as
717.Sy ashift
). Values from 9 to 16, inclusive, are valid; also, the
719value 0 (the default) means to auto-detect using the kernel's block
720layer and a ZFS internal exception list. I/O operations will be aligned
721to the specified size boundaries. Additionally, the minimum (disk)
722write size will be set to the specified size, so this represents a
723space vs. performance trade-off. For optimal performance, the pool
724sector size should be greater than or equal to the sector size of the
725underlying disks. The typical case for setting this property is when
726performance is important and the underlying disks use 4KiB sectors but
727report 512B sectors to the OS (for compatibility reasons); in that
728case, set
729.Sy ashift=12
730(which is 1<<12 = 4096). When set, this property is
731used as the default hint value in subsequent vdev operations (add,
732attach and replace). Changing this value will not modify any existing
733vdev, not even on disk replacement; however it can be used, for
734instance, to replace a dying 512B sectors disk with a newer 4KiB
735sectors device: this will probably result in bad performance but at the
736same time could prevent loss of data.
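.Pp
For example, a sketch of creating a pool with 4KiB alignment on placeholder
disks:
.Bd -literal
# zpool create -o ashift=12 tank mirror sda sdb
.Ed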
737.It Sy autoexpand Ns = Ns Sy on Ns | Ns Sy off
738Controls automatic pool expansion when the underlying LUN is grown.
739If set to
740.Sy on ,
741the pool will be resized according to the size of the expanded device.
742If the device is part of a mirror or raidz then all devices within that
743mirror/raidz group must be expanded before the new space is made available to
744the pool.
745The default behavior is
746.Sy off .
747This property can also be referred to by its shortened column name,
748.Sy expand .
749.It Sy autoreplace Ns = Ns Sy on Ns | Ns Sy off
750Controls automatic device replacement.
751If set to
752.Sy off ,
753device replacement must be initiated by the administrator by using the
754.Nm zpool Cm replace
755command.
756If set to
757.Sy on ,
758any new device, found in the same physical location as a device that previously
759belonged to the pool, is automatically formatted and replaced.
760The default behavior is
761.Sy off .
762This property can also be referred to by its shortened column name,
763.Sy replace .
764Autoreplace can also be used with virtual disks (like device
mapper) provided that you use the /dev/disk/by-vdev paths set up by
766vdev_id.conf. See the
767.Xr vdev_id 8
768man page for more details.
769Autoreplace and autoonline require the ZFS Event Daemon be configured and
770running. See the
771.Xr zed 8
772man page for more details.
773.It Sy bootfs Ns = Ns Sy (unset) Ns | Ns Ar pool Ns / Ns Ar dataset
774Identifies the default bootable dataset for the root pool. This property is
775expected to be set mainly by the installation and upgrade programs.
776Not all Linux distribution boot processes use the bootfs property.
777.It Sy cachefile Ns = Ns Ar path Ns | Ns Sy none
778Controls the location of where the pool configuration is cached.
779Discovering all pools on system startup requires a cached copy of the
780configuration data that is stored on the root file system.
781All pools in this cache are automatically imported when the system boots.
782Some environments, such as install and clustering, need to cache this
783information in a different location so that pools are not automatically
784imported.
785Setting this property caches the pool configuration in a different location that
786can later be imported with
787.Nm zpool Cm import Fl c .
Setting it to the value
.Sy none
creates a temporary pool that is never cached, and the
791.Qq
792.Pq empty string
793uses the default location.
794.Pp
795Multiple pools can share the same cache file.
796Because the kernel destroys and recreates this file when pools are added and
797removed, care should be taken when attempting to access this file.
798When the last pool using a
799.Sy cachefile
is exported or destroyed, the file will be empty.
801.It Sy comment Ns = Ns Ar text
802A text string consisting of printable ASCII characters that will be stored
803such that it is available even if the pool becomes faulted.
804An administrator can provide additional information about a pool using this
805property.
806.It Sy dedupditto Ns = Ns Ar number
807Threshold for the number of block ditto copies.
808If the reference count for a deduplicated block increases above this number, a
809new ditto copy of this block is automatically stored.
810The default setting is
811.Sy 0
812which causes no ditto copies to be created for deduplicated blocks.
813The minimum legal nonzero setting is
814.Sy 100 .
815.It Sy delegation Ns = Ns Sy on Ns | Ns Sy off
816Controls whether a non-privileged user is granted access based on the dataset
817permissions defined on the dataset.
818See
819.Xr zfs 8
820for more information on ZFS delegated administration.
821.It Sy failmode Ns = Ns Sy wait Ns | Ns Sy continue Ns | Ns Sy panic
822Controls the system behavior in the event of catastrophic pool failure.
823This condition is typically a result of a loss of connectivity to the underlying
824storage device(s) or a failure of all devices within the pool.
825The behavior of such an event is determined as follows:
826.Bl -tag -width "continue"
827.It Sy wait
828Blocks all I/O access until the device connectivity is recovered and the errors
829are cleared.
830This is the default behavior.
831.It Sy continue
832Returns
833.Er EIO
834to any new write I/O requests but allows reads to any of the remaining healthy
835devices.
836Any write requests that have yet to be committed to disk would be blocked.
837.It Sy panic
Prints out a message to the console and generates a system crash dump.
.El
840.It Sy autotrim Ns = Ns Sy on Ns | Ns Sy off
841When set to
842.Sy on
843space which has been recently freed, and is no longer allocated by the pool,
844will be periodically trimmed. This allows block device vdevs which support
845BLKDISCARD, such as SSDs, or file vdevs on which the underlying file system
846supports hole-punching, to reclaim unused blocks. The default setting for
847this property is
848.Sy off .
849.Pp
850Automatic TRIM does not immediately reclaim blocks after a free. Instead,
it will optimistically delay, allowing smaller ranges to be aggregated into
852a few larger ones. These can then be issued more efficiently to the storage.
853.Pp
854Be aware that automatic trimming of recently freed data blocks can put
855significant stress on the underlying storage devices. This will vary
depending on how well the specific device handles these commands. For
857lower end devices it is often possible to achieve most of the benefits
858of automatic trimming by running an on-demand (manual) TRIM periodically
859using the
860.Nm zpool Cm trim
861command.
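.Pp
For example (a sketch on an existing pool), automatic TRIM can be enabled, or a
one-off manual TRIM can be issued instead:
.Bd -literal
# zpool set autotrim=on pool
# zpool trim pool
.Ed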
862.It Sy feature@ Ns Ar feature_name Ns = Ns Sy enabled
863The value of this property is the current state of
864.Ar feature_name .
865The only valid value when setting this property is
866.Sy enabled
867which moves
868.Ar feature_name
869to the enabled state.
870See
871.Xr zpool-features 5
872for details on feature states.
873.It Sy listsnapshots Ns = Ns Sy on Ns | Ns Sy off
874Controls whether information about snapshots associated with this pool is
875output when
876.Nm zfs Cm list
877is run without the
878.Fl t
879option.
880The default value is
881.Sy off .
882This property can also be referred to by its shortened name,
883.Sy listsnaps .
884.It Sy multihost Ns = Ns Sy on Ns | Ns Sy off
885Controls whether a pool activity check should be performed during
886.Nm zpool Cm import .
887When a pool is determined to be active it cannot be imported, even with the
888.Fl f
889option. This property is intended to be used in failover configurations
where multiple hosts have access to a pool on shared storage.
.Pp
892Multihost provides protection on import only. It does not protect against an
893individual device being used in multiple pools, regardless of the type of vdev.
894See the discussion under
895.Sy zpool create.
.Pp
897When this property is on, periodic writes to storage occur to show the pool is
898in use. See
899.Sy zfs_multihost_interval
900in the
901.Xr zfs-module-parameters 5
902man page. In order to enable this property each host must set a unique hostid.
903See
904.Xr genhostid 1
.Xr zgenhostid 8
.Xr spl-module-parameters 5
907for additional details. The default value is
908.Sy off .
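.Pp
A minimal sketch of enabling it on each host (generating the hostid with
zgenhostid; adapt to your distribution):
.Bd -literal
# zgenhostid
# zpool set multihost=on pool
.Ed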
909.It Sy version Ns = Ns Ar version
910The current on-disk version of the pool.
911This can be increased, but never decreased.
912The preferred method of updating pools is with the
913.Nm zpool Cm upgrade
914command, though this property can be used when a specific version is needed for
915backwards compatibility.
916Once feature flags are enabled on a pool this property will no longer have a
917value.
918.El
919.Ss Subcommands
920All subcommands that modify state are logged persistently to the pool in their
921original form.
922.Pp
923The
924.Nm
925command provides subcommands to create and destroy storage pools, add capacity
926to storage pools, and provide information about the storage pools.
927The following subcommands are supported:
928.Bl -tag -width Ds
929.It Xo
930.Nm
931.Fl ?
932.Xc
Displays a help message.
934.It Xo
935.Nm
936.Cm add
937.Op Fl fgLnP
938.Oo Fl o Ar property Ns = Ns Ar value Oc
939.Ar pool vdev Ns ...
940.Xc
941Adds the specified virtual devices to the given pool.
942The
943.Ar vdev
944specification is described in the
945.Sx Virtual Devices
946section.
947The behavior of the
948.Fl f
949option, and the device checks performed are described in the
950.Nm zpool Cm create
951subcommand.
952.Bl -tag -width Ds
953.It Fl f
954Forces use of
955.Ar vdev Ns s ,
956even if they appear in use or specify a conflicting replication level.
957Not all devices can be overridden in this manner.
958.It Fl g
959Display
.Ar vdev
961GUIDs instead of the normal device names. These GUIDs can be used in place of
962device names for the zpool detach/offline/remove/replace commands.
963.It Fl L
964Display real paths for
965.Ar vdev Ns s
966resolving all symbolic links. This can be used to look up the current block
967device name regardless of the /dev/disk/ path used to open it.
968.It Fl n
969Displays the configuration that would be used without actually adding the
970.Ar vdev Ns s .
971The actual pool creation can still fail due to insufficient privileges or
972device sharing.
973.It Fl P
974Display real paths for
975.Ar vdev Ns s
976instead of only the last component of the path. This can be used in
conjunction with the
.Fl L
flag.
980.It Fl o Ar property Ns = Ns Ar value
981Sets the given pool properties. See the
982.Sx Properties
983section for a list of valid properties that can be set. The only property
984supported at the moment is ashift.
985.El
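.Pp
For example, a sketch of appending a new mirror to an existing pool
(placeholder device names):
.Bd -literal
# zpool add tank mirror sdc sdd
.Ed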
986.It Xo
987.Nm
988.Cm attach
989.Op Fl f
990.Oo Fl o Ar property Ns = Ns Ar value Oc
991.Ar pool device new_device
992.Xc
993Attaches
994.Ar new_device
995to the existing
996.Ar device .
997The existing device cannot be part of a raidz configuration.
998If
999.Ar device
1000is not currently part of a mirrored configuration,
1001.Ar device
1002automatically transforms into a two-way mirror of
1003.Ar device
1004and
1005.Ar new_device .
1006If
1007.Ar device
1008is part of a two-way mirror, attaching
1009.Ar new_device
1010creates a three-way mirror, and so on.
1011In either case,
1012.Ar new_device
1013begins to resilver immediately.
1014.Bl -tag -width Ds
1015.It Fl f
1016Forces use of
1017.Ar new_device ,
even if it appears to be in use.
1019Not all devices can be overridden in this manner.
1020.It Fl o Ar property Ns = Ns Ar value
1021Sets the given pool properties. See the
1022.Sx Properties
1023section for a list of valid properties that can be set. The only property
1024supported at the moment is ashift.
1025.El
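.Pp
For example, a sketch of turning a single-disk pool into a two-way mirror
(placeholder device names):
.Bd -literal
# zpool attach tank sda sdb
.Ed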
1026.It Xo
1027.Nm
1028.Cm checkpoint
1029.Op Fl d, -discard
1030.Ar pool
1031.Xc
1032Checkpoints the current state of
1033.Ar pool
1034, which can be later restored by
1035.Nm zpool Cm import --rewind-to-checkpoint .
1036The existence of a checkpoint in a pool prohibits the following
1037.Nm zpool
1038commands:
1039.Cm remove ,
1040.Cm attach ,
1041.Cm detach ,
1042.Cm split ,
1043and
1044.Cm reguid .
1045In addition, it may break reservation boundaries if the pool lacks free
1046space.
1047The
1048.Nm zpool Cm status
1049command indicates the existence of a checkpoint or the progress of discarding a
1050checkpoint from a pool.
1051The
1052.Nm zpool Cm list
1053command reports how much space the checkpoint takes from the pool.
1054.Bl -tag -width Ds
1055.It Fl d, -discard
1056Discards an existing checkpoint from
1057.Ar pool .
1058.El
1059.It Xo
1060.Nm
1061.Cm clear
1062.Ar pool
1063.Op Ar device
1064.Xc
1065Clears device errors in a pool.
1066If no arguments are specified, all device errors within the pool are cleared.
1067If one or more devices is specified, only those errors associated with the
1068specified device or devices are cleared.
1069If multihost is enabled, and the pool has been suspended, this will not
1070resume I/O. While the pool was suspended, it may have been imported on
1071another host, and resuming I/O could result in pool damage.
1072.It Xo
1073.Nm
1074.Cm create
1075.Op Fl dfn
1076.Op Fl m Ar mountpoint
1077.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
1078.Oo Fl o Ar feature@feature Ns = Ns Ar value Oc Ns ...
1079.Oo Fl O Ar file-system-property Ns = Ns Ar value Oc Ns ...
1080.Op Fl R Ar root
1081.Op Fl t Ar tname
1082.Ar pool vdev Ns ...
1083.Xc
1084Creates a new storage pool containing the virtual devices specified on the
1085command line.
1086The pool name must begin with a letter, and can only contain
1087alphanumeric characters as well as underscore
1088.Pq Qq Sy _ ,
1089dash
.Pq Qq Sy \&- ,
1091colon
1092.Pq Qq Sy \&: ,
1093space
.Pq Qq Sy \&\ ,
1095and period
1096.Pq Qq Sy \&. .
1097The pool names
1098.Sy mirror ,
1099.Sy raidz ,
1100.Sy spare
1101and
1102.Sy log
1103are reserved, as are names beginning with
1104.Sy mirror ,
1105.Sy raidz ,
1106.Sy spare ,
1107and the pattern
cda0317e
GM
1108.Sy c[0-9] .
1109The
1110.Ar vdev
1111specification is described in the
1112.Sx Virtual Devices
1113section.
1114.Pp
1115The command attempts to verify that each device specified is accessible and not
1116currently in use by another subsystem. However this check is not robust enough
1117to detect simultaneous attempts to use a new device in different pools, even if
1118.Sy multihost
1119is
1120.Sy enabled.
1121The
1122administrator must ensure that simultaneous invocations of any combination of
1123.Sy zpool replace ,
1124.Sy zpool create ,
1125.Sy zpool add ,
1126or
1127.Sy zpool labelclear ,
1128do not refer to the same device. Using the same device in two pools will
1129result in pool corruption.
.Pp
1131There are some uses, such as being currently mounted, or specified as the
dedicated dump device, that prevent a device from ever being used by ZFS.
1133Other uses, such as having a preexisting UFS file system, can be overridden with
1134the
1135.Fl f
1136option.
1137.Pp
1138The command also checks that the replication strategy for the pool is
1139consistent.
1140An attempt to combine redundant and non-redundant storage in a single pool, or
1141to mix disks and files, results in an error unless
1142.Fl f
1143is specified.
1144The use of differently sized devices within a single raidz or mirror group is
1145also flagged as an error unless
1146.Fl f
1147is specified.
1148.Pp
1149Unless the
1150.Fl R
1151option is specified, the default mount point is
1152.Pa / Ns Ar pool .
1153The mount point must not exist or must be empty, or else the root dataset
1154cannot be mounted.
1155This can be overridden with the
1156.Fl m
1157option.
1158.Pp
1159By default all supported features are enabled on the new pool unless the
1160.Fl d
1161option is specified.
1162.Bl -tag -width Ds
1163.It Fl d
1164Do not enable any features on the new pool.
1165Individual features can be enabled by setting their corresponding properties to
1166.Sy enabled
1167with the
1168.Fl o
1169option.
1170See
1171.Xr zpool-features 5
1172for details about feature properties.
1173.It Fl f
1174Forces use of
1175.Ar vdev Ns s ,
1176even if they appear in use or specify a conflicting replication level.
1177Not all devices can be overridden in this manner.
1178.It Fl m Ar mountpoint
1179Sets the mount point for the root dataset.
1180The default mount point is
1181.Pa /pool
1182or
1183.Pa altroot/pool
1184if
1185.Ar altroot
1186is specified.
1187The mount point must be an absolute path,
1188.Sy legacy ,
1189or
1190.Sy none .
1191For more information on dataset mount points, see
1192.Xr zfs 8 .
1193.It Fl n
1194Displays the configuration that would be used without actually creating the
1195pool.
1196The actual pool creation can still fail due to insufficient privileges or
1197device sharing.
1198.It Fl o Ar property Ns = Ns Ar value
1199Sets the given pool properties.
1200See the
1201.Sx Properties
1202section for a list of valid properties that can be set.
1203.It Fl o Ar feature@feature Ns = Ns Ar value
1204Sets the given pool feature. See the
1205.Xr zpool-features 5
1206section for a list of valid features that can be set.
1207Value can be either disabled or enabled.
1208.It Fl O Ar file-system-property Ns = Ns Ar value
1209Sets the given file system properties in the root file system of the pool.
1210See the
1211.Sx Properties
1212section of
1213.Xr zfs 8
1214for a list of valid properties that can be set.
1215.It Fl R Ar root
1216Equivalent to
1217.Fl o Sy cachefile Ns = Ns Sy none Fl o Sy altroot Ns = Ns Ar root
1218.It Fl t Ar tname
1219Sets the in-core pool name to
1220.Sy tname
1221while the on-disk name will be the name specified as the pool name
1222.Sy pool .
1223This will set the default cachefile property to none. This is intended
1224to handle name space collisions when creating pools for other systems,
1225such as virtual machines or physical machines whose pools live on network
1226block devices.
1227.El
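.Pp
For example, a sketch of creating a mirrored pool with an alternate mount point
and a file system property set on the root dataset (names are placeholders):
.Bd -literal
# zpool create -m /export/tank -O compression=on tank mirror sda sdb
.Ed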
1228.It Xo
1229.Nm
1230.Cm destroy
1231.Op Fl f
1232.Ar pool
1233.Xc
1234Destroys the given pool, freeing up any devices for other use.
1235This command tries to unmount any active datasets before destroying the pool.
1236.Bl -tag -width Ds
1237.It Fl f
Forces any active datasets contained within the pool to be unmounted.
1239.El
1240.It Xo
1241.Nm
1242.Cm detach
1243.Ar pool device
1244.Xc
1245Detaches
1246.Ar device
1247from a mirror.
1248The operation is refused if there are no other valid replicas of the data.
1249If device may be re-added to the pool later on then consider the
1250.Sy zpool offline
1251command instead.
1252.It Xo
1253.Nm
1254.Cm events
.Op Fl vHf Oo Ar pool Oc | Fl c
1256.Xc
1257Lists all recent events generated by the ZFS kernel modules. These events
1258are consumed by the
1259.Xr zed 8
1260and used to automate administrative tasks such as replacing a failed device
1261with a hot spare. For more information about the subclasses and event payloads
1262that can be generated see the
1263.Xr zfs-events 5
1264man page.
1265.Bl -tag -width Ds
1266.It Fl c
Clear all previous events.
1268.It Fl f
1269Follow mode.
1270.It Fl H
1271Scripted mode. Do not display headers, and separate fields by a
1272single tab instead of arbitrary space.
1273.It Fl v
1274Print the entire payload for each event.
1275.El
1276.It Xo
1277.Nm
1278.Cm export
1279.Op Fl a
1280.Op Fl f
1281.Ar pool Ns ...
1282.Xc
1283Exports the given pools from the system.
1284All devices are marked as exported, but are still considered in use by other
1285subsystems.
1286The devices can be moved between systems
1287.Pq even those of different endianness
1288and imported as long as a sufficient number of devices are present.
1289.Pp
1290Before exporting the pool, all datasets within the pool are unmounted.
A pool cannot be exported if it has a shared spare that is currently being
1292used.
1293.Pp
1294For pools to be portable, you must give the
1295.Nm
1296command whole disks, not just partitions, so that ZFS can label the disks with
1297portable EFI labels.
1298Otherwise, disk drivers on platforms of different endianness will not recognize
1299the disks.
1300.Bl -tag -width Ds
1301.It Fl a
Exports all pools imported on the system.
1303.It Fl f
1304Forcefully unmount all datasets, using the
1305.Nm unmount Fl f
1306command.
1307.Pp
1308This command will forcefully export the pool even if it has a shared spare that
1309is currently being used.
1310This may lead to potential data corruption.
1311.El
1312.It Xo
1313.Nm
1314.Cm get
1315.Op Fl Hp
1316.Op Fl o Ar field Ns Oo , Ns Ar field Oc Ns ...
1317.Sy all Ns | Ns Ar property Ns Oo , Ns Ar property Oc Ns ...
.Oo Ar pool Oc Ns ...
1319.Xc
1320Retrieves the given list of properties
1321.Po
1322or all properties if
1323.Sy all
1324is used
1325.Pc
1326for the specified storage pool(s).
1327These properties are displayed with the following fields:
1328.Bd -literal
        name         Name of storage pool
        property     Property name
        value        Property value
        source       Property source, either 'default' or 'local'.
1333.Ed
1334.Pp
1335See the
1336.Sx Properties
1337section for more information on the available pool properties.
1338.Bl -tag -width Ds
1339.It Fl H
1340Scripted mode.
1341Do not display headers, and separate fields by a single tab instead of arbitrary
1342space.
1343.It Fl o Ar field
1344A comma-separated list of columns to display.
.Sy name Ns \&, Ns Sy property Ns \&, Ns Sy value Ns \&, Ns Sy source
is the default value.
1347.It Fl p
1348Display numbers in parsable (exact) values.
1349.El
1350.It Xo
1351.Nm
1352.Cm history
1353.Op Fl il
1354.Oo Ar pool Oc Ns ...
1355.Xc
1356Displays the command history of the specified pool(s) or all pools if no pool is
1357specified.
1358.Bl -tag -width Ds
1359.It Fl i
1360Displays internally logged ZFS events in addition to user initiated events.
1361.It Fl l
1362Displays log records in long format, which in addition to standard format
1363includes, the user name, the hostname, and the zone in which the operation was
1364performed.
1365.El
1366.It Xo
1367.Nm
1368.Cm import
1369.Op Fl D
.Op Fl d Ar dir Ns | Ns device
1371.Xc
1372Lists pools available to import.
1373If the
1374.Fl d
1375option is not specified, this command searches for devices in
1376.Pa /dev .
1377The
1378.Fl d
1379option can be specified multiple times, and all directories are searched.
1380If the device appears to be part of an exported pool, this command displays a
1381summary of the pool with the name of the pool, a numeric identifier, as well as
1382the vdev layout and current health of the device for each device or file.
1383Destroyed pools, pools that were previously destroyed with the
1384.Nm zpool Cm destroy
1385command, are not listed unless the
1386.Fl D
1387option is specified.
1388.Pp
1389The numeric identifier is unique, and can be used instead of the pool name when
1390multiple exported pools of the same name are available.
1391.Bl -tag -width Ds
1392.It Fl c Ar cachefile
1393Reads configuration from the given
1394.Ar cachefile
1395that was created with the
1396.Sy cachefile
1397pool property.
1398This
1399.Ar cachefile
1400is used instead of searching for devices.
1401.It Fl d Ar dir Ns | Ns Ar device
1402Uses
1403.Ar device
1404or searches for devices or files in
1405.Ar dir .
1406The
1407.Fl d
1408option can be specified multiple times.
1409.It Fl D
Lists destroyed pools only.
1411.El
1412.It Xo
1413.Nm
1414.Cm import
1415.Fl a
.Op Fl DflmN
.Op Fl F Oo Fl n Oc Oo Fl T Oc Oo Fl X Oc
.Op Fl c Ar cachefile Ns | Ns Fl d Ar dir Ns | Ns device
1419.Op Fl o Ar mntopts
1420.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
1421.Op Fl R Ar root
1422.Op Fl s
1423.Xc
1424Imports all pools found in the search directories.
1425Identical to the previous command, except that all pools with a sufficient
1426number of devices available are imported.
1427Destroyed pools, pools that were previously destroyed with the
1428.Nm zpool Cm destroy
1429command, will not be imported unless the
1430.Fl D
1431option is specified.
1432.Bl -tag -width Ds
1433.It Fl a
Searches for and imports all pools found.
1435.It Fl c Ar cachefile
1436Reads configuration from the given
1437.Ar cachefile
1438that was created with the
1439.Sy cachefile
1440pool property.
1441This
1442.Ar cachefile
1443is used instead of searching for devices.
1444.It Fl d Ar dir Ns | Ns Ar device
1445Uses
1446.Ar device
1447or searches for devices or files in
1448.Ar dir .
1449The
1450.Fl d
1451option can be specified multiple times.
1452This option is incompatible with the
1453.Fl c
1454option.
1455.It Fl D
1456Imports destroyed pools only.
1457The
1458.Fl f
1459option is also required.
1460.It Fl f
1461Forces import, even if the pool appears to be potentially active.
1462.It Fl F
1463Recovery mode for a non-importable pool.
1464Attempt to return the pool to an importable state by discarding the last few
1465transactions.
1466Not all damaged pools can be recovered by using this option.
1467If successful, the data from the discarded transactions is irretrievably lost.
1468This option is ignored if the pool is importable or already imported.
1469.It Fl l
1470Indicates that this command will request encryption keys for all encrypted
1471datasets it attempts to mount as it is bringing the pool online. Note that if
1472any datasets have a
1473.Sy keylocation
1474of
1475.Sy prompt
1476this command will block waiting for the keys to be entered. Without this flag
1477encrypted datasets will be left unavailable until the keys are loaded.
.It Fl m
Allows a pool to import when there is a missing log device.
1480Recent transactions can be lost because the log device will be discarded.
1481.It Fl n
1482Used with the
1483.Fl F
1484recovery option.
1485Determines whether a non-importable pool can be made importable again, but does
1486not actually perform the pool recovery.
1487For more details about pool recovery mode, see the
1488.Fl F
1489option, above.
1490.It Fl N
Import the pool without mounting any file systems.
1492.It Fl o Ar mntopts
1493Comma-separated list of mount options to use when mounting datasets within the
1494pool.
1495See
1496.Xr zfs 8
1497for a description of dataset properties and mount options.
1498.It Fl o Ar property Ns = Ns Ar value
1499Sets the specified property on the imported pool.
1500See the
1501.Sx Properties
1502section for more information on the available pool properties.
1503.It Fl R Ar root
1504Sets the
1505.Sy cachefile
1506property to
1507.Sy none
1508and the
1509.Sy altroot
1510property to
1511.Ar root .
1512.It Fl -rewind-to-checkpoint
1513Rewinds pool to the checkpointed state.
1514Once the pool is imported with this flag there is no way to undo the rewind.
1515All changes and data that were written after the checkpoint are lost!
1516The only exception is when the
1517.Sy readonly
1518mounting option is enabled.
1519In this case, the checkpointed state of the pool is opened and an
administrator can see what the pool would look like if they were
1521to fully rewind.
1522.It Fl s
1523Scan using the default search path, the libblkid cache will not be
1524consulted. A custom search path may be specified by setting the
1525ZPOOL_IMPORT_PATH environment variable.
1526.It Fl X
1527Used with the
1528.Fl F
1529recovery option. Determines whether extreme
1530measures to find a valid txg should take place. This allows the pool to
1531be rolled back to a txg which is no longer guaranteed to be consistent.
1532Pools imported at an inconsistent txg may contain uncorrectable
1533checksum errors. For more details about pool recovery mode, see the
1534.Fl F
1535option, above. WARNING: This option can be extremely hazardous to the
1536health of your pool and should only be used as a last resort.
1537.It Fl T
1538Specify the txg to use for rollback. Implies
1539.Fl FX .
1540For more details
1541about pool recovery mode, see the
1542.Fl X
1543option, above. WARNING: This option can be extremely hazardous to the
1544health of your pool and should only be used as a last resort.
1545.El
1546.It Xo
1547.Nm
1548.Cm import
.Op Fl Dflm
.Op Fl F Oo Fl n Oc Oo Fl t Oc Oo Fl T Oc Oo Fl X Oc
.Op Fl c Ar cachefile Ns | Ns Fl d Ar dir Ns | Ns device
1552.Op Fl o Ar mntopts
1553.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
1554.Op Fl R Ar root
1555.Op Fl s
1556.Ar pool Ns | Ns Ar id
1557.Op Ar newpool
1558.Xc
1559Imports a specific pool.
1560A pool can be identified by its name or the numeric identifier.
1561If
1562.Ar newpool
1563is specified, the pool is imported using the name
1564.Ar newpool .
1565Otherwise, it is imported with the same name as its exported name.
1566.Pp
1567If a device is removed from a system without running
1568.Nm zpool Cm export
1569first, the device appears as potentially active.
1570It cannot be determined if this was a failed export, or whether the device is
1571really in use from another host.
1572To import a pool in this state, the
1573.Fl f
1574option is required.
1575.Bl -tag -width Ds
1576.It Fl c Ar cachefile
1577Reads configuration from the given
1578.Ar cachefile
1579that was created with the
1580.Sy cachefile
1581pool property.
1582This
1583.Ar cachefile
1584is used instead of searching for devices.
1585.It Fl d Ar dir Ns | Ns Ar device
1586Uses
1587.Ar device
1588or searches for devices or files in
1589.Ar dir .
1590The
1591.Fl d
1592option can be specified multiple times.
1593This option is incompatible with the
1594.Fl c
1595option.
1596.It Fl D
1597Imports destroyed pool.
1598The
1599.Fl f
1600option is also required.
1601.It Fl f
Forces import, even if the pool appears to be potentially active.
1603.It Fl F
1604Recovery mode for a non-importable pool.
1605Attempt to return the pool to an importable state by discarding the last few
1606transactions.
1607Not all damaged pools can be recovered by using this option.
1608If successful, the data from the discarded transactions is irretrievably lost.
1609This option is ignored if the pool is importable or already imported.
1610.It Fl l
1611Indicates that this command will request encryption keys for all encrypted
1612datasets it attempts to mount as it is bringing the pool online. Note that if
1613any datasets have a
1614.Sy keylocation
1615of
.Sy prompt ,
this command will block waiting for the keys to be entered.
Without this flag, encrypted datasets will be left unavailable until the keys
are loaded.
cda0317e 1619.It Fl m
Allows a pool to be imported when there is a missing log device.
cda0317e
GM
1621Recent transactions can be lost because the log device will be discarded.
1622.It Fl n
1623Used with the
1624.Fl F
1625recovery option.
1626Determines whether a non-importable pool can be made importable again, but does
1627not actually perform the pool recovery.
1628For more details about pool recovery mode, see the
1629.Fl F
1630option, above.
1631.It Fl o Ar mntopts
1632Comma-separated list of mount options to use when mounting datasets within the
1633pool.
1634See
1635.Xr zfs 8
1636for a description of dataset properties and mount options.
1637.It Fl o Ar property Ns = Ns Ar value
1638Sets the specified property on the imported pool.
1639See the
1640.Sx Properties
1641section for more information on the available pool properties.
1642.It Fl R Ar root
1643Sets the
1644.Sy cachefile
1645property to
1646.Sy none
1647and the
1648.Sy altroot
1649property to
1650.Ar root .
1651.It Fl s
Scan using the default search path; the libblkid cache will not be consulted.
A custom search path may be specified by setting the
ZPOOL_IMPORT_PATH environment variable.
1655.It Fl X
1656Used with the
1657.Fl F
1658recovery option. Determines whether extreme
1659measures to find a valid txg should take place. This allows the pool to
1660be rolled back to a txg which is no longer guaranteed to be consistent.
1661Pools imported at an inconsistent txg may contain uncorrectable
1662checksum errors. For more details about pool recovery mode, see the
1663.Fl F
1664option, above. WARNING: This option can be extremely hazardous to the
1665health of your pool and should only be used as a last resort.
1666.It Fl T
1667Specify the txg to use for rollback. Implies
1668.Fl FX .
1669For more details
1670about pool recovery mode, see the
1671.Fl X
1672option, above. WARNING: This option can be extremely hazardous to the
1673health of your pool and should only be used as a last resort.
1c68856b 1674.It Fl t
cda0317e
GM
1675Used with
1676.Sy newpool .
1677Specifies that
1678.Sy newpool
is temporary.
Temporary pool names last until export.
Ensures that the original pool name will be used in all label updates and is
therefore retained upon export.
Will also set
.Fl o Sy cachefile Ns = Ns Sy none
when not explicitly specified.
1683.El
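.Pp
For example, an exported pool could be imported from a specific device
directory and renamed in the process
.Pq the names are hypothetical :
.Bd -literal
# zpool import -d /dev/disk/by-id tank newtank
.Ed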
1684.It Xo
1685.Nm
619f0976 1686.Cm initialize
a769fb53 1687.Op Fl c | Fl s
619f0976
GW
1688.Ar pool
1689.Op Ar device Ns ...
1690.Xc
1691Begins initializing by writing to all unallocated regions on the specified
1692devices, or all eligible devices in the pool if no individual devices are
1693specified.
1694Only leaf data or log devices may be initialized.
1695.Bl -tag -width Ds
1696.It Fl c, -cancel
1697Cancel initializing on the specified devices, or all eligible devices if none
1698are specified.
1699If one or more target devices are invalid or are not currently being
1700initialized, the command will fail and no cancellation will occur on any device.
.It Fl s, -suspend
1702Suspend initializing on the specified devices, or all eligible devices if none
1703are specified.
1704If one or more target devices are invalid or are not currently being
1705initialized, the command will fail and no suspension will occur on any device.
1706Initializing can then be resumed by running
1707.Nm zpool Cm initialize
1708with no flags on the relevant target devices.
1709.El
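.Pp
For example, initialization of the devices in a hypothetical pool
.Em tank
could be started, suspended, and later resumed with:
.Bd -literal
# zpool initialize tank
# zpool initialize -s tank
# zpool initialize tank
.Ed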
1710.It Xo
1711.Nm
cda0317e
GM
1712.Cm iostat
1713.Op Oo Oo Fl c Ar SCRIPT Oc Oo Fl lq Oc Oc Ns | Ns Fl rw
1714.Op Fl T Sy u Ns | Ns Sy d
8fccfa8e 1715.Op Fl ghHLnpPvy
cda0317e
GM
1716.Oo Oo Ar pool Ns ... Oc Ns | Ns Oo Ar pool vdev Ns ... Oc Ns | Ns Oo Ar vdev Ns ... Oc Oc
1717.Op Ar interval Op Ar count
1718.Xc
f8bb2a7e
KP
1719Displays logical I/O statistics for the given pools/vdevs. Physical I/Os may
1720be observed via
1721.Xr iostat 1 .
1722If writes are located nearby, they may be merged into a single
1723larger operation. Additional I/O may be generated depending on the level of
1724vdev redundancy.
1725To filter output, you may pass in a list of pools, a pool and list of vdevs
1726in that pool, or a list of any vdevs from any pool. If no items are specified,
1727statistics for every pool in the system are shown.
cda0317e
GM
1728When given an
1729.Ar interval ,
1730the statistics are printed every
1731.Ar interval
8fccfa8e
DW
seconds until ^C is pressed.
If the
.Fl n
flag is specified, the headers are displayed only once, otherwise they are
displayed periodically.
If
.Ar count
is specified, the command exits after
.Ar count
reports are printed.
The first report printed is always
1737the statistics since boot regardless of whether
1738.Ar interval
1739and
1740.Ar count
1741are passed. However, this behavior can be suppressed with the
1742.Fl y
1743flag. Also note that the units of
1744.Sy K ,
1745.Sy M ,
1746.Sy G ...
1747that are printed in the report are in base 1024. To get the raw
1748values, use the
1749.Fl p
1750flag.
1751.Bl -tag -width Ds
7a8ed6b8 1752.It Fl c Op Ar SCRIPT1 Ns Oo , Ns Ar SCRIPT2 Oc Ns ...
cda0317e
GM
1753Run a script (or scripts) on each vdev and include the output as a new column
1754in the
1755.Nm zpool Cm iostat
1756output. Users can run any script found in their
1757.Pa ~/.zpool.d
1758directory or from the system
1759.Pa /etc/zfs/zpool.d
d6bcf7ff
GDN
1760directory. Script names containing the slash (/) character are not allowed.
1761The default search path can be overridden by setting the
cda0317e
GM
1762ZPOOL_SCRIPTS_PATH environment variable. A privileged user can run
1763.Fl c
1764if they have the ZPOOL_SCRIPTS_AS_ROOT
1765environment variable set. If a script requires the use of a privileged
1766command, like
7a8ed6b8
NB
1767.Xr smartctl 8 ,
1768then it's recommended you allow the user access to it in
cda0317e
GM
1769.Pa /etc/sudoers
1770or add the user to the
1771.Pa /etc/sudoers.d/zfs
1772file.
1773.Pp
1774If
1775.Fl c
1776is passed without a script name, it prints a list of all scripts.
1777.Fl c
7a8ed6b8 1778also sets verbose mode
90cdf283 1779.No \&( Ns Fl v Ns No \&).
cda0317e
GM
1780.Pp
1781Script output should be in the form of "name=value". The column name is
1782set to "name" and the value is set to "value". Multiple lines can be
1783used to output multiple columns. The first line of output not in the
1784"name=value" format is displayed without a column title, and no more
1785output after that is displayed. This can be useful for printing error
1786messages. Blank or NULL values are printed as a '-' to make output
1787awk-able.
1788.Pp
d6418de0 1789The following environment variables are set before running each script:
cda0317e
GM
1790.Bl -tag -width "VDEV_PATH"
1791.It Sy VDEV_PATH
1792Full path to the vdev
1793.El
1794.Bl -tag -width "VDEV_UPATH"
1795.It Sy VDEV_UPATH
1796Underlying path to the vdev (/dev/sd*). For use with device mapper,
1797multipath, or partitioned vdevs.
1798.El
1799.Bl -tag -width "VDEV_ENC_SYSFS_PATH"
1800.It Sy VDEV_ENC_SYSFS_PATH
1801The sysfs path to the enclosure for the vdev (if any).
1802.El
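.Pp
As a minimal sketch, a hypothetical script saved as
.Pa ~/.zpool.d/upath
could add the underlying device name as a column:
.Bd -literal
#!/bin/sh
# Emit a single "name=value" column for each vdev.
echo "upath=$(basename "$VDEV_UPATH")"
.Ed
.Pp
It could then be invoked as
.Nm zpool Cm iostat Fl c Ar upath .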
1803.It Fl T Sy u Ns | Ns Sy d
058ac9ba 1804Display a time stamp.
cda0317e
GM
1805Specify
1806.Sy u
1807for a printed representation of the internal representation of time.
1808See
1809.Xr time 2 .
1810Specify
1811.Sy d
1812for standard date format.
1813See
1814.Xr date 1 .
1815.It Fl g
1816Display vdev GUIDs instead of the normal device names. These GUIDs
1817can be used in place of device names for the zpool
1818detach/offline/remove/replace commands.
1819.It Fl H
1820Scripted mode. Do not display headers, and separate fields by a
1821single tab instead of arbitrary space.
1822.It Fl L
1823Display real paths for vdevs resolving all symbolic links. This can
1824be used to look up the current block device name regardless of the
1825.Pa /dev/disk/
1826path used to open it.
8fccfa8e
DW
1827.It Fl n
1828Print headers only once when passed
cda0317e
GM
1829.It Fl p
1830Display numbers in parsable (exact) values. Time values are in
1831nanoseconds.
1832.It Fl P
1833Display full paths for vdevs instead of only the last component of
1834the path. This can be used in conjunction with the
1835.Fl L
1836flag.
1837.It Fl r
1b939560
BB
1838Print request size histograms for the leaf vdev's IO. This includes
1839histograms of individual IOs (ind) and aggregate IOs (agg). These stats
1840can be useful for observing how well IO aggregation is working. Note
1841that TRIM IOs may exceed 16M, but will be counted as 16M.
cda0317e
GM
1842.It Fl v
Verbose statistics.
Reports usage statistics for individual vdevs within the pool, in addition to
the pool-wide statistics.
1845.It Fl y
eb201f50
GM
Omit statistics since boot.
Normally the first line of output reports the statistics since boot.
This option suppresses that first line of output and instead waits
.Ar interval
seconds before printing the first report.
cda0317e 1850.It Fl w
eb201f50
GM
1851Display latency histograms:
1852.Pp
1853.Ar total_wait :
1854Total IO time (queuing + disk IO time).
1855.Ar disk_wait :
1856Disk IO time (time reading/writing the disk).
1857.Ar syncq_wait :
1858Amount of time IO spent in synchronous priority queues. Does not include
1859disk time.
1860.Ar asyncq_wait :
1861Amount of time IO spent in asynchronous priority queues. Does not include
1862disk time.
1863.Ar scrub :
1864Amount of time IO spent in scrub queue. Does not include disk time.
cda0317e 1865.It Fl l
193a37cb 1866Include average latency statistics:
cda0317e
GM
1867.Pp
1868.Ar total_wait :
193a37cb 1869Average total IO time (queuing + disk IO time).
cda0317e 1870.Ar disk_wait :
193a37cb 1871Average disk IO time (time reading/writing the disk).
cda0317e
GM
1872.Ar syncq_wait :
1873Average amount of time IO spent in synchronous priority queues. Does
1874not include disk time.
1875.Ar asyncq_wait :
1876Average amount of time IO spent in asynchronous priority queues.
1877Does not include disk time.
1878.Ar scrub :
1879Average queuing time in scrub queue. Does not include disk time.
1b939560
BB
1880.Ar trim :
1881Average queuing time in trim queue. Does not include disk time.
cda0317e
GM
1882.It Fl q
Include active queue statistics.
Each priority queue has both pending
.Pq Ar pend
and active
.Pq Ar activ
IOs.
Pending IOs are waiting to be issued to the disk, and active IOs have been
issued to disk and are waiting for completion.
These stats are broken out by priority queue:
1891.Pp
1892.Ar syncq_read/write :
1893Current number of entries in synchronous priority
1894queues.
1895.Ar asyncq_read/write :
193a37cb 1896Current number of entries in asynchronous priority queues.
cda0317e 1897.Ar scrubq_read :
193a37cb 1898Current number of entries in scrub queue.
1b939560
BB
1899.Ar trimq_write :
1900Current number of entries in trim queue.
cda0317e
GM
1901.Pp
1902All queue statistics are instantaneous measurements of the number of
1903entries in the queues. If you specify an interval, the measurements
1904will be sampled from the end of the interval.
1905.El
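.Pp
For example, request size and latency histograms for a hypothetical pool
.Em tank
could be printed every 10 seconds with:
.Bd -literal
# zpool iostat -r tank 10
# zpool iostat -w tank 10
.Ed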
1906.It Xo
1907.Nm
1908.Cm labelclear
1909.Op Fl f
1910.Ar device
1911.Xc
1912Removes ZFS label information from the specified
1913.Ar device .
1914The
1915.Ar device
1916must not be part of an active pool configuration.
1917.Bl -tag -width Ds
1918.It Fl f
131cc95c 1919Treat exported or foreign devices as inactive.
cda0317e
GM
1920.El
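.Pp
For example, stale label information could be cleared from a disk that was
once part of an exported pool
.Pq hypothetical device name :
.Bd -literal
# zpool labelclear -f /dev/sdc
.Ed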
1921.It Xo
1922.Nm
1923.Cm list
1924.Op Fl HgLpPv
1925.Op Fl o Ar property Ns Oo , Ns Ar property Oc Ns ...
1926.Op Fl T Sy u Ns | Ns Sy d
1927.Oo Ar pool Oc Ns ...
1928.Op Ar interval Op Ar count
1929.Xc
1930Lists the given pools along with a health status and space usage.
1931If no
1932.Ar pool Ns s
1933are specified, all pools in the system are listed.
1934When given an
1935.Ar interval ,
1936the information is printed every
1937.Ar interval
1938seconds until ^C is pressed.
1939If
1940.Ar count
1941is specified, the command exits after
1942.Ar count
1943reports are printed.
1944.Bl -tag -width Ds
1945.It Fl g
1946Display vdev GUIDs instead of the normal device names. These GUIDs
1947can be used in place of device names for the zpool
1948detach/offline/remove/replace commands.
1949.It Fl H
1950Scripted mode.
1951Do not display headers, and separate fields by a single tab instead of arbitrary
1952space.
1953.It Fl o Ar property
1954Comma-separated list of properties to display.
1955See the
1956.Sx Properties
1957section for a list of valid properties.
1958The default list is
fb8a10d5
EA
1959.Cm name , size , allocated , free , checkpoint, expandsize , fragmentation ,
1960.Cm capacity , dedupratio , health , altroot .
cda0317e
GM
1961.It Fl L
1962Display real paths for vdevs resolving all symbolic links. This can
1963be used to look up the current block device name regardless of the
1964/dev/disk/ path used to open it.
1965.It Fl p
1966Display numbers in parsable
1967.Pq exact
1968values.
1969.It Fl P
1970Display full paths for vdevs instead of only the last component of
1971the path. This can be used in conjunction with the
85912983 1972.Fl L
1973flag.
cda0317e 1974.It Fl T Sy u Ns | Ns Sy d
6e1b9d03 1975Display a time stamp.
cda0317e 1976Specify
f23b0242 1977.Sy u
cda0317e
GM
1978for a printed representation of the internal representation of time.
1979See
1980.Xr time 2 .
1981Specify
f23b0242 1982.Sy d
cda0317e
GM
1983for standard date format.
1984See
1985.Xr date 1 .
1986.It Fl v
1987Verbose statistics.
1988Reports usage statistics for individual vdevs within the pool, in addition to
the pool-wide statistics.
1990.El
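.Pp
For example, a script-friendly listing of pool names, sizes, and health could
be produced with:
.Bd -literal
# zpool list -Hp -o name,size,health
.Ed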
1991.It Xo
1992.Nm
1993.Cm offline
1994.Op Fl f
1995.Op Fl t
1996.Ar pool Ar device Ns ...
1997.Xc
1998Takes the specified physical device offline.
1999While the
2000.Ar device
2001is offline, no attempt is made to read or write to the device.
2002This command is not applicable to spares.
2003.Bl -tag -width Ds
2004.It Fl f
2005Force fault. Instead of offlining the disk, put it into a faulted
2006state. The fault will persist across imports unless the
2007.Fl t
2008flag was specified.
2009.It Fl t
2010Temporary.
2011Upon reboot, the specified physical device reverts to its previous state.
2012.El
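.Pp
For example, a disk in a hypothetical pool
.Em tank
could be taken offline until the next reboot and then brought back online
manually:
.Bd -literal
# zpool offline -t tank sdb
# zpool online tank sdb
.Ed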
2013.It Xo
2014.Nm
2015.Cm online
2016.Op Fl e
2017.Ar pool Ar device Ns ...
2018.Xc
058ac9ba 2019Brings the specified physical device online.
7c9abcf8 2020This command is not applicable to spares.
cda0317e
GM
2021.Bl -tag -width Ds
2022.It Fl e
2023Expand the device to use all available space.
2024If the device is part of a mirror or raidz then all devices must be expanded
2025before the new space will become available to the pool.
2026.El
2027.It Xo
2028.Nm
2029.Cm reguid
2030.Ar pool
2031.Xc
2032Generates a new unique identifier for the pool.
2033You must ensure that all devices in this pool are online and healthy before
2034performing this action.
2035.It Xo
2036.Nm
2037.Cm reopen
d3f2cd7e 2038.Op Fl n
cda0317e
GM
2039.Ar pool
2040.Xc
5853fe79 2041Reopen all the vdevs associated with the pool.
d3f2cd7e
AB
2042.Bl -tag -width Ds
2043.It Fl n
2044Do not restart an in-progress scrub operation. This is not recommended and can
2045result in partially resilvered devices unless a second scrub is performed.
a94d38c0 2046.El
cda0317e
GM
2047.It Xo
2048.Nm
2049.Cm remove
a1d477c2 2050.Op Fl np
cda0317e
GM
2051.Ar pool Ar device Ns ...
2052.Xc
2053Removes the specified device from the pool.
2ced3cf0
BB
2054This command supports removing hot spare, cache, log, and both mirrored and
2055non-redundant primary top-level vdevs, including dedup and special vdevs.
When the primary pool storage includes a top-level raidz vdev, only hot spare,
cache, and log devices can be removed.
a1d477c2
MA
.Pp
2059Removing a top-level vdev reduces the total amount of space in the storage pool.
2060The specified device will be evacuated by copying all allocated space from it to
2061the other devices in the pool.
2062In this case, the
2063.Nm zpool Cm remove
2064command initiates the removal and returns, while the evacuation continues in
2065the background.
2066The removal progress can be monitored with
7c9a4292
BB
2067.Nm zpool Cm status .
If an IO error is encountered during the removal process, the removal will be
cancelled.
The
2ced3cf0
BB
2070.Sy device_removal
2071feature flag must be enabled to remove a top-level vdev, see
2072.Xr zpool-features 5 .
a1d477c2
MA
2073.Pp
A mirrored top-level device (log or data) can be removed by specifying the
top-level mirror itself.
Non-log devices or data devices that are part of a mirrored configuration can
be removed using
cda0317e
GM
2077the
2078.Nm zpool Cm detach
2079command.
a1d477c2
MA
2080.Bl -tag -width Ds
2081.It Fl n
2082Do not actually perform the removal ("no-op").
2083Instead, print the estimated amount of memory that will be used by the
2084mapping table after the removal completes.
2085This is nonzero only for top-level vdevs.
2088.It Fl p
2089Used in conjunction with the
2090.Fl n
2091flag, displays numbers as parsable (exact) values.
2092.El
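.Pp
For example, the memory cost of removing a top-level mirror from a
hypothetical pool could be previewed before performing the removal:
.Bd -literal
# zpool remove -np tank mirror-1
# zpool remove tank mirror-1
.Ed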
2093.It Xo
2094.Nm
2095.Cm remove
2096.Fl s
2097.Ar pool
2098.Xc
2099Stops and cancels an in-progress removal of a top-level vdev.
cda0317e
GM
2100.It Xo
2101.Nm
2102.Cm replace
2103.Op Fl f
2104.Op Fl o Ar property Ns = Ns Ar value
2105.Ar pool Ar device Op Ar new_device
2106.Xc
2107Replaces
.Ar device
2109with
2110.Ar new_device .
2111This is equivalent to attaching
2112.Ar new_device ,
2113waiting for it to resilver, and then detaching
.Ar device .
2115.Pp
2116The size of
2117.Ar new_device
2118must be greater than or equal to the minimum size of all the devices in a mirror
2119or raidz configuration.
2120.Pp
2121.Ar new_device
2122is required if the pool is not redundant.
2123If
2124.Ar new_device
2125is not specified, it defaults to
.Ar device .
2127This form of replacement is useful after an existing disk has failed and has
2128been physically replaced.
2129In this case, the new disk may have the same
2130.Pa /dev
2131path as the old device, even though it is actually a different disk.
2132ZFS recognizes this.
2133.Bl -tag -width Ds
2134.It Fl f
2135Forces use of
2136.Ar new_device ,
74580a94 2137even if it appears to be in use.
cda0317e
GM
2138Not all devices can be overridden in this manner.
2139.It Fl o Ar property Ns = Ns Ar value
2140Sets the given pool properties. See the
2141.Sx Properties
2142section for a list of valid properties that can be set.
2143The only property supported at the moment is
2144.Sy ashift .
2145.El
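.Pp
For example, a failed disk that was physically swapped in place could be
replaced under its original name, or substituted with a different disk
.Pq hypothetical device names :
.Bd -literal
# zpool replace tank sdb
# zpool replace tank sdb sdj
.Ed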
2146.It Xo
2147.Nm
2148.Cm scrub
0ea05c64 2149.Op Fl s | Fl p
cda0317e
GM
2150.Ar pool Ns ...
2151.Xc
0ea05c64 2152Begins a scrub or resumes a paused scrub.
cda0317e
GM
2153The scrub examines all data in the specified pools to verify that it checksums
2154correctly.
2155For replicated
2156.Pq mirror or raidz
2157devices, ZFS automatically repairs any damage discovered during the scrub.
2158The
2159.Nm zpool Cm status
2160command reports the progress of the scrub and summarizes the results of the
2161scrub upon completion.
2162.Pp
2163Scrubbing and resilvering are very similar operations.
2164The difference is that resilvering only examines data that ZFS knows to be out
2165of date
2166.Po
2167for example, when attaching a new device to a mirror or replacing an existing
2168device
2169.Pc ,
2170whereas scrubbing examines all data to discover silent errors due to hardware
2171faults or disk failure.
2172.Pp
2173Because scrubbing and resilvering are I/O-intensive operations, ZFS only allows
2174one at a time.
0ea05c64 2175If a scrub is paused, the
cda0317e 2176.Nm zpool Cm scrub
0ea05c64 2177resumes it.
cda0317e
GM
2178If a resilver is in progress, ZFS does not allow a scrub to be started until the
2179resilver completes.
2180.Bl -tag -width Ds
2181.It Fl s
058ac9ba 2182Stop scrubbing.
cda0317e 2183.El
0ea05c64
AP
2184.Bl -tag -width Ds
2185.It Fl p
2186Pause scrubbing.
e4b6b2db
AP
2187Scrub pause state and progress are periodically synced to disk.
2188If the system is restarted or pool is exported during a paused scrub,
2189even after import, scrub will remain paused until it is resumed.
2190Once resumed the scrub will pick up from the place where it was last
2191checkpointed to disk.
0ea05c64
AP
2192To resume a paused scrub issue
2193.Nm zpool Cm scrub
2194again.
2195.El
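.Pp
For example, a scrub of a hypothetical pool
.Em tank
could be started, paused, and later resumed with:
.Bd -literal
# zpool scrub tank
# zpool scrub -p tank
# zpool scrub tank
.Ed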
cda0317e
GM
2196.It Xo
2197.Nm
80a91e74
TC
2198.Cm resilver
2199.Ar pool Ns ...
2200.Xc
2201Starts a resilver. If an existing resilver is already running it will be
2202restarted from the beginning. Any drives that were scheduled for a deferred
2203resilver will be added to the new one.
2204.It Xo
2205.Nm
1b939560
BB
2206.Cm trim
2207.Op Fl d
.Op Fl r Ar rate
.Op Fl c | Fl s
2209.Ar pool
2210.Op Ar device Ns ...
2211.Xc
2212Initiates an immediate on-demand TRIM operation for all of the free space in
2213a pool. This operation informs the underlying storage devices of all blocks
2214in the pool which are no longer allocated and allows thinly provisioned
2215devices to reclaim the space.
2216.Pp
2217A manual on-demand TRIM operation can be initiated irrespective of the
2218.Sy autotrim
2219pool property setting. See the documentation for the
2220.Sy autotrim
2221property above for the types of vdev devices which can be trimmed.
2222.Bl -tag -width Ds
.It Fl d, -secure
2224Causes a secure TRIM to be initiated. When performing a secure TRIM, the
2225device guarantees that data stored on the trimmed blocks has been erased.
2226This requires support from the device and is not supported by all SSDs.
.It Fl r, -rate Ar rate
2228Controls the rate at which the TRIM operation progresses. Without this
2229option TRIM is executed as quickly as possible. The rate, expressed in bytes
2230per second, is applied on a per-vdev basis and may be set differently for
2231each leaf vdev.
2232.It Fl c, -cancel
2233Cancel trimming on the specified devices, or all eligible devices if none
2234are specified.
2235If one or more target devices are invalid or are not currently being
2236trimmed, the command will fail and no cancellation will occur on any device.
.It Fl s, -suspend
2238Suspend trimming on the specified devices, or all eligible devices if none
2239are specified.
2240If one or more target devices are invalid or are not currently being
2241trimmed, the command will fail and no suspension will occur on any device.
2242Trimming can then be resumed by running
2243.Nm zpool Cm trim
2244with no flags on the relevant target devices.
2245.El
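.Pp
For example, a rate-limited TRIM of a hypothetical pool
.Em tank
could be started and later cancelled with:
.Bd -literal
# zpool trim -r 100M tank
# zpool trim -c tank
.Ed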
2246.It Xo
2247.Nm
cda0317e
GM
2248.Cm set
2249.Ar property Ns = Ns Ar value
2250.Ar pool
2251.Xc
2252Sets the given property on the specified pool.
2253See the
2254.Sx Properties
2255section for more information on what properties can be set and acceptable
2256values.
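.Pp
For example, automatic expansion could be enabled on a hypothetical pool
.Em tank
with:
.Bd -literal
# zpool set autoexpand=on tank
.Ed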
2257.It Xo
2258.Nm
2259.Cm split
b5256303 2260.Op Fl gLlnP
cda0317e
GM
2261.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
2262.Op Fl R Ar root
2263.Ar pool newpool
2264.Op Ar device ...
2265.Xc
2266Splits devices off
2267.Ar pool
2268creating
2269.Ar newpool .
2270All vdevs in
2271.Ar pool
2272must be mirrors and the pool must not be in the process of resilvering.
2273At the time of the split,
2274.Ar newpool
2275will be a replica of
2276.Ar pool .
2277By default, the
2278last device in each mirror is split from
2279.Ar pool
2280to create
2281.Ar newpool .
2282.Pp
2283The optional device specification causes the specified device(s) to be
2284included in the new
2285.Ar pool
2286and, should any devices remain unspecified,
the last device in each mirror is used, as it would be by default.
2288.Bl -tag -width Ds
2289.It Fl g
2290Display vdev GUIDs instead of the normal device names. These GUIDs
2291can be used in place of device names for the zpool
2292detach/offline/remove/replace commands.
2293.It Fl L
2294Display real paths for vdevs resolving all symbolic links. This can
2295be used to look up the current block device name regardless of the
2296.Pa /dev/disk/
2297path used to open it.
b5256303
TC
2298.It Fl l
2299Indicates that this command will request encryption keys for all encrypted
2300datasets it attempts to mount as it is bringing the new pool online. Note that
2301if any datasets have a
2302.Sy keylocation
2303of
.Sy prompt ,
this command will block waiting for the keys to be entered.
Without this flag, encrypted datasets will be left unavailable until the keys
are loaded.
cda0317e
GM
2307.It Fl n
Do a dry run; do not actually perform the split.
2309Print out the expected configuration of
2310.Ar newpool .
2311.It Fl P
2312Display full paths for vdevs instead of only the last component of
2313the path. This can be used in conjunction with the
85912983 2314.Fl L
2315flag.
cda0317e
GM
2316.It Fl o Ar property Ns = Ns Ar value
2317Sets the specified property for
2318.Ar newpool .
2319See the
2320.Sx Properties
2321section for more information on the available pool properties.
2322.It Fl R Ar root
2323Set
2324.Sy altroot
2325for
2326.Ar newpool
2327to
2328.Ar root
2329and automatically import it.
2330.El
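.Pp
For example, a mirrored pool could be split into a new pool that is
immediately imported under an alternate root
.Pq hypothetical names :
.Bd -literal
# zpool split -R /mnt/backup tank tankcopy
.Ed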
2331.It Xo
2332.Nm
2333.Cm status
7a8ed6b8 2334.Op Fl c Op Ar SCRIPT1 Ns Oo , Ns Ar SCRIPT2 Oc Ns ...
1b939560 2335.Op Fl DigLpPstvx
cda0317e
GM
2336.Op Fl T Sy u Ns | Ns Sy d
2337.Oo Ar pool Oc Ns ...
2338.Op Ar interval Op Ar count
2339.Xc
2340Displays the detailed health status for the given pools.
2341If no
2342.Ar pool
2343is specified, then the status of each pool in the system is displayed.
2344For more information on pool and device health, see the
2345.Sx Device Failure and Recovery
2346section.
2347.Pp
2348If a scrub or resilver is in progress, this command reports the percentage done
2349and the estimated time to completion.
2350Both of these are only approximate, because the amount of data in the pool and
2351the other workloads on the system can change.
2352.Bl -tag -width Ds
7a8ed6b8 2353.It Fl c Op Ar SCRIPT1 Ns Oo , Ns Ar SCRIPT2 Oc Ns ...
cda0317e
GM
2354Run a script (or scripts) on each vdev and include the output as a new column
2355in the
2356.Nm zpool Cm status
2357output. See the
2358.Fl c
2359option of
2360.Nm zpool Cm iostat
2361for complete details.
a769fb53
BB
2362.It Fl i
2363Display vdev initialization status.
cda0317e
GM
2364.It Fl g
2365Display vdev GUIDs instead of the normal device names. These GUIDs
2366can be used in place of device names for the zpool
2367detach/offline/remove/replace commands.
2368.It Fl L
2369Display real paths for vdevs resolving all symbolic links. This can
2370be used to look up the current block device name regardless of the
2371.Pa /dev/disk/
2372path used to open it.
ad796b8a
TH
2373.It Fl p
2374Display numbers in parsable (exact) values.
f4ae39a1
BB
2375.It Fl P
2376Display full paths for vdevs instead of only the last component of
2377the path. This can be used in conjunction with the
85912983 2378.Fl L
2379flag.
cda0317e
GM
2380.It Fl D
2381Display a histogram of deduplication statistics, showing the allocated
2382.Pq physically present on disk
2383and referenced
2384.Pq logically referenced in the pool
2385block counts and sizes by reference count.
ad796b8a
TH
2386.It Fl s
Display the number of leaf VDEV slow IOs.
This is the number of IOs that didn't complete in
.Sy zio_slow_io_ms
milliseconds (default 30 seconds).
This does not necessarily mean the IOs failed to complete, just that they took
an unreasonably long amount of time.
This may indicate a problem with the underlying storage.
1b939560
BB
2392.It Fl t
2393Display vdev TRIM status.
cda0317e 2394.It Fl T Sy u Ns | Ns Sy d
2e2ddc30 2395Display a time stamp.
cda0317e 2396Specify
f23b0242 2397.Sy u
cda0317e
GM
2398for a printed representation of the internal representation of time.
2399See
2400.Xr time 2 .
2401Specify
f23b0242 2402.Sy d
cda0317e
GM
2403for standard date format.
2404See
2405.Xr date 1 .
2406.It Fl v
2407Displays verbose data error information, printing out a complete list of all
2408data errors since the last complete pool scrub.
2409.It Fl x
2410Only display status for pools that are exhibiting errors or are otherwise
2411unavailable.
2412Warnings about pools not using the latest on-disk format will not be included.
2413.El
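.Pp
For example, a quick check limited to pools with problems might report:
.Bd -literal
# zpool status -x
all pools are healthy
.Ed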
2414.It Xo
2415.Nm
2416.Cm sync
2417.Op Ar pool ...
2418.Xc
2419This command forces all in-core dirty data to be written to the primary
2420pool storage and not the ZIL. It will also update administrative
2421information including quota reporting. Without arguments,
2422.Sy zpool sync
2423will sync all pools on the system. Otherwise, it will sync only the
2424specified pool(s).
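.Pp
For example, dirty data for a single hypothetical pool could be flushed with:
.Bd -literal
# zpool sync tank
.Ed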
2425.It Xo
2426.Nm
2427.Cm upgrade
2428.Xc
2429Displays pools which do not have all supported features enabled and pools
2430formatted using a legacy ZFS version number.
2431These pools can continue to be used, but some features may not be available.
2432Use
2433.Nm zpool Cm upgrade Fl a
2434to enable all features on all pools.
2435.It Xo
2436.Nm
2437.Cm upgrade
2438.Fl v
2439.Xc
2440Displays legacy ZFS versions supported by the current software.
2441See
2442.Xr zpool-features 5
for a description of the feature flags supported by the current software.
2444.It Xo
2445.Nm
2446.Cm upgrade
2447.Op Fl V Ar version
2448.Fl a Ns | Ns Ar pool Ns ...
2449.Xc
2450Enables all supported features on the given pool.
2451Once this is done, the pool will no longer be accessible on systems that do not
2452support feature flags.
2453See
9d489ab3 2454.Xr zpool-features 5
cda0317e
GM
2455for details on compatibility with systems that support feature flags, but do not
2456support all features enabled on the pool.
2457.Bl -tag -width Ds
2458.It Fl a
b9b24bb4 2459Enables all supported features on all pools.
cda0317e
GM
2460.It Fl V Ar version
2461Upgrade to the specified legacy version.
2462If the
2463.Fl V
2464flag is specified, no features will be enabled on the pool.
2465This option can only be used to increase the version number up to the last
2466supported legacy version number.
2467.El
2468.El
2469.Sh EXIT STATUS
2470The following exit values are returned:
2471.Bl -tag -width Ds
2472.It Sy 0
2473Successful completion.
2474.It Sy 1
2475An error occurred.
2476.It Sy 2
2477Invalid command line options were specified.
2478.El
2479.Sh EXAMPLES
2480.Bl -tag -width Ds
2481.It Sy Example 1 No Creating a RAID-Z Storage Pool
2482The following command creates a pool with a single raidz root vdev that
2483consists of six disks.
2484.Bd -literal
2485# zpool create tank raidz sda sdb sdc sdd sde sdf
2486.Ed
2487.It Sy Example 2 No Creating a Mirrored Storage Pool
2488The following command creates a pool with two mirrors, where each mirror
2489contains two disks.
2490.Bd -literal
2491# zpool create tank mirror sda sdb mirror sdc sdd
2492.Ed
2493.It Sy Example 3 No Creating a ZFS Storage Pool by Using Partitions
54e5f226 2494The following command creates an unmirrored pool using two disk partitions.
cda0317e
GM
2495.Bd -literal
2496# zpool create tank sda1 sdb2
2497.Ed
2498.It Sy Example 4 No Creating a ZFS Storage Pool by Using Files
2499The following command creates an unmirrored pool using files.
2500While not recommended, a pool based on files can be useful for experimental
2501purposes.
2502.Bd -literal
2503# zpool create tank /path/to/file/a /path/to/file/b
2504.Ed
2505.It Sy Example 5 No Adding a Mirror to a ZFS Storage Pool
2506The following command adds two mirrored disks to the pool
2507.Em tank ,
2508assuming the pool is already made up of two-way mirrors.
2509The additional space is immediately available to any datasets within the pool.
2510.Bd -literal
2511# zpool add tank mirror sda sdb
2512.Ed
2513.It Sy Example 6 No Listing Available ZFS Storage Pools
2514The following command lists all available pools on the system.
2515In this case, the pool
2516.Em zion
2517is faulted due to a missing device.
058ac9ba 2518The results from this command are similar to the following:
cda0317e
GM
2519.Bd -literal
2520# zpool list
d72cd017
TK
NAME    SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
rpool  19.9G  8.43G  11.4G         -    33%    42%  1.00x  ONLINE  -
tank   61.5G  20.0G  41.5G         -    48%    32%  1.00x  ONLINE  -
zion       -      -      -         -      -      -      -  FAULTED -
cda0317e
GM
2525.Ed
2526.It Sy Example 7 No Destroying a ZFS Storage Pool
2527The following command destroys the pool
2528.Em tank
2529and any datasets contained within.
2530.Bd -literal
2531# zpool destroy -f tank
2532.Ed
2533.It Sy Example 8 No Exporting a ZFS Storage Pool
2534The following command exports the devices in pool
2535.Em tank
2536so that they can be relocated or later imported.
2537.Bd -literal
2538# zpool export tank
2539.Ed
2540.It Sy Example 9 No Importing a ZFS Storage Pool
2541The following command displays available pools, and then imports the pool
2542.Em tank
2543for use on the system.
058ac9ba 2544The results from this command are similar to the following:
cda0317e
GM
2545.Bd -literal
2546# zpool import
058ac9ba
BB
   pool: tank
     id: 15451357997522795478
  state: ONLINE
 action: The pool can be imported using its name or numeric identifier.
 config:

        tank        ONLINE
          mirror    ONLINE
            sda     ONLINE
            sdb     ONLINE
058ac9ba 2557
cda0317e
GM
2558# zpool import tank
2559.Ed
2560.It Sy Example 10 No Upgrading All ZFS Storage Pools to the Current Version
2561The following command upgrades all ZFS Storage pools to the current version of
2562the software.
2563.Bd -literal
2564# zpool upgrade -a
2565This system is currently running ZFS version 2.
2566.Ed
2567.It Sy Example 11 No Managing Hot Spares
058ac9ba 2568The following command creates a new pool with an available hot spare:
cda0317e
GM
2569.Bd -literal
2570# zpool create tank mirror sda sdb spare sdc
2571.Ed
2572.Pp
2573If one of the disks were to fail, the pool would be reduced to the degraded
2574state.
2575The failed device can be replaced using the following command:
2576.Bd -literal
2577# zpool replace tank sda sdd
2578.Ed
2579.Pp
2580Once the data has been resilvered, the spare is automatically removed and is
7c9abcf8 2581made available for use should another device fail.
cda0317e
GM
2582The hot spare can be permanently removed from the pool using the following
2583command:
2584.Bd -literal
2585# zpool remove tank sdc
2586.Ed
2587.It Sy Example 12 No Creating a ZFS Pool with Mirrored Separate Intent Logs
2588The following command creates a ZFS storage pool consisting of two, two-way
2589mirrors and mirrored log devices:
2590.Bd -literal
2591# zpool create pool mirror sda sdb mirror sdc sdd log mirror \\
2592 sde sdf
2593.Ed
2594.It Sy Example 13 No Adding Cache Devices to a ZFS Pool
2595The following command adds two disks for use as cache devices to a ZFS storage
2596pool:
2597.Bd -literal
2598# zpool add pool cache sdc sdd
2599.Ed
2600.Pp
2601Once added, the cache devices gradually fill with content from main memory.
2602Depending on the size of your cache devices, it could take over an hour for
2603them to fill.
2604Capacity and reads can be monitored using the
2605.Cm iostat
2606option as follows:
2607.Bd -literal
2608# zpool iostat -v pool 5
2609.Ed
a1d477c2
MA
2610.It Sy Example 14 No Removing a Mirrored top-level (Log or Data) Device
2611The following commands remove the mirrored log device
2612.Sy mirror-2
2613and mirrored top-level data device
2614.Sy mirror-1 .
2615.Pp
058ac9ba 2616Given this configuration:
cda0317e
GM
2617.Bd -literal
  pool: tank
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            sda     ONLINE       0     0     0
            sdb     ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            sdc     ONLINE       0     0     0
            sdd     ONLINE       0     0     0
        logs
          mirror-2  ONLINE       0     0     0
            sde     ONLINE       0     0     0
            sdf     ONLINE       0     0     0
cda0317e
GM
2635.Ed
2636.Pp
2637The command to remove the mirrored log
2638.Sy mirror-2
2639is:
2640.Bd -literal
2641# zpool remove tank mirror-2
2642.Ed
a1d477c2
MA
2643.Pp
2644The command to remove the mirrored data
2645.Sy mirror-1
2646is:
2647.Bd -literal
2648# zpool remove tank mirror-1
2649.Ed
cda0317e
GM
2650.It Sy Example 15 No Displaying expanded space on a device
2651The following command displays the detailed information for the pool
2652.Em data .
2653This pool is comprised of a single raidz vdev where one of its devices
2654increased its capacity by 10GB.
2655In this example, the pool will not be able to utilize this extra capacity until
2656all the devices under the raidz vdev have been expanded.
2657.Bd -literal
2658# zpool list -v data
d72cd017
TK
NAME         SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
data        23.9G  14.6G  9.30G         -    48%    61%  1.00x  ONLINE  -
  raidz1    23.9G  14.6G  9.30G         -    48%
    sda         -      -      -         -      -
    sdb         -      -      -       10G      -
    sdc         -      -      -         -      -
cda0317e
GM
2665.Ed
2666.It Sy Example 16 No Adding output columns
2667Additional columns can be added to the
2668.Nm zpool Cm status
2669and
2670.Nm zpool Cm iostat
2671output with
2672.Fl c
2673option.
2674.Bd -literal
2675# zpool status -c vendor,model,size
    NAME      STATE  READ WRITE CKSUM vendor  model        size
    tank      ONLINE 0    0     0
    mirror-0  ONLINE 0    0     0
    U1        ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
    U10       ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
    U11       ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
    U12       ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
    U13       ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
    U14       ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T

# zpool iostat -vc slaves
              capacity     operations     bandwidth
 pool        alloc   free   read  write   read  write  slaves
 ----------  -----  -----  -----  -----  -----  -----  ---------
 tank        20.4G  7.23T     26    152  20.7M  21.6M
   mirror    20.4G  7.23T     26    152  20.7M  21.6M
     U1          -      -      0     31  1.46K  20.6M  sdb sdff
     U10         -      -      0      1  3.77K  13.3K  sdas sdgw
     U11         -      -      0      1   288K  13.3K  sdat sdgx
     U12         -      -      0      1  78.4K  13.3K  sdau sdgy
     U13         -      -      0      1   128K  13.3K  sdav sdgz
     U14         -      -      0      1  63.2K  13.3K  sdfk sdg
2698.Ed
2699.El
2700.Sh ENVIRONMENT VARIABLES
2701.Bl -tag -width "ZFS_ABORT"
2702.It Ev ZFS_ABORT
2703Cause
2704.Nm zpool
2705to dump core on exit for the purposes of running
90cdf283 2706.Sy ::findleaks .
cda0317e
GM
2707.El
2708.Bl -tag -width "ZPOOL_IMPORT_PATH"
2709.It Ev ZPOOL_IMPORT_PATH
2710The search path for devices or files to use with the pool. This is a colon-separated list of directories in which
2711.Nm zpool
2712looks for device nodes and files.
2713Similar to the
2714.Fl d
2715option in
2716.Nm zpool import .
2717.El
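.Pp
For example, a one-off import could restrict the device search to a couple of
directories
.Pq hypothetical paths :
.Bd -literal
# ZPOOL_IMPORT_PATH=/dev/disk/by-vdev:/dev/disk/by-id zpool import tank
.Ed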
2718.Bl -tag -width "ZPOOL_VDEV_NAME_GUID"
2719.It Ev ZPOOL_VDEV_NAME_GUID
Cause
.Nm zpool
subcommands to output vdev guids by default.
This behavior is identical to the
.Nm zpool status -g
command line option.
2725.El
2726.Bl -tag -width "ZPOOL_VDEV_NAME_FOLLOW_LINKS"
2727.It Ev ZPOOL_VDEV_NAME_FOLLOW_LINKS
2728Cause
2729.Nm zpool
2730subcommands to follow links for vdev names by default. This behavior is identical to the
2731.Nm zpool status -L
2732command line option.
2733.El
2734.Bl -tag -width "ZPOOL_VDEV_NAME_PATH"
2735.It Ev ZPOOL_VDEV_NAME_PATH
2736Cause
2737.Nm zpool
2738subcommands to output full vdev path names by default. This
2739behavior is identical to the
.Nm zpool status -P
2741command line option.
2742.El
2743.Bl -tag -width "ZFS_VDEV_DEVID_OPT_OUT"
2744.It Ev ZFS_VDEV_DEVID_OPT_OUT
39fc0cb5 2745Older ZFS on Linux implementations had issues when attempting to display pool
cda0317e
GM
2746config VDEV names if a
2747.Sy devid
2748NVP value is present in the pool's config.
2749.Pp
For example, a pool that originated on the illumos platform would have a devid
cda0317e
GM
2751value in the config and
2752.Nm zpool status
2753would fail when listing the config.
This would also be true for future Linux-based pools.
cda0317e
GM
2755.Pp
2756A pool can be stripped of any
2757.Sy devid
2758values on import or prevented from adding
2759them on
2760.Nm zpool create
2761or
2762.Nm zpool add
2763by setting
2764.Sy ZFS_VDEV_DEVID_OPT_OUT .
2765.El
2766.Bl -tag -width "ZPOOL_SCRIPTS_AS_ROOT"
2767.It Ev ZPOOL_SCRIPTS_AS_ROOT
Allow a privileged user to run
cda0317e
GM
2769.Nm zpool status/iostat
2770with the
2771.Fl c
7a8ed6b8 2772option. Normally, only unprivileged users are allowed to run
cda0317e
GM
2773.Fl c .
2774.El
2775.Bl -tag -width "ZPOOL_SCRIPTS_PATH"
2776.It Ev ZPOOL_SCRIPTS_PATH
2777The search path for scripts when running
2778.Nm zpool status/iostat
2779with the
2780.Fl c
099700d9 2781option. This is a colon-separated list of directories and overrides the default
cda0317e
GM
2782.Pa ~/.zpool.d
2783and
2784.Pa /etc/zfs/zpool.d
2785search paths.
2786.El
2787.Bl -tag -width "ZPOOL_SCRIPTS_ENABLED"
2788.It Ev ZPOOL_SCRIPTS_ENABLED
2789Allow a user to run
2790.Nm zpool status/iostat
2791with the
2792.Fl c
2793option. If
2794.Sy ZPOOL_SCRIPTS_ENABLED
2795is not set, it is assumed that the user is allowed to run
2796.Nm zpool status/iostat -c .
90cdf283 2797.El
cda0317e
GM
2798.Sh INTERFACE STABILITY
2799.Sy Evolving
2800.Sh SEE ALSO
cda0317e
GM
2801.Xr zfs-events 5 ,
2802.Xr zfs-module-parameters 5 ,
90cdf283 2803.Xr zpool-features 5 ,
2804.Xr zed 8 ,
2805.Xr zfs 8