1.\"
2.\" CDDL HEADER START
3.\"
4.\" The contents of this file are subject to the terms of the
5.\" Common Development and Distribution License (the "License").
6.\" You may not use this file except in compliance with the License.
7.\"
8.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
9.\" or http://www.opensolaris.org/os/licensing.
10.\" See the License for the specific language governing permissions
11.\" and limitations under the License.
12.\"
13.\" When distributing Covered Code, include this CDDL HEADER in each
14.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
15.\" If applicable, add the following below this CDDL HEADER, with the
16.\" fields enclosed by brackets "[]" replaced with your own identifying
17.\" information: Portions Copyright [yyyy] [name of copyright owner]
18.\"
19.\" CDDL HEADER END
20.\"
21.\"
058ac9ba 22.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
a448a255 23.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved.
df831108 24.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
bec1067d 25.\" Copyright (c) 2017 Datto Inc.
eb201f50 26.\" Copyright (c) 2018 George Melikov. All Rights Reserved.
d7323e79 27.\" Copyright 2017 Nexenta Systems, Inc.
d3f2cd7e 28.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved.
9ae529ec 29.\"
7c9a4292 30.Dd November 29, 2018
31.Dt ZPOOL 8 SMM
32.Os Linux
33.Sh NAME
34.Nm zpool
35.Nd configure ZFS storage pools
36.Sh SYNOPSIS
37.Nm
50478c6d 38.Fl ?V
39.Nm
40.Cm add
41.Op Fl fgLnP
42.Oo Fl o Ar property Ns = Ns Ar value Oc
43.Ar pool vdev Ns ...
44.Nm
45.Cm attach
46.Op Fl f
47.Oo Fl o Ar property Ns = Ns Ar value Oc
48.Ar pool device new_device
49.Nm
50.Cm checkpoint
51.Op Fl d, -discard
52.Ar pool
53.Nm
54.Cm clear
55.Ar pool
56.Op Ar device
57.Nm
58.Cm create
59.Op Fl dfn
60.Op Fl m Ar mountpoint
61.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
62.Oo Fl o Ar feature@feature Ns = Ns Ar value Oc
63.Oo Fl O Ar file-system-property Ns = Ns Ar value Oc Ns ...
64.Op Fl R Ar root
65.Ar pool vdev Ns ...
66.Nm
67.Cm destroy
68.Op Fl f
69.Ar pool
70.Nm
71.Cm detach
72.Ar pool device
73.Nm
74.Cm events
88f9c939 75.Op Fl vHf Oo Ar pool Oc | Fl c
76.Nm
77.Cm export
78.Op Fl a
79.Op Fl f
80.Ar pool Ns ...
81.Nm
82.Cm get
83.Op Fl Hp
84.Op Fl o Ar field Ns Oo , Ns Ar field Oc Ns ...
85.Sy all Ns | Ns Ar property Ns Oo , Ns Ar property Oc Ns ...
48b0b649 86.Oo Ar pool Oc Ns ...
87.Nm
88.Cm history
89.Op Fl il
90.Oo Ar pool Oc Ns ...
91.Nm
92.Cm import
93.Op Fl D
522db292 94.Op Fl d Ar dir Ns | Ns device
95.Nm
96.Cm import
97.Fl a
b5256303 98.Op Fl DflmN
cda0317e 99.Op Fl F Oo Fl n Oc Oo Fl T Oc Oo Fl X Oc
d2734cce 100.Op Fl -rewind-to-checkpoint
522db292 101.Op Fl c Ar cachefile Ns | Ns Fl d Ar dir Ns | Ns device
102.Op Fl o Ar mntopts
103.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
104.Op Fl R Ar root
105.Nm
106.Cm import
b5256303 107.Op Fl Dflm
cda0317e 108.Op Fl F Oo Fl n Oc Oo Fl T Oc Oo Fl X Oc
d2734cce 109.Op Fl -rewind-to-checkpoint
522db292 110.Op Fl c Ar cachefile Ns | Ns Fl d Ar dir Ns | Ns device
111.Op Fl o Ar mntopts
112.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
113.Op Fl R Ar root
114.Op Fl s
115.Ar pool Ns | Ns Ar id
116.Op Ar newpool Oo Fl t Oc
117.Nm
619f0976 118.Cm initialize
a769fb53 119.Op Fl c | Fl s
120.Ar pool
121.Op Ar device Ns ...
122.Nm
123.Cm iostat
124.Op Oo Oo Fl c Ar SCRIPT Oc Oo Fl lq Oc Oc Ns | Ns Fl rw
125.Op Fl T Sy u Ns | Ns Sy d
8fccfa8e 126.Op Fl ghHLnpPvy
127.Oo Oo Ar pool Ns ... Oc Ns | Ns Oo Ar pool vdev Ns ... Oc Ns | Ns Oo Ar vdev Ns ... Oc Oc
128.Op Ar interval Op Ar count
129.Nm
130.Cm labelclear
131.Op Fl f
132.Ar device
133.Nm
134.Cm list
135.Op Fl HgLpPv
136.Op Fl o Ar property Ns Oo , Ns Ar property Oc Ns ...
137.Op Fl T Sy u Ns | Ns Sy d
138.Oo Ar pool Oc Ns ...
139.Op Ar interval Op Ar count
140.Nm
141.Cm offline
142.Op Fl f
143.Op Fl t
144.Ar pool Ar device Ns ...
145.Nm
146.Cm online
147.Op Fl e
148.Ar pool Ar device Ns ...
149.Nm
150.Cm reguid
151.Ar pool
152.Nm
153.Cm reopen
d3f2cd7e 154.Op Fl n
155.Ar pool
156.Nm
157.Cm remove
a1d477c2 158.Op Fl np
159.Ar pool Ar device Ns ...
160.Nm
161.Cm remove
162.Fl s
163.Ar pool
164.Nm
165.Cm replace
166.Op Fl f
167.Oo Fl o Ar property Ns = Ns Ar value Oc
168.Ar pool Ar device Op Ar new_device
169.Nm
170.Cm resilver
171.Ar pool Ns ...
172.Nm
cda0317e 173.Cm scrub
0ea05c64 174.Op Fl s | Fl p
175.Ar pool Ns ...
176.Nm
177.Cm trim
178.Op Fl d
179.Op Fl r Ar rate
180.Op Fl c | Fl s
181.Ar pool
182.Op Ar device Ns ...
183.Nm
184.Cm set
185.Ar property Ns = Ns Ar value
186.Ar pool
187.Nm
188.Cm split
b5256303 189.Op Fl gLlnP
190.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
191.Op Fl R Ar root
192.Ar pool newpool
193.Oo Ar device Oc Ns ...
194.Nm
195.Cm status
196.Oo Fl c Ar SCRIPT Oc
1b939560 197.Op Fl DigLpPstvx
198.Op Fl T Sy u Ns | Ns Sy d
199.Oo Ar pool Oc Ns ...
200.Op Ar interval Op Ar count
201.Nm
202.Cm sync
203.Oo Ar pool Oc Ns ...
204.Nm
205.Cm upgrade
206.Nm
207.Cm upgrade
208.Fl v
209.Nm
210.Cm upgrade
211.Op Fl V Ar version
212.Fl a Ns | Ns Ar pool Ns ...
213.Nm
214.Cm version
215.Sh DESCRIPTION
216The
217.Nm
218command configures ZFS storage pools.
219A storage pool is a collection of devices that provides physical storage and
220data replication for ZFS datasets.
221All datasets within a storage pool share the same space.
222See
223.Xr zfs 8
224for information on managing datasets.
225.Ss Virtual Devices (vdevs)
226A "virtual device" describes a single device or a collection of devices
227organized according to certain performance and fault characteristics.
228The following virtual devices are supported:
229.Bl -tag -width Ds
230.It Sy disk
231A block device, typically located under
232.Pa /dev .
233ZFS can use individual slices or partitions, though the recommended mode of
234operation is to use whole disks.
235A disk can be specified by a full path, or it can be a shorthand name
236.Po the relative portion of the path under
237.Pa /dev
238.Pc .
239A whole disk can be specified by omitting the slice or partition designation.
240For example,
241.Pa sda
242is equivalent to
243.Pa /dev/sda .
244When given a whole disk, ZFS automatically labels the disk, if necessary.
245.It Sy file
246A regular file.
247The use of files as a backing store is strongly discouraged.
248It is designed primarily for experimental purposes, as the fault tolerance of a
249file is only as good as the file system of which it is a part.
250A file must be specified by a full path.
251.It Sy mirror
252A mirror of two or more devices.
253Data is replicated in an identical fashion across all components of a mirror.
254A mirror with N disks of size X can hold X bytes and can withstand (N-1) devices
255failing before data integrity is compromised.
256.It Sy raidz , raidz1 , raidz2 , raidz3
257A variation on RAID-5 that allows for better distribution of parity and
258eliminates the RAID-5
259.Qq write hole
260.Pq in which data and parity become inconsistent after a power loss .
261Data and parity is striped across all disks within a raidz group.
262.Pp
263A raidz group can have single-, double-, or triple-parity, meaning that the
264raidz group can sustain one, two, or three failures, respectively, without
265losing any data.
266The
267.Sy raidz1
268vdev type specifies a single-parity raidz group; the
269.Sy raidz2
270vdev type specifies a double-parity raidz group; and the
271.Sy raidz3
272vdev type specifies a triple-parity raidz group.
273The
274.Sy raidz
275vdev type is an alias for
276.Sy raidz1 .
277.Pp
278A raidz group with N disks of size X with P parity disks can hold approximately
279(N-P)*X bytes and can withstand P device(s) failing before data integrity is
280compromised.
281The minimum number of devices in a raidz group is one more than the number of
282parity disks.
283The recommended number is between 3 and 9 to help increase performance.
284.It Sy spare
9810410a 285A pseudo-vdev which keeps track of available hot spares for a pool.
286For more information, see the
287.Sx Hot Spares
288section.
289.It Sy log
290A separate intent log device.
291If more than one log device is specified, then writes are load-balanced between
292devices.
293Log devices can be mirrored.
294However, raidz vdev types are not supported for the intent log.
295For more information, see the
296.Sx Intent Log
297section.
cc99f275 298.It Sy dedup
9810410a 299A device dedicated solely for dedup data.
300The redundancy of this device should match the redundancy of the other normal
301devices in the pool. If more than one dedup device is specified, then
9810410a 302allocations are load-balanced between those devices.
303.It Sy special
304A device dedicated solely for allocating various kinds of internal metadata,
305and optionally small file data.
306The redundancy of this device should match the redundancy of the other normal
307devices in the pool. If more than one special device is specified, then
9810410a 308allocations are load-balanced between those devices.
309.Pp
310For more information on special allocations, see the
311.Sx Special Allocation Class
312section.
313.It Sy cache
314A device used to cache storage pool data.
315A cache device cannot be configured as a mirror or raidz group.
316For more information, see the
317.Sx Cache Devices
318section.
319.El
320.Pp
321Virtual devices cannot be nested, so a mirror or raidz virtual device can only
322contain files or disks.
323Mirrors of mirrors
324.Pq or other combinations
325are not allowed.
326.Pp
327A pool can have any number of virtual devices at the top of the configuration
328.Po known as
329.Qq root vdevs
330.Pc .
331Data is dynamically distributed across all top-level devices to balance data
332among devices.
333As new virtual devices are added, ZFS automatically places data on the newly
334available devices.
335.Pp
336Virtual devices are specified one at a time on the command line, separated by
337whitespace.
338The keywords
339.Sy mirror
340and
341.Sy raidz
342are used to distinguish where a group ends and another begins.
343For example, the following creates two root vdevs, each a mirror of two disks:
344.Bd -literal
345# zpool create mypool mirror sda sdb mirror sdc sdd
346.Ed
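.Pp
Similarly, the following would create a pool with a single raidz2 root vdev of
six disks; per the capacity formula above, such a group holds approximately the
capacity of four of the disks and can withstand two of them failing:
.Bd -literal
# zpool create mypool raidz2 sda sdb sdc sdd sde sdf
.Ed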
347.Ss Device Failure and Recovery
348ZFS supports a rich set of mechanisms for handling device failure and data
349corruption.
350All metadata and data is checksummed, and ZFS automatically repairs bad data
351from a good copy when corruption is detected.
352.Pp
353In order to take advantage of these features, a pool must make use of some form
354of redundancy, using either mirrored or raidz groups.
355While ZFS supports running in a non-redundant configuration, where each root
356vdev is simply a disk or file, this is strongly discouraged.
357A single case of bit corruption can render some or all of your data unavailable.
358.Pp
359A pool's health status is described by one of three states: online, degraded,
360or faulted.
361An online pool has all devices operating normally.
362A degraded pool is one in which one or more devices have failed, but the data is
363still available due to a redundant configuration.
364A faulted pool has corrupted metadata, or one or more faulted devices, and
365insufficient replicas to continue functioning.
366.Pp
367The health of the top-level vdev, such as mirror or raidz device, is
368potentially impacted by the state of its associated vdevs, or component
369devices.
370A top-level vdev or component device is in one of the following states:
371.Bl -tag -width "DEGRADED"
372.It Sy DEGRADED
373One or more top-level vdevs is in the degraded state because one or more
374component devices are offline.
375Sufficient replicas exist to continue functioning.
376.Pp
377One or more component devices is in the degraded or faulted state, but
378sufficient replicas exist to continue functioning.
379The underlying conditions are as follows:
380.Bl -bullet
381.It
382The number of checksum errors exceeds acceptable levels and the device is
383degraded as an indication that something may be wrong.
384ZFS continues to use the device as necessary.
385.It
386The number of I/O errors exceeds acceptable levels.
387The device could not be marked as faulted because there are insufficient
388replicas to continue functioning.
389.El
390.It Sy FAULTED
391One or more top-level vdevs is in the faulted state because one or more
392component devices are offline.
393Insufficient replicas exist to continue functioning.
394.Pp
395One or more component devices is in the faulted state, and insufficient
396replicas exist to continue functioning.
397The underlying conditions are as follows:
398.Bl -bullet
399.It
6b4e21c6 400The device could be opened, but the contents did not match expected values.
401.It
402The number of I/O errors exceeds acceptable levels and the device is faulted to
403prevent further use of the device.
404.El
405.It Sy OFFLINE
406The device was explicitly taken offline by the
407.Nm zpool Cm offline
408command.
409.It Sy ONLINE
058ac9ba 410The device is online and functioning.
411.It Sy REMOVED
412The device was physically removed while the system was running.
413Device removal detection is hardware-dependent and may not be supported on all
414platforms.
415.It Sy UNAVAIL
416The device could not be opened.
417If a pool is imported when a device was unavailable, then the device will be
418identified by a unique identifier instead of its path since the path was never
419correct in the first place.
420.El
421.Pp
422If a device is removed and later re-attached to the system, ZFS attempts
423to put the device online automatically.
424Device attach detection is hardware-dependent and might not be supported on all
425platforms.
426.Ss Hot Spares
427ZFS allows devices to be associated with pools as
428.Qq hot spares .
429These devices are not actively used in the pool, but when an active device
430fails, it is automatically replaced by a hot spare.
431To create a pool with hot spares, specify a
432.Sy spare
433vdev with any number of devices.
434For example,
435.Bd -literal
54e5f226 436# zpool create pool mirror sda sdb spare sdc sdd
437.Ed
438.Pp
439Spares can be shared across multiple pools, and can be added with the
440.Nm zpool Cm add
441command and removed with the
442.Nm zpool Cm remove
443command.
444Once a spare replacement is initiated, a new
445.Sy spare
446vdev is created within the configuration that will remain there until the
447original device is replaced.
448At this point, the hot spare becomes available again if another device fails.
449.Pp
If a pool has a shared spare that is currently being used, the pool cannot be
451exported since other pools may use this shared spare, which may lead to
452potential data corruption.
453.Pp
454Shared spares add some risk. If the pools are imported on different hosts, and
455both pools suffer a device failure at the same time, both could attempt to use
456the spare at the same time. This may not be detected, resulting in data
457corruption.
458.Pp
7c9abcf8 459An in-progress spare replacement can be cancelled by detaching the hot spare.
460If the original faulted device is detached, then the hot spare assumes its
461place in the configuration, and is removed from the spare list of all active
462pools.
463.Pp
058ac9ba 464Spares cannot replace log devices.
465.Ss Intent Log
466The ZFS Intent Log (ZIL) satisfies POSIX requirements for synchronous
467transactions.
468For instance, databases often require their transactions to be on stable storage
469devices when returning from a system call.
470NFS and other applications can also use
471.Xr fsync 2
472to ensure data stability.
473By default, the intent log is allocated from blocks within the main pool.
474However, it might be possible to get better performance using separate intent
475log devices such as NVRAM or a dedicated disk.
476For example:
477.Bd -literal
478# zpool create pool sda sdb log sdc
479.Ed
480.Pp
481Multiple log devices can also be specified, and they can be mirrored.
482See the
483.Sx EXAMPLES
484section for an example of mirroring multiple log devices.
485.Pp
486Log devices can be added, replaced, attached, detached and removed. In
487addition, log devices are imported and exported as part of the pool
488that contains them.
a1d477c2 489Mirrored devices can be removed by specifying the top-level mirror vdev.
490.Ss Cache Devices
491Devices can be added to a storage pool as
492.Qq cache devices .
493These devices provide an additional layer of caching between main memory and
494disk.
495For read-heavy workloads, where the working set size is much larger than what
can be cached in main memory, using cache devices allows much more of this
497working set to be served from low latency media.
498Using cache devices provides the greatest performance improvement for random
499read-workloads of mostly static content.
500.Pp
501To create a pool with cache devices, specify a
502.Sy cache
503vdev with any number of devices.
504For example:
505.Bd -literal
506# zpool create pool sda sdb cache sdc sdd
507.Ed
508.Pp
509Cache devices cannot be mirrored or part of a raidz configuration.
510If a read error is encountered on a cache device, that read I/O is reissued to
511the original storage pool device, which might be part of a mirrored or raidz
512configuration.
513.Pp
514The content of the cache devices is considered volatile, as is the case with
515other system caches.
516.Ss Pool checkpoint
Before starting critical procedures that include destructive actions (e.g.
.Nm zfs Cm destroy
), an administrator can checkpoint the pool's state and, in the case of a
mistake or failure, rewind the entire pool back to the checkpoint.
521Otherwise, the checkpoint can be discarded when the procedure has completed
522successfully.
523.Pp
524A pool checkpoint can be thought of as a pool-wide snapshot and should be used
525with care as it contains every part of the pool's state, from properties to vdev
526configuration.
527Thus, while a pool has a checkpoint certain operations are not allowed.
528Specifically, vdev removal/attach/detach, mirror splitting, and
529changing the pool's guid.
530Adding a new vdev is supported but in the case of a rewind it will have to be
531added again.
532Finally, users of this feature should keep in mind that scrubs in a pool that
533has a checkpoint do not repair checkpointed data.
534.Pp
535To create a checkpoint for a pool:
536.Bd -literal
537# zpool checkpoint pool
538.Ed
539.Pp
540To later rewind to its checkpointed state, you need to first export it and
541then rewind it during import:
542.Bd -literal
543# zpool export pool
544# zpool import --rewind-to-checkpoint pool
545.Ed
546.Pp
547To discard the checkpoint from a pool:
548.Bd -literal
549# zpool checkpoint -d pool
550.Ed
551.Pp
552Dataset reservations (controlled by the
553.Nm reservation
554or
555.Nm refreservation
556zfs properties) may be unenforceable while a checkpoint exists, because the
557checkpoint is allowed to consume the dataset's reservation.
558Finally, data that is part of the checkpoint but has been freed in the
559current state of the pool won't be scanned during a scrub.
560.Ss Special Allocation Class
561The allocations in the special class are dedicated to specific block types.
562By default this includes all metadata, the indirect blocks of user data, and
563any dedup data. The class can also be provisioned to accept a limited
564percentage of small file data blocks.
565.Pp
566A pool must always have at least one general (non-specified) vdev before
567other devices can be assigned to the special class. If the special class
568becomes full, then allocations intended for it will spill back into the
569normal class.
570.Pp
571Dedup data can be excluded from the special class by setting the
572.Sy zfs_ddt_data_is_special
573zfs module parameter to false (0).
574.Pp
575Inclusion of small file blocks in the special class is opt-in. Each dataset
576can control the size of small file blocks allowed in the special class by
577setting the
578.Sy special_small_blocks
dataset property. It defaults to zero, so you must opt in by setting it to a
580non-zero value. See
581.Xr zfs 8
582for more info on setting this property.
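.Pp
For example, a pool with a mirrored special vdev could be created, and a
dataset opted in to storing small blocks in the special class, as follows
(device names and the 32K threshold are illustrative only):
.Bd -literal
# zpool create pool mirror sda sdb special mirror sdc sdd
# zfs set special_small_blocks=32K pool
.Ed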
583.Ss Properties
584Each pool has several properties associated with it.
585Some properties are read-only statistics while others are configurable and
586change the behavior of the pool.
587.Pp
588The following are read-only properties:
589.Bl -tag -width Ds
590.It Cm allocated
591Amount of storage used within the pool.
592See
593.Sy fragmentation
594and
595.Sy free
596for more information.
597.It Sy capacity
598Percentage of pool space used.
599This property can also be referred to by its shortened column name,
600.Sy cap .
601.It Sy expandsize
9ae529ec 602Amount of uninitialized space within the pool or device that can be used to
603increase the total capacity of the pool.
604Uninitialized space consists of any space on an EFI labeled vdev which has not
605been brought online
606.Po e.g, using
607.Nm zpool Cm online Fl e
608.Pc .
609This space occurs when a LUN is dynamically expanded.
610.It Sy fragmentation
611The amount of fragmentation in the pool. As the amount of space
612.Sy allocated
613increases, it becomes more difficult to locate
614.Sy free
615space. This may result in lower write performance compared to pools with more
616unfragmented free space.
cda0317e 617.It Sy free
9ae529ec 618The amount of free space available in the pool.
619By contrast, the
620.Xr zfs 8
621.Sy available
622property describes how much new data can be written to ZFS filesystems/volumes.
623The zpool
624.Sy free
625property is not generally useful for this purpose, and can be substantially more than the zfs
626.Sy available
space. This discrepancy is due to several factors, including raidz parity; zfs
628reservation, quota, refreservation, and refquota properties; and space set aside by
629.Sy spa_slop_shift
630(see
631.Xr zfs-module-parameters 5
632for more information).
cda0317e 633.It Sy freeing
9ae529ec 634After a file system or snapshot is destroyed, the space it was using is
635returned to the pool asynchronously.
636.Sy freeing
637is the amount of space remaining to be reclaimed.
638Over time
639.Sy freeing
640will decrease while
641.Sy free
642increases.
643.It Sy health
644The current health of the pool.
645Health can be one of
646.Sy ONLINE , DEGRADED , FAULTED , OFFLINE, REMOVED , UNAVAIL .
647.It Sy guid
058ac9ba 648A unique identifier for the pool.
649.It Sy load_guid
650A unique identifier for the pool.
651Unlike the
652.Sy guid
653property, this identifier is generated every time we load the pool (e.g. does
654not persist across imports/exports) and never changes while the pool is loaded
655(even if a
656.Sy reguid
657operation takes place).
cda0317e 658.It Sy size
058ac9ba 659Total size of the storage pool.
660.It Sy unsupported@ Ns Em feature_guid
661Information about unsupported features that are enabled on the pool.
662See
663.Xr zpool-features 5
664for details.
665.El
666.Pp
667The space usage properties report actual physical space available to the
668storage pool.
669The physical space can be different from the total amount of space that any
670contained datasets can actually use.
671The amount of space used in a raidz configuration depends on the characteristics
672of the data being written.
673In addition, ZFS reserves some space for internal accounting that the
674.Xr zfs 8
675command takes into account, but the
676.Nm
677command does not.
678For non-full pools of a reasonable size, these effects should be invisible.
679For small pools, or pools that are close to being completely full, these
680discrepancies may become more noticeable.
681.Pp
058ac9ba 682The following property can be set at creation time and import time:
683.Bl -tag -width Ds
684.It Sy altroot
685Alternate root directory.
686If set, this directory is prepended to any mount points within the pool.
687This can be used when examining an unknown pool where the mount points cannot be
688trusted, or in an alternate boot environment, where the typical paths are not
689valid.
690.Sy altroot
691is not a persistent property.
692It is valid only while the system is up.
693Setting
694.Sy altroot
695defaults to using
696.Sy cachefile Ns = Ns Sy none ,
697though this may be overridden using an explicit setting.
698.El
699.Pp
700The following property can be set only at import time:
701.Bl -tag -width Ds
702.It Sy readonly Ns = Ns Sy on Ns | Ns Sy off
703If set to
704.Sy on ,
705the pool will be imported in read-only mode.
706This property can also be referred to by its shortened column name,
707.Sy rdonly .
708.El
709.Pp
710The following properties can be set at creation time and import time, and later
711changed with the
712.Nm zpool Cm set
713command:
714.Bl -tag -width Ds
715.It Sy ashift Ns = Ns Sy ashift
716Pool sector size exponent, to the power of
717.Sy 2
718(internally referred to as
719.Sy ashift
9810410a 720). Values from 9 to 16, inclusive, are valid; also, the
721value 0 (the default) means to auto-detect using the kernel's block
722layer and a ZFS internal exception list. I/O operations will be aligned
723to the specified size boundaries. Additionally, the minimum (disk)
724write size will be set to the specified size, so this represents a
725space vs. performance trade-off. For optimal performance, the pool
726sector size should be greater than or equal to the sector size of the
727underlying disks. The typical case for setting this property is when
728performance is important and the underlying disks use 4KiB sectors but
729report 512B sectors to the OS (for compatibility reasons); in that
730case, set
731.Sy ashift=12
732(which is 1<<12 = 4096). When set, this property is
733used as the default hint value in subsequent vdev operations (add,
734attach and replace). Changing this value will not modify any existing
735vdev, not even on disk replacement; however it can be used, for
instance, to replace a dying 512B-sector disk with a newer 4KiB-sector
device: this will probably result in bad performance but at the
738same time could prevent loss of data.
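.Pp
For example, a pool for disks that use 4KiB physical sectors could be created
as follows (device names are illustrative only):
.Bd -literal
# zpool create -o ashift=12 tank mirror sda sdb
.Ed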
739.It Sy autoexpand Ns = Ns Sy on Ns | Ns Sy off
740Controls automatic pool expansion when the underlying LUN is grown.
741If set to
742.Sy on ,
743the pool will be resized according to the size of the expanded device.
744If the device is part of a mirror or raidz then all devices within that
745mirror/raidz group must be expanded before the new space is made available to
746the pool.
747The default behavior is
748.Sy off .
749This property can also be referred to by its shortened column name,
750.Sy expand .
751.It Sy autoreplace Ns = Ns Sy on Ns | Ns Sy off
752Controls automatic device replacement.
753If set to
754.Sy off ,
755device replacement must be initiated by the administrator by using the
756.Nm zpool Cm replace
757command.
758If set to
759.Sy on ,
760any new device, found in the same physical location as a device that previously
761belonged to the pool, is automatically formatted and replaced.
762The default behavior is
763.Sy off .
764This property can also be referred to by its shortened column name,
765.Sy replace .
766Autoreplace can also be used with virtual disks (like device
767mapper) provided that you use the /dev/disk/by-vdev paths setup by
768vdev_id.conf. See the
769.Xr vdev_id 8
770man page for more details.
771Autoreplace and autoonline require the ZFS Event Daemon be configured and
772running. See the
773.Xr zed 8
774man page for more details.
775.It Sy bootfs Ns = Ns Sy (unset) Ns | Ns Ar pool Ns / Ns Ar dataset
776Identifies the default bootable dataset for the root pool. This property is
777expected to be set mainly by the installation and upgrade programs.
778Not all Linux distribution boot processes use the bootfs property.
779.It Sy cachefile Ns = Ns Ar path Ns | Ns Sy none
780Controls the location of where the pool configuration is cached.
781Discovering all pools on system startup requires a cached copy of the
782configuration data that is stored on the root file system.
783All pools in this cache are automatically imported when the system boots.
784Some environments, such as install and clustering, need to cache this
785information in a different location so that pools are not automatically
786imported.
787Setting this property caches the pool configuration in a different location that
788can later be imported with
789.Nm zpool Cm import Fl c .
9810410a 790Setting it to the value
cda0317e 791.Sy none
9810410a 792creates a temporary pool that is never cached, and the
793.Qq
794.Pq empty string
795uses the default location.
796.Pp
797Multiple pools can share the same cache file.
798Because the kernel destroys and recreates this file when pools are added and
799removed, care should be taken when attempting to access this file.
800When the last pool using a
801.Sy cachefile
bbf1ad67 802is exported or destroyed, the file will be empty.
803.It Sy comment Ns = Ns Ar text
804A text string consisting of printable ASCII characters that will be stored
805such that it is available even if the pool becomes faulted.
806An administrator can provide additional information about a pool using this
807property.
808.It Sy dedupditto Ns = Ns Ar number
809This property is deprecated. In a future release, it will no longer have any
810effect.
811.Pp
812Threshold for the number of block ditto copies.
813If the reference count for a deduplicated block increases above this number, a
814new ditto copy of this block is automatically stored.
815The default setting is
816.Sy 0
817which causes no ditto copies to be created for deduplicated blocks.
818The minimum legal nonzero setting is
819.Sy 100 .
820.It Sy delegation Ns = Ns Sy on Ns | Ns Sy off
821Controls whether a non-privileged user is granted access based on the dataset
822permissions defined on the dataset.
823See
824.Xr zfs 8
825for more information on ZFS delegated administration.
826.It Sy failmode Ns = Ns Sy wait Ns | Ns Sy continue Ns | Ns Sy panic
827Controls the system behavior in the event of catastrophic pool failure.
828This condition is typically a result of a loss of connectivity to the underlying
829storage device(s) or a failure of all devices within the pool.
830The behavior of such an event is determined as follows:
831.Bl -tag -width "continue"
832.It Sy wait
833Blocks all I/O access until the device connectivity is recovered and the errors
834are cleared.
835This is the default behavior.
836.It Sy continue
837Returns
838.Er EIO
839to any new write I/O requests but allows reads to any of the remaining healthy
840devices.
841Any write requests that have yet to be committed to disk would be blocked.
842.It Sy panic
058ac9ba 843Prints out a message to the console and generates a system crash dump.
cda0317e 844.El
845.It Sy autotrim Ns = Ns Sy on Ns | Ns Sy off
846When set to
847.Sy on
848space which has been recently freed, and is no longer allocated by the pool,
849will be periodically trimmed. This allows block device vdevs which support
850BLKDISCARD, such as SSDs, or file vdevs on which the underlying file system
851supports hole-punching, to reclaim unused blocks. The default setting for
852this property is
853.Sy off .
854.Pp
855Automatic TRIM does not immediately reclaim blocks after a free. Instead,
it will optimistically delay allowing smaller ranges to be aggregated into
857a few larger ones. These can then be issued more efficiently to the storage.
858.Pp
859Be aware that automatic trimming of recently freed data blocks can put
860significant stress on the underlying storage devices. This will vary
depending on how well the specific device handles these commands. For
862lower end devices it is often possible to achieve most of the benefits
863of automatic trimming by running an on-demand (manual) TRIM periodically
864using the
865.Nm zpool Cm trim
866command.
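.Pp
For example, automatic trimming can be enabled on an existing pool with:
.Bd -literal
# zpool set autotrim=on pool
.Ed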
867.It Sy feature@ Ns Ar feature_name Ns = Ns Sy enabled
868The value of this property is the current state of
869.Ar feature_name .
870The only valid value when setting this property is
871.Sy enabled
872which moves
873.Ar feature_name
874to the enabled state.
875See
876.Xr zpool-features 5
877for details on feature states.
878.It Sy listsnapshots Ns = Ns Sy on Ns | Ns Sy off
879Controls whether information about snapshots associated with this pool is
880output when
881.Nm zfs Cm list
882is run without the
883.Fl t
884option.
885The default value is
886.Sy off .
887This property can also be referred to by its shortened name,
888.Sy listsnaps .
889.It Sy multihost Ns = Ns Sy on Ns | Ns Sy off
890Controls whether a pool activity check should be performed during
891.Nm zpool Cm import .
892When a pool is determined to be active it cannot be imported, even with the
893.Fl f
894option. This property is intended to be used in failover configurations
4f3218ae 895where multiple hosts have access to a pool on shared storage.
f72ecb8d 896.Pp
897Multihost provides protection on import only. It does not protect against an
898individual device being used in multiple pools, regardless of the type of vdev.
899See the discussion under
.Sy zpool create .
f72ecb8d 901.Pp
902When this property is on, periodic writes to storage occur to show the pool is
903in use. See
904.Sy zfs_multihost_interval
905in the
906.Xr zfs-module-parameters 5
907man page. In order to enable this property each host must set a unique hostid.
908See
909.Xr genhostid 1
b9373170 910.Xr zgenhostid 8
d699aaef 911.Xr spl-module-parameters 5
912for additional details. The default value is
913.Sy off .
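.Pp
For example, a host could generate a hostid and then enable the property as
follows (the pool name is illustrative only):
.Bd -literal
# zgenhostid
# zpool set multihost=on pool
.Ed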
914.It Sy version Ns = Ns Ar version
915The current on-disk version of the pool.
916This can be increased, but never decreased.
917The preferred method of updating pools is with the
918.Nm zpool Cm upgrade
919command, though this property can be used when a specific version is needed for
920backwards compatibility.
921Once feature flags are enabled on a pool this property will no longer have a
922value.
923.El
924.Ss Subcommands
925All subcommands that modify state are logged persistently to the pool in their
926original form.
927.Pp
928The
929.Nm
930command provides subcommands to create and destroy storage pools, add capacity
931to storage pools, and provide information about the storage pools.
932The following subcommands are supported:
933.Bl -tag -width Ds
934.It Xo
935.Nm
936.Fl ?
937.Xc
058ac9ba 938Displays a help message.
939.It Xo
940.Nm
941.Fl V, -version
942.Xc
943An alias for the
944.Nm zpool Cm version
945subcommand.
946.It Xo
947.Nm
948.Cm add
949.Op Fl fgLnP
950.Oo Fl o Ar property Ns = Ns Ar value Oc
951.Ar pool vdev Ns ...
952.Xc
953Adds the specified virtual devices to the given pool.
954The
955.Ar vdev
956specification is described in the
957.Sx Virtual Devices
958section.
959The behavior of the
960.Fl f
961option, and the device checks performed are described in the
962.Nm zpool Cm create
963subcommand.
964.Bl -tag -width Ds
965.It Fl f
966Forces use of
967.Ar vdev Ns s ,
968even if they appear in use or specify a conflicting replication level.
969Not all devices can be overridden in this manner.
970.It Fl g
971Display
972.Ar vdev ,
973GUIDs instead of the normal device names. These GUIDs can be used in place of
974device names for the zpool detach/offline/remove/replace commands.
975.It Fl L
976Display real paths for
977.Ar vdev Ns s
978resolving all symbolic links. This can be used to look up the current block
979device name regardless of the /dev/disk/ path used to open it.
980.It Fl n
981Displays the configuration that would be used without actually adding the
982.Ar vdev Ns s .
983The actual pool creation can still fail due to insufficient privileges or
984device sharing.
985.It Fl P
986Display real paths for
987.Ar vdev Ns s
988instead of only the last component of the path. This can be used in
85912983 989conjunction with the
990.Fl L
991flag.
992.It Fl o Ar property Ns = Ns Ar value
993Sets the given pool properties. See the
994.Sx Properties
995section for a list of valid properties that can be set. The only property
996supported at the moment is ashift.
997.El
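.Pp
For example, a new mirrored top-level vdev could be added to an existing pool
with (device names are illustrative only):
.Bd -literal
# zpool add pool mirror sde sdf
.Ed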
998.It Xo
999.Nm
1000.Cm attach
1001.Op Fl f
1002.Oo Fl o Ar property Ns = Ns Ar value Oc
1003.Ar pool device new_device
1004.Xc
1005Attaches
1006.Ar new_device
1007to the existing
1008.Ar device .
1009The existing device cannot be part of a raidz configuration.
1010If
1011.Ar device
1012is not currently part of a mirrored configuration,
1013.Ar device
1014automatically transforms into a two-way mirror of
1015.Ar device
1016and
1017.Ar new_device .
1018If
1019.Ar device
1020is part of a two-way mirror, attaching
1021.Ar new_device
1022creates a three-way mirror, and so on.
1023In either case,
1024.Ar new_device
1025begins to resilver immediately.
1026.Bl -tag -width Ds
1027.It Fl f
1028Forces use of
1029.Ar new_device ,
74580a94 1030even if it appears to be in use.
1031Not all devices can be overridden in this manner.
1032.It Fl o Ar property Ns = Ns Ar value
1033Sets the given pool properties. See the
1034.Sx Properties
1035section for a list of valid properties that can be set. The only property
1036supported at the moment is ashift.
1037.El
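.Pp
For example, a single-device pool could be converted to a two-way mirror by
attaching a second disk (device names are illustrative only):
.Bd -literal
# zpool attach pool sda sdb
.Ed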
1038.It Xo
1039.Nm
1040.Cm checkpoint
1041.Op Fl d, -discard
1042.Ar pool
1043.Xc
1044Checkpoints the current state of
.Ar pool ,
which can be later restored by
1047.Nm zpool Cm import --rewind-to-checkpoint .
1048The existence of a checkpoint in a pool prohibits the following
1049.Nm zpool
1050commands:
1051.Cm remove ,
1052.Cm attach ,
1053.Cm detach ,
1054.Cm split ,
1055and
1056.Cm reguid .
1057In addition, it may break reservation boundaries if the pool lacks free
1058space.
1059The
1060.Nm zpool Cm status
1061command indicates the existence of a checkpoint or the progress of discarding a
1062checkpoint from a pool.
1063The
1064.Nm zpool Cm list
1065command reports how much space the checkpoint takes from the pool.
1066.Bl -tag -width Ds
1067.It Fl d, -discard
1068Discards an existing checkpoint from
1069.Ar pool .
1070.El
1071.It Xo
1072.Nm
1073.Cm clear
1074.Ar pool
1075.Op Ar device
1076.Xc
1077Clears device errors in a pool.
1078If no arguments are specified, all device errors within the pool are cleared.
1079If one or more devices is specified, only those errors associated with the
1080specified device or devices are cleared.
1081If multihost is enabled, and the pool has been suspended, this will not
1082resume I/O. While the pool was suspended, it may have been imported on
1083another host, and resuming I/O could result in pool damage.
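.Pp
For example, the error counters for a single device could be cleared with
(device name is illustrative only):
.Bd -literal
# zpool clear pool sda
.Ed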
1084.It Xo
1085.Nm
1086.Cm create
1087.Op Fl dfn
1088.Op Fl m Ar mountpoint
1089.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
1090.Oo Fl o Ar feature@feature Ns = Ns Ar value Oc Ns ...
1091.Oo Fl O Ar file-system-property Ns = Ns Ar value Oc Ns ...
1092.Op Fl R Ar root
1093.Op Fl t Ar tname
1094.Ar pool vdev Ns ...
1095.Xc
1096Creates a new storage pool containing the virtual devices specified on the
1097command line.
1098The pool name must begin with a letter, and can only contain
1099alphanumeric characters as well as underscore
1100.Pq Qq Sy _ ,
1101dash
90cdf283 1102.Pq Qq Sy \&- ,
1103colon
1104.Pq Qq Sy \&: ,
1105space
90cdf283 1106.Pq Qq Sy \&\ ,
1107and period
1108.Pq Qq Sy \&. .
1109The pool names
1110.Sy mirror ,
1111.Sy raidz ,
1112.Sy spare
1113and
1114.Sy log
1115are reserved, as are names beginning with
1116.Sy mirror ,
1117.Sy raidz ,
1118.Sy spare ,
1119and the pattern
1120.Sy c[0-9] .
1121The
1122.Ar vdev
1123specification is described in the
1124.Sx Virtual Devices
1125section.
1126.Pp
1127The command attempts to verify that each device specified is accessible and not
1128currently in use by another subsystem. However this check is not robust enough
1129to detect simultaneous attempts to use a new device in different pools, even if
1130.Sy multihost
1131is
.Sy enabled .
1133The
1134administrator must ensure that simultaneous invocations of any combination of
1135.Sy zpool replace ,
1136.Sy zpool create ,
1137.Sy zpool add ,
1138or
1139.Sy zpool labelclear ,
1140do not refer to the same device. Using the same device in two pools will
1141result in pool corruption.
f72ecb8d 1142.Pp
1143There are some uses, such as being currently mounted, or specified as the
1144dedicated dump device, that prevents a device from ever being used by ZFS.
1145Other uses, such as having a preexisting UFS file system, can be overridden with
1146the
1147.Fl f
1148option.
1149.Pp
1150The command also checks that the replication strategy for the pool is
1151consistent.
1152An attempt to combine redundant and non-redundant storage in a single pool, or
1153to mix disks and files, results in an error unless
1154.Fl f
1155is specified.
1156The use of differently sized devices within a single raidz or mirror group is
1157also flagged as an error unless
1158.Fl f
1159is specified.
1160.Pp
1161Unless the
1162.Fl R
1163option is specified, the default mount point is
1164.Pa / Ns Ar pool .
1165The mount point must not exist or must be empty, or else the root dataset
1166cannot be mounted.
1167This can be overridden with the
1168.Fl m
1169option.
1170.Pp
1171By default all supported features are enabled on the new pool unless the
1172.Fl d
1173option is specified.
1174.Bl -tag -width Ds
1175.It Fl d
1176Do not enable any features on the new pool.
1177Individual features can be enabled by setting their corresponding properties to
1178.Sy enabled
1179with the
1180.Fl o
1181option.
1182See
1183.Xr zpool-features 5
1184for details about feature properties.
1185.It Fl f
1186Forces use of
1187.Ar vdev Ns s ,
1188even if they appear in use or specify a conflicting replication level.
1189Not all devices can be overridden in this manner.
1190.It Fl m Ar mountpoint
1191Sets the mount point for the root dataset.
1192The default mount point is
1193.Pa /pool
1194or
1195.Pa altroot/pool
1196if
1197.Ar altroot
1198is specified.
1199The mount point must be an absolute path,
1200.Sy legacy ,
1201or
1202.Sy none .
1203For more information on dataset mount points, see
1204.Xr zfs 8 .
1205.It Fl n
1206Displays the configuration that would be used without actually creating the
1207pool.
1208The actual pool creation can still fail due to insufficient privileges or
1209device sharing.
1210.It Fl o Ar property Ns = Ns Ar value
1211Sets the given pool properties.
1212See the
1213.Sx Properties
1214section for a list of valid properties that can be set.
1215.It Fl o Ar feature@feature Ns = Ns Ar value
1216Sets the given pool feature. See the
1217.Xr zpool-features 5
1218section for a list of valid features that can be set.
1219Value can be either disabled or enabled.
1220.It Fl O Ar file-system-property Ns = Ns Ar value
1221Sets the given file system properties in the root file system of the pool.
1222See the
1223.Sx Properties
1224section of
1225.Xr zfs 8
1226for a list of valid properties that can be set.
1227.It Fl R Ar root
1228Equivalent to
1229.Fl o Sy cachefile Ns = Ns Sy none Fl o Sy altroot Ns = Ns Ar root
1230.It Fl t Ar tname
1231Sets the in-core pool name to
1232.Sy tname
1233while the on-disk name will be the name specified as the pool name
1234.Sy pool .
1235This will set the default cachefile property to none. This is intended
1236to handle name space collisions when creating pools for other systems,
1237such as virtual machines or physical machines whose pools live on network
1238block devices.
1239.El
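.Pp
For example, the following would create a pool with an explicit mount point
and a file system property set on its root dataset (all names are
illustrative only):
.Bd -literal
# zpool create -m /export/tank -O compression=on tank mirror sda sdb
.Ed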
1240.It Xo
1241.Nm
1242.Cm destroy
1243.Op Fl f
1244.Ar pool
1245.Xc
1246Destroys the given pool, freeing up any devices for other use.
1247This command tries to unmount any active datasets before destroying the pool.
1248.Bl -tag -width Ds
1249.It Fl f
058ac9ba 1250Forces any active datasets contained within the pool to be unmounted.
1251.El
1252.It Xo
1253.Nm
1254.Cm detach
1255.Ar pool device
1256.Xc
1257Detaches
1258.Ar device
1259from a mirror.
1260The operation is refused if there are no other valid replicas of the data.
If the device may be re-added to the pool later, consider using the
1262.Sy zpool offline
1263command instead.
1264.It Xo
1265.Nm
1266.Cm events
88f9c939 1267.Op Fl vHf Oo Ar pool Oc | Fl c
1268.Xc
1269Lists all recent events generated by the ZFS kernel modules. These events
1270are consumed by the
1271.Xr zed 8
1272and used to automate administrative tasks such as replacing a failed device
1273with a hot spare. For more information about the subclasses and event payloads
1274that can be generated see the
1275.Xr zfs-events 5
1276man page.
1277.Bl -tag -width Ds
1278.It Fl c
d050c627 1279Clear all previous events.
1280.It Fl f
1281Follow mode.
1282.It Fl H
1283Scripted mode. Do not display headers, and separate fields by a
1284single tab instead of arbitrary space.
1285.It Fl v
1286Print the entire payload for each event.
1287.El
1288.It Xo
1289.Nm
1290.Cm export
1291.Op Fl a
1292.Op Fl f
1293.Ar pool Ns ...
1294.Xc
1295Exports the given pools from the system.
1296All devices are marked as exported, but are still considered in use by other
1297subsystems.
1298The devices can be moved between systems
1299.Pq even those of different endianness
1300and imported as long as a sufficient number of devices are present.
1301.Pp
1302Before exporting the pool, all datasets within the pool are unmounted.
A pool cannot be exported if it has a shared spare that is currently being
1304used.
1305.Pp
1306For pools to be portable, you must give the
1307.Nm
1308command whole disks, not just partitions, so that ZFS can label the disks with
1309portable EFI labels.
1310Otherwise, disk drivers on platforms of different endianness will not recognize
1311the disks.
1312.Bl -tag -width Ds
1313.It Fl a
859735c0 1314Exports all pools imported on the system.
1315.It Fl f
1316Forcefully unmount all datasets, using the
1317.Nm unmount Fl f
1318command.
1319.Pp
1320This command will forcefully export the pool even if it has a shared spare that
1321is currently being used.
1322This may lead to potential data corruption.
1323.El
1324.It Xo
1325.Nm
1326.Cm get
1327.Op Fl Hp
1328.Op Fl o Ar field Ns Oo , Ns Ar field Oc Ns ...
1329.Sy all Ns | Ns Ar property Ns Oo , Ns Ar property Oc Ns ...
48b0b649 1330.Oo Ar pool Oc Ns ...
1331.Xc
1332Retrieves the given list of properties
1333.Po
1334or all properties if
1335.Sy all
1336is used
1337.Pc
1338for the specified storage pool(s).
1339These properties are displayed with the following fields:
1340.Bd -literal
2a8b84b7 1341 name Name of storage pool
1342 property Property name
1343 value Property value
1344 source Property source, either 'default' or 'local'.
1345.Ed
1346.Pp
1347See the
1348.Sx Properties
1349section for more information on the available pool properties.
1350.Bl -tag -width Ds
1351.It Fl H
1352Scripted mode.
1353Do not display headers, and separate fields by a single tab instead of arbitrary
1354space.
1355.It Fl o Ar field
1356A comma-separated list of columns to display.
d7323e79 1357.Sy name Ns \&, Ns Sy property Ns \&, Ns Sy value Ns \&, Ns Sy source
2a8b84b7 1358is the default value.
1359.It Fl p
1360Display numbers in parsable (exact) values.
1361.El
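.Pp
For example, a single property value could be retrieved in a script-friendly
form with (the pool name is illustrative only):
.Bd -literal
# zpool get -H -o value health pool
.Ed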
1362.It Xo
1363.Nm
1364.Cm history
1365.Op Fl il
1366.Oo Ar pool Oc Ns ...
1367.Xc
1368Displays the command history of the specified pool(s) or all pools if no pool is
1369specified.
1370.Bl -tag -width Ds
1371.It Fl i
1372Displays internally logged ZFS events in addition to user initiated events.
1373.It Fl l
Displays log records in long format, which in addition to standard format
includes the user name, the hostname, and the zone in which the operation was
1376performed.
1377.El
1378.It Xo
1379.Nm
1380.Cm import
1381.Op Fl D
522db292 1382.Op Fl d Ar dir Ns | Ns device
1383.Xc
1384Lists pools available to import.
1385If the
1386.Fl d
1387option is not specified, this command searches for devices in
1388.Pa /dev .
1389The
1390.Fl d
1391option can be specified multiple times, and all directories are searched.
1392If the device appears to be part of an exported pool, this command displays a
1393summary of the pool with the name of the pool, a numeric identifier, as well as
1394the vdev layout and current health of the device for each device or file.
1395Destroyed pools, pools that were previously destroyed with the
1396.Nm zpool Cm destroy
1397command, are not listed unless the
1398.Fl D
1399option is specified.
1400.Pp
1401The numeric identifier is unique, and can be used instead of the pool name when
1402multiple exported pools of the same name are available.
1403.Bl -tag -width Ds
1404.It Fl c Ar cachefile
1405Reads configuration from the given
1406.Ar cachefile
1407that was created with the
1408.Sy cachefile
1409pool property.
1410This
1411.Ar cachefile
1412is used instead of searching for devices.
1413.It Fl d Ar dir Ns | Ns Ar device
1414Uses
1415.Ar device
1416or searches for devices or files in
1417.Ar dir .
1418The
1419.Fl d
1420option can be specified multiple times.
1421.It Fl D
058ac9ba 1422Lists destroyed pools only.
1423.El
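.Pp
For example, pools visible through persistent device names could be listed
with:
.Bd -literal
# zpool import -d /dev/disk/by-id
.Ed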
1424.It Xo
1425.Nm
1426.Cm import
1427.Fl a
b5256303 1428.Op Fl DflmN
cda0317e 1429.Op Fl F Oo Fl n Oc Oo Fl T Oc Oo Fl X Oc
522db292 1430.Op Fl c Ar cachefile Ns | Ns Fl d Ar dir Ns | Ns device
1431.Op Fl o Ar mntopts
1432.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
1433.Op Fl R Ar root
1434.Op Fl s
1435.Xc
1436Imports all pools found in the search directories.
1437Identical to the previous command, except that all pools with a sufficient
1438number of devices available are imported.
1439Destroyed pools, pools that were previously destroyed with the
1440.Nm zpool Cm destroy
1441command, will not be imported unless the
1442.Fl D
1443option is specified.
1444.Bl -tag -width Ds
1445.It Fl a
6b4e21c6 1446Searches for and imports all pools found.
1447.It Fl c Ar cachefile
1448Reads configuration from the given
1449.Ar cachefile
1450that was created with the
1451.Sy cachefile
1452pool property.
1453This
1454.Ar cachefile
1455is used instead of searching for devices.
1456.It Fl d Ar dir Ns | Ns Ar device
1457Uses
1458.Ar device
1459or searches for devices or files in
1460.Ar dir .
1461The
1462.Fl d
1463option can be specified multiple times.
1464This option is incompatible with the
1465.Fl c
1466option.
1467.It Fl D
1468Imports destroyed pools only.
1469The
1470.Fl f
1471option is also required.
1472.It Fl f
1473Forces import, even if the pool appears to be potentially active.
1474.It Fl F
1475Recovery mode for a non-importable pool.
1476Attempt to return the pool to an importable state by discarding the last few
1477transactions.
1478Not all damaged pools can be recovered by using this option.
1479If successful, the data from the discarded transactions is irretrievably lost.
1480This option is ignored if the pool is importable or already imported.
1481.It Fl l
1482Indicates that this command will request encryption keys for all encrypted
1483datasets it attempts to mount as it is bringing the pool online. Note that if
1484any datasets have a
1485.Sy keylocation
1486of
1487.Sy prompt
1488this command will block waiting for the keys to be entered. Without this flag
1489encrypted datasets will be left unavailable until the keys are loaded.
cda0317e 1490.It Fl m
7f9d9946 1491Allows a pool to import when there is a missing log device.
1492Recent transactions can be lost because the log device will be discarded.
1493.It Fl n
1494Used with the
1495.Fl F
1496recovery option.
1497Determines whether a non-importable pool can be made importable again, but does
1498not actually perform the pool recovery.
1499For more details about pool recovery mode, see the
1500.Fl F
1501option, above.
1502.It Fl N
7f9d9946 1503Import the pool without mounting any file systems.
1504.It Fl o Ar mntopts
1505Comma-separated list of mount options to use when mounting datasets within the
1506pool.
1507See
1508.Xr zfs 8
1509for a description of dataset properties and mount options.
1510.It Fl o Ar property Ns = Ns Ar value
1511Sets the specified property on the imported pool.
1512See the
1513.Sx Properties
1514section for more information on the available pool properties.
1515.It Fl R Ar root
1516Sets the
1517.Sy cachefile
1518property to
1519.Sy none
1520and the
1521.Sy altroot
1522property to
1523.Ar root .
1524.It Fl -rewind-to-checkpoint
1525Rewinds pool to the checkpointed state.
1526Once the pool is imported with this flag there is no way to undo the rewind.
1527All changes and data that were written after the checkpoint are lost!
1528The only exception is when the
1529.Sy readonly
1530mounting option is enabled.
1531In this case, the checkpointed state of the pool is opened and an
administrator can see what the pool would look like if they were
1533to fully rewind.
1534.It Fl s
1535Scan using the default search path, the libblkid cache will not be
1536consulted. A custom search path may be specified by setting the
1537ZPOOL_IMPORT_PATH environment variable.
1538.It Fl X
1539Used with the
1540.Fl F
1541recovery option. Determines whether extreme
1542measures to find a valid txg should take place. This allows the pool to
1543be rolled back to a txg which is no longer guaranteed to be consistent.
1544Pools imported at an inconsistent txg may contain uncorrectable
1545checksum errors. For more details about pool recovery mode, see the
1546.Fl F
1547option, above. WARNING: This option can be extremely hazardous to the
1548health of your pool and should only be used as a last resort.
1549.It Fl T
1550Specify the txg to use for rollback. Implies
1551.Fl FX .
1552For more details
1553about pool recovery mode, see the
1554.Fl X
1555option, above. WARNING: This option can be extremely hazardous to the
1556health of your pool and should only be used as a last resort.
1557.El
1558.It Xo
1559.Nm
1560.Cm import
b5256303 1561.Op Fl Dflm
cda0317e 1562.Op Fl F Oo Fl n Oc Oo Fl t Oc Oo Fl T Oc Oo Fl X Oc
522db292 1563.Op Fl c Ar cachefile Ns | Ns Fl d Ar dir Ns | Ns device
1564.Op Fl o Ar mntopts
1565.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
1566.Op Fl R Ar root
1567.Op Fl s
1568.Ar pool Ns | Ns Ar id
1569.Op Ar newpool
1570.Xc
1571Imports a specific pool.
1572A pool can be identified by its name or the numeric identifier.
1573If
1574.Ar newpool
1575is specified, the pool is imported using the name
1576.Ar newpool .
1577Otherwise, it is imported with the same name as its exported name.
1578.Pp
1579If a device is removed from a system without running
1580.Nm zpool Cm export
1581first, the device appears as potentially active.
1582It cannot be determined if this was a failed export, or whether the device is
1583really in use from another host.
1584To import a pool in this state, the
1585.Fl f
1586option is required.
1587.Bl -tag -width Ds
1588.It Fl c Ar cachefile
1589Reads configuration from the given
1590.Ar cachefile
1591that was created with the
1592.Sy cachefile
1593pool property.
1594This
1595.Ar cachefile
1596is used instead of searching for devices.
1597.It Fl d Ar dir Ns | Ns Ar device
1598Uses
1599.Ar device
1600or searches for devices or files in
1601.Ar dir .
1602The
1603.Fl d
1604option can be specified multiple times.
1605This option is incompatible with the
1606.Fl c
1607option.
1608.It Fl D
1609Imports destroyed pool.
1610The
1611.Fl f
1612option is also required.
1613.It Fl f
058ac9ba 1614Forces import, even if the pool appears to be potentially active.
cda0317e
GM
1615.It Fl F
1616Recovery mode for a non-importable pool.
1617Attempt to return the pool to an importable state by discarding the last few
1618transactions.
1619Not all damaged pools can be recovered by using this option.
1620If successful, the data from the discarded transactions is irretrievably lost.
1621This option is ignored if the pool is importable or already imported.
b5256303
TC
1622.It Fl l
1623Indicates that this command will request encryption keys for all encrypted
1624datasets it attempts to mount as it is bringing the pool online. Note that if
1625any datasets have a
1626.Sy keylocation
1627of
1628.Sy prompt
this command will block waiting for the keys to be entered.
Without this flag, encrypted datasets will be left unavailable until the keys
are loaded.
cda0317e 1631.It Fl m
Allows a pool to be imported when a log device is missing.
Recent transactions can be lost because the log device will be discarded.
1634.It Fl n
1635Used with the
1636.Fl F
1637recovery option.
1638Determines whether a non-importable pool can be made importable again, but does
1639not actually perform the pool recovery.
1640For more details about pool recovery mode, see the
1641.Fl F
1642option, above.
1643.It Fl o Ar mntopts
1644Comma-separated list of mount options to use when mounting datasets within the
1645pool.
1646See
1647.Xr zfs 8
1648for a description of dataset properties and mount options.
1649.It Fl o Ar property Ns = Ns Ar value
1650Sets the specified property on the imported pool.
1651See the
1652.Sx Properties
1653section for more information on the available pool properties.
1654.It Fl R Ar root
1655Sets the
1656.Sy cachefile
1657property to
1658.Sy none
1659and the
1660.Sy altroot
1661property to
1662.Ar root .
1663.It Fl s
Scan using the default search path; the libblkid cache will not be
consulted.
A custom search path may be specified by setting the
ZPOOL_IMPORT_PATH environment variable.
1667.It Fl X
1668Used with the
1669.Fl F
recovery option.
Determines whether extreme measures should be taken to find a valid txg.
This allows the pool to be rolled back to a txg which is no longer
guaranteed to be consistent.
1673Pools imported at an inconsistent txg may contain uncorrectable
1674checksum errors. For more details about pool recovery mode, see the
1675.Fl F
1676option, above. WARNING: This option can be extremely hazardous to the
1677health of your pool and should only be used as a last resort.
1678.It Fl T
1679Specify the txg to use for rollback. Implies
1680.Fl FX .
1681For more details
1682about pool recovery mode, see the
1683.Fl X
1684option, above. WARNING: This option can be extremely hazardous to the
1685health of your pool and should only be used as a last resort.
1c68856b 1686.It Fl t
Used with
.Ar newpool .
Specifies that
.Ar newpool
is temporary.
Temporary pool names last until export.
Ensures that the original pool name will be used in all label updates and is
therefore retained upon export.
This option will also set the
.Sy cachefile
property to
.Sy none
when it is not explicitly specified.
1695.El
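.Pp
For example, a previously exported pool named
.Em tank
(the pool, directory, and new name here are illustrative) could be imported by
searching
.Pa /dev/disk/by-id
and renamed in one step with:
.Bd -literal
# zpool import -d /dev/disk/by-id tank newtank
.Ed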
1696.It Xo
1697.Nm
619f0976 1698.Cm initialize
a769fb53 1699.Op Fl c | Fl s
619f0976
GW
1700.Ar pool
1701.Op Ar device Ns ...
1702.Xc
1703Begins initializing by writing to all unallocated regions on the specified
1704devices, or all eligible devices in the pool if no individual devices are
1705specified.
1706Only leaf data or log devices may be initialized.
1707.Bl -tag -width Ds
1708.It Fl c, -cancel
1709Cancel initializing on the specified devices, or all eligible devices if none
1710are specified.
1711If one or more target devices are invalid or are not currently being
1712initialized, the command will fail and no cancellation will occur on any device.
.It Fl s, -suspend
1714Suspend initializing on the specified devices, or all eligible devices if none
1715are specified.
1716If one or more target devices are invalid or are not currently being
1717initialized, the command will fail and no suspension will occur on any device.
1718Initializing can then be resumed by running
1719.Nm zpool Cm initialize
1720with no flags on the relevant target devices.
1721.El
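.Pp
For example, to begin initializing every eligible device in the illustrative
pool
.Em tank
and later suspend the operation:
.Bd -literal
# zpool initialize tank
# zpool initialize -s tank
.Ed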
1722.It Xo
1723.Nm
cda0317e
GM
1724.Cm iostat
1725.Op Oo Oo Fl c Ar SCRIPT Oc Oo Fl lq Oc Oc Ns | Ns Fl rw
1726.Op Fl T Sy u Ns | Ns Sy d
8fccfa8e 1727.Op Fl ghHLnpPvy
cda0317e
GM
1728.Oo Oo Ar pool Ns ... Oc Ns | Ns Oo Ar pool vdev Ns ... Oc Ns | Ns Oo Ar vdev Ns ... Oc Oc
1729.Op Ar interval Op Ar count
1730.Xc
f8bb2a7e
KP
1731Displays logical I/O statistics for the given pools/vdevs. Physical I/Os may
1732be observed via
1733.Xr iostat 1 .
1734If writes are located nearby, they may be merged into a single
1735larger operation. Additional I/O may be generated depending on the level of
1736vdev redundancy.
1737To filter output, you may pass in a list of pools, a pool and list of vdevs
1738in that pool, or a list of any vdevs from any pool. If no items are specified,
1739statistics for every pool in the system are shown.
cda0317e
GM
When given an
.Ar interval ,
the statistics are printed every
.Ar interval
seconds until ^C is pressed.
If the
.Fl n
flag is specified, the headers are displayed only once; otherwise they are
displayed periodically.
If
.Ar count
is specified, the command exits after
.Ar count
reports are printed.
The first report printed is always the statistics since boot, regardless of
whether
.Ar interval
and
.Ar count
are passed.
However, this behavior can be suppressed with the
.Fl y
flag.
Also note that the units of
.Sy K ,
.Sy M ,
.Sy G ...
that are printed in the report are in base 1024.
To get the raw values, use the
.Fl p
flag.
1763.Bl -tag -width Ds
7a8ed6b8 1764.It Fl c Op Ar SCRIPT1 Ns Oo , Ns Ar SCRIPT2 Oc Ns ...
cda0317e
GM
1765Run a script (or scripts) on each vdev and include the output as a new column
1766in the
1767.Nm zpool Cm iostat
1768output. Users can run any script found in their
1769.Pa ~/.zpool.d
1770directory or from the system
1771.Pa /etc/zfs/zpool.d
d6bcf7ff
GDN
1772directory. Script names containing the slash (/) character are not allowed.
1773The default search path can be overridden by setting the
cda0317e
GM
1774ZPOOL_SCRIPTS_PATH environment variable. A privileged user can run
1775.Fl c
1776if they have the ZPOOL_SCRIPTS_AS_ROOT
1777environment variable set. If a script requires the use of a privileged
1778command, like
7a8ed6b8
NB
1779.Xr smartctl 8 ,
1780then it's recommended you allow the user access to it in
cda0317e
GM
1781.Pa /etc/sudoers
1782or add the user to the
1783.Pa /etc/sudoers.d/zfs
1784file.
1785.Pp
1786If
1787.Fl c
1788is passed without a script name, it prints a list of all scripts.
1789.Fl c
7a8ed6b8 1790also sets verbose mode
90cdf283 1791.No \&( Ns Fl v Ns No \&).
cda0317e
GM
1792.Pp
1793Script output should be in the form of "name=value". The column name is
1794set to "name" and the value is set to "value". Multiple lines can be
1795used to output multiple columns. The first line of output not in the
1796"name=value" format is displayed without a column title, and no more
1797output after that is displayed. This can be useful for printing error
1798messages. Blank or NULL values are printed as a '-' to make output
1799awk-able.
1800.Pp
d6418de0 1801The following environment variables are set before running each script:
cda0317e
GM
.Bl -tag -width "VDEV_ENC_SYSFS_PATH"
.It Sy VDEV_PATH
Full path to the vdev.
.It Sy VDEV_UPATH
Underlying path to the vdev (/dev/sd*).
For use with device mapper, multipath, or partitioned vdevs.
.It Sy VDEV_ENC_SYSFS_PATH
The sysfs path to the enclosure for the vdev (if any).
.El
1815.It Fl T Sy u Ns | Ns Sy d
058ac9ba 1816Display a time stamp.
cda0317e
GM
1817Specify
1818.Sy u
1819for a printed representation of the internal representation of time.
1820See
1821.Xr time 2 .
1822Specify
1823.Sy d
1824for standard date format.
1825See
1826.Xr date 1 .
1827.It Fl g
1828Display vdev GUIDs instead of the normal device names. These GUIDs
1829can be used in place of device names for the zpool
1830detach/offline/remove/replace commands.
1831.It Fl H
1832Scripted mode. Do not display headers, and separate fields by a
1833single tab instead of arbitrary space.
1834.It Fl L
1835Display real paths for vdevs resolving all symbolic links. This can
1836be used to look up the current block device name regardless of the
1837.Pa /dev/disk/
1838path used to open it.
8fccfa8e
DW
1839.It Fl n
Print headers only once, rather than repeating them periodically.
cda0317e
GM
1841.It Fl p
1842Display numbers in parsable (exact) values. Time values are in
1843nanoseconds.
1844.It Fl P
1845Display full paths for vdevs instead of only the last component of
1846the path. This can be used in conjunction with the
1847.Fl L
1848flag.
1849.It Fl r
1b939560
BB
1850Print request size histograms for the leaf vdev's IO. This includes
1851histograms of individual IOs (ind) and aggregate IOs (agg). These stats
1852can be useful for observing how well IO aggregation is working. Note
1853that TRIM IOs may exceed 16M, but will be counted as 16M.
cda0317e
GM
1854.It Fl v
Verbose statistics.
Reports usage statistics for individual vdevs within the pool, in addition to
the pool-wide statistics.
1857.It Fl y
Omit statistics since boot.
Normally the first line of output reports the statistics since boot.
This option suppresses that first line of output.
cda0317e 1862.It Fl w
eb201f50
GM
1863Display latency histograms:
1864.Pp
1865.Ar total_wait :
1866Total IO time (queuing + disk IO time).
1867.Ar disk_wait :
1868Disk IO time (time reading/writing the disk).
1869.Ar syncq_wait :
1870Amount of time IO spent in synchronous priority queues. Does not include
1871disk time.
1872.Ar asyncq_wait :
1873Amount of time IO spent in asynchronous priority queues. Does not include
1874disk time.
1875.Ar scrub :
1876Amount of time IO spent in scrub queue. Does not include disk time.
cda0317e 1877.It Fl l
193a37cb 1878Include average latency statistics:
cda0317e
GM
1879.Pp
1880.Ar total_wait :
193a37cb 1881Average total IO time (queuing + disk IO time).
cda0317e 1882.Ar disk_wait :
193a37cb 1883Average disk IO time (time reading/writing the disk).
cda0317e
GM
1884.Ar syncq_wait :
1885Average amount of time IO spent in synchronous priority queues. Does
1886not include disk time.
1887.Ar asyncq_wait :
1888Average amount of time IO spent in asynchronous priority queues.
1889Does not include disk time.
1890.Ar scrub :
1891Average queuing time in scrub queue. Does not include disk time.
1b939560
BB
1892.Ar trim :
1893Average queuing time in trim queue. Does not include disk time.
cda0317e
GM
1894.It Fl q
Include active queue statistics.
Each priority queue has both pending
.Pq Ar pend
and active
.Pq Ar activ
IOs.
Pending IOs are waiting to be issued to the disk, and active IOs have been
issued to disk and are waiting for completion.
These stats are broken out by priority queue:
1903.Pp
1904.Ar syncq_read/write :
1905Current number of entries in synchronous priority
1906queues.
1907.Ar asyncq_read/write :
193a37cb 1908Current number of entries in asynchronous priority queues.
cda0317e 1909.Ar scrubq_read :
193a37cb 1910Current number of entries in scrub queue.
1b939560
BB
1911.Ar trimq_write :
1912Current number of entries in trim queue.
cda0317e
GM
1913.Pp
1914All queue statistics are instantaneous measurements of the number of
1915entries in the queues. If you specify an interval, the measurements
1916will be sampled from the end of the interval.
1917.El
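.Pp
For example, to watch per-vdev statistics with average latencies for the
illustrative pool
.Em tank
every 5 seconds:
.Bd -literal
# zpool iostat -lv tank 5
.Ed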
1918.It Xo
1919.Nm
1920.Cm labelclear
1921.Op Fl f
1922.Ar device
1923.Xc
1924Removes ZFS label information from the specified
1925.Ar device .
1926The
1927.Ar device
1928must not be part of an active pool configuration.
1929.Bl -tag -width Ds
1930.It Fl f
131cc95c 1931Treat exported or foreign devices as inactive.
cda0317e
GM
1932.El
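.Pp
For example, assuming the device
.Pa /dev/sdc
(a hypothetical name) was once part of an exported pool, its stale label could
be cleared with:
.Bd -literal
# zpool labelclear -f /dev/sdc
.Ed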
1933.It Xo
1934.Nm
1935.Cm list
1936.Op Fl HgLpPv
1937.Op Fl o Ar property Ns Oo , Ns Ar property Oc Ns ...
1938.Op Fl T Sy u Ns | Ns Sy d
1939.Oo Ar pool Oc Ns ...
1940.Op Ar interval Op Ar count
1941.Xc
1942Lists the given pools along with a health status and space usage.
1943If no
1944.Ar pool Ns s
1945are specified, all pools in the system are listed.
1946When given an
1947.Ar interval ,
1948the information is printed every
1949.Ar interval
1950seconds until ^C is pressed.
1951If
1952.Ar count
1953is specified, the command exits after
1954.Ar count
1955reports are printed.
1956.Bl -tag -width Ds
1957.It Fl g
1958Display vdev GUIDs instead of the normal device names. These GUIDs
1959can be used in place of device names for the zpool
1960detach/offline/remove/replace commands.
1961.It Fl H
1962Scripted mode.
1963Do not display headers, and separate fields by a single tab instead of arbitrary
1964space.
1965.It Fl o Ar property
1966Comma-separated list of properties to display.
1967See the
1968.Sx Properties
1969section for a list of valid properties.
1970The default list is
fb8a10d5
EA
.Cm name , size , allocated , free , checkpoint , expandsize , fragmentation ,
1972.Cm capacity , dedupratio , health , altroot .
cda0317e
GM
1973.It Fl L
1974Display real paths for vdevs resolving all symbolic links. This can
1975be used to look up the current block device name regardless of the
1976/dev/disk/ path used to open it.
1977.It Fl p
1978Display numbers in parsable
1979.Pq exact
1980values.
1981.It Fl P
1982Display full paths for vdevs instead of only the last component of
1983the path. This can be used in conjunction with the
85912983 1984.Fl L
1985flag.
cda0317e 1986.It Fl T Sy u Ns | Ns Sy d
6e1b9d03 1987Display a time stamp.
cda0317e 1988Specify
f23b0242 1989.Sy u
cda0317e
GM
1990for a printed representation of the internal representation of time.
1991See
1992.Xr time 2 .
1993Specify
f23b0242 1994.Sy d
cda0317e
GM
1995for standard date format.
1996See
1997.Xr date 1 .
1998.It Fl v
1999Verbose statistics.
2000Reports usage statistics for individual vdevs within the pool, in addition to
the pool-wide statistics.
2002.El
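.Pp
For example, to list only the name, size, and health of the illustrative pool
.Em tank
in a script-friendly form:
.Bd -literal
# zpool list -H -o name,size,health tank
.Ed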
2003.It Xo
2004.Nm
2005.Cm offline
2006.Op Fl f
2007.Op Fl t
2008.Ar pool Ar device Ns ...
2009.Xc
2010Takes the specified physical device offline.
2011While the
2012.Ar device
2013is offline, no attempt is made to read or write to the device.
2014This command is not applicable to spares.
2015.Bl -tag -width Ds
2016.It Fl f
2017Force fault. Instead of offlining the disk, put it into a faulted
2018state. The fault will persist across imports unless the
2019.Fl t
2020flag was specified.
2021.It Fl t
2022Temporary.
2023Upon reboot, the specified physical device reverts to its previous state.
2024.El
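.Pp
For example, to temporarily take the illustrative device
.Pa sdb
in pool
.Em tank
offline until the next reboot:
.Bd -literal
# zpool offline -t tank sdb
.Ed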
2025.It Xo
2026.Nm
2027.Cm online
2028.Op Fl e
2029.Ar pool Ar device Ns ...
2030.Xc
058ac9ba 2031Brings the specified physical device online.
7c9abcf8 2032This command is not applicable to spares.
cda0317e
GM
2033.Bl -tag -width Ds
2034.It Fl e
2035Expand the device to use all available space.
2036If the device is part of a mirror or raidz then all devices must be expanded
2037before the new space will become available to the pool.
2038.El
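.Pp
For example, to bring the illustrative device
.Pa sdb
in pool
.Em tank
back online and expand it to use all available space:
.Bd -literal
# zpool online -e tank sdb
.Ed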
2039.It Xo
2040.Nm
2041.Cm reguid
2042.Ar pool
2043.Xc
2044Generates a new unique identifier for the pool.
2045You must ensure that all devices in this pool are online and healthy before
2046performing this action.
2047.It Xo
2048.Nm
2049.Cm reopen
d3f2cd7e 2050.Op Fl n
cda0317e
GM
2051.Ar pool
2052.Xc
5853fe79 2053Reopen all the vdevs associated with the pool.
d3f2cd7e
AB
2054.Bl -tag -width Ds
2055.It Fl n
2056Do not restart an in-progress scrub operation. This is not recommended and can
2057result in partially resilvered devices unless a second scrub is performed.
a94d38c0 2058.El
cda0317e
GM
2059.It Xo
2060.Nm
2061.Cm remove
a1d477c2 2062.Op Fl np
cda0317e
GM
2063.Ar pool Ar device Ns ...
2064.Xc
2065Removes the specified device from the pool.
2ced3cf0
BB
2066This command supports removing hot spare, cache, log, and both mirrored and
2067non-redundant primary top-level vdevs, including dedup and special vdevs.
When the primary pool storage includes a top-level raidz vdev, only hot spare,
cache, and log devices can be removed.
.Pp
2071Removing a top-level vdev reduces the total amount of space in the storage pool.
2072The specified device will be evacuated by copying all allocated space from it to
2073the other devices in the pool.
2074In this case, the
2075.Nm zpool Cm remove
2076command initiates the removal and returns, while the evacuation continues in
2077the background.
2078The removal progress can be monitored with
7c9a4292
BB
2079.Nm zpool Cm status .
If an IO error is encountered during the removal process, it will be
cancelled.
The
2ced3cf0
BB
2082.Sy device_removal
2083feature flag must be enabled to remove a top-level vdev, see
2084.Xr zpool-features 5 .
a1d477c2
MA
2085.Pp
A mirrored top-level device (log or data) can be removed by specifying the
top-level mirror vdev itself.
Individual devices that are part of a mirrored configuration can be removed
using
cda0317e
GM
2089the
2090.Nm zpool Cm detach
2091command.
a1d477c2
MA
.Bl -tag -width Ds
.It Fl n
Do not actually perform the removal ("no-op").
Instead, print the estimated amount of memory that will be used by the
mapping table after the removal completes.
This is nonzero only for top-level vdevs.
.It Fl p
Used in conjunction with the
.Fl n
flag, displays numbers as parsable (exact) values.
.El
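.Pp
For example, to estimate the mapping-table memory that removing the top-level
vdev
.Sy mirror-1
from the illustrative pool
.Em tank
would require, without actually performing the removal:
.Bd -literal
# zpool remove -np tank mirror-1
.Ed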
2105.It Xo
2106.Nm
2107.Cm remove
2108.Fl s
2109.Ar pool
2110.Xc
2111Stops and cancels an in-progress removal of a top-level vdev.
cda0317e
GM
2112.It Xo
2113.Nm
2114.Cm replace
2115.Op Fl f
2116.Op Fl o Ar property Ns = Ns Ar value
2117.Ar pool Ar device Op Ar new_device
2118.Xc
2119Replaces
.Ar device
2121with
2122.Ar new_device .
2123This is equivalent to attaching
2124.Ar new_device ,
2125waiting for it to resilver, and then detaching
.Ar device .
2127.Pp
2128The size of
2129.Ar new_device
2130must be greater than or equal to the minimum size of all the devices in a mirror
2131or raidz configuration.
2132.Pp
2133.Ar new_device
2134is required if the pool is not redundant.
2135If
2136.Ar new_device
2137is not specified, it defaults to
.Ar device .
2139This form of replacement is useful after an existing disk has failed and has
2140been physically replaced.
2141In this case, the new disk may have the same
2142.Pa /dev
2143path as the old device, even though it is actually a different disk.
2144ZFS recognizes this.
2145.Bl -tag -width Ds
2146.It Fl f
2147Forces use of
2148.Ar new_device ,
74580a94 2149even if it appears to be in use.
cda0317e
GM
2150Not all devices can be overridden in this manner.
2151.It Fl o Ar property Ns = Ns Ar value
2152Sets the given pool properties. See the
2153.Sx Properties
2154section for a list of valid properties that can be set.
2155The only property supported at the moment is
2156.Sy ashift .
2157.El
2158.It Xo
2159.Nm
2160.Cm scrub
0ea05c64 2161.Op Fl s | Fl p
cda0317e
GM
2162.Ar pool Ns ...
2163.Xc
0ea05c64 2164Begins a scrub or resumes a paused scrub.
cda0317e
GM
2165The scrub examines all data in the specified pools to verify that it checksums
2166correctly.
2167For replicated
2168.Pq mirror or raidz
2169devices, ZFS automatically repairs any damage discovered during the scrub.
2170The
2171.Nm zpool Cm status
2172command reports the progress of the scrub and summarizes the results of the
2173scrub upon completion.
2174.Pp
2175Scrubbing and resilvering are very similar operations.
2176The difference is that resilvering only examines data that ZFS knows to be out
2177of date
2178.Po
2179for example, when attaching a new device to a mirror or replacing an existing
2180device
2181.Pc ,
2182whereas scrubbing examines all data to discover silent errors due to hardware
2183faults or disk failure.
2184.Pp
2185Because scrubbing and resilvering are I/O-intensive operations, ZFS only allows
2186one at a time.
0ea05c64 2187If a scrub is paused, the
cda0317e 2188.Nm zpool Cm scrub
0ea05c64 2189resumes it.
cda0317e
GM
2190If a resilver is in progress, ZFS does not allow a scrub to be started until the
2191resilver completes.
85bdc684
TC
2192.Pp
2193Note that, due to changes in pool data on a live system, it is possible for
2194scrubs to progress slightly beyond 100% completion. During this period, no
2195completion time estimate will be provided.
cda0317e
GM
.Bl -tag -width Ds
.It Fl s
Stop scrubbing.
.It Fl p
Pause scrubbing.
Scrub pause state and progress are periodically synced to disk.
If the system is restarted or the pool is exported during a paused scrub,
the scrub remains paused, even after import, until it is resumed.
Once resumed, the scrub will pick up from the place where it was last
checkpointed to disk.
To resume a paused scrub, issue
.Nm zpool Cm scrub
again.
.El
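.Pp
For example, to pause an in-progress scrub of the illustrative pool
.Em tank
and later resume it:
.Bd -literal
# zpool scrub -p tank
# zpool scrub tank
.Ed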
cda0317e
GM
2212.It Xo
2213.Nm
80a91e74
TC
2214.Cm resilver
2215.Ar pool Ns ...
2216.Xc
Starts a resilver.
If an existing resilver is already running, it will be restarted from the
beginning.
Any drives that were scheduled for a deferred
fa241660
TC
2219resilver will be added to the new one. This requires the
2220.Sy resilver_defer
2221feature.
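.Pp
For example, to restart resilvering from the beginning on the illustrative
pool
.Em tank :
.Bd -literal
# zpool resilver tank
.Ed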
80a91e74
TC
2222.It Xo
2223.Nm
1b939560
BB
2224.Cm trim
2225.Op Fl d
.Op Fl r Ar rate
.Op Fl c | Fl s
2227.Ar pool
2228.Op Ar device Ns ...
2229.Xc
2230Initiates an immediate on-demand TRIM operation for all of the free space in
2231a pool. This operation informs the underlying storage devices of all blocks
2232in the pool which are no longer allocated and allows thinly provisioned
2233devices to reclaim the space.
2234.Pp
2235A manual on-demand TRIM operation can be initiated irrespective of the
2236.Sy autotrim
2237pool property setting. See the documentation for the
2238.Sy autotrim
2239property above for the types of vdev devices which can be trimmed.
2240.Bl -tag -width Ds
.It Fl d, -secure
2242Causes a secure TRIM to be initiated. When performing a secure TRIM, the
2243device guarantees that data stored on the trimmed blocks has been erased.
2244This requires support from the device and is not supported by all SSDs.
.It Fl r, -rate Ar rate
2246Controls the rate at which the TRIM operation progresses. Without this
2247option TRIM is executed as quickly as possible. The rate, expressed in bytes
2248per second, is applied on a per-vdev basis and may be set differently for
2249each leaf vdev.
2250.It Fl c, -cancel
2251Cancel trimming on the specified devices, or all eligible devices if none
2252are specified.
2253If one or more target devices are invalid or are not currently being
2254trimmed, the command will fail and no cancellation will occur on any device.
.It Fl s, -suspend
2256Suspend trimming on the specified devices, or all eligible devices if none
2257are specified.
2258If one or more target devices are invalid or are not currently being
2259trimmed, the command will fail and no suspension will occur on any device.
2260Trimming can then be resumed by running
2261.Nm zpool Cm trim
2262with no flags on the relevant target devices.
2263.El
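.Pp
For example, to TRIM all of the free space in the illustrative pool
.Em tank :
.Bd -literal
# zpool trim tank
.Ed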
2264.It Xo
2265.Nm
cda0317e
GM
2266.Cm set
2267.Ar property Ns = Ns Ar value
2268.Ar pool
2269.Xc
2270Sets the given property on the specified pool.
2271See the
2272.Sx Properties
2273section for more information on what properties can be set and acceptable
2274values.
2275.It Xo
2276.Nm
2277.Cm split
b5256303 2278.Op Fl gLlnP
cda0317e
GM
2279.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
2280.Op Fl R Ar root
2281.Ar pool newpool
2282.Op Ar device ...
2283.Xc
2284Splits devices off
2285.Ar pool
2286creating
2287.Ar newpool .
2288All vdevs in
2289.Ar pool
2290must be mirrors and the pool must not be in the process of resilvering.
2291At the time of the split,
2292.Ar newpool
2293will be a replica of
2294.Ar pool .
2295By default, the
2296last device in each mirror is split from
2297.Ar pool
2298to create
2299.Ar newpool .
2300.Pp
The optional device specification causes the specified device(s) to be
included in
.Ar newpool
and, for any mirrors left unspecified,
the last device in each is used, as it would be by default.
2306.Bl -tag -width Ds
2307.It Fl g
2308Display vdev GUIDs instead of the normal device names. These GUIDs
2309can be used in place of device names for the zpool
2310detach/offline/remove/replace commands.
2311.It Fl L
2312Display real paths for vdevs resolving all symbolic links. This can
2313be used to look up the current block device name regardless of the
2314.Pa /dev/disk/
2315path used to open it.
b5256303
TC
2316.It Fl l
2317Indicates that this command will request encryption keys for all encrypted
2318datasets it attempts to mount as it is bringing the new pool online. Note that
2319if any datasets have a
2320.Sy keylocation
2321of
2322.Sy prompt
this command will block waiting for the keys to be entered.
Without this flag, encrypted datasets will be left unavailable until the keys
are loaded.
cda0317e
GM
2325.It Fl n
2326Do dry run, do not actually perform the split.
2327Print out the expected configuration of
2328.Ar newpool .
2329.It Fl P
2330Display full paths for vdevs instead of only the last component of
2331the path. This can be used in conjunction with the
85912983 2332.Fl L
2333flag.
cda0317e
GM
2334.It Fl o Ar property Ns = Ns Ar value
2335Sets the specified property for
2336.Ar newpool .
2337See the
2338.Sx Properties
2339section for more information on the available pool properties.
2340.It Fl R Ar root
2341Set
2342.Sy altroot
2343for
2344.Ar newpool
2345to
2346.Ar root
2347and automatically import it.
2348.El
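.Pp
For example, to split the illustrative mirrored pool
.Em tank ,
creating a new pool
.Em tank2
from the last device in each mirror:
.Bd -literal
# zpool split tank tank2
.Ed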
2349.It Xo
2350.Nm
2351.Cm status
7a8ed6b8 2352.Op Fl c Op Ar SCRIPT1 Ns Oo , Ns Ar SCRIPT2 Oc Ns ...
1b939560 2353.Op Fl DigLpPstvx
cda0317e
GM
2354.Op Fl T Sy u Ns | Ns Sy d
2355.Oo Ar pool Oc Ns ...
2356.Op Ar interval Op Ar count
2357.Xc
2358Displays the detailed health status for the given pools.
2359If no
2360.Ar pool
2361is specified, then the status of each pool in the system is displayed.
2362For more information on pool and device health, see the
2363.Sx Device Failure and Recovery
2364section.
2365.Pp
2366If a scrub or resilver is in progress, this command reports the percentage done
2367and the estimated time to completion.
2368Both of these are only approximate, because the amount of data in the pool and
2369the other workloads on the system can change.
2370.Bl -tag -width Ds
7a8ed6b8 2371.It Fl c Op Ar SCRIPT1 Ns Oo , Ns Ar SCRIPT2 Oc Ns ...
cda0317e
GM
2372Run a script (or scripts) on each vdev and include the output as a new column
2373in the
2374.Nm zpool Cm status
2375output. See the
2376.Fl c
2377option of
2378.Nm zpool Cm iostat
2379for complete details.
a769fb53
BB
2380.It Fl i
2381Display vdev initialization status.
cda0317e
GM
2382.It Fl g
2383Display vdev GUIDs instead of the normal device names. These GUIDs
2384can be used in place of device names for the zpool
2385detach/offline/remove/replace commands.
2386.It Fl L
2387Display real paths for vdevs resolving all symbolic links. This can
2388be used to look up the current block device name regardless of the
2389.Pa /dev/disk/
2390path used to open it.
ad796b8a
TH
2391.It Fl p
2392Display numbers in parsable (exact) values.
f4ae39a1
BB
2393.It Fl P
2394Display full paths for vdevs instead of only the last component of
2395the path. This can be used in conjunction with the
85912983 2396.Fl L
2397flag.
cda0317e
GM
2398.It Fl D
2399Display a histogram of deduplication statistics, showing the allocated
2400.Pq physically present on disk
2401and referenced
2402.Pq logically referenced in the pool
2403block counts and sizes by reference count.
ad796b8a
TH
2404.It Fl s
Display the number of leaf VDEV slow IOs.
This is the number of IOs that did not complete in
.Sy zio_slow_io_ms
milliseconds (default 30 seconds).
This does not necessarily mean the IOs failed to complete, only that they took
an unreasonably long amount of time.
This may indicate a problem with the underlying storage.
1b939560
BB
2410.It Fl t
2411Display vdev TRIM status.
cda0317e 2412.It Fl T Sy u Ns | Ns Sy d
2e2ddc30 2413Display a time stamp.
cda0317e 2414Specify
f23b0242 2415.Sy u
cda0317e
GM
2416for a printed representation of the internal representation of time.
2417See
2418.Xr time 2 .
2419Specify
f23b0242 2420.Sy d
cda0317e
GM
2421for standard date format.
2422See
2423.Xr date 1 .
2424.It Fl v
2425Displays verbose data error information, printing out a complete list of all
2426data errors since the last complete pool scrub.
2427.It Fl x
2428Only display status for pools that are exhibiting errors or are otherwise
2429unavailable.
2430Warnings about pools not using the latest on-disk format will not be included.
2431.El
2432.It Xo
2433.Nm
2434.Cm sync
2435.Op Ar pool ...
2436.Xc
This command forces all in-core dirty data to be written to the primary
pool storage and not the ZIL.
It will also update administrative information, including quota reporting.
Without arguments,
.Nm zpool Cm sync
will sync all pools on the system.
Otherwise, it will sync only the specified pool(s).
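.Pp
For example, to force all dirty data for the illustrative pool
.Em tank
to be written out:
.Bd -literal
# zpool sync tank
.Ed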
2443.It Xo
2444.Nm
2445.Cm upgrade
2446.Xc
2447Displays pools which do not have all supported features enabled and pools
2448formatted using a legacy ZFS version number.
2449These pools can continue to be used, but some features may not be available.
2450Use
2451.Nm zpool Cm upgrade Fl a
2452to enable all features on all pools.
2453.It Xo
2454.Nm
2455.Cm upgrade
2456.Fl v
2457.Xc
2458Displays legacy ZFS versions supported by the current software.
2459See
2460.Xr zpool-features 5
for a description of the feature flags supported by the current software.
2462.It Xo
2463.Nm
2464.Cm upgrade
2465.Op Fl V Ar version
2466.Fl a Ns | Ns Ar pool Ns ...
2467.Xc
2468Enables all supported features on the given pool.
2469Once this is done, the pool will no longer be accessible on systems that do not
2470support feature flags.
2471See
9d489ab3 2472.Xr zpool-features 5
cda0317e
GM
2473for details on compatibility with systems that support feature flags, but do not
2474support all features enabled on the pool.
2475.Bl -tag -width Ds
2476.It Fl a
b9b24bb4 2477Enables all supported features on all pools.
cda0317e
GM
2478.It Fl V Ar version
2479Upgrade to the specified legacy version.
2480If the
2481.Fl V
2482flag is specified, no features will be enabled on the pool.
2483This option can only be used to increase the version number up to the last
2484supported legacy version number.
2485.El
50478c6d
T
2486.It Xo
2487.Nm
2488.Cm version
2489.Xc
2490Displays the software version of the
2491.Nm
2492userland utility and the zfs kernel module.
cda0317e
GM
2493.El
2494.Sh EXIT STATUS
2495The following exit values are returned:
2496.Bl -tag -width Ds
2497.It Sy 0
2498Successful completion.
2499.It Sy 1
2500An error occurred.
2501.It Sy 2
2502Invalid command line options were specified.
2503.El
2504.Sh EXAMPLES
2505.Bl -tag -width Ds
2506.It Sy Example 1 No Creating a RAID-Z Storage Pool
2507The following command creates a pool with a single raidz root vdev that
2508consists of six disks.
2509.Bd -literal
2510# zpool create tank raidz sda sdb sdc sdd sde sdf
2511.Ed
2512.It Sy Example 2 No Creating a Mirrored Storage Pool
2513The following command creates a pool with two mirrors, where each mirror
2514contains two disks.
2515.Bd -literal
2516# zpool create tank mirror sda sdb mirror sdc sdd
2517.Ed
2518.It Sy Example 3 No Creating a ZFS Storage Pool by Using Partitions
54e5f226 2519The following command creates an unmirrored pool using two disk partitions.
cda0317e
GM
2520.Bd -literal
2521# zpool create tank sda1 sdb2
2522.Ed
2523.It Sy Example 4 No Creating a ZFS Storage Pool by Using Files
2524The following command creates an unmirrored pool using files.
2525While not recommended, a pool based on files can be useful for experimental
2526purposes.
2527.Bd -literal
2528# zpool create tank /path/to/file/a /path/to/file/b
2529.Ed
2530.It Sy Example 5 No Adding a Mirror to a ZFS Storage Pool
2531The following command adds two mirrored disks to the pool
2532.Em tank ,
2533assuming the pool is already made up of two-way mirrors.
2534The additional space is immediately available to any datasets within the pool.
2535.Bd -literal
2536# zpool add tank mirror sda sdb
2537.Ed
2538.It Sy Example 6 No Listing Available ZFS Storage Pools
2539The following command lists all available pools on the system.
2540In this case, the pool
2541.Em zion
2542is faulted due to a missing device.
058ac9ba 2543The results from this command are similar to the following:
cda0317e
GM
2544.Bd -literal
2545# zpool list
d72cd017
TK
NAME    SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
rpool  19.9G  8.43G  11.4G         -    33%    42%  1.00x  ONLINE  -
tank   61.5G  20.0G  41.5G         -    48%    32%  1.00x  ONLINE  -
zion       -      -      -         -      -      -      -  FAULTED -
cda0317e
GM
2550.Ed
2551.It Sy Example 7 No Destroying a ZFS Storage Pool
2552The following command destroys the pool
2553.Em tank
2554and any datasets contained within.
2555.Bd -literal
2556# zpool destroy -f tank
2557.Ed
2558.It Sy Example 8 No Exporting a ZFS Storage Pool
2559The following command exports the devices in pool
2560.Em tank
2561so that they can be relocated or later imported.
2562.Bd -literal
2563# zpool export tank
2564.Ed
2565.It Sy Example 9 No Importing a ZFS Storage Pool
2566The following command displays available pools, and then imports the pool
2567.Em tank
2568for use on the system.
058ac9ba 2569The results from this command are similar to the following:
cda0317e
GM
2570.Bd -literal
2571# zpool import
058ac9ba
BB
  pool: tank
    id: 15451357997522795478
 state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:

        tank        ONLINE
          mirror    ONLINE
            sda     ONLINE
            sdb     ONLINE

cda0317e
GM
2583# zpool import tank
2584.Ed
2585.It Sy Example 10 No Upgrading All ZFS Storage Pools to the Current Version
2586The following command upgrades all ZFS Storage pools to the current version of
2587the software.
2588.Bd -literal
2589# zpool upgrade -a
2590This system is currently running ZFS version 2.
2591.Ed
2592.It Sy Example 11 No Managing Hot Spares
058ac9ba 2593The following command creates a new pool with an available hot spare:
cda0317e
GM
2594.Bd -literal
2595# zpool create tank mirror sda sdb spare sdc
2596.Ed
2597.Pp
2598If one of the disks were to fail, the pool would be reduced to the degraded
2599state.
2600The failed device can be replaced using the following command:
2601.Bd -literal
2602# zpool replace tank sda sdd
2603.Ed
2604.Pp
2605Once the data has been resilvered, the spare is automatically removed and is
7c9abcf8 2606made available for use should another device fail.
cda0317e
GM
2607The hot spare can be permanently removed from the pool using the following
2608command:
2609.Bd -literal
2610# zpool remove tank sdc
2611.Ed
2612.It Sy Example 12 No Creating a ZFS Pool with Mirrored Separate Intent Logs
2613The following command creates a ZFS storage pool consisting of two, two-way
2614mirrors and mirrored log devices:
2615.Bd -literal
2616# zpool create pool mirror sda sdb mirror sdc sdd log mirror \\
2617 sde sdf
2618.Ed
2619.It Sy Example 13 No Adding Cache Devices to a ZFS Pool
2620The following command adds two disks for use as cache devices to a ZFS storage
2621pool:
2622.Bd -literal
2623# zpool add pool cache sdc sdd
2624.Ed
2625.Pp
2626Once added, the cache devices gradually fill with content from main memory.
2627Depending on the size of your cache devices, it could take over an hour for
2628them to fill.
2629Capacity and reads can be monitored using the
2630.Cm iostat
2631option as follows:
2632.Bd -literal
2633# zpool iostat -v pool 5
2634.Ed
a1d477c2
MA
2635.It Sy Example 14 No Removing a Mirrored top-level (Log or Data) Device
2636The following commands remove the mirrored log device
2637.Sy mirror-2
2638and mirrored top-level data device
2639.Sy mirror-1 .
2640.Pp
058ac9ba 2641Given this configuration:
cda0317e
GM
2642.Bd -literal
  pool: tank
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            sda     ONLINE       0     0     0
            sdb     ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            sdc     ONLINE       0     0     0
            sdd     ONLINE       0     0     0
        logs
          mirror-2  ONLINE       0     0     0
            sde     ONLINE       0     0     0
            sdf     ONLINE       0     0     0
cda0317e
GM
2660.Ed
2661.Pp
2662The command to remove the mirrored log
2663.Sy mirror-2
2664is:
2665.Bd -literal
2666# zpool remove tank mirror-2
2667.Ed
a1d477c2
MA
2668.Pp
2669The command to remove the mirrored data
2670.Sy mirror-1
2671is:
2672.Bd -literal
2673# zpool remove tank mirror-1
2674.Ed
cda0317e
GM
2675.It Sy Example 15 No Displaying expanded space on a device
2676The following command displays the detailed information for the pool
2677.Em data .
2678This pool is comprised of a single raidz vdev where one of its devices
2679increased its capacity by 10GB.
2680In this example, the pool will not be able to utilize this extra capacity until
2681all the devices under the raidz vdev have been expanded.
2682.Bd -literal
2683# zpool list -v data
d72cd017
TK
NAME         SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
data        23.9G  14.6G  9.30G         -    48%    61%  1.00x  ONLINE  -
  raidz1    23.9G  14.6G  9.30G         -    48%
    sda         -      -      -         -      -
    sdb         -      -      -       10G      -
    sdc         -      -      -         -      -
cda0317e
GM
2690.Ed
2691.It Sy Example 16 No Adding output columns
2692Additional columns can be added to the
2693.Nm zpool Cm status
2694and
2695.Nm zpool Cm iostat
2696output with
2697.Fl c
2698option.
2699.Bd -literal
2700# zpool status -c vendor,model,size
    NAME      STATE  READ WRITE CKSUM  vendor   model         size
    tank      ONLINE    0     0     0
    mirror-0  ONLINE    0     0     0
      U1      ONLINE    0     0     0  SEAGATE  ST8000NM0075  7.3T
      U10     ONLINE    0     0     0  SEAGATE  ST8000NM0075  7.3T
      U11     ONLINE    0     0     0  SEAGATE  ST8000NM0075  7.3T
      U12     ONLINE    0     0     0  SEAGATE  ST8000NM0075  7.3T
      U13     ONLINE    0     0     0  SEAGATE  ST8000NM0075  7.3T
      U14     ONLINE    0     0     0  SEAGATE  ST8000NM0075  7.3T

# zpool iostat -vc slaves
              capacity     operations     bandwidth
pool        alloc   free   read  write   read  write  slaves
----------  -----  -----  -----  -----  -----  -----  ---------
tank        20.4G  7.23T     26    152  20.7M  21.6M
  mirror    20.4G  7.23T     26    152  20.7M  21.6M
    U1          -      -      0     31  1.46K  20.6M  sdb sdff
    U10         -      -      0      1  3.77K  13.3K  sdas sdgw
    U11         -      -      0      1   288K  13.3K  sdat sdgx
    U12         -      -      0      1  78.4K  13.3K  sdau sdgy
    U13         -      -      0      1   128K  13.3K  sdav sdgz
    U14         -      -      0      1  63.2K  13.3K  sdfk sdg
2723.Ed
2724.El
2725.Sh ENVIRONMENT VARIABLES
2726.Bl -tag -width "ZFS_ABORT"
2727.It Ev ZFS_ABORT
2728Cause
2729.Nm zpool
2730to dump core on exit for the purposes of running
90cdf283 2731.Sy ::findleaks .
cda0317e
GM
2732.El
2733.Bl -tag -width "ZPOOL_IMPORT_PATH"
2734.It Ev ZPOOL_IMPORT_PATH
2735The search path for devices or files to use with the pool. This is a colon-separated list of directories in which
2736.Nm zpool
2737looks for device nodes and files.
2738Similar to the
2739.Fl d
2740option in
2741.Nm zpool import .
2742.El
2743.Bl -tag -width "ZPOOL_VDEV_NAME_GUID"
2744.It Ev ZPOOL_VDEV_NAME_GUID
2745Cause
2de17298
TK
2746.Nm zpool
2747subcommands to output vdev guids by default. This behavior is identical to the
cda0317e
GM
2748.Nm zpool status -g
2749command line option.
2750.El
2751.Bl -tag -width "ZPOOL_VDEV_NAME_FOLLOW_LINKS"
2752.It Ev ZPOOL_VDEV_NAME_FOLLOW_LINKS
2753Cause
2754.Nm zpool
2755subcommands to follow links for vdev names by default. This behavior is identical to the
2756.Nm zpool status -L
2757command line option.
2758.El
2759.Bl -tag -width "ZPOOL_VDEV_NAME_PATH"
2760.It Ev ZPOOL_VDEV_NAME_PATH
2761Cause
2762.Nm zpool
2763subcommands to output full vdev path names by default. This
2764behavior is identical to the
.Nm zpool status -P
2766command line option.
2767.El
2768.Bl -tag -width "ZFS_VDEV_DEVID_OPT_OUT"
2769.It Ev ZFS_VDEV_DEVID_OPT_OUT
39fc0cb5 2770Older ZFS on Linux implementations had issues when attempting to display pool
cda0317e
GM
2771config VDEV names if a
2772.Sy devid
2773NVP value is present in the pool's config.
2774.Pp
For example, a pool that originated on the illumos platform would have a devid
cda0317e
GM
2776value in the config and
2777.Nm zpool status
2778would fail when listing the config.
This would also be true for future Linux-based pools.
cda0317e
GM
2780.Pp
2781A pool can be stripped of any
2782.Sy devid
2783values on import or prevented from adding
2784them on
2785.Nm zpool create
2786or
2787.Nm zpool add
2788by setting
2789.Sy ZFS_VDEV_DEVID_OPT_OUT .
2790.El
2791.Bl -tag -width "ZPOOL_SCRIPTS_AS_ROOT"
2792.It Ev ZPOOL_SCRIPTS_AS_ROOT
Allow a privileged user to run
cda0317e
GM
2794.Nm zpool status/iostat
2795with the
2796.Fl c
7a8ed6b8 2797option. Normally, only unprivileged users are allowed to run
cda0317e
GM
2798.Fl c .
2799.El
2800.Bl -tag -width "ZPOOL_SCRIPTS_PATH"
2801.It Ev ZPOOL_SCRIPTS_PATH
2802The search path for scripts when running
2803.Nm zpool status/iostat
2804with the
2805.Fl c
099700d9 2806option. This is a colon-separated list of directories and overrides the default
cda0317e
GM
2807.Pa ~/.zpool.d
2808and
2809.Pa /etc/zfs/zpool.d
2810search paths.
2811.El
2812.Bl -tag -width "ZPOOL_SCRIPTS_ENABLED"
2813.It Ev ZPOOL_SCRIPTS_ENABLED
2814Allow a user to run
2815.Nm zpool status/iostat
2816with the
2817.Fl c
2818option. If
2819.Sy ZPOOL_SCRIPTS_ENABLED
2820is not set, it is assumed that the user is allowed to run
2821.Nm zpool status/iostat -c .
90cdf283 2822.El
cda0317e
GM
2823.Sh INTERFACE STABILITY
2824.Sy Evolving
2825.Sh SEE ALSO
cda0317e
GM
2826.Xr zfs-events 5 ,
2827.Xr zfs-module-parameters 5 ,
90cdf283 2828.Xr zpool-features 5 ,
2829.Xr zed 8 ,
2830.Xr zfs 8