1.\"
2.\" CDDL HEADER START
3.\"
4.\" The contents of this file are subject to the terms of the
5.\" Common Development and Distribution License (the "License").
6.\" You may not use this file except in compliance with the License.
7.\"
8.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
9.\" or http://www.opensolaris.org/os/licensing.
10.\" See the License for the specific language governing permissions
11.\" and limitations under the License.
12.\"
13.\" When distributing Covered Code, include this CDDL HEADER in each
14.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
15.\" If applicable, add the following below this CDDL HEADER, with the
16.\" fields enclosed by brackets "[]" replaced with your own identifying
17.\" information: Portions Copyright [yyyy] [name of copyright owner]
18.\"
19.\" CDDL HEADER END
20.\"
21.\"
058ac9ba 22.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
a1d477c2 23.\" Copyright (c) 2012, 2017 by Delphix. All rights reserved.
df831108 24.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
bec1067d 25.\" Copyright (c) 2017 Datto Inc.
eb201f50 26.\" Copyright (c) 2018 George Melikov. All Rights Reserved.
d7323e79 27.\" Copyright 2017 Nexenta Systems, Inc.
d3f2cd7e 28.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved.
9ae529ec 29.\"
eb201f50 30.Dd April 27, 2018
.Dt ZPOOL 8 SMM
.Os Linux
.Sh NAME
.Nm zpool
.Nd configure ZFS storage pools
.Sh SYNOPSIS
.Nm
.Fl ?
.Nm
.Cm add
.Op Fl fgLnP
.Oo Fl o Ar property Ns = Ns Ar value Oc
.Ar pool vdev Ns ...
.Nm
.Cm attach
.Op Fl f
.Oo Fl o Ar property Ns = Ns Ar value Oc
.Ar pool device new_device
.Nm
.Cm checkpoint
.Op Fl d, -discard
.Ar pool
.Nm
.Cm clear
.Ar pool
.Op Ar device
.Nm
.Cm create
.Op Fl dfn
.Op Fl m Ar mountpoint
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Oo Fl o Ar feature@feature Ns = Ns Ar value Oc
.Oo Fl O Ar file-system-property Ns = Ns Ar value Oc Ns ...
.Op Fl R Ar root
.Ar pool vdev Ns ...
.Nm
.Cm destroy
.Op Fl f
.Ar pool
.Nm
.Cm detach
.Ar pool device
.Nm
.Cm events
.Op Fl vHf Oo Ar pool Oc | Fl c
.Nm
.Cm export
.Op Fl a
.Op Fl f
.Ar pool Ns ...
.Nm
.Cm get
.Op Fl Hp
.Op Fl o Ar field Ns Oo , Ns Ar field Oc Ns ...
.Sy all Ns | Ns Ar property Ns Oo , Ns Ar property Oc Ns ...
.Ar pool Ns ...
.Nm
.Cm history
.Op Fl il
.Oo Ar pool Oc Ns ...
.Nm
.Cm import
.Op Fl D
.Op Fl d Ar dir Ns | Ns device
.Nm
.Cm import
.Fl a
.Op Fl DflmN
.Op Fl F Oo Fl n Oc Oo Fl T Oc Oo Fl X Oc
.Op Fl -rewind-to-checkpoint
.Op Fl c Ar cachefile Ns | Ns Fl d Ar dir Ns | Ns device
.Op Fl o Ar mntopts
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Op Fl R Ar root
.Nm
.Cm import
.Op Fl Dflm
.Op Fl F Oo Fl n Oc Oo Fl T Oc Oo Fl X Oc
.Op Fl -rewind-to-checkpoint
.Op Fl c Ar cachefile Ns | Ns Fl d Ar dir Ns | Ns device
.Op Fl o Ar mntopts
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Op Fl R Ar root
.Op Fl s
.Ar pool Ns | Ns Ar id
.Op Ar newpool Oo Fl t Oc
.Nm
.Cm iostat
.Op Oo Oo Fl c Ar SCRIPT Oc Oo Fl lq Oc Oc Ns | Ns Fl rw
.Op Fl T Sy u Ns | Ns Sy d
.Op Fl ghHLpPvy
.Oo Oo Ar pool Ns ... Oc Ns | Ns Oo Ar pool vdev Ns ... Oc Ns | Ns Oo Ar vdev Ns ... Oc Oc
.Op Ar interval Op Ar count
.Nm
.Cm labelclear
.Op Fl f
.Ar device
.Nm
.Cm list
.Op Fl HgLpPv
.Op Fl o Ar property Ns Oo , Ns Ar property Oc Ns ...
.Op Fl T Sy u Ns | Ns Sy d
.Oo Ar pool Oc Ns ...
.Op Ar interval Op Ar count
.Nm
.Cm offline
.Op Fl f
.Op Fl t
.Ar pool Ar device Ns ...
.Nm
.Cm online
.Op Fl e
.Ar pool Ar device Ns ...
.Nm
.Cm reguid
.Ar pool
.Nm
.Cm reopen
.Op Fl n
.Ar pool
.Nm
.Cm remove
.Op Fl np
.Ar pool Ar device Ns ...
.Nm
.Cm remove
.Fl s
.Ar pool
.Nm
.Cm replace
.Op Fl f
.Oo Fl o Ar property Ns = Ns Ar value Oc
.Ar pool Ar device Op Ar new_device
.Nm
.Cm scrub
.Op Fl s | Fl p
.Ar pool Ns ...
.Nm
.Cm set
.Ar property Ns = Ns Ar value
.Ar pool
.Nm
.Cm split
.Op Fl gLlnP
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Op Fl R Ar root
.Ar pool newpool
.Oo Ar device Oc Ns ...
.Nm
.Cm status
.Oo Fl c Ar SCRIPT Oc
.Op Fl gLPvxD
.Op Fl T Sy u Ns | Ns Sy d
.Oo Ar pool Oc Ns ...
.Op Ar interval Op Ar count
.Nm
.Cm sync
.Oo Ar pool Oc Ns ...
.Nm
.Cm upgrade
.Nm
.Cm upgrade
.Fl v
.Nm
.Cm upgrade
.Op Fl V Ar version
.Fl a Ns | Ns Ar pool Ns ...
.Sh DESCRIPTION
The
.Nm
command configures ZFS storage pools.
A storage pool is a collection of devices that provides physical storage and
data replication for ZFS datasets.
All datasets within a storage pool share the same space.
See
.Xr zfs 8
for information on managing datasets.
.Ss Virtual Devices (vdevs)
A "virtual device" describes a single device or a collection of devices
organized according to certain performance and fault characteristics.
The following virtual devices are supported:
.Bl -tag -width Ds
.It Sy disk
A block device, typically located under
.Pa /dev .
ZFS can use individual slices or partitions, though the recommended mode of
operation is to use whole disks.
A disk can be specified by a full path, or it can be a shorthand name
.Po the relative portion of the path under
.Pa /dev
.Pc .
A whole disk can be specified by omitting the slice or partition designation.
For example,
.Pa sda
is equivalent to
.Pa /dev/sda .
When given a whole disk, ZFS automatically labels the disk, if necessary.
.It Sy file
A regular file.
The use of files as a backing store is strongly discouraged.
It is designed primarily for experimental purposes, as the fault tolerance of a
file is only as good as the file system of which it is a part.
A file must be specified by a full path.
.It Sy mirror
A mirror of two or more devices.
Data is replicated in an identical fashion across all components of a mirror.
A mirror with N disks of size X can hold X bytes and can withstand (N-1) devices
failing before data integrity is compromised.
.It Sy raidz , raidz1 , raidz2 , raidz3
A variation on RAID-5 that allows for better distribution of parity and
eliminates the RAID-5
.Qq write hole
.Pq in which data and parity become inconsistent after a power loss .
Data and parity is striped across all disks within a raidz group.
.Pp
A raidz group can have single-, double-, or triple-parity, meaning that the
raidz group can sustain one, two, or three failures, respectively, without
losing any data.
The
.Sy raidz1
vdev type specifies a single-parity raidz group; the
.Sy raidz2
vdev type specifies a double-parity raidz group; and the
.Sy raidz3
vdev type specifies a triple-parity raidz group.
The
.Sy raidz
vdev type is an alias for
.Sy raidz1 .
.Pp
A raidz group with N disks of size X with P parity disks can hold approximately
(N-P)*X bytes and can withstand P device(s) failing before data integrity is
compromised.
The minimum number of devices in a raidz group is one more than the number of
parity disks.
The recommended number is between 3 and 9 to help increase performance.
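.Pp
As a worked example of the formula above, a raidz2 group of six 4 TB disks can
hold approximately (6-2)*4 = 16 TB and can withstand two devices failing before
data integrity is compromised.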
.It Sy spare
A special pseudo-vdev which keeps track of available hot spares for a pool.
For more information, see the
.Sx Hot Spares
section.
.It Sy log
A separate intent log device.
If more than one log device is specified, then writes are load-balanced between
devices.
Log devices can be mirrored.
However, raidz vdev types are not supported for the intent log.
For more information, see the
.Sx Intent Log
section.
.It Sy cache
A device used to cache storage pool data.
A cache device cannot be configured as a mirror or raidz group.
For more information, see the
.Sx Cache Devices
section.
.El
.Pp
Virtual devices cannot be nested, so a mirror or raidz virtual device can only
contain files or disks.
Mirrors of mirrors
.Pq or other combinations
are not allowed.
.Pp
A pool can have any number of virtual devices at the top of the configuration
.Po known as
.Qq root vdevs
.Pc .
Data is dynamically distributed across all top-level devices to balance data
among devices.
As new virtual devices are added, ZFS automatically places data on the newly
available devices.
.Pp
Virtual devices are specified one at a time on the command line, separated by
whitespace.
The keywords
.Sy mirror
and
.Sy raidz
are used to distinguish where a group ends and another begins.
For example, the following creates two root vdevs, each a mirror of two disks:
.Bd -literal
# zpool create mypool mirror sda sdb mirror sdc sdd
.Ed
.Ss Device Failure and Recovery
ZFS supports a rich set of mechanisms for handling device failure and data
corruption.
All metadata and data is checksummed, and ZFS automatically repairs bad data
from a good copy when corruption is detected.
.Pp
In order to take advantage of these features, a pool must make use of some form
of redundancy, using either mirrored or raidz groups.
While ZFS supports running in a non-redundant configuration, where each root
vdev is simply a disk or file, this is strongly discouraged.
A single case of bit corruption can render some or all of your data unavailable.
.Pp
A pool's health status is described by one of three states: online, degraded,
or faulted.
An online pool has all devices operating normally.
A degraded pool is one in which one or more devices have failed, but the data is
still available due to a redundant configuration.
A faulted pool has corrupted metadata, or one or more faulted devices, and
insufficient replicas to continue functioning.
.Pp
The health of the top-level vdev, such as a mirror or raidz device, is
potentially impacted by the state of its associated vdevs, or component
devices.
A top-level vdev or component device is in one of the following states:
.Bl -tag -width "DEGRADED"
.It Sy DEGRADED
One or more top-level vdevs is in the degraded state because one or more
component devices are offline.
Sufficient replicas exist to continue functioning.
.Pp
One or more component devices is in the degraded or faulted state, but
sufficient replicas exist to continue functioning.
The underlying conditions are as follows:
.Bl -bullet
.It
The number of checksum errors exceeds acceptable levels and the device is
degraded as an indication that something may be wrong.
ZFS continues to use the device as necessary.
.It
The number of I/O errors exceeds acceptable levels.
The device could not be marked as faulted because there are insufficient
replicas to continue functioning.
.El
.It Sy FAULTED
One or more top-level vdevs is in the faulted state because one or more
component devices are offline.
Insufficient replicas exist to continue functioning.
.Pp
One or more component devices is in the faulted state, and insufficient
replicas exist to continue functioning.
The underlying conditions are as follows:
.Bl -bullet
.It
The device could be opened, but the contents did not match expected values.
.It
The number of I/O errors exceeds acceptable levels and the device is faulted to
prevent further use of the device.
.El
.It Sy OFFLINE
The device was explicitly taken offline by the
.Nm zpool Cm offline
command.
.It Sy ONLINE
The device is online and functioning.
.It Sy REMOVED
The device was physically removed while the system was running.
Device removal detection is hardware-dependent and may not be supported on all
platforms.
.It Sy UNAVAIL
The device could not be opened.
If a pool is imported when a device was unavailable, then the device will be
identified by a unique identifier instead of its path since the path was never
correct in the first place.
.El
.Pp
If a device is removed and later re-attached to the system, ZFS attempts
to put the device online automatically.
Device attach detection is hardware-dependent and might not be supported on all
platforms.
.Ss Hot Spares
ZFS allows devices to be associated with pools as
.Qq hot spares .
These devices are not actively used in the pool, but when an active device
fails, it is automatically replaced by a hot spare.
To create a pool with hot spares, specify a
.Sy spare
vdev with any number of devices.
For example,
.Bd -literal
# zpool create pool mirror sda sdb spare sdc sdd
.Ed
.Pp
Spares can be shared across multiple pools, and can be added with the
.Nm zpool Cm add
command and removed with the
.Nm zpool Cm remove
command.
Once a spare replacement is initiated, a new
.Sy spare
vdev is created within the configuration that will remain there until the
original device is replaced.
At this point, the hot spare becomes available again if another device fails.
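.Pp
For example, assuming illustrative device names, a spare can be added to and
later removed from an existing pool:
.Bd -literal
# zpool add pool spare sde
# zpool remove pool sde
.Ed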
.Pp
If a pool has a shared spare that is currently being used, the pool cannot be
exported since other pools may use this shared spare, which may lead to
potential data corruption.
.Pp
An in-progress spare replacement can be cancelled by detaching the hot spare.
If the original faulted device is detached, then the hot spare assumes its
place in the configuration, and is removed from the spare list of all active
pools.
.Pp
Spares cannot replace log devices.
.Ss Intent Log
The ZFS Intent Log (ZIL) satisfies POSIX requirements for synchronous
transactions.
For instance, databases often require their transactions to be on stable storage
devices when returning from a system call.
NFS and other applications can also use
.Xr fsync 2
to ensure data stability.
By default, the intent log is allocated from blocks within the main pool.
However, it might be possible to get better performance using separate intent
log devices such as NVRAM or a dedicated disk.
For example:
.Bd -literal
# zpool create pool sda sdb log sdc
.Ed
.Pp
Multiple log devices can also be specified, and they can be mirrored.
See the
.Sx EXAMPLES
section for an example of mirroring multiple log devices.
.Pp
Log devices can be added, replaced, attached, detached and removed.
In addition, log devices are imported and exported as part of the pool
that contains them.
Mirrored devices can be removed by specifying the top-level mirror vdev.
.Ss Cache Devices
Devices can be added to a storage pool as
.Qq cache devices .
These devices provide an additional layer of caching between main memory and
disk.
For read-heavy workloads, where the working set size is much larger than what
can be cached in main memory, using cache devices allows much more of this
working set to be served from low latency media.
Using cache devices provides the greatest performance improvement for random
read-workloads of mostly static content.
.Pp
To create a pool with cache devices, specify a
.Sy cache
vdev with any number of devices.
For example:
.Bd -literal
# zpool create pool sda sdb cache sdc sdd
.Ed
.Pp
Cache devices cannot be mirrored or part of a raidz configuration.
If a read error is encountered on a cache device, that read I/O is reissued to
the original storage pool device, which might be part of a mirrored or raidz
configuration.
.Pp
The content of the cache devices is considered volatile, as is the case with
other system caches.
.Ss Pool checkpoint
Before starting critical procedures that include destructive actions (e.g.
.Nm zfs Cm destroy
), an administrator can checkpoint the pool's state and, in the case of a
mistake or failure, rewind the entire pool back to the checkpoint.
Otherwise, the checkpoint can be discarded when the procedure has completed
successfully.
.Pp
A pool checkpoint can be thought of as a pool-wide snapshot and should be used
with care as it contains every part of the pool's state, from properties to vdev
configuration.
Thus, while a pool has a checkpoint, certain operations are not allowed:
specifically, vdev removal/attach/detach, mirror splitting, and
changing the pool's guid.
Adding a new vdev is supported but in the case of a rewind it will have to be
added again.
Finally, users of this feature should keep in mind that scrubs in a pool that
has a checkpoint do not repair checkpointed data.
.Pp
To create a checkpoint for a pool:
.Bd -literal
# zpool checkpoint pool
.Ed
.Pp
To later rewind to its checkpointed state, you need to first export it and
then rewind it during import:
.Bd -literal
# zpool export pool
# zpool import --rewind-to-checkpoint pool
.Ed
.Pp
To discard the checkpoint from a pool:
.Bd -literal
# zpool checkpoint -d pool
.Ed
.Pp
Dataset reservations (controlled by the
.Nm reservation
or
.Nm refreservation
zfs properties) may be unenforceable while a checkpoint exists, because the
checkpoint is allowed to consume the dataset's reservation.
Finally, data that is part of the checkpoint but has been freed in the
current state of the pool won't be scanned during a scrub.
.Ss Properties
Each pool has several properties associated with it.
Some properties are read-only statistics while others are configurable and
change the behavior of the pool.
.Pp
The following are read-only properties:
.Bl -tag -width Ds
.It Cm allocated
Amount of storage used within the pool.
.It Sy capacity
Percentage of pool space used.
This property can also be referred to by its shortened column name,
.Sy cap .
.It Sy expandsize
Amount of uninitialized space within the pool or device that can be used to
increase the total capacity of the pool.
Uninitialized space consists of any space on an EFI labeled vdev which has not
been brought online
.Po e.g., using
.Nm zpool Cm online Fl e
.Pc .
This space occurs when a LUN is dynamically expanded.
.It Sy fragmentation
The amount of fragmentation in the pool.
.It Sy free
The amount of free space available in the pool.
.It Sy freeing
After a file system or snapshot is destroyed, the space it was using is
returned to the pool asynchronously.
.Sy freeing
is the amount of space remaining to be reclaimed.
Over time
.Sy freeing
will decrease while
.Sy free
increases.
.It Sy health
The current health of the pool.
Health can be one of
.Sy ONLINE , DEGRADED , FAULTED , OFFLINE , REMOVED , UNAVAIL .
.It Sy guid
A unique identifier for the pool.
.It Sy size
Total size of the storage pool.
.It Sy unsupported@ Ns Em feature_guid
Information about unsupported features that are enabled on the pool.
See
.Xr zpool-features 5
for details.
.El
.Pp
The space usage properties report actual physical space available to the
storage pool.
The physical space can be different from the total amount of space that any
contained datasets can actually use.
The amount of space used in a raidz configuration depends on the characteristics
of the data being written.
In addition, ZFS reserves some space for internal accounting that the
.Xr zfs 8
command takes into account, but the
.Nm
command does not.
For non-full pools of a reasonable size, these effects should be invisible.
For small pools, or pools that are close to being completely full, these
discrepancies may become more noticeable.
.Pp
The following property can be set at creation time and import time:
.Bl -tag -width Ds
.It Sy altroot
Alternate root directory.
If set, this directory is prepended to any mount points within the pool.
This can be used when examining an unknown pool where the mount points cannot be
trusted, or in an alternate boot environment, where the typical paths are not
valid.
.Sy altroot
is not a persistent property.
It is valid only while the system is up.
Setting
.Sy altroot
defaults to using
.Sy cachefile Ns = Ns Sy none ,
though this may be overridden using an explicit setting.
.El
.Pp
The following property can be set only at import time:
.Bl -tag -width Ds
.It Sy readonly Ns = Ns Sy on Ns | Ns Sy off
If set to
.Sy on ,
the pool will be imported in read-only mode.
This property can also be referred to by its shortened column name,
.Sy rdonly .
.El
.Pp
The following properties can be set at creation time and import time, and later
changed with the
.Nm zpool Cm set
command:
.Bl -tag -width Ds
.It Sy ashift Ns = Ns Sy ashift
Pool sector size exponent, to the power of
.Sy 2
(internally referred to as
.Sy ashift
). Values from 9 to 16, inclusive, are valid; also, the special
value 0 (the default) means to auto-detect using the kernel's block
layer and a ZFS internal exception list. I/O operations will be aligned
to the specified size boundaries. Additionally, the minimum (disk)
write size will be set to the specified size, so this represents a
space vs. performance trade-off. For optimal performance, the pool
sector size should be greater than or equal to the sector size of the
underlying disks. The typical case for setting this property is when
performance is important and the underlying disks use 4KiB sectors but
report 512B sectors to the OS (for compatibility reasons); in that
case, set
.Sy ashift=12
(which is 1<<12 = 4096). When set, this property is
used as the default hint value in subsequent vdev operations (add,
attach and replace). Changing this value will not modify any existing
vdev, not even on disk replacement; however it can be used, for
instance, to replace a dying 512B sectors disk with a newer 4KiB
sectors device: this will probably result in bad performance but at the
same time could prevent loss of data.
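.Pp
For example, assuming illustrative device names, a pool aligned for disks with
4KiB physical sectors can be created with:
.Bd -literal
# zpool create -o ashift=12 pool sda sdb
.Ed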
.It Sy autoexpand Ns = Ns Sy on Ns | Ns Sy off
Controls automatic pool expansion when the underlying LUN is grown.
If set to
.Sy on ,
the pool will be resized according to the size of the expanded device.
If the device is part of a mirror or raidz then all devices within that
mirror/raidz group must be expanded before the new space is made available to
the pool.
The default behavior is
.Sy off .
This property can also be referred to by its shortened column name,
.Sy expand .
.It Sy autoreplace Ns = Ns Sy on Ns | Ns Sy off
Controls automatic device replacement.
If set to
.Sy off ,
device replacement must be initiated by the administrator by using the
.Nm zpool Cm replace
command.
If set to
.Sy on ,
any new device, found in the same physical location as a device that previously
belonged to the pool, is automatically formatted and replaced.
The default behavior is
.Sy off .
This property can also be referred to by its shortened column name,
.Sy replace .
Autoreplace can also be used with virtual disks (like device
mapper) provided that you use the /dev/disk/by-vdev paths set up by
vdev_id.conf. See the
.Xr vdev_id 8
man page for more details.
Autoreplace and autoonline require the ZFS Event Daemon be configured and
running. See the
.Xr zed 8
man page for more details.
.It Sy bootfs Ns = Ns Sy (unset) Ns | Ns Ar pool Ns / Ns Ar dataset
Identifies the default bootable dataset for the root pool. This property is
expected to be set mainly by the installation and upgrade programs.
Not all Linux distribution boot processes use the bootfs property.
.It Sy cachefile Ns = Ns Ar path Ns | Ns Sy none
Controls the location of where the pool configuration is cached.
Discovering all pools on system startup requires a cached copy of the
configuration data that is stored on the root file system.
All pools in this cache are automatically imported when the system boots.
Some environments, such as install and clustering, need to cache this
information in a different location so that pools are not automatically
imported.
Setting this property caches the pool configuration in a different location that
can later be imported with
.Nm zpool Cm import Fl c .
Setting it to the special value
.Sy none
creates a temporary pool that is never cached, and the special value
.Qq
.Pq empty string
uses the default location.
.Pp
Multiple pools can share the same cache file.
Because the kernel destroys and recreates this file when pools are added and
removed, care should be taken when attempting to access this file.
When the last pool using a
.Sy cachefile
is exported or destroyed, the file will be empty.
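.Pp
For example, assuming an illustrative path, the pool configuration can be
cached in an alternate location at creation time and, after an export, imported
from it:
.Bd -literal
# zpool create -o cachefile=/etc/zfs/alternate.cache pool sda
# zpool import -c /etc/zfs/alternate.cache pool
.Ed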
.It Sy comment Ns = Ns Ar text
A text string consisting of printable ASCII characters that will be stored
such that it is available even if the pool becomes faulted.
An administrator can provide additional information about a pool using this
property.
.It Sy dedupditto Ns = Ns Ar number
Threshold for the number of block ditto copies.
If the reference count for a deduplicated block increases above this number, a
new ditto copy of this block is automatically stored.
The default setting is
.Sy 0
which causes no ditto copies to be created for deduplicated blocks.
The minimum legal nonzero setting is
.Sy 100 .
.It Sy delegation Ns = Ns Sy on Ns | Ns Sy off
Controls whether a non-privileged user is granted access based on the dataset
permissions defined on the dataset.
See
.Xr zfs 8
for more information on ZFS delegated administration.
.It Sy failmode Ns = Ns Sy wait Ns | Ns Sy continue Ns | Ns Sy panic
Controls the system behavior in the event of catastrophic pool failure.
This condition is typically a result of a loss of connectivity to the underlying
storage device(s) or a failure of all devices within the pool.
The behavior of such an event is determined as follows:
.Bl -tag -width "continue"
.It Sy wait
Blocks all I/O access until the device connectivity is recovered and the errors
are cleared.
This is the default behavior.
.It Sy continue
Returns
.Er EIO
to any new write I/O requests but allows reads to any of the remaining healthy
devices.
Any write requests that have yet to be committed to disk would be blocked.
.It Sy panic
Prints out a message to the console and generates a system crash dump.
.El
.It Sy feature@ Ns Ar feature_name Ns = Ns Sy enabled
The value of this property is the current state of
.Ar feature_name .
The only valid value when setting this property is
.Sy enabled
which moves
.Ar feature_name
to the enabled state.
See
.Xr zpool-features 5
for details on feature states.
.It Sy listsnapshots Ns = Ns Sy on Ns | Ns Sy off
Controls whether information about snapshots associated with this pool is
output when
.Nm zfs Cm list
is run without the
.Fl t
option.
The default value is
.Sy off .
This property can also be referred to by its shortened name,
.Sy listsnaps .
.It Sy multihost Ns = Ns Sy on Ns | Ns Sy off
Controls whether a pool activity check should be performed during
.Nm zpool Cm import .
When a pool is determined to be active it cannot be imported, even with the
.Fl f
option. This property is intended to be used in failover configurations
where multiple hosts have access to a pool on shared storage. When this
property is on, periodic writes to storage occur to show the pool is in use.
See
.Sy zfs_multihost_interval
in the
.Xr zfs-module-parameters 5
man page. In order to enable this property each host must set a unique hostid.
See
.Xr genhostid 1 ,
.Xr zgenhostid 8 ,
and
.Xr spl-module-parameters 5
for additional details. The default value is
.Sy off .
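.Pp
For example, assuming an illustrative pool name, each host generates a unique
hostid once and the property is then enabled:
.Bd -literal
# zgenhostid
# zpool set multihost=on pool
.Ed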
.It Sy version Ns = Ns Ar version
The current on-disk version of the pool.
This can be increased, but never decreased.
The preferred method of updating pools is with the
.Nm zpool Cm upgrade
command, though this property can be used when a specific version is needed for
backwards compatibility.
Once feature flags are enabled on a pool this property will no longer have a
value.
.El
.Ss Subcommands
All subcommands that modify state are logged persistently to the pool in their
original form.
.Pp
The
.Nm
command provides subcommands to create and destroy storage pools, add capacity
to storage pools, and provide information about the storage pools.
The following subcommands are supported:
.Bl -tag -width Ds
.It Xo
.Nm
.Fl ?
.Xc
Displays a help message.
.It Xo
.Nm
.Cm add
.Op Fl fgLnP
.Oo Fl o Ar property Ns = Ns Ar value Oc
.Ar pool vdev Ns ...
.Xc
Adds the specified virtual devices to the given pool.
The
.Ar vdev
specification is described in the
.Sx Virtual Devices
section.
The behavior of the
.Fl f
option, and the device checks performed are described in the
.Nm zpool Cm create
subcommand.
.Bl -tag -width Ds
.It Fl f
Forces use of
.Ar vdev Ns s ,
even if they appear in use or specify a conflicting replication level.
Not all devices can be overridden in this manner.
.It Fl g
Display
.Ar vdev
GUIDs instead of the normal device names. These GUIDs can be used in place of
device names for the zpool detach/offline/remove/replace commands.
.It Fl L
Display real paths for
.Ar vdev Ns s
resolving all symbolic links. This can be used to look up the current block
device name regardless of the /dev/disk/ path used to open it.
.It Fl n
Displays the configuration that would be used without actually adding the
.Ar vdev Ns s .
The actual pool creation can still fail due to insufficient privileges or
device sharing.
.It Fl P
Display real paths for
.Ar vdev Ns s
instead of only the last component of the path. This can be used in
conjunction with the
.Fl L
flag.
.It Fl o Ar property Ns = Ns Ar value
Sets the given pool properties. See the
.Sx Properties
section for a list of valid properties that can be set. The only property
supported at the moment is ashift.
.El
.It Xo
.Nm
.Cm attach
.Op Fl f
.Oo Fl o Ar property Ns = Ns Ar value Oc
.Ar pool device new_device
.Xc
Attaches
.Ar new_device
to the existing
.Ar device .
The existing device cannot be part of a raidz configuration.
If
.Ar device
is not currently part of a mirrored configuration,
.Ar device
automatically transforms into a two-way mirror of
.Ar device
and
.Ar new_device .
If
.Ar device
is part of a two-way mirror, attaching
.Ar new_device
creates a three-way mirror, and so on.
In either case,
.Ar new_device
begins to resilver immediately.
.Bl -tag -width Ds
.It Fl f
Forces use of
.Ar new_device ,
even if it appears to be in use.
Not all devices can be overridden in this manner.
.It Fl o Ar property Ns = Ns Ar value
Sets the given pool properties. See the
.Sx Properties
section for a list of valid properties that can be set. The only property
supported at the moment is ashift.
.El
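.Pp
For example, assuming illustrative device names, the following converts a
single-disk pool into a two-way mirror:
.Bd -literal
# zpool attach pool sda sdb
.Ed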
.It Xo
.Nm
.Cm checkpoint
.Op Fl d, -discard
.Ar pool
.Xc
Checkpoints the current state of
.Ar pool ,
which can be later restored by
.Nm zpool Cm import --rewind-to-checkpoint .
The existence of a checkpoint in a pool prohibits the following
.Nm zpool
commands:
.Cm remove ,
.Cm attach ,
.Cm detach ,
.Cm split ,
and
.Cm reguid .
In addition, it may break reservation boundaries if the pool lacks free
space.
The
.Nm zpool Cm status
command indicates the existence of a checkpoint or the progress of discarding a
checkpoint from a pool.
The
.Nm zpool Cm list
command reports how much space the checkpoint takes from the pool.
.Bl -tag -width Ds
.It Fl d, -discard
Discards an existing checkpoint from
.Ar pool .
.El
.It Xo
.Nm
.Cm clear
.Ar pool
.Op Ar device
.Xc
Clears device errors in a pool.
If no arguments are specified, all device errors within the pool are cleared.
If one or more devices is specified, only those errors associated with the
specified device or devices are cleared.
.It Xo
.Nm
.Cm create
.Op Fl dfn
.Op Fl m Ar mountpoint
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Oo Fl o Ar feature@feature Ns = Ns Ar value Oc Ns ...
.Oo Fl O Ar file-system-property Ns = Ns Ar value Oc Ns ...
.Op Fl R Ar root
.Op Fl t Ar tname
.Ar pool vdev Ns ...
.Xc
Creates a new storage pool containing the virtual devices specified on the
command line.
The pool name must begin with a letter, and can only contain
alphanumeric characters as well as underscore
.Pq Qq Sy _ ,
dash
.Pq Qq Sy \&- ,
colon
.Pq Qq Sy \&: ,
space
.Pq Qq Sy \&\ ,
and period
.Pq Qq Sy \&. .
The pool names
.Sy mirror ,
.Sy raidz ,
.Sy spare
and
.Sy log
are reserved, as are names beginning with
.Sy mirror ,
.Sy raidz ,
.Sy spare ,
and the pattern
.Sy c[0-9] .
The
.Ar vdev
specification is described in the
.Sx Virtual Devices
section.
.Pp
The command verifies that each device specified is accessible and not currently
in use by another subsystem.
There are some uses, such as being currently mounted, or specified as the
dedicated dump device, that prevent a device from ever being used by ZFS.
Other uses, such as having a preexisting UFS file system, can be overridden with
the
.Fl f
option.
.Pp
The command also checks that the replication strategy for the pool is
consistent.
An attempt to combine redundant and non-redundant storage in a single pool, or
to mix disks and files, results in an error unless
.Fl f
is specified.
The use of differently sized devices within a single raidz or mirror group is
also flagged as an error unless
.Fl f
is specified.
.Pp
Unless the
.Fl R
option is specified, the default mount point is
.Pa / Ns Ar pool .
The mount point must not exist or must be empty, or else the root dataset
cannot be mounted.
This can be overridden with the
.Fl m
option.
.Pp
By default all supported features are enabled on the new pool unless the
.Fl d
option is specified.
.Bl -tag -width Ds
.It Fl d
Do not enable any features on the new pool.
Individual features can be enabled by setting their corresponding properties to
.Sy enabled
with the
.Fl o
option.
See
.Xr zpool-features 5
for details about feature properties.
.It Fl f
Forces use of
.Ar vdev Ns s ,
even if they appear in use or specify a conflicting replication level.
Not all devices can be overridden in this manner.
.It Fl m Ar mountpoint
Sets the mount point for the root dataset.
The default mount point is
.Pa /pool
or
.Pa altroot/pool
if
.Ar altroot
is specified.
The mount point must be an absolute path,
.Sy legacy ,
or
.Sy none .
For more information on dataset mount points, see
.Xr zfs 8 .
.It Fl n
Displays the configuration that would be used without actually creating the
pool.
The actual pool creation can still fail due to insufficient privileges or
device sharing.
.It Fl o Ar property Ns = Ns Ar value
Sets the given pool properties.
See the
.Sx Properties
section for a list of valid properties that can be set.
.It Fl o Ar feature@feature Ns = Ns Ar value
Sets the given pool feature. See the
.Xr zpool-features 5
section for a list of valid features that can be set.
Value can be either disabled or enabled.
.It Fl O Ar file-system-property Ns = Ns Ar value
Sets the given file system properties in the root file system of the pool.
See the
.Sx Properties
section of
.Xr zfs 8
for a list of valid properties that can be set.
.It Fl R Ar root
Equivalent to
.Fl o Sy cachefile Ns = Ns Sy none Fl o Sy altroot Ns = Ns Ar root
.It Fl t Ar tname
Sets the in-core pool name to
.Sy tname
while the on-disk name will be the name specified as the pool name
.Sy pool .
This will set the default cachefile property to none. This is intended
to handle name space collisions when creating pools for other systems,
such as virtual machines or physical machines whose pools live on network
block devices.
.El
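.Pp
For example, assuming illustrative names, the following creates a pool whose
on-disk name is
.Sy pool
but which is known as
.Sy tmppool
while imported on the local system:
.Bd -literal
# zpool create -t tmppool pool sda
.Ed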
.It Xo
.Nm
.Cm destroy
.Op Fl f
.Ar pool
.Xc
Destroys the given pool, freeing up any devices for other use.
This command tries to unmount any active datasets before destroying the pool.
.Bl -tag -width Ds
.It Fl f
Forces any active datasets contained within the pool to be unmounted.
.El
.It Xo
.Nm
.Cm detach
.Ar pool device
.Xc
Detaches
.Ar device
from a mirror.
The operation is refused if there are no other valid replicas of the data.
If the device may be re-added to the pool later on, then consider the
.Sy zpool offline
command instead.
.It Xo
.Nm
.Cm events
.Op Fl vHf Oo Ar pool Oc | Fl c
.Xc
Lists all recent events generated by the ZFS kernel modules. These events
are consumed by the
.Xr zed 8
and used to automate administrative tasks such as replacing a failed device
with a hot spare. For more information about the subclasses and event payloads
that can be generated see the
.Xr zfs-events 5
man page.
.Bl -tag -width Ds
.It Fl c
Clear all previous events.
.It Fl f
Follow mode.
.It Fl H
Scripted mode. Do not display headers, and separate fields by a
single tab instead of arbitrary space.
.It Fl v
Print the entire payload for each event.
.El
.It Xo
.Nm
.Cm export
.Op Fl a
.Op Fl f
.Ar pool Ns ...
.Xc
Exports the given pools from the system.
All devices are marked as exported, but are still considered in use by other
subsystems.
The devices can be moved between systems
.Pq even those of different endianness
and imported as long as a sufficient number of devices are present.
.Pp
Before exporting the pool, all datasets within the pool are unmounted.
A pool cannot be exported if it has a shared spare that is currently being
used.
.Pp
For pools to be portable, you must give the
.Nm
command whole disks, not just partitions, so that ZFS can label the disks with
portable EFI labels.
Otherwise, disk drivers on platforms of different endianness will not recognize
the disks.
.Bl -tag -width Ds
.It Fl a
Exports all pools imported on the system.
.It Fl f
Forcefully unmount all datasets, using the
.Nm unmount Fl f
command.
.Pp
This command will forcefully export the pool even if it has a shared spare that
is currently being used.
This may lead to potential data corruption.
.El
.It Xo
.Nm
.Cm get
.Op Fl Hp
.Op Fl o Ar field Ns Oo , Ns Ar field Oc Ns ...
.Sy all Ns | Ns Ar property Ns Oo , Ns Ar property Oc Ns ...
.Ar pool Ns ...
.Xc
Retrieves the given list of properties
.Po
or all properties if
.Sy all
is used
.Pc
for the specified storage pool(s).
These properties are displayed with the following fields:
.Bd -literal
        name          Name of storage pool
        property      Property name
        value         Property value
        source        Property source, either 'default' or 'local'.
.Ed
.Pp
See the
.Sx Properties
section for more information on the available pool properties.
.Bl -tag -width Ds
.It Fl H
Scripted mode.
Do not display headers, and separate fields by a single tab instead of arbitrary
space.
.It Fl o Ar field
A comma-separated list of columns to display.
.Sy name Ns \&, Ns Sy property Ns \&, Ns Sy value Ns \&, Ns Sy source
is the default value.
.It Fl p
Display numbers in parsable (exact) values.
.El
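.Pp
For example, assuming an illustrative pool name, the health and capacity of a
pool can be retrieved in scripted form:
.Bd -literal
# zpool get -H -o name,value health,capacity tank
.Ed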
.It Xo
.Nm
.Cm history
.Op Fl il
.Oo Ar pool Oc Ns ...
.Xc
Displays the command history of the specified pool(s) or all pools if no pool is
specified.
.Bl -tag -width Ds
.It Fl i
Displays internally logged ZFS events in addition to user initiated events.
.It Fl l
Displays log records in long format, which in addition to standard format
includes the user name, the hostname, and the zone in which the operation was
performed.
.El
.It Xo
.Nm
.Cm import
.Op Fl D
.Op Fl d Ar dir Ns | Ns device
.Xc
Lists pools available to import.
If the
.Fl d
option is not specified, this command searches for devices in
.Pa /dev .
The
.Fl d
option can be specified multiple times, and all directories are searched.
If the device appears to be part of an exported pool, this command displays a
summary of the pool with the name of the pool, a numeric identifier, as well as
the vdev layout and current health of the device for each device or file.
Destroyed pools, pools that were previously destroyed with the
.Nm zpool Cm destroy
command, are not listed unless the
.Fl D
option is specified.
.Pp
The numeric identifier is unique, and can be used instead of the pool name when
multiple exported pools of the same name are available.
.Bl -tag -width Ds
.It Fl c Ar cachefile
Reads configuration from the given
.Ar cachefile
that was created with the
.Sy cachefile
pool property.
This
.Ar cachefile
is used instead of searching for devices.
.It Fl d Ar dir Ns | Ns Ar device
Uses
.Ar device
or searches for devices or files in
.Ar dir .
The
.Fl d
option can be specified multiple times.
.It Fl D
Lists destroyed pools only.
.El
.It Xo
.Nm
.Cm import
.Fl a
.Op Fl DflmN
.Op Fl F Oo Fl n Oc Oo Fl T Oc Oo Fl X Oc
.Op Fl c Ar cachefile Ns | Ns Fl d Ar dir Ns | Ns device
.Op Fl o Ar mntopts
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Op Fl R Ar root
.Op Fl s
.Xc
Imports all pools found in the search directories.
Identical to the previous command, except that all pools with a sufficient
number of devices available are imported.
Destroyed pools, pools that were previously destroyed with the
.Nm zpool Cm destroy
command, will not be imported unless the
.Fl D
option is specified.
.Bl -tag -width Ds
.It Fl a
Searches for and imports all pools found.
.It Fl c Ar cachefile
Reads configuration from the given
.Ar cachefile
that was created with the
.Sy cachefile
pool property.
This
.Ar cachefile
is used instead of searching for devices.
.It Fl d Ar dir Ns | Ns Ar device
Uses
.Ar device
or searches for devices or files in
.Ar dir .
The
.Fl d
option can be specified multiple times.
This option is incompatible with the
.Fl c
option.
.It Fl D
Imports destroyed pools only.
The
.Fl f
option is also required.
.It Fl f
Forces import, even if the pool appears to be potentially active.
.It Fl F
Recovery mode for a non-importable pool.
Attempt to return the pool to an importable state by discarding the last few
transactions.
Not all damaged pools can be recovered by using this option.
If successful, the data from the discarded transactions is irretrievably lost.
This option is ignored if the pool is importable or already imported.
.It Fl l
Indicates that this command will request encryption keys for all encrypted
datasets it attempts to mount as it is bringing the pool online. Note that if
any datasets have a
.Sy keylocation
of
.Sy prompt
this command will block waiting for the keys to be entered. Without this flag
encrypted datasets will be left unavailable until the keys are loaded.
.It Fl m
Allows a pool to import when there is a missing log device.
Recent transactions can be lost because the log device will be discarded.
.It Fl n
Used with the
.Fl F
recovery option.
Determines whether a non-importable pool can be made importable again, but does
not actually perform the pool recovery.
For more details about pool recovery mode, see the
.Fl F
option, above.
.It Fl N
Import the pool without mounting any file systems.
.It Fl o Ar mntopts
Comma-separated list of mount options to use when mounting datasets within the
pool.
See
.Xr zfs 8
for a description of dataset properties and mount options.
.It Fl o Ar property Ns = Ns Ar value
Sets the specified property on the imported pool.
See the
.Sx Properties
section for more information on the available pool properties.
.It Fl R Ar root
Sets the
.Sy cachefile
property to
.Sy none
and the
.Sy altroot
property to
.Ar root .
.It Fl -rewind-to-checkpoint
Rewinds pool to the checkpointed state.
Once the pool is imported with this flag there is no way to undo the rewind.
All changes and data that were written after the checkpoint are lost!
The only exception is when the
.Sy readonly
mounting option is enabled.
In this case, the checkpointed state of the pool is opened and an
administrator can see how the pool would look like if they were
to fully rewind.
.It Fl s
Scan using the default search path; the libblkid cache will not be
consulted. A custom search path may be specified by setting the
ZPOOL_IMPORT_PATH environment variable.
.It Fl X
Used with the
.Fl F
recovery option. Determines whether extreme
measures to find a valid txg should take place. This allows the pool to
be rolled back to a txg which is no longer guaranteed to be consistent.
Pools imported at an inconsistent txg may contain uncorrectable
checksum errors. For more details about pool recovery mode, see the
.Fl F
option, above. WARNING: This option can be extremely hazardous to the
health of your pool and should only be used as a last resort.
.It Fl T
Specify the txg to use for rollback. Implies
.Fl FX .
For more details
about pool recovery mode, see the
.Fl X
option, above. WARNING: This option can be extremely hazardous to the
health of your pool and should only be used as a last resort.
.El
.It Xo
.Nm
.Cm import
.Op Fl Dflm
.Op Fl F Oo Fl n Oc Oo Fl t Oc Oo Fl T Oc Oo Fl X Oc
.Op Fl c Ar cachefile Ns | Ns Fl d Ar dir Ns | Ns device
.Op Fl o Ar mntopts
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Op Fl R Ar root
.Op Fl s
.Ar pool Ns | Ns Ar id
.Op Ar newpool
.Xc
Imports a specific pool.
A pool can be identified by its name or the numeric identifier.
If
.Ar newpool
is specified, the pool is imported using the name
.Ar newpool .
Otherwise, it is imported with the same name as its exported name.
.Pp
If a device is removed from a system without running
.Nm zpool Cm export
first, the device appears as potentially active.
It cannot be determined if this was a failed export, or whether the device is
really in use from another host.
To import a pool in this state, the
.Fl f
option is required.
.Bl -tag -width Ds
.It Fl c Ar cachefile
Reads configuration from the given
.Ar cachefile
that was created with the
.Sy cachefile
pool property.
This
.Ar cachefile
is used instead of searching for devices.
.It Fl d Ar dir Ns | Ns Ar device
Uses
.Ar device
or searches for devices or files in
.Ar dir .
The
.Fl d
option can be specified multiple times.
This option is incompatible with the
.Fl c
option.
.It Fl D
Imports a destroyed pool.
The
.Fl f
option is also required.
.It Fl f
Forces import, even if the pool appears to be potentially active.
.It Fl F
Recovery mode for a non-importable pool.
Attempt to return the pool to an importable state by discarding the last few
transactions.
Not all damaged pools can be recovered by using this option.
If successful, the data from the discarded transactions is irretrievably lost.
This option is ignored if the pool is importable or already imported.
.It Fl l
Indicates that this command will request encryption keys for all encrypted
datasets it attempts to mount as it is bringing the pool online. Note that if
any datasets have a
.Sy keylocation
of
.Sy prompt
this command will block waiting for the keys to be entered. Without this flag
encrypted datasets will be left unavailable until the keys are loaded.
.It Fl m
Allows a pool to import when there is a missing log device.
Recent transactions can be lost because the log device will be discarded.
.It Fl n
Used with the
.Fl F
recovery option.
Determines whether a non-importable pool can be made importable again, but does
not actually perform the pool recovery.
For more details about pool recovery mode, see the
.Fl F
option, above.
.It Fl o Ar mntopts
Comma-separated list of mount options to use when mounting datasets within the
pool.
See
.Xr zfs 8
for a description of dataset properties and mount options.
.It Fl o Ar property Ns = Ns Ar value
Sets the specified property on the imported pool.
See the
.Sx Properties
section for more information on the available pool properties.
.It Fl R Ar root
Sets the
.Sy cachefile
property to
.Sy none
and the
.Sy altroot
property to
.Ar root .
.It Fl s
Scan using the default search path; the libblkid cache will not be
consulted. A custom search path may be specified by setting the
ZPOOL_IMPORT_PATH environment variable.
.It Fl X
Used with the
.Fl F
recovery option. Determines whether extreme
measures to find a valid txg should take place. This allows the pool to
be rolled back to a txg which is no longer guaranteed to be consistent.
Pools imported at an inconsistent txg may contain uncorrectable
checksum errors. For more details about pool recovery mode, see the
.Fl F
option, above. WARNING: This option can be extremely hazardous to the
health of your pool and should only be used as a last resort.
.It Fl T
Specify the txg to use for rollback. Implies
.Fl FX .
For more details
about pool recovery mode, see the
.Fl X
option, above. WARNING: This option can be extremely hazardous to the
health of your pool and should only be used as a last resort.
.It Fl t
Used with
.Sy newpool .
Specifies that
.Sy newpool
is temporary. Temporary pool names last until export. Ensures that
the original pool name will be used in all label updates and therefore
is retained upon export.
Will also set -o cachefile=none when not explicitly specified.
.El
.It Xo
.Nm
.Cm iostat
.Op Oo Oo Fl c Ar SCRIPT Oc Oo Fl lq Oc Oc Ns | Ns Fl rw
.Op Fl T Sy u Ns | Ns Sy d
.Op Fl ghHLpPvy
.Oo Oo Ar pool Ns ... Oc Ns | Ns Oo Ar pool vdev Ns ... Oc Ns | Ns Oo Ar vdev Ns ... Oc Oc
.Op Ar interval Op Ar count
.Xc
Displays I/O statistics for the given pools/vdevs. You can pass in a
list of pools, a pool and list of vdevs in that pool, or a list of any
vdevs from any pool. If no items are specified, statistics for every
pool in the system are shown.
When given an
.Ar interval ,
the statistics are printed every
.Ar interval
seconds until ^C is pressed. If count is specified, the command exits
after count reports are printed. The first report printed is always
the statistics since boot regardless of whether
.Ar interval
and
.Ar count
are passed. However, this behavior can be suppressed with the
.Fl y
flag. Also note that the units of
.Sy K ,
.Sy M ,
.Sy G ...
that are printed in the report are in base 1024. To get the raw
values, use the
.Fl p
flag.
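.Pp
For example, assuming an illustrative pool name, per-vdev statistics can be
printed every 5 seconds:
.Bd -literal
# zpool iostat -v tank 5
.Ed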
.Bl -tag -width Ds
.It Fl c Op Ar SCRIPT1 Ns Oo , Ns Ar SCRIPT2 Oc Ns ...
Run a script (or scripts) on each vdev and include the output as a new column
in the
.Nm zpool Cm iostat
output. Users can run any script found in their
.Pa ~/.zpool.d
directory or from the system
.Pa /etc/zfs/zpool.d
directory. Script names containing the slash (/) character are not allowed.
The default search path can be overridden by setting the
ZPOOL_SCRIPTS_PATH environment variable. A privileged user can run
.Fl c
if they have the ZPOOL_SCRIPTS_AS_ROOT
environment variable set. If a script requires the use of a privileged
command, like
.Xr smartctl 8 ,
then it's recommended you allow the user access to it in
.Pa /etc/sudoers
or add the user to the
.Pa /etc/sudoers.d/zfs
file.
.Pp
If
.Fl c
is passed without a script name, it prints a list of all scripts.
.Fl c
also sets verbose mode
.No \&( Ns Fl v Ns No \&).
.Pp
Script output should be in the form of "name=value". The column name is
set to "name" and the value is set to "value". Multiple lines can be
used to output multiple columns. The first line of output not in the
"name=value" format is displayed without a column title, and no more
output after that is displayed. This can be useful for printing error
messages. Blank or NULL values are printed as a '-' to make output
awk-able.
.Pp
The following environment variables are set before running each script:
.Bl -tag -width "VDEV_PATH"
.It Sy VDEV_PATH
Full path to the vdev
.El
.Bl -tag -width "VDEV_UPATH"
.It Sy VDEV_UPATH
Underlying path to the vdev (/dev/sd*). For use with device mapper,
multipath, or partitioned vdevs.
.El
.Bl -tag -width "VDEV_ENC_SYSFS_PATH"
.It Sy VDEV_ENC_SYSFS_PATH
The sysfs path to the enclosure for the vdev (if any).
.El
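.Pp
As an illustrative sketch, a script using these variables might emit a single
extra column; the
.Xr lsblk 8
invocation below is an assumption for illustration, not a shipped script:
.Bd -literal
#!/bin/sh
# Print the size of the underlying device as a "size" column.
echo "size=$(lsblk -ndo SIZE "$VDEV_UPATH")"
.Ed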
.It Fl T Sy u Ns | Ns Sy d
Display a time stamp.
Specify
.Sy u
for a printed representation of the internal representation of time.
See
.Xr time 2 .
Specify
.Sy d
for standard date format.
See
.Xr date 1 .
.It Fl g
Display vdev GUIDs instead of the normal device names. These GUIDs
can be used in place of device names for the zpool
detach/offline/remove/replace commands.
.It Fl H
Scripted mode. Do not display headers, and separate fields by a
single tab instead of arbitrary space.
.It Fl L
Display real paths for vdevs resolving all symbolic links. This can
be used to look up the current block device name regardless of the
.Pa /dev/disk/
path used to open it.
.It Fl p
Display numbers in parsable (exact) values. Time values are in
nanoseconds.
.It Fl P
Display full paths for vdevs instead of only the last component of
the path. This can be used in conjunction with the
.Fl L
flag.
.It Fl r
Print request size histograms for the leaf ZIOs. This includes
histograms of individual ZIOs (
.Ar ind )
and aggregate ZIOs (
.Ar agg ).
These stats can be useful for seeing how well the ZFS IO aggregator is
working. Do not confuse these request size stats with the block layer
requests; it's possible ZIOs can be broken up before being sent to the
block device.
.It Fl v
Verbose statistics. Reports usage statistics for individual vdevs within the
pool, in addition to the pool-wide statistics.
.It Fl y
Omit statistics since boot.
Normally the first line of output reports the statistics since boot.
This option suppresses that first line of output.
.It Fl w
Display latency histograms:
.Pp
.Ar total_wait :
Total IO time (queuing + disk IO time).
.Ar disk_wait :
Disk IO time (time reading/writing the disk).
.Ar syncq_wait :
Amount of time IO spent in synchronous priority queues. Does not include
disk time.
.Ar asyncq_wait :
Amount of time IO spent in asynchronous priority queues. Does not include
disk time.
.Ar scrub :
Amount of time IO spent in scrub queue. Does not include disk time.
.It Fl l
Include average latency statistics:
.Pp
.Ar total_wait :
Average total IO time (queuing + disk IO time).
.Ar disk_wait :
Average disk IO time (time reading/writing the disk).
.Ar syncq_wait :
Average amount of time IO spent in synchronous priority queues. Does
not include disk time.
.Ar asyncq_wait :
Average amount of time IO spent in asynchronous priority queues.
Does not include disk time.
.Ar scrub :
Average queuing time in scrub queue. Does not include disk time.
.It Fl q
Include active queue statistics. Each priority queue has both
pending (
.Ar pend )
and active (
.Ar activ )
IOs. Pending IOs are waiting to
be issued to the disk, and active IOs have been issued to disk and are
waiting for completion. These stats are broken out by priority queue:
.Pp
.Ar syncq_read/write :
Current number of entries in synchronous priority
queues.
.Ar asyncq_read/write :
Current number of entries in asynchronous priority queues.
.Ar scrubq_read :
Current number of entries in scrub queue.
.Pp
All queue statistics are instantaneous measurements of the number of
entries in the queues. If you specify an interval, the measurements
1732will be sampled from the end of the interval.
1733.El
1734.It Xo
1735.Nm
1736.Cm labelclear
1737.Op Fl f
1738.Ar device
1739.Xc
1740Removes ZFS label information from the specified
1741.Ar device .
1742The
1743.Ar device
1744must not be part of an active pool configuration.
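.Pp
For example, assuming the device
.Pa /dev/sdc
(illustrative) once belonged to an exported pool:
.Bd -literal
# zpool labelclear -f /dev/sdc
.Ed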
.Bl -tag -width Ds
.It Fl f
Treat exported or foreign devices as inactive.
.El
.It Xo
.Nm
.Cm list
.Op Fl HgLpPv
.Op Fl o Ar property Ns Oo , Ns Ar property Oc Ns ...
.Op Fl T Sy u Ns | Ns Sy d
.Oo Ar pool Oc Ns ...
.Op Ar interval Op Ar count
.Xc
Lists the given pools along with a health status and space usage.
If no
.Ar pool Ns s
are specified, all pools in the system are listed.
When given an
.Ar interval ,
the information is printed every
.Ar interval
seconds until ^C is pressed.
If
.Ar count
is specified, the command exits after
.Ar count
reports are printed.
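.Pp
For example, to list a chosen set of properties for a hypothetical pool named
.Em tank :
.Bd -literal
# zpool list -o name,size,capacity,health tank
.Ed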
.Bl -tag -width Ds
.It Fl g
Display vdev GUIDs instead of the normal device names. These GUIDs
can be used in place of device names for the zpool
detach/offline/remove/replace commands.
.It Fl H
Scripted mode.
Do not display headers, and separate fields by a single tab instead of arbitrary
space.
.It Fl o Ar property
Comma-separated list of properties to display.
See the
.Sx Properties
section for a list of valid properties.
The default list is
.Cm name , size , allocated , free , expandsize , fragmentation , capacity ,
.Cm dedupratio , health , altroot .
.It Fl L
Display real paths for vdevs resolving all symbolic links. This can
be used to look up the current block device name regardless of the
.Pa /dev/disk/
path used to open it.
.It Fl p
Display numbers in parsable
.Pq exact
values.
.It Fl P
Display full paths for vdevs instead of only the last component of
the path. This can be used in conjunction with the
.Fl L
flag.
.It Fl T Sy u Ns | Ns Sy d
Display a time stamp.
Specify
.Sy u
for a printed representation of the internal representation of time.
See
.Xr time 2 .
Specify
.Sy d
for standard date format.
See
.Xr date 1 .
.It Fl v
Verbose statistics.
Reports usage statistics for individual vdevs within the pool, in addition to
the pool-wide statistics.
.El
.It Xo
.Nm
.Cm offline
.Op Fl f
.Op Fl t
.Ar pool Ar device Ns ...
.Xc
Takes the specified physical device offline.
While the
.Ar device
is offline, no attempt is made to read or write to the device.
This command is not applicable to spares.
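.Pp
For example, to temporarily take a disk of the hypothetical pool
.Em tank
offline (device name is illustrative):
.Bd -literal
# zpool offline -t tank sdb
.Ed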
.Bl -tag -width Ds
.It Fl f
Force fault. Instead of offlining the disk, put it into a faulted
state. The fault will persist across imports unless the
.Fl t
flag was specified.
.It Fl t
Temporary.
Upon reboot, the specified physical device reverts to its previous state.
.El
.It Xo
.Nm
.Cm online
.Op Fl e
.Ar pool Ar device Ns ...
.Xc
Brings the specified physical device online.
This command is not applicable to spares.
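.Pp
For example, to bring the same illustrative disk back online and expand it to
use all available space:
.Bd -literal
# zpool online -e tank sdb
.Ed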
.Bl -tag -width Ds
.It Fl e
Expand the device to use all available space.
If the device is part of a mirror or raidz then all devices must be expanded
before the new space will become available to the pool.
.El
.It Xo
.Nm
.Cm reguid
.Ar pool
.Xc
Generates a new unique identifier for the pool.
You must ensure that all devices in this pool are online and healthy before
performing this action.
.It Xo
.Nm
.Cm reopen
.Op Fl n
.Ar pool
.Xc
Reopen all the vdevs associated with the pool.
.Bl -tag -width Ds
.It Fl n
Do not restart an in-progress scrub operation. This is not recommended and can
result in partially resilvered devices unless a second scrub is performed.
.El
.It Xo
.Nm
.Cm remove
.Op Fl np
.Ar pool Ar device Ns ...
.Xc
Removes the specified device from the pool.
This command currently only supports removing hot spares, cache, log
devices and mirrored top-level vdevs (mirror of leaf devices); raidz vdevs
cannot be removed.
.Pp
Removing a top-level vdev reduces the total amount of space in the storage pool.
The specified device will be evacuated by copying all allocated space from it to
the other devices in the pool.
In this case, the
.Nm zpool Cm remove
command initiates the removal and returns, while the evacuation continues in
the background.
The removal progress can be monitored with
.Nm zpool Cm status .
This feature must be enabled to be used, see
.Xr zpool-features 5 .
.Pp
A mirrored top-level device (log or data) can be removed by specifying the
name of its top-level mirror.
Non-log devices or data devices that are part of a mirrored configuration can
be removed using the
.Nm zpool Cm detach
command.
.Bl -tag -width Ds
.It Fl n
Do not actually perform the removal ("no-op").
Instead, print the estimated amount of memory that will be used by the
mapping table after the removal completes.
This is nonzero only for top-level vdevs.
.It Fl p
Used in conjunction with the
.Fl n
flag, displays numbers as parsable (exact) values.
See the example following this list.
.El
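.Pp
For example, to preview the mapping-table cost of removing a mirrored data
vdev from the hypothetical pool
.Em tank :
.Bd -literal
# zpool remove -np tank mirror-1
.Ed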
.It Xo
.Nm
.Cm remove
.Fl s
.Ar pool
.Xc
Stops and cancels an in-progress removal of a top-level vdev.
.It Xo
.Nm
.Cm replace
.Op Fl f
.Op Fl o Ar property Ns = Ns Ar value
.Ar pool Ar old_device Op Ar new_device
.Xc
Replaces
.Ar old_device
with
.Ar new_device .
This is equivalent to attaching
.Ar new_device ,
waiting for it to resilver, and then detaching
.Ar old_device .
.Pp
The size of
.Ar new_device
must be greater than or equal to the minimum size of all the devices in a mirror
or raidz configuration.
.Pp
.Ar new_device
is required if the pool is not redundant.
If
.Ar new_device
is not specified, it defaults to
.Ar old_device .
This form of replacement is useful after an existing disk has failed and has
been physically replaced.
In this case, the new disk may have the same
.Pa /dev
path as the old device, even though it is actually a different disk.
ZFS recognizes this.
.Bl -tag -width Ds
.It Fl f
Forces use of
.Ar new_device ,
even if it appears to be in use.
Not all devices can be overridden in this manner.
.It Fl o Ar property Ns = Ns Ar value
Sets the given pool properties. See the
.Sx Properties
section for a list of valid properties that can be set.
The only property supported at the moment is
.Sy ashift ;
see the example following this list.
.El
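.Pp
As an illustrative sketch (pool and device names are hypothetical), a failed
disk can be replaced while forcing a 4096-byte sector size:
.Bd -literal
# zpool replace -o ashift=12 tank sda sdb
.Ed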
.It Xo
.Nm
.Cm scrub
.Op Fl s | Fl p
.Ar pool Ns ...
.Xc
Begins a scrub or resumes a paused scrub.
The scrub examines all data in the specified pools to verify that it checksums
correctly.
For replicated
.Pq mirror or raidz
devices, ZFS automatically repairs any damage discovered during the scrub.
The
.Nm zpool Cm status
command reports the progress of the scrub and summarizes the results of the
scrub upon completion.
.Pp
Scrubbing and resilvering are very similar operations.
The difference is that resilvering only examines data that ZFS knows to be out
of date
.Po
for example, when attaching a new device to a mirror or replacing an existing
device
.Pc ,
whereas scrubbing examines all data to discover silent errors due to hardware
faults or disk failure.
.Pp
Because scrubbing and resilvering are I/O-intensive operations, ZFS only allows
one at a time.
If a scrub is paused, the
.Nm zpool Cm scrub
command resumes it.
If a resilver is in progress, ZFS does not allow a scrub to be started until the
resilver completes.
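.Pp
For example, to start a scrub of the hypothetical pool
.Em tank ,
pause it, and later resume it:
.Bd -literal
# zpool scrub tank
# zpool scrub -p tank
# zpool scrub tank
.Ed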
.Bl -tag -width Ds
.It Fl s
Stop scrubbing.
.It Fl p
Pause scrubbing.
Scrub pause state and progress are periodically synced to disk.
If the system is restarted or the pool is exported during a paused scrub,
even after import, the scrub will remain paused until it is resumed.
Once resumed the scrub will pick up from the place where it was last
checkpointed to disk.
To resume a paused scrub issue
.Nm zpool Cm scrub
again.
.El
.It Xo
.Nm
.Cm set
.Ar property Ns = Ns Ar value
.Ar pool
.Xc
Sets the given property on the specified pool.
See the
.Sx Properties
section for more information on what properties can be set and acceptable
values.
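.Pp
For example, to enable automatic expansion on the hypothetical pool
.Em tank :
.Bd -literal
# zpool set autoexpand=on tank
.Ed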
.It Xo
.Nm
.Cm split
.Op Fl gLlnP
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Op Fl R Ar root
.Ar pool newpool
.Op Ar device ...
.Xc
Splits devices off
.Ar pool
creating
.Ar newpool .
All vdevs in
.Ar pool
must be mirrors and the pool must not be in the process of resilvering.
At the time of the split,
.Ar newpool
will be a replica of
.Ar pool .
By default, the
last device in each mirror is split from
.Ar pool
to create
.Ar newpool .
.Pp
The optional device specification causes the specified device(s) to be
included in the new
.Ar pool
and, should any devices remain unspecified,
the last device in each mirror is used, as it would be by default.
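.Pp
For example, to split one half of every mirror in the hypothetical pool
.Em tank
into a new pool named
.Em tank2 :
.Bd -literal
# zpool split tank tank2
.Ed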
.Bl -tag -width Ds
.It Fl g
Display vdev GUIDs instead of the normal device names. These GUIDs
can be used in place of device names for the zpool
detach/offline/remove/replace commands.
.It Fl L
Display real paths for vdevs resolving all symbolic links. This can
be used to look up the current block device name regardless of the
.Pa /dev/disk/
path used to open it.
.It Fl l
Indicates that this command will request encryption keys for all encrypted
datasets it attempts to mount as it is bringing the new pool online. Note that
if any datasets have a
.Sy keylocation
of
.Sy prompt
this command will block waiting for the keys to be entered. Without this flag
encrypted datasets will be left unavailable until the keys are loaded.
.It Fl n
Do a dry run; do not actually perform the split.
Print out the expected configuration of
.Ar newpool .
.It Fl P
Display full paths for vdevs instead of only the last component of
the path. This can be used in conjunction with the
.Fl L
flag.
.It Fl o Ar property Ns = Ns Ar value
Sets the specified property for
.Ar newpool .
See the
.Sx Properties
section for more information on the available pool properties.
.It Fl R Ar root
Set
.Sy altroot
for
.Ar newpool
to
.Ar root
and automatically import it.
.El
.It Xo
.Nm
.Cm status
.Op Fl c Op Ar SCRIPT1 Ns Oo , Ns Ar SCRIPT2 Oc Ns ...
.Op Fl gLPvxD
.Op Fl T Sy u Ns | Ns Sy d
.Oo Ar pool Oc Ns ...
.Op Ar interval Op Ar count
.Xc
Displays the detailed health status for the given pools.
If no
.Ar pool
is specified, then the status of each pool in the system is displayed.
For more information on pool and device health, see the
.Sx Device Failure and Recovery
section.
.Pp
If a scrub or resilver is in progress, this command reports the percentage done
and the estimated time to completion.
Both of these are only approximate, because the amount of data in the pool and
the other workloads on the system can change.
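.Pp
For example, to show only pools with problems, including verbose error
information:
.Bd -literal
# zpool status -vx
.Ed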
.Bl -tag -width Ds
.It Fl c Op Ar SCRIPT1 Ns Oo , Ns Ar SCRIPT2 Oc Ns ...
Run a script (or scripts) on each vdev and include the output as a new column
in the
.Nm zpool Cm status
output. See the
.Fl c
option of
.Nm zpool Cm iostat
for complete details.
.It Fl g
Display vdev GUIDs instead of the normal device names. These GUIDs
can be used in place of device names for the zpool
detach/offline/remove/replace commands.
.It Fl L
Display real paths for vdevs resolving all symbolic links. This can
be used to look up the current block device name regardless of the
.Pa /dev/disk/
path used to open it.
.It Fl P
Display full paths for vdevs instead of only the last component of
the path. This can be used in conjunction with the
.Fl L
flag.
.It Fl D
Display a histogram of deduplication statistics, showing the allocated
.Pq physically present on disk
and referenced
.Pq logically referenced in the pool
block counts and sizes by reference count.
.It Fl T Sy u Ns | Ns Sy d
Display a time stamp.
Specify
.Sy u
for a printed representation of the internal representation of time.
See
.Xr time 2 .
Specify
.Sy d
for standard date format.
See
.Xr date 1 .
.It Fl v
Displays verbose data error information, printing out a complete list of all
data errors since the last complete pool scrub.
.It Fl x
Only display status for pools that are exhibiting errors or are otherwise
unavailable.
Warnings about pools not using the latest on-disk format will not be included.
.El
.It Xo
.Nm
.Cm sync
.Op Ar pool ...
.Xc
This command forces all in-core dirty data to be written to the primary
pool storage and not the ZIL. It will also update administrative
information including quota reporting. Without arguments,
.Sy zpool sync
will sync all pools on the system. Otherwise, it will sync only the
specified pool(s).
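.Pp
For example, to flush dirty data for the hypothetical pool
.Em tank
only:
.Bd -literal
# zpool sync tank
.Ed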
.It Xo
.Nm
.Cm upgrade
.Xc
Displays pools which do not have all supported features enabled and pools
formatted using a legacy ZFS version number.
These pools can continue to be used, but some features may not be available.
Use
.Nm zpool Cm upgrade Fl a
to enable all features on all pools.
.It Xo
.Nm
.Cm upgrade
.Fl v
.Xc
Displays legacy ZFS versions supported by the current software.
See
.Xr zpool-features 5
for a description of the feature flags supported by the current software.
.It Xo
.Nm
.Cm upgrade
.Op Fl V Ar version
.Fl a Ns | Ns Ar pool Ns ...
.Xc
Enables all supported features on the given pool.
Once this is done, the pool will no longer be accessible on systems that do not
support feature flags.
See
.Xr zpool-features 5
for details on compatibility with systems that support feature flags, but do not
support all features enabled on the pool.
.Bl -tag -width Ds
.It Fl a
Enables all supported features on all pools.
.It Fl V Ar version
Upgrade to the specified legacy version.
If the
.Fl V
flag is specified, no features will be enabled on the pool.
This option can only be used to increase the version number up to the last
supported legacy version number.
.El
2230.Sh EXIT STATUS
2231The following exit values are returned:
2232.Bl -tag -width Ds
2233.It Sy 0
2234Successful completion.
2235.It Sy 1
2236An error occurred.
2237.It Sy 2
2238Invalid command line options were specified.
2239.El
2240.Sh EXAMPLES
2241.Bl -tag -width Ds
2242.It Sy Example 1 No Creating a RAID-Z Storage Pool
2243The following command creates a pool with a single raidz root vdev that
2244consists of six disks.
2245.Bd -literal
2246# zpool create tank raidz sda sdb sdc sdd sde sdf
2247.Ed
2248.It Sy Example 2 No Creating a Mirrored Storage Pool
2249The following command creates a pool with two mirrors, where each mirror
2250contains two disks.
2251.Bd -literal
2252# zpool create tank mirror sda sdb mirror sdc sdd
2253.Ed
2254.It Sy Example 3 No Creating a ZFS Storage Pool by Using Partitions
54e5f226 2255The following command creates an unmirrored pool using two disk partitions.
cda0317e
GM
2256.Bd -literal
2257# zpool create tank sda1 sdb2
2258.Ed
2259.It Sy Example 4 No Creating a ZFS Storage Pool by Using Files
2260The following command creates an unmirrored pool using files.
2261While not recommended, a pool based on files can be useful for experimental
2262purposes.
2263.Bd -literal
2264# zpool create tank /path/to/file/a /path/to/file/b
2265.Ed
2266.It Sy Example 5 No Adding a Mirror to a ZFS Storage Pool
2267The following command adds two mirrored disks to the pool
2268.Em tank ,
2269assuming the pool is already made up of two-way mirrors.
2270The additional space is immediately available to any datasets within the pool.
2271.Bd -literal
2272# zpool add tank mirror sda sdb
2273.Ed
2274.It Sy Example 6 No Listing Available ZFS Storage Pools
2275The following command lists all available pools on the system.
2276In this case, the pool
2277.Em zion
2278is faulted due to a missing device.
058ac9ba 2279The results from this command are similar to the following:
cda0317e
GM
2280.Bd -literal
2281# zpool list
d72cd017
TK
2282NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
2283rpool 19.9G 8.43G 11.4G - 33% 42% 1.00x ONLINE -
2284tank 61.5G 20.0G 41.5G - 48% 32% 1.00x ONLINE -
2285zion - - - - - - - FAULTED -
cda0317e
GM
2286.Ed
.It Sy Example 7 No Destroying a ZFS Storage Pool
The following command destroys the pool
.Em tank
and any datasets contained within.
.Bd -literal
# zpool destroy -f tank
.Ed
.It Sy Example 8 No Exporting a ZFS Storage Pool
The following command exports the devices in pool
.Em tank
so that they can be relocated or later imported.
.Bd -literal
# zpool export tank
.Ed
.It Sy Example 9 No Importing a ZFS Storage Pool
The following command displays available pools, and then imports the pool
.Em tank
for use on the system.
The results from this command are similar to the following:
.Bd -literal
# zpool import
  pool: tank
    id: 15451357997522795478
 state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:

        tank        ONLINE
          mirror    ONLINE
            sda     ONLINE
            sdb     ONLINE

# zpool import tank
.Ed
.It Sy Example 10 No Upgrading All ZFS Storage Pools to the Current Version
The following command upgrades all ZFS Storage pools to the current version of
the software.
.Bd -literal
# zpool upgrade -a
This system is currently running ZFS version 2.
.Ed
.It Sy Example 11 No Managing Hot Spares
The following command creates a new pool with an available hot spare:
.Bd -literal
# zpool create tank mirror sda sdb spare sdc
.Ed
.Pp
If one of the disks were to fail, the pool would be reduced to the degraded
state.
The failed device can be replaced using the following command:
.Bd -literal
# zpool replace tank sda sdd
.Ed
.Pp
Once the data has been resilvered, the spare is automatically removed and is
made available for use should another device fail.
The hot spare can be permanently removed from the pool using the following
command:
.Bd -literal
# zpool remove tank sdc
.Ed
.It Sy Example 12 No Creating a ZFS Pool with Mirrored Separate Intent Logs
The following command creates a ZFS storage pool consisting of two, two-way
mirrors and mirrored log devices:
.Bd -literal
# zpool create pool mirror sda sdb mirror sdc sdd log mirror \\
    sde sdf
.Ed
.It Sy Example 13 No Adding Cache Devices to a ZFS Pool
The following command adds two disks for use as cache devices to a ZFS storage
pool:
.Bd -literal
# zpool add pool cache sdc sdd
.Ed
.Pp
Once added, the cache devices gradually fill with content from main memory.
Depending on the size of your cache devices, it could take over an hour for
them to fill.
Capacity and reads can be monitored using the
.Cm iostat
subcommand as follows:
.Bd -literal
# zpool iostat -v pool 5
.Ed
.It Sy Example 14 No Removing a Mirrored Top-Level (Log or Data) Device
The following commands remove the mirrored log device
.Sy mirror-2
and mirrored top-level data device
.Sy mirror-1 .
.Pp
Given this configuration:
.Bd -literal
  pool: tank
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            sda     ONLINE       0     0     0
            sdb     ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            sdc     ONLINE       0     0     0
            sdd     ONLINE       0     0     0
        logs
          mirror-2  ONLINE       0     0     0
            sde     ONLINE       0     0     0
            sdf     ONLINE       0     0     0
.Ed
.Pp
The command to remove the mirrored log
.Sy mirror-2
is:
.Bd -literal
# zpool remove tank mirror-2
.Ed
.Pp
The command to remove the mirrored data
.Sy mirror-1
is:
.Bd -literal
# zpool remove tank mirror-1
.Ed
.It Sy Example 15 No Displaying Expanded Space on a Device
The following command displays the detailed information for the pool
.Em data .
This pool is comprised of a single raidz vdev where one of its devices
increased its capacity by 10GB.
In this example, the pool will not be able to utilize this extra capacity until
all the devices under the raidz vdev have been expanded.
.Bd -literal
# zpool list -v data
NAME         SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
data        23.9G  14.6G  9.30G         -    48%    61%  1.00x  ONLINE  -
  raidz1    23.9G  14.6G  9.30G         -    48%
    sda         -      -      -         -      -
    sdb         -      -      -       10G      -
    sdc         -      -      -         -      -
.Ed
.It Sy Example 16 No Adding Output Columns
Additional columns can be added to the
.Nm zpool Cm status
and
.Nm zpool Cm iostat
output with the
.Fl c
option.
.Bd -literal
# zpool status -c vendor,model,size
   NAME     STATE  READ WRITE CKSUM vendor  model        size
   tank     ONLINE 0    0     0
   mirror-0 ONLINE 0    0     0
   U1       ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
   U10      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
   U11      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
   U12      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
   U13      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
   U14      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T

# zpool iostat -vc slaves
               capacity     operations     bandwidth
   pool       alloc   free   read  write   read  write  slaves
   ---------- -----  -----  -----  -----  -----  -----  ---------
   tank       20.4G  7.23T     26    152  20.7M  21.6M
   mirror     20.4G  7.23T     26    152  20.7M  21.6M
   U1             -      -      0     31  1.46K  20.6M  sdb sdff
   U10            -      -      0      1  3.77K  13.3K  sdas sdgw
   U11            -      -      0      1   288K  13.3K  sdat sdgx
   U12            -      -      0      1  78.4K  13.3K  sdau sdgy
   U13            -      -      0      1   128K  13.3K  sdav sdgz
   U14            -      -      0      1  63.2K  13.3K  sdfk sdg
.Ed
.El
.Sh ENVIRONMENT VARIABLES
.Bl -tag -width "ZFS_ABORT"
.It Ev ZFS_ABORT
Cause
.Nm zpool
to dump core on exit for the purposes of running
.Sy ::findleaks .
.El
.Bl -tag -width "ZPOOL_IMPORT_PATH"
.It Ev ZPOOL_IMPORT_PATH
The search path for devices or files to use with the pool.
This is a colon-separated list of directories in which
.Nm zpool
looks for device nodes and files.
Similar to the
.Fl d
option in
.Nm zpool import .
.El
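.Pp
For example, to restrict the import search to stable by-id device names
(the path shown is the usual udev location):
.Bd -literal
# ZPOOL_IMPORT_PATH=/dev/disk/by-id zpool import
.Ed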
.Bl -tag -width "ZPOOL_VDEV_NAME_GUID"
.It Ev ZPOOL_VDEV_NAME_GUID
Cause
.Nm zpool
subcommands to output vdev guids by default.
This behavior is identical to the
.Nm zpool status -g
command line option.
.El
.Bl -tag -width "ZPOOL_VDEV_NAME_FOLLOW_LINKS"
.It Ev ZPOOL_VDEV_NAME_FOLLOW_LINKS
Cause
.Nm zpool
subcommands to follow links for vdev names by default.
This behavior is identical to the
.Nm zpool status -L
command line option.
.El
.Bl -tag -width "ZPOOL_VDEV_NAME_PATH"
.It Ev ZPOOL_VDEV_NAME_PATH
Cause
.Nm zpool
subcommands to output full vdev path names by default.
This behavior is identical to the
.Nm zpool status -P
command line option.
.El
.Bl -tag -width "ZFS_VDEV_DEVID_OPT_OUT"
.It Ev ZFS_VDEV_DEVID_OPT_OUT
Older ZFS on Linux implementations had issues when attempting to display pool
config VDEV names if a
.Sy devid
NVP value is present in the pool's config.
.Pp
For example, a pool that originated on the illumos platform would have a devid
value in the config and
.Nm zpool status
would fail when listing the config.
This would also be true for future Linux-based pools.
.Pp
A pool can be stripped of any
.Sy devid
values on import or prevented from adding them on
.Nm zpool create
or
.Nm zpool add
by setting
.Sy ZFS_VDEV_DEVID_OPT_OUT .
.El
.Bl -tag -width "ZPOOL_SCRIPTS_AS_ROOT"
.It Ev ZPOOL_SCRIPTS_AS_ROOT
Allow a privileged user to run
.Nm zpool status/iostat
with the
.Fl c
option. Normally, only unprivileged users are allowed to run
.Fl c .
.El
.Bl -tag -width "ZPOOL_SCRIPTS_PATH"
.It Ev ZPOOL_SCRIPTS_PATH
The search path for scripts when running
.Nm zpool status/iostat
with the
.Fl c
option. This is a colon-separated list of directories and overrides the default
.Pa ~/.zpool.d
and
.Pa /etc/zfs/zpool.d
search paths.
.El
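.Pp
For example, to point script lookup at a custom directory (the path is
illustrative) and list the scripts found there:
.Bd -literal
# ZPOOL_SCRIPTS_PATH=/usr/local/lib/zpool.d zpool iostat -c
.Ed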
.Bl -tag -width "ZPOOL_SCRIPTS_ENABLED"
.It Ev ZPOOL_SCRIPTS_ENABLED
Allow a user to run
.Nm zpool status/iostat
with the
.Fl c
option. If
.Sy ZPOOL_SCRIPTS_ENABLED
is not set, it is assumed that the user is allowed to run
.Nm zpool status/iostat -c .
.El
.Sh INTERFACE STABILITY
.Sy Evolving
.Sh SEE ALSO
.Xr zfs-events 5 ,
.Xr zfs-module-parameters 5 ,
.Xr zpool-features 5 ,
.Xr zed 8 ,
.Xr zfs 8