1.\"
2.\" CDDL HEADER START
3.\"
4.\" The contents of this file are subject to the terms of the
5.\" Common Development and Distribution License (the "License").
6.\" You may not use this file except in compliance with the License.
7.\"
8.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
9.\" or http://www.opensolaris.org/os/licensing.
10.\" See the License for the specific language governing permissions
11.\" and limitations under the License.
12.\"
13.\" When distributing Covered Code, include this CDDL HEADER in each
14.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
15.\" If applicable, add the following below this CDDL HEADER, with the
16.\" fields enclosed by brackets "[]" replaced with your own identifying
17.\" information: Portions Copyright [yyyy] [name of copyright owner]
18.\"
19.\" CDDL HEADER END
20.\"
21.\"
058ac9ba 22.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
a448a255 23.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved.
df831108 24.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
bec1067d 25.\" Copyright (c) 2017 Datto Inc.
eb201f50 26.\" Copyright (c) 2018 George Melikov. All Rights Reserved.
d7323e79 27.\" Copyright 2017 Nexenta Systems, Inc.
d3f2cd7e 28.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved.
9ae529ec 29.\"
7c9a4292 30.Dd November 29, 2018
.Dt ZPOOL 8 SMM
.Os Linux
.Sh NAME
.Nm zpool
.Nd configure ZFS storage pools
.Sh SYNOPSIS
.Nm
.Fl ?
.Nm
.Cm add
.Op Fl fgLnP
.Oo Fl o Ar property Ns = Ns Ar value Oc
.Ar pool vdev Ns ...
.Nm
.Cm attach
.Op Fl f
.Oo Fl o Ar property Ns = Ns Ar value Oc
.Ar pool device new_device
.Nm
.Cm checkpoint
.Op Fl d, -discard
.Ar pool
.Nm
.Cm clear
.Ar pool
.Op Ar device
.Nm
.Cm create
.Op Fl dfn
.Op Fl m Ar mountpoint
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Oo Fl o Ar feature@feature Ns = Ns Ar value Oc Ns ...
.Oo Fl O Ar file-system-property Ns = Ns Ar value Oc Ns ...
.Op Fl R Ar root
.Op Fl t Ar tname
.Ar pool vdev Ns ...
.Nm
.Cm destroy
.Op Fl f
.Ar pool
.Nm
.Cm detach
.Ar pool device
.Nm
.Cm events
.Op Fl vHf Oo Ar pool Oc | Fl c
.Nm
.Cm export
.Op Fl a
.Op Fl f
.Ar pool Ns ...
.Nm
.Cm get
.Op Fl Hp
.Op Fl o Ar field Ns Oo , Ns Ar field Oc Ns ...
.Sy all Ns | Ns Ar property Ns Oo , Ns Ar property Oc Ns ...
.Oo Ar pool Oc Ns ...
.Nm
.Cm history
.Op Fl il
.Oo Ar pool Oc Ns ...
.Nm
.Cm import
.Op Fl D
.Op Fl d Ar dir Ns | Ns Ar device
.Nm
.Cm import
.Fl a
.Op Fl DflmN
.Op Fl F Oo Fl n Oc Oo Fl T Oc Oo Fl X Oc
.Op Fl -rewind-to-checkpoint
.Op Fl c Ar cachefile Ns | Ns Fl d Ar dir Ns | Ns Ar device
.Op Fl o Ar mntopts
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Op Fl R Ar root
.Op Fl s
.Nm
.Cm import
.Op Fl Dflm
.Op Fl F Oo Fl n Oc Oo Fl T Oc Oo Fl X Oc
.Op Fl -rewind-to-checkpoint
.Op Fl c Ar cachefile Ns | Ns Fl d Ar dir Ns | Ns Ar device
.Op Fl o Ar mntopts
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Op Fl R Ar root
.Op Fl s
.Ar pool Ns | Ns Ar id
.Op Ar newpool Oo Fl t Oc
.Nm
.Cm initialize
.Op Fl c | Fl s
.Ar pool
.Op Ar device Ns ...
.Nm
.Cm iostat
.Op Oo Oo Fl c Ar SCRIPT Oc Oo Fl lq Oc Oc Ns | Ns Fl rw
.Op Fl T Sy u Ns | Ns Sy d
.Op Fl ghHLnpPvy
.Oo Oo Ar pool Ns ... Oc Ns | Ns Oo Ar pool vdev Ns ... Oc Ns | Ns Oo Ar vdev Ns ... Oc Oc
.Op Ar interval Op Ar count
.Nm
.Cm labelclear
.Op Fl f
.Ar device
.Nm
.Cm list
.Op Fl HgLpPv
.Op Fl o Ar property Ns Oo , Ns Ar property Oc Ns ...
.Op Fl T Sy u Ns | Ns Sy d
.Oo Ar pool Oc Ns ...
.Op Ar interval Op Ar count
.Nm
.Cm offline
.Op Fl f
.Op Fl t
.Ar pool Ar device Ns ...
.Nm
.Cm online
.Op Fl e
.Ar pool Ar device Ns ...
.Nm
.Cm reguid
.Ar pool
.Nm
.Cm reopen
.Op Fl n
.Ar pool
.Nm
.Cm remove
.Op Fl np
.Ar pool Ar device Ns ...
.Nm
.Cm remove
.Fl s
.Ar pool
.Nm
.Cm replace
.Op Fl f
.Oo Fl o Ar property Ns = Ns Ar value Oc
.Ar pool Ar device Op Ar new_device
.Nm
.Cm resilver
.Ar pool Ns ...
.Nm
.Cm scrub
.Op Fl s | Fl p
.Ar pool Ns ...
.Nm
.Cm set
.Ar property Ns = Ns Ar value
.Ar pool
.Nm
.Cm split
.Op Fl gLlnP
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Op Fl R Ar root
.Ar pool newpool
.Oo Ar device Oc Ns ...
.Nm
.Cm status
.Oo Fl c Ar SCRIPT Oc
.Op Fl DigLpPsvx
.Op Fl T Sy u Ns | Ns Sy d
.Oo Ar pool Oc Ns ...
.Op Ar interval Op Ar count
.Nm
.Cm sync
.Oo Ar pool Oc Ns ...
.Nm
.Cm upgrade
.Nm
.Cm upgrade
.Fl v
.Nm
.Cm upgrade
.Op Fl V Ar version
.Fl a Ns | Ns Ar pool Ns ...
.Sh DESCRIPTION
The
.Nm
command configures ZFS storage pools.
A storage pool is a collection of devices that provides physical storage and
data replication for ZFS datasets.
All datasets within a storage pool share the same space.
See
.Xr zfs 8
for information on managing datasets.
.Ss Virtual Devices (vdevs)
A "virtual device" describes a single device or a collection of devices
organized according to certain performance and fault characteristics.
The following virtual devices are supported:
.Bl -tag -width Ds
.It Sy disk
A block device, typically located under
.Pa /dev .
ZFS can use individual slices or partitions, though the recommended mode of
operation is to use whole disks.
A disk can be specified by a full path, or it can be a shorthand name
.Po the relative portion of the path under
.Pa /dev
.Pc .
A whole disk can be specified by omitting the slice or partition designation.
For example,
.Pa sda
is equivalent to
.Pa /dev/sda .
When given a whole disk, ZFS automatically labels the disk, if necessary.
.It Sy file
A regular file.
The use of files as a backing store is strongly discouraged.
It is designed primarily for experimental purposes, as the fault tolerance of a
file is only as good as the file system of which it is a part.
A file must be specified by a full path.
.It Sy mirror
A mirror of two or more devices.
Data is replicated in an identical fashion across all components of a mirror.
A mirror with N disks of size X can hold X bytes and can withstand (N-1)
devices failing before data integrity is compromised.
.It Sy raidz , raidz1 , raidz2 , raidz3
A variation on RAID-5 that allows for better distribution of parity and
eliminates the RAID-5
.Qq write hole
.Pq in which data and parity become inconsistent after a power loss .
Data and parity are striped across all disks within a raidz group.
.Pp
A raidz group can have single-, double-, or triple-parity, meaning that the
raidz group can sustain one, two, or three failures, respectively, without
losing any data.
The
.Sy raidz1
vdev type specifies a single-parity raidz group; the
.Sy raidz2
vdev type specifies a double-parity raidz group; and the
.Sy raidz3
vdev type specifies a triple-parity raidz group.
The
.Sy raidz
vdev type is an alias for
.Sy raidz1 .
.Pp
A raidz group with N disks of size X with P parity disks can hold approximately
(N-P)*X bytes and can withstand P device(s) failing before data integrity is
compromised.
The minimum number of devices in a raidz group is one more than the number of
parity disks.
The recommended number is between 3 and 9 to help increase performance.
.It Sy spare
A special pseudo-vdev which keeps track of available hot spares for a pool.
For more information, see the
.Sx Hot Spares
section.
.It Sy log
A separate intent log device.
If more than one log device is specified, then writes are load-balanced between
devices.
Log devices can be mirrored.
However, raidz vdev types are not supported for the intent log.
For more information, see the
.Sx Intent Log
section.
.It Sy dedup
A device dedicated solely for allocating dedup data.
The redundancy of this device should match the redundancy of the other normal
devices in the pool.
If more than one dedup device is specified, then allocations are load-balanced
between devices.
.It Sy special
A device dedicated solely for allocating various kinds of internal metadata,
and optionally small file data.
The redundancy of this device should match the redundancy of the other normal
devices in the pool.
If more than one special device is specified, then allocations are
load-balanced between devices.
.Pp
For more information on special allocations, see the
.Sx Special Allocation Class
section.
.It Sy cache
A device used to cache storage pool data.
A cache device cannot be configured as a mirror or raidz group.
For more information, see the
.Sx Cache Devices
section.
.El
.Pp
Virtual devices cannot be nested, so a mirror or raidz virtual device can only
contain files or disks.
Mirrors of mirrors
.Pq or other combinations
are not allowed.
.Pp
A pool can have any number of virtual devices at the top of the configuration
.Po known as
.Qq root vdevs
.Pc .
Data is dynamically distributed across all top-level devices to balance data
among devices.
As new virtual devices are added, ZFS automatically places data on the newly
available devices.
.Pp
Virtual devices are specified one at a time on the command line, separated by
whitespace.
The keywords
.Sy mirror
and
.Sy raidz
are used to distinguish where a group ends and another begins.
For example, the following creates two root vdevs, each a mirror of two disks:
.Bd -literal
# zpool create mypool mirror sda sdb mirror sdc sdd
.Ed
.Ss Device Failure and Recovery
ZFS supports a rich set of mechanisms for handling device failure and data
corruption.
All metadata and data is checksummed, and ZFS automatically repairs bad data
from a good copy when corruption is detected.
.Pp
In order to take advantage of these features, a pool must make use of some form
of redundancy, using either mirrored or raidz groups.
While ZFS supports running in a non-redundant configuration, where each root
vdev is simply a disk or file, this is strongly discouraged.
A single case of bit corruption can render some or all of your data
unavailable.
.Pp
A pool's health status is described by one of three states: online, degraded,
or faulted.
An online pool has all devices operating normally.
A degraded pool is one in which one or more devices have failed, but the data
is still available due to a redundant configuration.
A faulted pool has corrupted metadata, or one or more faulted devices, and
insufficient replicas to continue functioning.
.Pp
The health of the top-level vdev, such as a mirror or raidz device, is
potentially impacted by the state of its associated vdevs, or component
devices.
A top-level vdev or component device is in one of the following states:
.Bl -tag -width "DEGRADED"
.It Sy DEGRADED
One or more top-level vdevs is in the degraded state because one or more
component devices are offline.
Sufficient replicas exist to continue functioning.
.Pp
One or more component devices is in the degraded or faulted state, but
sufficient replicas exist to continue functioning.
The underlying conditions are as follows:
.Bl -bullet
.It
The number of checksum errors exceeds acceptable levels and the device is
degraded as an indication that something may be wrong.
ZFS continues to use the device as necessary.
.It
The number of I/O errors exceeds acceptable levels.
The device could not be marked as faulted because there are insufficient
replicas to continue functioning.
.El
.It Sy FAULTED
One or more top-level vdevs is in the faulted state because one or more
component devices are offline.
Insufficient replicas exist to continue functioning.
.Pp
One or more component devices is in the faulted state, and insufficient
replicas exist to continue functioning.
The underlying conditions are as follows:
.Bl -bullet
.It
The device could be opened, but the contents did not match expected values.
.It
The number of I/O errors exceeds acceptable levels and the device is faulted to
prevent further use of the device.
.El
.It Sy OFFLINE
The device was explicitly taken offline by the
.Nm zpool Cm offline
command.
.It Sy ONLINE
The device is online and functioning.
.It Sy REMOVED
The device was physically removed while the system was running.
Device removal detection is hardware-dependent and may not be supported on all
platforms.
.It Sy UNAVAIL
The device could not be opened.
If a pool is imported when a device was unavailable, then the device will be
identified by a unique identifier instead of its path since the path was never
correct in the first place.
.El
.Pp
If a device is removed and later re-attached to the system, ZFS attempts
to put the device online automatically.
Device attach detection is hardware-dependent and might not be supported on all
platforms.
.Ss Hot Spares
ZFS allows devices to be associated with pools as
.Qq hot spares .
These devices are not actively used in the pool, but when an active device
fails, it is automatically replaced by a hot spare.
To create a pool with hot spares, specify a
.Sy spare
vdev with any number of devices.
For example,
.Bd -literal
# zpool create pool mirror sda sdb spare sdc sdd
.Ed
.Pp
Spares can be shared across multiple pools, and can be added with the
.Nm zpool Cm add
command and removed with the
.Nm zpool Cm remove
command.
Once a spare replacement is initiated, a new
.Sy spare
vdev is created within the configuration that will remain there until the
original device is replaced.
At this point, the hot spare becomes available again if another device fails.
.Pp
If a pool has a shared spare that is currently being used, the pool can not be
exported since other pools may use this shared spare, which may lead to
potential data corruption.
.Pp
An in-progress spare replacement can be cancelled by detaching the hot spare.
If the original faulted device is detached, then the hot spare assumes its
place in the configuration, and is removed from the spare list of all active
pools.
.Pp
Spares cannot replace log devices.
.Ss Intent Log
The ZFS Intent Log (ZIL) satisfies POSIX requirements for synchronous
transactions.
For instance, databases often require their transactions to be on stable
storage devices when returning from a system call.
NFS and other applications can also use
.Xr fsync 2
to ensure data stability.
By default, the intent log is allocated from blocks within the main pool.
However, it might be possible to get better performance using separate intent
log devices such as NVRAM or a dedicated disk.
For example:
.Bd -literal
# zpool create pool sda sdb log sdc
.Ed
.Pp
Multiple log devices can also be specified, and they can be mirrored.
See the
.Sx EXAMPLES
section for an example of mirroring multiple log devices.
.Pp
Log devices can be added, replaced, attached, detached and removed.
In addition, log devices are imported and exported as part of the pool
that contains them.
Mirrored devices can be removed by specifying the top-level mirror vdev.
.Ss Cache Devices
Devices can be added to a storage pool as
.Qq cache devices .
These devices provide an additional layer of caching between main memory and
disk.
For read-heavy workloads, where the working set size is much larger than what
can be cached in main memory, using cache devices allows much more of this
working set to be served from low latency media.
Using cache devices provides the greatest performance improvement for random
read-workloads of mostly static content.
.Pp
To create a pool with cache devices, specify a
.Sy cache
vdev with any number of devices.
For example:
.Bd -literal
# zpool create pool sda sdb cache sdc sdd
.Ed
.Pp
Cache devices cannot be mirrored or part of a raidz configuration.
If a read error is encountered on a cache device, that read I/O is reissued to
the original storage pool device, which might be part of a mirrored or raidz
configuration.
.Pp
The content of the cache devices is considered volatile, as is the case with
other system caches.
.Ss Pool checkpoint
Before starting critical procedures that include destructive actions (e.g.
.Nm zfs Cm destroy
), an administrator can checkpoint the pool's state and, in the case of a
mistake or failure, rewind the entire pool back to the checkpoint.
Otherwise, the checkpoint can be discarded when the procedure has completed
successfully.
.Pp
A pool checkpoint can be thought of as a pool-wide snapshot and should be used
with care as it contains every part of the pool's state, from properties to
vdev configuration.
Thus, while a pool has a checkpoint certain operations are not allowed.
Specifically, vdev removal/attach/detach, mirror splitting, and
changing the pool's guid.
Adding a new vdev is supported but in the case of a rewind it will have to be
added again.
Finally, users of this feature should keep in mind that scrubs in a pool that
has a checkpoint do not repair checkpointed data.
.Pp
To create a checkpoint for a pool:
.Bd -literal
# zpool checkpoint pool
.Ed
.Pp
To later rewind a pool to its checkpointed state, first export it and then
rewind it during import:
.Bd -literal
# zpool export pool
# zpool import --rewind-to-checkpoint pool
.Ed
.Pp
To discard the checkpoint from a pool:
.Bd -literal
# zpool checkpoint -d pool
.Ed
.Pp
Dataset reservations (controlled by the
.Nm reservation
or
.Nm refreservation
zfs properties) may be unenforceable while a checkpoint exists, because the
checkpoint is allowed to consume the dataset's reservation.
Finally, data that is part of the checkpoint but has been freed in the
current state of the pool won't be scanned during a scrub.
.Ss Special Allocation Class
The allocations in the special class are dedicated to specific block types.
By default this includes all metadata, the indirect blocks of user data, and
any dedup data.
The class can also be provisioned to accept a limited percentage of small file
data blocks.
.Pp
A pool must always have at least one general (non-specified) vdev before
other devices can be assigned to the special class.
If the special class becomes full, then allocations intended for it will spill
back into the normal class.
.Pp
Dedup data can be excluded from the special class by setting the
.Sy zfs_ddt_data_is_special
zfs module parameter to false (0).
.Pp
Inclusion of small file blocks in the special class is opt-in.
Each dataset can control the size of small file blocks allowed in the special
class by setting the
.Sy special_small_blocks
dataset property.
It defaults to zero so you must opt-in by setting it to a non-zero value.
See
.Xr zfs 8
for more info on setting this property.
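.Pp
For example, a pool with a mirrored special vdev might be created, and small
file blocks up to 4KiB then allowed into that class, as follows (device names
are illustrative):
.Bd -literal
# zpool create pool raidz sda sdb sdc special mirror sdd sde
# zfs set special_small_blocks=4K pool
.Ed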
.Ss Properties
Each pool has several properties associated with it.
Some properties are read-only statistics while others are configurable and
change the behavior of the pool.
.Pp
The following are read-only properties:
.Bl -tag -width Ds
.It Cm allocated
Amount of storage used within the pool.
.It Sy capacity
Percentage of pool space used.
This property can also be referred to by its shortened column name,
.Sy cap .
.It Sy expandsize
Amount of uninitialized space within the pool or device that can be used to
increase the total capacity of the pool.
Uninitialized space consists of any space on an EFI labeled vdev which has not
been brought online
.Po e.g., using
.Nm zpool Cm online Fl e
.Pc .
This space occurs when a LUN is dynamically expanded.
.It Sy fragmentation
The amount of fragmentation in the pool.
.It Sy free
The amount of free space available in the pool.
.It Sy freeing
After a file system or snapshot is destroyed, the space it was using is
returned to the pool asynchronously.
.Sy freeing
is the amount of space remaining to be reclaimed.
Over time
.Sy freeing
will decrease while
.Sy free
increases.
.It Sy health
The current health of the pool.
Health can be one of
.Sy ONLINE , DEGRADED , FAULTED , OFFLINE , REMOVED , UNAVAIL .
.It Sy guid
A unique identifier for the pool.
.It Sy load_guid
A unique identifier for the pool.
Unlike the
.Sy guid
property, this identifier is generated every time we load the pool (i.e. it
does not persist across imports/exports) and never changes while the pool is
loaded (even if a
.Sy reguid
operation takes place).
.It Sy size
Total size of the storage pool.
.It Sy unsupported@ Ns Em feature_guid
Information about unsupported features that are enabled on the pool.
See
.Xr zpool-features 5
for details.
.El
.Pp
The space usage properties report actual physical space available to the
storage pool.
The physical space can be different from the total amount of space that any
contained datasets can actually use.
The amount of space used in a raidz configuration depends on the
characteristics of the data being written.
In addition, ZFS reserves some space for internal accounting that the
.Xr zfs 8
command takes into account, but the
.Nm
command does not.
For non-full pools of a reasonable size, these effects should be invisible.
For small pools, or pools that are close to being completely full, these
discrepancies may become more noticeable.
.Pp
The following property can be set at creation time and import time:
.Bl -tag -width Ds
.It Sy altroot
Alternate root directory.
If set, this directory is prepended to any mount points within the pool.
This can be used when examining an unknown pool where the mount points cannot
be trusted, or in an alternate boot environment, where the typical paths are
not valid.
.Sy altroot
is not a persistent property.
It is valid only while the system is up.
Setting
.Sy altroot
defaults to using
.Sy cachefile Ns = Ns Sy none ,
though this may be overridden using an explicit setting.
.El
.Pp
The following property can be set only at import time:
.Bl -tag -width Ds
.It Sy readonly Ns = Ns Sy on Ns | Ns Sy off
If set to
.Sy on ,
the pool will be imported in read-only mode.
This property can also be referred to by its shortened column name,
.Sy rdonly .
.El
.Pp
The following properties can be set at creation time and import time, and
later changed with the
.Nm zpool Cm set
command:
.Bl -tag -width Ds
.It Sy ashift Ns = Ns Sy ashift
Pool sector size exponent, to the power of
.Sy 2
(internally referred to as
.Sy ashift
). Values from 9 to 16, inclusive, are valid; also, the special
value 0 (the default) means to auto-detect using the kernel's block
layer and a ZFS internal exception list.
I/O operations will be aligned to the specified size boundaries.
Additionally, the minimum (disk) write size will be set to the specified size,
so this represents a space vs. performance trade-off.
For optimal performance, the pool sector size should be greater than or equal
to the sector size of the underlying disks.
The typical case for setting this property is when performance is important
and the underlying disks use 4KiB sectors but report 512B sectors to the OS
(for compatibility reasons); in that case, set
.Sy ashift=12
(which is 1<<12 = 4096).
When set, this property is used as the default hint value in subsequent vdev
operations (add, attach and replace).
Changing this value will not modify any existing vdev, not even on disk
replacement; however it can be used, for instance, to replace a dying
512B-sector disk with a newer 4KiB-sector device: this will probably result
in bad performance but at the same time could prevent loss of data.
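.Pp
For example, on disks that are known to use 4KiB sectors internally while
reporting 512B sectors to the OS, a pool might be created as follows (device
names are illustrative):
.Bd -literal
# zpool create -o ashift=12 pool mirror sda sdb
.Ed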
.It Sy autoexpand Ns = Ns Sy on Ns | Ns Sy off
Controls automatic pool expansion when the underlying LUN is grown.
If set to
.Sy on ,
the pool will be resized according to the size of the expanded device.
If the device is part of a mirror or raidz then all devices within that
mirror/raidz group must be expanded before the new space is made available to
the pool.
The default behavior is
.Sy off .
This property can also be referred to by its shortened column name,
.Sy expand .
.It Sy autoreplace Ns = Ns Sy on Ns | Ns Sy off
Controls automatic device replacement.
If set to
.Sy off ,
device replacement must be initiated by the administrator by using the
.Nm zpool Cm replace
command.
If set to
.Sy on ,
any new device, found in the same physical location as a device that previously
belonged to the pool, is automatically formatted and replaced.
The default behavior is
.Sy off .
This property can also be referred to by its shortened column name,
.Sy replace .
Autoreplace can also be used with virtual disks (like device
mapper) provided that you use the /dev/disk/by-vdev paths set up by
vdev_id.conf.
See the
.Xr vdev_id 8
man page for more details.
Autoreplace and autoonline require the ZFS Event Daemon be configured and
running.
See the
.Xr zed 8
man page for more details.
.It Sy bootfs Ns = Ns Sy (unset) Ns | Ns Ar pool Ns / Ns Ar dataset
Identifies the default bootable dataset for the root pool.
This property is expected to be set mainly by the installation and upgrade
programs.
Not all Linux distribution boot processes use the bootfs property.
.It Sy cachefile Ns = Ns Ar path Ns | Ns Sy none
Controls where the pool configuration is cached.
Discovering all pools on system startup requires a cached copy of the
configuration data that is stored on the root file system.
All pools in this cache are automatically imported when the system boots.
Some environments, such as install and clustering, need to cache this
information in a different location so that pools are not automatically
imported.
Setting this property caches the pool configuration in a different location
that can later be imported with
.Nm zpool Cm import Fl c .
Setting it to the special value
.Sy none
creates a temporary pool that is never cached, and the special value
.Qq
.Pq empty string
uses the default location.
.Pp
Multiple pools can share the same cache file.
Because the kernel destroys and recreates this file when pools are added and
removed, care should be taken when attempting to access this file.
When the last pool using a
.Sy cachefile
is exported or destroyed, the file will be empty.
.It Sy comment Ns = Ns Ar text
A text string consisting of printable ASCII characters that will be stored
such that it is available even if the pool becomes faulted.
An administrator can provide additional information about a pool using this
property.
.It Sy dedupditto Ns = Ns Ar number
Threshold for the number of block ditto copies.
If the reference count for a deduplicated block increases above this number, a
new ditto copy of this block is automatically stored.
The default setting is
.Sy 0
which causes no ditto copies to be created for deduplicated blocks.
The minimum legal nonzero setting is
.Sy 100 .
.It Sy delegation Ns = Ns Sy on Ns | Ns Sy off
Controls whether a non-privileged user is granted access based on the dataset
permissions defined on the dataset.
See
.Xr zfs 8
for more information on ZFS delegated administration.
.It Sy failmode Ns = Ns Sy wait Ns | Ns Sy continue Ns | Ns Sy panic
Controls the system behavior in the event of catastrophic pool failure.
This condition is typically a result of a loss of connectivity to the
underlying storage device(s) or a failure of all devices within the pool.
The behavior of such an event is determined as follows:
.Bl -tag -width "continue"
.It Sy wait
Blocks all I/O access until the device connectivity is recovered and the
errors are cleared.
This is the default behavior.
.It Sy continue
Returns
.Er EIO
to any new write I/O requests but allows reads to any of the remaining healthy
devices.
Any write requests that have yet to be committed to disk would be blocked.
.It Sy panic
Prints out a message to the console and generates a system crash dump.
.El
.It Sy feature@ Ns Ar feature_name Ns = Ns Sy enabled
The value of this property is the current state of
.Ar feature_name .
The only valid value when setting this property is
.Sy enabled
which moves
.Ar feature_name
to the enabled state.
See
.Xr zpool-features 5
for details on feature states.
.It Sy listsnapshots Ns = Ns Sy on Ns | Ns Sy off
Controls whether information about snapshots associated with this pool is
output when
.Nm zfs Cm list
is run without the
.Fl t
option.
The default value is
.Sy off .
This property can also be referred to by its shortened name,
.Sy listsnaps .
.It Sy multihost Ns = Ns Sy on Ns | Ns Sy off
Controls whether a pool activity check should be performed during
.Nm zpool Cm import .
When a pool is determined to be active it cannot be imported, even with the
.Fl f
option.
This property is intended to be used in failover configurations where multiple
hosts have access to a pool on shared storage.
When this property is on, periodic writes to storage occur to show the pool is
in use.
See
.Sy zfs_multihost_interval
in the
.Xr zfs-module-parameters 5
man page.
In order to enable this property each host must set a unique hostid.
See
.Xr genhostid 1 ,
.Xr zgenhostid 8 ,
and
.Xr spl-module-parameters 5
for additional details.
The default value is
.Sy off .
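.Pp
For example, one possible sequence on each host that shares the pool, first
generating a hostid and then enabling the activity check (pool name is
illustrative):
.Bd -literal
# zgenhostid
# zpool set multihost=on tank
.Ed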
.It Sy version Ns = Ns Ar version
The current on-disk version of the pool.
This can be increased, but never decreased.
The preferred method of updating pools is with the
.Nm zpool Cm upgrade
command, though this property can be used when a specific version is needed
for backwards compatibility.
Once feature flags are enabled on a pool this property will no longer have a
value.
.El
.Ss Subcommands
All subcommands that modify state are logged persistently to the pool in their
original form.
.Pp
The
.Nm
command provides subcommands to create and destroy storage pools, add capacity
to storage pools, and provide information about the storage pools.
The following subcommands are supported:
.Bl -tag -width Ds
.It Xo
.Nm
.Fl ?
.Xc
Displays a help message.
.It Xo
.Nm
.Cm add
.Op Fl fgLnP
.Oo Fl o Ar property Ns = Ns Ar value Oc
.Ar pool vdev Ns ...
.Xc
Adds the specified virtual devices to the given pool.
The
.Ar vdev
specification is described in the
.Sx Virtual Devices
section.
The behavior of the
.Fl f
option, and the device checks performed are described in the
.Nm zpool Cm create
subcommand.
.Bl -tag -width Ds
.It Fl f
Forces use of
.Ar vdev Ns s ,
even if they appear in use or specify a conflicting replication level.
Not all devices can be overridden in this manner.
.It Fl g
Display
.Ar vdev
GUIDs instead of the normal device names.
These GUIDs can be used in place of device names for the zpool
detach/offline/remove/replace commands.
.It Fl L
Display real paths for
.Ar vdev Ns s
resolving all symbolic links.
This can be used to look up the current block device name regardless of the
/dev/disk/ path used to open it.
.It Fl n
Displays the configuration that would be used without actually adding the
.Ar vdev Ns s .
The actual pool creation can still fail due to insufficient privileges or
device sharing.
.It Fl P
Display real paths for
.Ar vdev Ns s
instead of only the last component of the path.
This can be used in conjunction with the
.Fl L
flag.
.It Fl o Ar property Ns = Ns Ar value
Sets the given pool properties.
See the
.Sx Properties
section for a list of valid properties that can be set.
The only property supported at the moment is ashift.
.El
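.Pp
For example, a third mirrored top-level vdev might be added to an existing
pool of mirrors (pool and device names are illustrative):
.Bd -literal
# zpool add mypool mirror sde sdf
.Ed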
.It Xo
.Nm
.Cm attach
.Op Fl f
.Oo Fl o Ar property Ns = Ns Ar value Oc
.Ar pool device new_device
.Xc
Attaches
.Ar new_device
to the existing
.Ar device .
The existing device cannot be part of a raidz configuration.
If
.Ar device
is not currently part of a mirrored configuration,
.Ar device
automatically transforms into a two-way mirror of
.Ar device
and
.Ar new_device .
If
.Ar device
is part of a two-way mirror, attaching
.Ar new_device
creates a three-way mirror, and so on.
In either case,
.Ar new_device
begins to resilver immediately.
.Bl -tag -width Ds
.It Fl f
Forces use of
.Ar new_device ,
even if it appears to be in use.
Not all devices can be overridden in this manner.
.It Fl o Ar property Ns = Ns Ar value
Sets the given pool properties.
See the
.Sx Properties
section for a list of valid properties that can be set.
The only property supported at the moment is ashift.
.El
.It Xo
.Nm
.Cm checkpoint
.Op Fl d, -discard
.Ar pool
.Xc
Checkpoints the current state of
.Ar pool ,
which can be later restored by
.Nm zpool Cm import --rewind-to-checkpoint .
The existence of a checkpoint in a pool prohibits the following
.Nm zpool
commands:
.Cm remove ,
.Cm attach ,
.Cm detach ,
.Cm split ,
and
.Cm reguid .
In addition, it may break reservation boundaries if the pool lacks free
space.
The
.Nm zpool Cm status
command indicates the existence of a checkpoint or the progress of discarding a
checkpoint from a pool.
The
.Nm zpool Cm list
command reports how much space the checkpoint takes from the pool.
.Bl -tag -width Ds
.It Fl d, -discard
Discards an existing checkpoint from
.Ar pool .
.El
.It Xo
.Nm
.Cm clear
.Ar pool
.Op Ar device
.Xc
Clears device errors in a pool.
If no arguments are specified, all device errors within the pool are cleared.
If one or more devices is specified, only those errors associated with the
specified device or devices are cleared.
.It Xo
.Nm
.Cm create
.Op Fl dfn
.Op Fl m Ar mountpoint
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Oo Fl o Ar feature@feature Ns = Ns Ar value Oc Ns ...
.Oo Fl O Ar file-system-property Ns = Ns Ar value Oc Ns ...
.Op Fl R Ar root
.Op Fl t Ar tname
.Ar pool vdev Ns ...
.Xc
Creates a new storage pool containing the virtual devices specified on the
command line.
The pool name must begin with a letter, and can only contain
alphanumeric characters as well as underscore
.Pq Qq Sy _ ,
dash
.Pq Qq Sy \&- ,
colon
.Pq Qq Sy \&: ,
space
.Pq Qq Sy \&\ ,
and period
.Pq Qq Sy \&. .
The pool names
.Sy mirror ,
.Sy raidz ,
.Sy spare
and
.Sy log
are reserved, as are names beginning with
.Sy mirror ,
.Sy raidz ,
.Sy spare ,
and the pattern
.Sy c[0-9] .
The
.Ar vdev
specification is described in the
.Sx Virtual Devices
section.
.Pp
The command verifies that each device specified is accessible and not currently
in use by another subsystem.
There are some uses, such as being currently mounted, or specified as the
dedicated dump device, that prevent a device from ever being used by ZFS.
Other uses, such as having a preexisting UFS file system, can be overridden
with the
.Fl f
option.
.Pp
The command also checks that the replication strategy for the pool is
consistent.
An attempt to combine redundant and non-redundant storage in a single pool, or
to mix disks and files, results in an error unless
.Fl f
is specified.
The use of differently sized devices within a single raidz or mirror group is
also flagged as an error unless
.Fl f
is specified.
.Pp
Unless the
.Fl R
option is specified, the default mount point is
.Pa / Ns Ar pool .
The mount point must not exist or must be empty, or else the root dataset
cannot be mounted.
This can be overridden with the
.Fl m
option.
.Pp
By default all supported features are enabled on the new pool unless the
.Fl d
option is specified.
.Bl -tag -width Ds
.It Fl d
Do not enable any features on the new pool.
Individual features can be enabled by setting their corresponding properties
to
.Sy enabled
with the
.Fl o
option.
See
.Xr zpool-features 5
for details about feature properties.
.It Fl f
Forces use of
.Ar vdev Ns s ,
even if they appear in use or specify a conflicting replication level.
Not all devices can be overridden in this manner.
.It Fl m Ar mountpoint
Sets the mount point for the root dataset.
The default mount point is
.Pa /pool
or
.Pa altroot/pool
if
.Ar altroot
is specified.
The mount point must be an absolute path,
.Sy legacy ,
or
.Sy none .
For more information on dataset mount points, see
.Xr zfs 8 .
.It Fl n
Displays the configuration that would be used without actually creating the
pool.
The actual pool creation can still fail due to insufficient privileges or
device sharing.
.It Fl o Ar property Ns = Ns Ar value
Sets the given pool properties.
See the
.Sx Properties
section for a list of valid properties that can be set.
.It Fl o Ar feature@feature Ns = Ns Ar value
Sets the given pool feature.
See
.Xr zpool-features 5
for a list of valid features that can be set.
Value can be either disabled or enabled.
.It Fl O Ar file-system-property Ns = Ns Ar value
Sets the given file system properties in the root file system of the pool.
See the
.Sx Properties
section of
.Xr zfs 8
for a list of valid properties that can be set.
.It Fl R Ar root
Equivalent to
.Fl o Sy cachefile Ns = Ns Sy none Fl o Sy altroot Ns = Ns Ar root
.It Fl t Ar tname
Sets the in-core pool name to
.Sy tname
while the on-disk name will be the name specified as the pool name
.Sy pool .
This will set the default cachefile property to none.
This is intended to handle name space collisions when creating pools for other
systems, such as virtual machines or physical machines whose pools live on
network block devices.
.El
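.Pp
For example, a pool whose on-disk name is
.Sy data
might be created under a temporary in-core name (device name is illustrative):
.Bd -literal
# zpool create -t tempdata data sda
.Ed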
.It Xo
.Nm
.Cm destroy
.Op Fl f
.Ar pool
.Xc
Destroys the given pool, freeing up any devices for other use.
This command tries to unmount any active datasets before destroying the pool.
.Bl -tag -width Ds
.It Fl f
Forces any active datasets contained within the pool to be unmounted.
.El
.It Xo
.Nm
.Cm detach
.Ar pool device
.Xc
Detaches
.Ar device
from a mirror.
The operation is refused if there are no other valid replicas of the data.
If the device may be re-added to the pool later on, then consider the
.Nm zpool Cm offline
command instead.
.It Xo
.Nm
.Cm events
.Op Fl vHf Oo Ar pool Oc | Fl c
.Xc
Lists all recent events generated by the ZFS kernel modules.
These events are consumed by the
.Xr zed 8
daemon and used to automate administrative tasks such as replacing a failed
device with a hot spare.
For more information about the subclasses and event payloads that can be
generated see the
.Xr zfs-events 5
man page.
.Bl -tag -width Ds
.It Fl c
Clear all previous events.
.It Fl f
Follow mode.
.It Fl H
Scripted mode.
Do not display headers, and separate fields by a single tab instead of
arbitrary space.
.It Fl v
Print the entire payload for each event.
.El
.It Xo
.Nm
.Cm export
.Op Fl a
.Op Fl f
.Ar pool Ns ...
.Xc
Exports the given pools from the system.
All devices are marked as exported, but are still considered in use by other
subsystems.
The devices can be moved between systems
.Pq even those of different endianness
and imported as long as a sufficient number of devices are present.
.Pp
Before exporting the pool, all datasets within the pool are unmounted.
A pool can not be exported if it has a shared spare that is currently being
used.
.Pp
For pools to be portable, you must give the
.Nm
command whole disks, not just partitions, so that ZFS can label the disks with
portable EFI labels.
Otherwise, disk drivers on platforms of different endianness will not
recognize the disks.
.Bl -tag -width Ds
.It Fl a
Exports all pools imported on the system.
.It Fl f
Forcefully unmount all datasets, using the
.Nm unmount Fl f
command.
.Pp
This command will forcefully export the pool even if it has a shared spare
that is currently being used.
This may lead to potential data corruption.
.El
.It Xo
.Nm
.Cm get
.Op Fl Hp
.Op Fl o Ar field Ns Oo , Ns Ar field Oc Ns ...
.Sy all Ns | Ns Ar property Ns Oo , Ns Ar property Oc Ns ...
.Oo Ar pool Oc Ns ...
.Xc
Retrieves the given list of properties
.Po
or all properties if
.Sy all
is used
.Pc
for the specified storage pool(s).
These properties are displayed with the following fields:
.Bd -literal
        name          Name of storage pool
        property      Property name
        value         Property value
        source        Property source, either 'default' or 'local'.
.Ed
.Pp
See the
.Sx Properties
section for more information on the available pool properties.
.Bl -tag -width Ds
.It Fl H
Scripted mode.
Do not display headers, and separate fields by a single tab instead of
arbitrary space.
.It Fl o Ar field
A comma-separated list of columns to display.
.Sy name Ns \&, Ns Sy property Ns \&, Ns Sy value Ns \&, Ns Sy source
is the default value.
.It Fl p
Display numbers in parsable (exact) values.
.El
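.Pp
For example, to print only a pool's capacity in a script-friendly form (pool
name is illustrative):
.Bd -literal
# zpool get -H -o value capacity tank
.Ed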
.It Xo
.Nm
.Cm history
.Op Fl il
.Oo Ar pool Oc Ns ...
.Xc
Displays the command history of the specified pool(s) or all pools if no pool
is specified.
.Bl -tag -width Ds
.It Fl i
Displays internally logged ZFS events in addition to user initiated events.
.It Fl l
Displays log records in long format, which in addition to standard format
includes the user name, the hostname, and the zone in which the operation was
performed.
.El
.It Xo
.Nm
.Cm import
.Op Fl D
.Op Fl d Ar dir Ns | Ns Ar device
.Xc
Lists pools available to import.
If the
.Fl d
option is not specified, this command searches for devices in
.Pa /dev .
The
.Fl d
option can be specified multiple times, and all directories are searched.
If the device appears to be part of an exported pool, this command displays a
summary of the pool with the name of the pool, a numeric identifier, the vdev
layout, and the current health of each device or file.
Destroyed pools, pools that were previously destroyed with the
.Nm zpool Cm destroy
command, are not listed unless the
.Fl D
option is specified.
.Pp
The numeric identifier is unique, and can be used instead of the pool name
when multiple exported pools of the same name are available.
.Bl -tag -width Ds
.It Fl c Ar cachefile
Reads configuration from the given
.Ar cachefile
that was created with the
.Sy cachefile
pool property.
This
.Ar cachefile
is used instead of searching for devices.
.It Fl d Ar dir Ns | Ns Ar device
Uses
.Ar device
or searches for devices or files in
.Ar dir .
The
.Fl d
option can be specified multiple times.
.It Fl D
Lists destroyed pools only.
.El
.It Xo
.Nm
.Cm import
.Fl a
.Op Fl DflmN
.Op Fl F Oo Fl n Oc Oo Fl T Oc Oo Fl X Oc
.Op Fl -rewind-to-checkpoint
.Op Fl c Ar cachefile Ns | Ns Fl d Ar dir Ns | Ns Ar device
.Op Fl o Ar mntopts
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Op Fl R Ar root
.Op Fl s
.Xc
Imports all pools found in the search directories.
Identical to the previous command, except that all pools with a sufficient
number of devices available are imported.
Destroyed pools, pools that were previously destroyed with the
.Nm zpool Cm destroy
command, will not be imported unless the
.Fl D
option is specified.
.Bl -tag -width Ds
.It Fl a
Searches for and imports all pools found.
.It Fl c Ar cachefile
Reads configuration from the given
.Ar cachefile
that was created with the
.Sy cachefile
pool property.
This
.Ar cachefile
is used instead of searching for devices.
.It Fl d Ar dir Ns | Ns Ar device
Uses
.Ar device
or searches for devices or files in
.Ar dir .
The
.Fl d
option can be specified multiple times.
This option is incompatible with the
.Fl c
option.
.It Fl D
Imports destroyed pools only.
The
.Fl f
option is also required.
.It Fl f
Forces import, even if the pool appears to be potentially active.
.It Fl F
Recovery mode for a non-importable pool.
Attempt to return the pool to an importable state by discarding the last few
transactions.
Not all damaged pools can be recovered by using this option.
If successful, the data from the discarded transactions is irretrievably lost.
This option is ignored if the pool is importable or already imported.
.It Fl l
Indicates that this command will request encryption keys for all encrypted
datasets it attempts to mount as it is bringing the pool online.
Note that if any datasets have a
.Sy keylocation
of
.Sy prompt
this command will block waiting for the keys to be entered.
Without this flag encrypted datasets will be left unavailable until the keys
are loaded.
.It Fl m
Allows a pool to import when there is a missing log device.
Recent transactions can be lost because the log device will be discarded.
.It Fl n
Used with the
.Fl F
recovery option.
Determines whether a non-importable pool can be made importable again, but
does not actually perform the pool recovery.
For more details about pool recovery mode, see the
.Fl F
option, above.
.It Fl N
Import the pool without mounting any file systems.
1411.It Fl o Ar mntopts
1412Comma-separated list of mount options to use when mounting datasets within the
1413pool.
1414See
1415.Xr zfs 8
1416for a description of dataset properties and mount options.
1417.It Fl o Ar property Ns = Ns Ar value
1418Sets the specified property on the imported pool.
1419See the
1420.Sx Properties
1421section for more information on the available pool properties.
1422.It Fl R Ar root
1423Sets the
1424.Sy cachefile
1425property to
1426.Sy none
1427and the
1428.Sy altroot
1429property to
1430.Ar root .
.It Fl -rewind-to-checkpoint
Rewinds pool to the checkpointed state.
Once the pool is imported with this flag there is no way to undo the rewind.
All changes and data that were written after the checkpoint are lost!
The only exception is when the
.Sy readonly
mounting option is enabled.
In this case, the checkpointed state of the pool is opened and an
administrator can see what the pool would look like if they were
to fully rewind.
.It Fl s
Scan using the default search path; the libblkid cache will not be
consulted.
A custom search path may be specified by setting the
ZPOOL_IMPORT_PATH environment variable.
.It Fl X
Used with the
.Fl F
recovery option.
Determines whether extreme measures to find a valid txg should take place.
This allows the pool to be rolled back to a txg which is no longer guaranteed
to be consistent.
Pools imported at an inconsistent txg may contain uncorrectable checksum
errors.
For more details about pool recovery mode, see the
.Fl F
option, above.
WARNING: This option can be extremely hazardous to the health of your pool
and should only be used as a last resort.
.It Fl T
Specify the txg to use for rollback.
Implies
.Fl FX .
For more details about pool recovery mode, see the
.Fl X
option, above.
WARNING: This option can be extremely hazardous to the health of your pool
and should only be used as a last resort.
.El
.It Xo
.Nm
.Cm import
.Op Fl Dflm
.Op Fl F Oo Fl n Oc Oo Fl T Oc Oo Fl X Oc
.Op Fl -rewind-to-checkpoint
.Op Fl c Ar cachefile Ns | Ns Fl d Ar dir Ns | Ns Ar device
.Op Fl o Ar mntopts
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Op Fl R Ar root
.Op Fl s
.Ar pool Ns | Ns Ar id
.Op Ar newpool Oo Fl t Oc
.Xc
Imports a specific pool.
A pool can be identified by its name or the numeric identifier.
If
.Ar newpool
is specified, the pool is imported using the name
.Ar newpool .
Otherwise, it is imported with the same name as its exported name.
.Pp
If a device is removed from a system without running
.Nm zpool Cm export
first, the device appears as potentially active.
It cannot be determined if this was a failed export, or whether the device is
really in use from another host.
To import a pool in this state, the
.Fl f
option is required.
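.Pp
For example, a pool that was not cleanly exported from another host might be
imported read-only under an alternate root while it is inspected (pool name
is illustrative):
.Bd -literal
# zpool import -f -o readonly=on -R /mnt tank
.Ed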
.Bl -tag -width Ds
.It Fl c Ar cachefile
Reads configuration from the given
.Ar cachefile
that was created with the
.Sy cachefile
pool property.
This
.Ar cachefile
is used instead of searching for devices.
.It Fl d Ar dir Ns | Ns Ar device
Uses
.Ar device
or searches for devices or files in
.Ar dir .
The
.Fl d
option can be specified multiple times.
This option is incompatible with the
.Fl c
option.
.It Fl D
Imports a destroyed pool.
The
.Fl f
option is also required.
.It Fl f
Forces import, even if the pool appears to be potentially active.
.It Fl F
Recovery mode for a non-importable pool.
Attempt to return the pool to an importable state by discarding the last few
transactions.
Not all damaged pools can be recovered by using this option.
If successful, the data from the discarded transactions is irretrievably lost.
This option is ignored if the pool is importable or already imported.
.It Fl l
Indicates that this command will request encryption keys for all encrypted
datasets it attempts to mount as it is bringing the pool online.
Note that if any datasets have a
.Sy keylocation
of
.Sy prompt
this command will block waiting for the keys to be entered.
Without this flag encrypted datasets will be left unavailable until the keys
are loaded.
.It Fl m
Allows a pool to import when there is a missing log device.
Recent transactions can be lost because the log device will be discarded.
.It Fl n
Used with the
.Fl F
recovery option.
Determines whether a non-importable pool can be made importable again, but
does not actually perform the pool recovery.
For more details about pool recovery mode, see the
.Fl F
option, above.
.It Fl o Ar mntopts
Comma-separated list of mount options to use when mounting datasets within the
pool.
See
.Xr zfs 8
for a description of dataset properties and mount options.
.It Fl o Ar property Ns = Ns Ar value
Sets the specified property on the imported pool.
See the
.Sx Properties
section for more information on the available pool properties.
.It Fl R Ar root
Sets the
.Sy cachefile
property to
.Sy none
and the
.Sy altroot
property to
.Ar root .
.It Fl s
Scan using the default search path; the libblkid cache will not be
consulted.
A custom search path may be specified by setting the
ZPOOL_IMPORT_PATH environment variable.
.It Fl X
Used with the
.Fl F
recovery option.
Determines whether extreme measures to find a valid txg should take place.
This allows the pool to be rolled back to a txg which is no longer guaranteed
to be consistent.
Pools imported at an inconsistent txg may contain uncorrectable checksum
errors.
For more details about pool recovery mode, see the
.Fl F
option, above.
WARNING: This option can be extremely hazardous to the health of your pool
and should only be used as a last resort.
.It Fl T
Specify the txg to use for rollback.
Implies
.Fl FX .
For more details about pool recovery mode, see the
.Fl X
option, above.
WARNING: This option can be extremely hazardous to the health of your pool
and should only be used as a last resort.
.It Fl t
Used with
.Sy newpool .
Specifies that
.Sy newpool
is temporary.
Temporary pool names last until export.
Ensures that the original pool name will be used in all label updates and
therefore is retained upon export.
Will also set -o cachefile=none when not explicitly specified.
.El
.It Xo
.Nm
.Cm initialize
.Op Fl c | Fl s
1607.Ar pool
1608.Op Ar device Ns ...
1609.Xc
1610Begins initializing by writing to all unallocated regions on the specified
1611devices, or all eligible devices in the pool if no individual devices are
1612specified.
1613Only leaf data or log devices may be initialized.
1614.Bl -tag -width Ds
1615.It Fl c, -cancel
1616Cancel initializing on the specified devices, or all eligible devices if none
1617are specified.
1618If one or more target devices are invalid or are not currently being
1619initialized, the command will fail and no cancellation will occur on any device.
1620.It Fl s -suspend
1621Suspend initializing on the specified devices, or all eligible devices if none
1622are specified.
1623If one or more target devices are invalid or are not currently being
1624initialized, the command will fail and no suspension will occur on any device.
1625Initializing can then be resumed by running
1626.Nm zpool Cm initialize
1627with no flags on the relevant target devices.
1628.El
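.Pp
As a brief sketch, assuming a pool named
.Em tank
with a leaf disk
.Pa sda ,
initialization could be started and later suspended with:
.Bd -literal
# zpool initialize tank sda
# zpool initialize -s tank sda
.Ed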
.It Xo
.Nm
.Cm iostat
.Op Oo Oo Fl c Ar SCRIPT Oc Oo Fl lq Oc Oc Ns | Ns Fl rw
.Op Fl T Sy u Ns | Ns Sy d
.Op Fl ghHLnpPvy
.Oo Oo Ar pool Ns ... Oc Ns | Ns Oo Ar pool vdev Ns ... Oc Ns | Ns Oo Ar vdev Ns ... Oc Oc
.Op Ar interval Op Ar count
.Xc
Displays I/O statistics for the given pools/vdevs.
You can pass in a list of pools, a pool and list of vdevs in that pool, or a
list of any vdevs from any pool.
If no items are specified, statistics for every pool in the system are shown.
When given an
.Ar interval ,
the statistics are printed every
.Ar interval
seconds until ^C is pressed.
If the
.Fl n
flag is specified, the headers are displayed only once; otherwise they are
displayed periodically.
If
.Ar count
is specified, the command exits after
.Ar count
reports are printed.
The first report printed is always the statistics since boot, regardless of
whether
.Ar interval
and
.Ar count
are passed.
However, this behavior can be suppressed with the
.Fl y
flag.
Also note that the units of
.Sy K ,
.Sy M ,
.Sy G ...
that are printed in the report are in base 1024.
To get the raw values, use the
.Fl p
flag.
.Bl -tag -width Ds
.It Fl c Op Ar SCRIPT1 Ns Oo , Ns Ar SCRIPT2 Oc Ns ...
Run a script (or scripts) on each vdev and include the output as a new column
in the
.Nm zpool Cm iostat
output.
Users can run any script found in their
.Pa ~/.zpool.d
directory or from the system
.Pa /etc/zfs/zpool.d
directory.
Script names containing the slash (/) character are not allowed.
The default search path can be overridden by setting the
ZPOOL_SCRIPTS_PATH environment variable.
A privileged user can run
.Fl c
if they have the ZPOOL_SCRIPTS_AS_ROOT
environment variable set.
If a script requires the use of a privileged command, like
.Xr smartctl 8 ,
then it's recommended you allow the user access to it in
.Pa /etc/sudoers
or add the user to the
.Pa /etc/sudoers.d/zfs
file.
.Pp
If
.Fl c
is passed without a script name, it prints a list of all scripts.
.Fl c
also sets verbose mode
.No \&( Ns Fl v Ns No \&).
.Pp
Script output should be in the form of "name=value".
The column name is set to "name" and the value is set to "value".
Multiple lines can be used to output multiple columns.
The first line of output not in the "name=value" format is displayed without a
column title, and no more output after that is displayed.
This can be useful for printing error messages.
Blank or NULL values are printed as a '-' to make output awk-able.
.Pp
The following environment variables are set before running each script:
.Bl -tag -width "VDEV_ENC_SYSFS_PATH"
.It Sy VDEV_PATH
Full path to the vdev.
.It Sy VDEV_UPATH
Underlying path to the vdev (/dev/sd*).
For use with device mapper, multipath, or partitioned vdevs.
.It Sy VDEV_ENC_SYSFS_PATH
The sysfs path to the enclosure for the vdev (if any).
.El
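.Pp
As a minimal sketch, a hypothetical script installed as
.Pa ~/.zpool.d/upath
could add one column by emitting a single "name=value" pair built from the
environment variables above:
.Bd -literal
#!/bin/sh
# Emit a column named "upath" containing the underlying device path.
echo "upath=${VDEV_UPATH}"
.Ed
.Pp
Running
.Nm zpool Cm iostat Fl c Ar upath
would then append an "upath" column to the output.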
.It Fl T Sy u Ns | Ns Sy d
Display a time stamp.
Specify
.Sy u
for a printed representation of the internal representation of time.
See
.Xr time 2 .
Specify
.Sy d
for standard date format.
See
.Xr date 1 .
.It Fl g
Display vdev GUIDs instead of the normal device names.
These GUIDs can be used in place of device names for the zpool
detach/offline/remove/replace commands.
.It Fl H
Scripted mode.
Do not display headers, and separate fields by a single tab instead of
arbitrary space.
.It Fl L
Display real paths for vdevs resolving all symbolic links.
This can be used to look up the current block device name regardless of the
.Pa /dev/disk/
path used to open it.
.It Fl n
Print headers only once, rather than before each interval's report.
.It Fl p
Display numbers in parsable (exact) values.
Time values are in nanoseconds.
.It Fl P
Display full paths for vdevs instead of only the last component of the path.
This can be used in conjunction with the
.Fl L
flag.
.It Fl r
Print request size histograms for the leaf ZIOs.
This includes histograms of individual ZIOs (
.Ar ind )
and aggregate ZIOs (
.Ar agg ).
These stats can be useful for seeing how well the ZFS IO aggregator is
working.
Do not confuse these request size stats with the block layer requests; it's
possible ZIOs can be broken up before being sent to the block device.
.It Fl v
Verbose statistics.
Reports usage statistics for individual vdevs within the pool, in addition to
the pool-wide statistics.
.It Fl y
Omit statistics since boot.
Normally the first line of output reports the statistics since boot.
This option suppresses that first line of output.
.It Fl w
Display latency histograms:
.Pp
.Ar total_wait :
Total IO time (queuing + disk IO time).
.Ar disk_wait :
Disk IO time (time reading/writing the disk).
.Ar syncq_wait :
Amount of time IO spent in synchronous priority queues.
Does not include disk time.
.Ar asyncq_wait :
Amount of time IO spent in asynchronous priority queues.
Does not include disk time.
.Ar scrub :
Amount of time IO spent in scrub queue.
Does not include disk time.
.It Fl l
Include average latency statistics:
.Pp
.Ar total_wait :
Average total IO time (queuing + disk IO time).
.Ar disk_wait :
Average disk IO time (time reading/writing the disk).
.Ar syncq_wait :
Average amount of time IO spent in synchronous priority queues.
Does not include disk time.
.Ar asyncq_wait :
Average amount of time IO spent in asynchronous priority queues.
Does not include disk time.
.Ar scrub :
Average queuing time in scrub queue.
Does not include disk time.
.It Fl q
Include active queue statistics.
Each priority queue has both pending (
.Ar pend )
and active (
.Ar activ )
IOs.
Pending IOs are waiting to be issued to the disk, and active IOs have been
issued to disk and are waiting for completion.
These stats are broken out by priority queue:
.Pp
.Ar syncq_read/write :
Current number of entries in synchronous priority queues.
.Ar asyncq_read/write :
Current number of entries in asynchronous priority queues.
.Ar scrubq_read :
Current number of entries in scrub queue.
.Pp
All queue statistics are instantaneous measurements of the number of entries
in the queues.
If you specify an interval, the measurements will be sampled from the end of
the interval.
.El
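.Pp
For example, assuming a pool named
.Em tank ,
average latency and queue statistics could be sampled every 5 seconds with:
.Bd -literal
# zpool iostat -lq tank 5
.Ed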
.It Xo
.Nm
.Cm labelclear
.Op Fl f
.Ar device
.Xc
Removes ZFS label information from the specified
.Ar device .
The
.Ar device
must not be part of an active pool configuration.
.Bl -tag -width Ds
.It Fl f
Treat exported or foreign devices as inactive.
.El
.It Xo
.Nm
.Cm list
.Op Fl HgLpPv
.Op Fl o Ar property Ns Oo , Ns Ar property Oc Ns ...
.Op Fl T Sy u Ns | Ns Sy d
.Oo Ar pool Oc Ns ...
.Op Ar interval Op Ar count
.Xc
Lists the given pools along with a health status and space usage.
If no
.Ar pool Ns s
are specified, all pools in the system are listed.
When given an
.Ar interval ,
the information is printed every
.Ar interval
seconds until ^C is pressed.
If
.Ar count
is specified, the command exits after
.Ar count
reports are printed.
.Bl -tag -width Ds
.It Fl g
Display vdev GUIDs instead of the normal device names.
These GUIDs can be used in place of device names for the zpool
detach/offline/remove/replace commands.
.It Fl H
Scripted mode.
Do not display headers, and separate fields by a single tab instead of arbitrary
space.
.It Fl o Ar property
Comma-separated list of properties to display.
See the
.Sx Properties
section for a list of valid properties.
The default list is
.Cm name , size , allocated , free , checkpoint , expandsize , fragmentation ,
.Cm capacity , dedupratio , health , altroot .
.It Fl L
Display real paths for vdevs resolving all symbolic links.
This can be used to look up the current block device name regardless of the
.Pa /dev/disk/
path used to open it.
.It Fl p
Display numbers in parsable
.Pq exact
values.
.It Fl P
Display full paths for vdevs instead of only the last component of the path.
This can be used in conjunction with the
.Fl L
flag.
.It Fl T Sy u Ns | Ns Sy d
Display a time stamp.
Specify
.Sy u
for a printed representation of the internal representation of time.
See
.Xr time 2 .
Specify
.Sy d
for standard date format.
See
.Xr date 1 .
.It Fl v
Verbose statistics.
Reports usage statistics for individual vdevs within the pool, in addition to
the pool-wide statistics.
.El
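.Pp
For example, assuming a pool named
.Em tank ,
a subset of properties could be listed with:
.Bd -literal
# zpool list -o name,size,capacity,health tank
.Ed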
.It Xo
.Nm
.Cm offline
.Op Fl f
.Op Fl t
.Ar pool Ar device Ns ...
.Xc
Takes the specified physical device offline.
While the
.Ar device
is offline, no attempt is made to read or write to the device.
This command is not applicable to spares.
.Bl -tag -width Ds
.It Fl f
Force fault.
Instead of offlining the disk, put it into a faulted state.
The fault will persist across imports unless the
.Fl t
flag was specified.
.It Fl t
Temporary.
Upon reboot, the specified physical device reverts to its previous state.
.El
.It Xo
.Nm
.Cm online
.Op Fl e
.Ar pool Ar device Ns ...
.Xc
Brings the specified physical device online.
This command is not applicable to spares.
.Bl -tag -width Ds
.It Fl e
Expand the device to use all available space.
If the device is part of a mirror or raidz then all devices must be expanded
before the new space will become available to the pool.
.El
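.Pp
For example, assuming a disk
.Pa sda
in a pool named
.Em tank ,
the device could be taken offline temporarily and then brought back online
with:
.Bd -literal
# zpool offline -t tank sda
# zpool online tank sda
.Ed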
.It Xo
.Nm
.Cm reguid
.Ar pool
.Xc
Generates a new unique identifier for the pool.
You must ensure that all devices in this pool are online and healthy before
performing this action.
.It Xo
.Nm
.Cm reopen
.Op Fl n
.Ar pool
.Xc
Reopen all the vdevs associated with the pool.
.Bl -tag -width Ds
.It Fl n
Do not restart an in-progress scrub operation.
This is not recommended and can result in partially resilvered devices unless
a second scrub is performed.
.El
.It Xo
.Nm
.Cm remove
.Op Fl np
.Ar pool Ar device Ns ...
.Xc
Removes the specified device from the pool.
This command supports removing hot spare, cache, log, and both mirrored and
non-redundant primary top-level vdevs, including dedup and special vdevs.
When the primary pool storage includes a top-level raidz vdev, only hot spare,
cache, and log devices can be removed.
.Pp
Removing a top-level vdev reduces the total amount of space in the storage
pool.
The specified device will be evacuated by copying all allocated space from it
to the other devices in the pool.
In this case, the
.Nm zpool Cm remove
command initiates the removal and returns, while the evacuation continues in
the background.
The removal progress can be monitored with
.Nm zpool Cm status .
If an IO error is encountered during the removal process, the removal will be
cancelled.
The
.Sy device_removal
feature flag must be enabled to remove a top-level vdev, see
.Xr zpool-features 5 .
.Pp
A mirrored top-level device (log or data) can be removed by specifying the
top-level mirror itself.
Non-log devices or data devices that are part of a mirrored configuration can
be removed using the
.Nm zpool Cm detach
command.
.Bl -tag -width Ds
.It Fl n
Do not actually perform the removal ("no-op").
Instead, print the estimated amount of memory that will be used by the
mapping table after the removal completes.
This is nonzero only for top-level vdevs.
.It Fl p
Used in conjunction with the
.Fl n
flag, displays numbers as parsable (exact) values.
.El
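.Pp
For example, assuming a pool named
.Em tank
with a mirrored top-level vdev
.Sy mirror-1 ,
the memory cost of the removal could be previewed before committing to it:
.Bd -literal
# zpool remove -np tank mirror-1
# zpool remove tank mirror-1
.Ed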
.It Xo
.Nm
.Cm remove
.Fl s
.Ar pool
.Xc
Stops and cancels an in-progress removal of a top-level vdev.
.It Xo
.Nm
.Cm replace
.Op Fl f
.Op Fl o Ar property Ns = Ns Ar value
.Ar pool Ar old_device Op Ar new_device
.Xc
Replaces
.Ar old_device
with
.Ar new_device .
This is equivalent to attaching
.Ar new_device ,
waiting for it to resilver, and then detaching
.Ar old_device .
.Pp
The size of
.Ar new_device
must be greater than or equal to the minimum size of all the devices in a
mirror or raidz configuration.
.Pp
.Ar new_device
is required if the pool is not redundant.
If
.Ar new_device
is not specified, it defaults to
.Ar old_device .
This form of replacement is useful after an existing disk has failed and has
been physically replaced.
In this case, the new disk may have the same
.Pa /dev
path as the old device, even though it is actually a different disk.
ZFS recognizes this.
.Bl -tag -width Ds
.It Fl f
Forces use of
.Ar new_device ,
even if it appears to be in use.
Not all devices can be overridden in this manner.
.It Fl o Ar property Ns = Ns Ar value
Sets the given pool properties.
See the
.Sx Properties
section for a list of valid properties that can be set.
The only property supported at the moment is
.Sy ashift .
.El
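.Pp
For example, assuming a failed disk
.Pa sda
physically replaced by
.Pa sdd
in a pool named
.Em tank ,
and assuming the new disk uses 4096-byte sectors:
.Bd -literal
# zpool replace -o ashift=12 tank sda sdd
.Ed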
.It Xo
.Nm
.Cm scrub
.Op Fl s | Fl p
.Ar pool Ns ...
.Xc
Begins a scrub or resumes a paused scrub.
The scrub examines all data in the specified pools to verify that it checksums
correctly.
For replicated
.Pq mirror or raidz
devices, ZFS automatically repairs any damage discovered during the scrub.
The
.Nm zpool Cm status
command reports the progress of the scrub and summarizes the results of the
scrub upon completion.
.Pp
Scrubbing and resilvering are very similar operations.
The difference is that resilvering only examines data that ZFS knows to be out
of date
.Po
for example, when attaching a new device to a mirror or replacing an existing
device
.Pc ,
whereas scrubbing examines all data to discover silent errors due to hardware
faults or disk failure.
.Pp
Because scrubbing and resilvering are I/O-intensive operations, ZFS only allows
one at a time.
If a scrub is paused, the
.Nm zpool Cm scrub
command resumes it.
If a resilver is in progress, ZFS does not allow a scrub to be started until the
resilver completes.
.Bl -tag -width Ds
.It Fl s
Stop scrubbing.
.It Fl p
Pause scrubbing.
Scrub pause state and progress are periodically synced to disk.
If the system is restarted or pool is exported during a paused scrub,
even after import, scrub will remain paused until it is resumed.
Once resumed, the scrub will pick up from the place where it was last
checkpointed to disk.
To resume a paused scrub, issue
.Nm zpool Cm scrub
again.
.El
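.Pp
For example, assuming a pool named
.Em tank ,
a scrub could be started, paused, and later resumed with:
.Bd -literal
# zpool scrub tank
# zpool scrub -p tank
# zpool scrub tank
.Ed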
.It Xo
.Nm
.Cm resilver
.Ar pool Ns ...
.Xc
Starts a resilver.
If an existing resilver is already running, it will be restarted from the
beginning.
Any drives that were scheduled for a deferred resilver will be added to the
new one.
.It Xo
.Nm
.Cm set
.Ar property Ns = Ns Ar value
.Ar pool
.Xc
Sets the given property on the specified pool.
See the
.Sx Properties
section for more information on what properties can be set and acceptable
values.
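.Pp
For example, assuming a pool named
.Em tank ,
the
.Sy autoexpand
property could be enabled with:
.Bd -literal
# zpool set autoexpand=on tank
.Ed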
.It Xo
.Nm
.Cm split
.Op Fl gLlnP
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Op Fl R Ar root
.Ar pool newpool
.Op Ar device ...
.Xc
Splits devices off
.Ar pool
creating
.Ar newpool .
All vdevs in
.Ar pool
must be mirrors and the pool must not be in the process of resilvering.
At the time of the split,
.Ar newpool
will be a replica of
.Ar pool .
By default, the last device in each mirror is split from
.Ar pool
to create
.Ar newpool .
.Pp
The optional device specification causes the specified device(s) to be
included in
.Ar newpool
and, should any devices remain unspecified, the last device in each mirror is
used, as by default.
.Bl -tag -width Ds
.It Fl g
Display vdev GUIDs instead of the normal device names.
These GUIDs can be used in place of device names for the zpool
detach/offline/remove/replace commands.
.It Fl L
Display real paths for vdevs resolving all symbolic links.
This can be used to look up the current block device name regardless of the
.Pa /dev/disk/
path used to open it.
.It Fl l
Indicates that this command will request encryption keys for all encrypted
datasets it attempts to mount as it is bringing the new pool online.
Note that if any datasets have a
.Sy keylocation
of
.Sy prompt
this command will block waiting for the keys to be entered.
Without this flag encrypted datasets will be left unavailable until the keys
are loaded.
.It Fl n
Do a dry run; do not actually perform the split.
Print out the expected configuration of
.Ar newpool .
.It Fl P
Display full paths for vdevs instead of only the last component of the path.
This can be used in conjunction with the
.Fl L
flag.
.It Fl o Ar property Ns = Ns Ar value
Sets the specified property for
.Ar newpool .
See the
.Sx Properties
section for more information on the available pool properties.
.It Fl R Ar root
Set
.Sy altroot
for
.Ar newpool
to
.Ar root
and automatically import it.
.El
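.Pp
For example, assuming a mirrored pool named
.Em tank ,
the expected layout of a new pool
.Em newtank
could be previewed and the split then performed with:
.Bd -literal
# zpool split -n tank newtank
# zpool split tank newtank
.Ed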
.It Xo
.Nm
.Cm status
.Op Fl c Op Ar SCRIPT1 Ns Oo , Ns Ar SCRIPT2 Oc Ns ...
.Op Fl DigLpPsvx
.Op Fl T Sy u Ns | Ns Sy d
.Oo Ar pool Oc Ns ...
.Op Ar interval Op Ar count
.Xc
Displays the detailed health status for the given pools.
If no
.Ar pool
is specified, then the status of each pool in the system is displayed.
For more information on pool and device health, see the
.Sx Device Failure and Recovery
section.
.Pp
If a scrub or resilver is in progress, this command reports the percentage done
and the estimated time to completion.
Both of these are only approximate, because the amount of data in the pool and
the other workloads on the system can change.
.Bl -tag -width Ds
.It Fl c Op Ar SCRIPT1 Ns Oo , Ns Ar SCRIPT2 Oc Ns ...
Run a script (or scripts) on each vdev and include the output as a new column
in the
.Nm zpool Cm status
output.
See the
.Fl c
option of
.Nm zpool Cm iostat
for complete details.
.It Fl i
Display vdev initialization status.
.It Fl g
Display vdev GUIDs instead of the normal device names.
These GUIDs can be used in place of device names for the zpool
detach/offline/remove/replace commands.
.It Fl L
Display real paths for vdevs resolving all symbolic links.
This can be used to look up the current block device name regardless of the
.Pa /dev/disk/
path used to open it.
.It Fl p
Display numbers in parsable (exact) values.
.It Fl P
Display full paths for vdevs instead of only the last component of the path.
This can be used in conjunction with the
.Fl L
flag.
.It Fl D
Display a histogram of deduplication statistics, showing the allocated
.Pq physically present on disk
and referenced
.Pq logically referenced in the pool
block counts and sizes by reference count.
.It Fl s
Display the number of leaf VDEV slow IOs.
This is the number of IOs that didn't complete in
.Sy zio_slow_io_ms
milliseconds (default 30 seconds).
This does not necessarily mean the IOs failed to complete, just that they took
an unreasonably long amount of time.
This may indicate a problem with the underlying storage.
.It Fl T Sy u Ns | Ns Sy d
Display a time stamp.
Specify
.Sy u
for a printed representation of the internal representation of time.
See
.Xr time 2 .
Specify
.Sy d
for standard date format.
See
.Xr date 1 .
.It Fl v
Displays verbose data error information, printing out a complete list of all
data errors since the last complete pool scrub.
.It Fl x
Only display status for pools that are exhibiting errors or are otherwise
unavailable.
Warnings about pools not using the latest on-disk format will not be included.
.El
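.Pp
For example, status limited to pools that are degraded or otherwise unhealthy
could be shown with:
.Bd -literal
# zpool status -x
.Ed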
.It Xo
.Nm
.Cm sync
.Op Ar pool ...
.Xc
This command forces all in-core dirty data to be written to the primary
pool storage and not the ZIL.
It will also update administrative information including quota reporting.
Without arguments,
.Nm zpool Cm sync
will sync all pools on the system.
Otherwise, it will sync only the specified pool(s).
.It Xo
.Nm
.Cm upgrade
.Xc
Displays pools which do not have all supported features enabled and pools
formatted using a legacy ZFS version number.
These pools can continue to be used, but some features may not be available.
Use
.Nm zpool Cm upgrade Fl a
to enable all features on all pools.
.It Xo
.Nm
.Cm upgrade
.Fl v
.Xc
Displays legacy ZFS versions supported by the current software.
See
.Xr zpool-features 5
for a description of the feature flags supported by the current software.
.It Xo
.Nm
.Cm upgrade
.Op Fl V Ar version
.Fl a Ns | Ns Ar pool Ns ...
.Xc
Enables all supported features on the given pool.
Once this is done, the pool will no longer be accessible on systems that do
not support feature flags.
See
.Xr zpool-features 5
for details on compatibility with systems that support feature flags, but do
not support all features enabled on the pool.
.Bl -tag -width Ds
.It Fl a
Enables all supported features on all pools.
.It Fl V Ar version
Upgrade to the specified legacy version.
If the
.Fl V
flag is specified, no features will be enabled on the pool.
This option can only be used to increase the version number up to the last
supported legacy version number.
.El
.El
.Sh EXIT STATUS
The following exit values are returned:
.Bl -tag -width Ds
.It Sy 0
Successful completion.
.It Sy 1
An error occurred.
.It Sy 2
Invalid command line options were specified.
.El
.Sh EXAMPLES
.Bl -tag -width Ds
.It Sy Example 1 No Creating a RAID-Z Storage Pool
The following command creates a pool with a single raidz root vdev that
consists of six disks.
.Bd -literal
# zpool create tank raidz sda sdb sdc sdd sde sdf
.Ed
.It Sy Example 2 No Creating a Mirrored Storage Pool
The following command creates a pool with two mirrors, where each mirror
contains two disks.
.Bd -literal
# zpool create tank mirror sda sdb mirror sdc sdd
.Ed
.It Sy Example 3 No Creating a ZFS Storage Pool by Using Partitions
The following command creates an unmirrored pool using two disk partitions.
.Bd -literal
# zpool create tank sda1 sdb2
.Ed
.It Sy Example 4 No Creating a ZFS Storage Pool by Using Files
The following command creates an unmirrored pool using files.
While not recommended, a pool based on files can be useful for experimental
purposes.
.Bd -literal
# zpool create tank /path/to/file/a /path/to/file/b
.Ed
.It Sy Example 5 No Adding a Mirror to a ZFS Storage Pool
The following command adds two mirrored disks to the pool
.Em tank ,
assuming the pool is already made up of two-way mirrors.
The additional space is immediately available to any datasets within the pool.
.Bd -literal
# zpool add tank mirror sda sdb
.Ed
.It Sy Example 6 No Listing Available ZFS Storage Pools
The following command lists all available pools on the system.
In this case, the pool
.Em zion
is faulted due to a missing device.
The results from this command are similar to the following:
.Bd -literal
# zpool list
NAME    SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
rpool  19.9G  8.43G  11.4G         -    33%    42%  1.00x  ONLINE  -
tank   61.5G  20.0G  41.5G         -    48%    32%  1.00x  ONLINE  -
zion       -      -      -         -      -      -      -  FAULTED -
.Ed
.It Sy Example 7 No Destroying a ZFS Storage Pool
The following command destroys the pool
.Em tank
and any datasets contained within.
.Bd -literal
# zpool destroy -f tank
.Ed
.It Sy Example 8 No Exporting a ZFS Storage Pool
The following command exports the devices in pool
.Em tank
so that they can be relocated or later imported.
.Bd -literal
# zpool export tank
.Ed
.It Sy Example 9 No Importing a ZFS Storage Pool
The following command displays available pools, and then imports the pool
.Em tank
for use on the system.
The results from this command are similar to the following:
.Bd -literal
# zpool import
  pool: tank
    id: 15451357997522795478
 state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:

        tank        ONLINE
          mirror    ONLINE
            sda     ONLINE
            sdb     ONLINE

# zpool import tank
.Ed
.It Sy Example 10 No Upgrading All ZFS Storage Pools to the Current Version
The following command upgrades all ZFS Storage pools to the current version of
the software.
.Bd -literal
# zpool upgrade -a
This system is currently running ZFS version 2.
.Ed
.It Sy Example 11 No Managing Hot Spares
The following command creates a new pool with an available hot spare:
.Bd -literal
# zpool create tank mirror sda sdb spare sdc
.Ed
.Pp
If one of the disks were to fail, the pool would be reduced to the degraded
state.
The failed device can be replaced using the following command:
.Bd -literal
# zpool replace tank sda sdd
.Ed
.Pp
Once the data has been resilvered, the spare is automatically removed and is
made available for use should another device fail.
The hot spare can be permanently removed from the pool using the following
command:
.Bd -literal
# zpool remove tank sdc
.Ed
.It Sy Example 12 No Creating a ZFS Pool with Mirrored Separate Intent Logs
The following command creates a ZFS storage pool consisting of two two-way
mirrors and mirrored log devices:
.Bd -literal
# zpool create pool mirror sda sdb mirror sdc sdd log mirror \\
    sde sdf
.Ed
.It Sy Example 13 No Adding Cache Devices to a ZFS Pool
The following command adds two disks for use as cache devices to a ZFS storage
pool:
.Bd -literal
# zpool add pool cache sdc sdd
.Ed
.Pp
Once added, the cache devices gradually fill with content from main memory.
Depending on the size of your cache devices, it could take over an hour for
them to fill.
Capacity and reads can be monitored using the
.Cm iostat
option as follows:
.Bd -literal
# zpool iostat -v pool 5
.Ed
.It Sy Example 14 No Removing a Mirrored top-level (Log or Data) Device
The following commands remove the mirrored log device
.Sy mirror-2
and mirrored top-level data device
.Sy mirror-1 .
.Pp
Given this configuration:
.Bd -literal
  pool: tank
 state: ONLINE
 scrub: none requested
config:

         NAME        STATE     READ WRITE CKSUM
         tank        ONLINE       0     0     0
           mirror-0  ONLINE       0     0     0
             sda     ONLINE       0     0     0
             sdb     ONLINE       0     0     0
           mirror-1  ONLINE       0     0     0
             sdc     ONLINE       0     0     0
             sdd     ONLINE       0     0     0
         logs
           mirror-2  ONLINE       0     0     0
             sde     ONLINE       0     0     0
             sdf     ONLINE       0     0     0
.Ed
.Pp
The command to remove the mirrored log
.Sy mirror-2
is:
.Bd -literal
# zpool remove tank mirror-2
.Ed
.Pp
The command to remove the mirrored data
.Sy mirror-1
is:
.Bd -literal
# zpool remove tank mirror-1
.Ed
.It Sy Example 15 No Displaying expanded space on a device
The following command displays the detailed information for the pool
.Em data .
This pool is comprised of a single raidz vdev where one of its devices
increased its capacity by 10GB.
In this example, the pool will not be able to utilize this extra capacity until
all the devices under the raidz vdev have been expanded.
.Bd -literal
# zpool list -v data
NAME         SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
data        23.9G  14.6G  9.30G         -    48%    61%  1.00x  ONLINE  -
  raidz1    23.9G  14.6G  9.30G         -    48%
    sda         -      -      -         -      -
    sdb         -      -      -       10G      -
    sdc         -      -      -         -      -
.Ed
.It Sy Example 16 No Adding output columns
Additional columns can be added to the
.Nm zpool Cm status
and
.Nm zpool Cm iostat
output with the
.Fl c
option.
.Bd -literal
# zpool status -c vendor,model,size
   NAME     STATE  READ WRITE CKSUM vendor  model        size
   tank     ONLINE 0    0     0
   mirror-0 ONLINE 0    0     0
   U1       ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
   U10      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
   U11      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
   U12      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
   U13      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
   U14      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T

# zpool iostat -vc slaves
              capacity     operations     bandwidth
pool        alloc   free   read  write   read  write  slaves
----------  -----  -----  -----  -----  -----  -----  ---------
tank        20.4G  7.23T     26    152  20.7M  21.6M
  mirror    20.4G  7.23T     26    152  20.7M  21.6M
    U1          -      -      0     31  1.46K  20.6M  sdb sdff
    U10         -      -      0      1  3.77K  13.3K  sdas sdgw
    U11         -      -      0      1   288K  13.3K  sdat sdgx
    U12         -      -      0      1  78.4K  13.3K  sdau sdgy
    U13         -      -      0      1   128K  13.3K  sdav sdgz
    U14         -      -      0      1  63.2K  13.3K  sdfk sdg
.Ed
.El
.Sh ENVIRONMENT VARIABLES
.Bl -tag -width "ZFS_ABORT"
.It Ev ZFS_ABORT
Cause
.Nm zpool
to dump core on exit for the purposes of running
.Sy ::findleaks .
.El
.Bl -tag -width "ZPOOL_IMPORT_PATH"
.It Ev ZPOOL_IMPORT_PATH
The search path for devices or files to use with the pool.
This is a colon-separated list of directories in which
.Nm zpool
looks for device nodes and files.
Similar to the
.Fl d
option in
.Nm zpool import .
.El
.Bl -tag -width "ZPOOL_VDEV_NAME_GUID"
.It Ev ZPOOL_VDEV_NAME_GUID
Cause
.Nm zpool
subcommands to output vdev GUIDs by default.
This behavior is identical to the
.Nm zpool Cm status Fl g
command line option.
.El
.Bl -tag -width "ZPOOL_VDEV_NAME_FOLLOW_LINKS"
.It Ev ZPOOL_VDEV_NAME_FOLLOW_LINKS
Cause
.Nm zpool
subcommands to follow links for vdev names by default.
This behavior is identical to the
.Nm zpool Cm status Fl L
command line option.
.El
.Bl -tag -width "ZPOOL_VDEV_NAME_PATH"
.It Ev ZPOOL_VDEV_NAME_PATH
Cause
.Nm zpool
subcommands to output full vdev path names by default.
This behavior is identical to the
.Nm zpool Cm status Fl P
command line option.
.El
.Bl -tag -width "ZFS_VDEV_DEVID_OPT_OUT"
.It Ev ZFS_VDEV_DEVID_OPT_OUT
Older ZFS on Linux implementations had issues when attempting to display pool
config VDEV names if a
.Sy devid
NVP value was present in the pool's config.
.Pp
For example, a pool that originated on the illumos platform would have a devid
value in the config and
.Nm zpool Cm status
would fail when listing the config.
This would also be true for future Linux-based pools.
.Pp
A pool can be stripped of any
.Sy devid
values on import or prevented from adding them on
.Nm zpool Cm create
or
.Nm zpool Cm add
by setting
.Sy ZFS_VDEV_DEVID_OPT_OUT .
.El
.Bl -tag -width "ZPOOL_SCRIPTS_AS_ROOT"
.It Ev ZPOOL_SCRIPTS_AS_ROOT
Allow a privileged user to run
.Nm zpool status/iostat
with the
.Fl c
option.
Normally, only unprivileged users are allowed to run
.Fl c .
.El
.Bl -tag -width "ZPOOL_SCRIPTS_PATH"
.It Ev ZPOOL_SCRIPTS_PATH
The search path for scripts when running
.Nm zpool status/iostat
with the
.Fl c
option.
This is a colon-separated list of directories and overrides the default
.Pa ~/.zpool.d
and
.Pa /etc/zfs/zpool.d
search paths.
.El
.Bl -tag -width "ZPOOL_SCRIPTS_ENABLED"
.It Ev ZPOOL_SCRIPTS_ENABLED
Allow a user to run
.Nm zpool status/iostat
with the
.Fl c
option.
If
.Sy ZPOOL_SCRIPTS_ENABLED
is not set, it is assumed that the user is allowed to run
.Nm zpool status/iostat -c .
.El
.Sh INTERFACE STABILITY
.Sy Evolving
.Sh SEE ALSO
.Xr zfs-events 5 ,
.Xr zfs-module-parameters 5 ,
.Xr zpool-features 5 ,
.Xr zed 8 ,
.Xr zfs 8