1.\"
2.\" CDDL HEADER START
3.\"
4.\" The contents of this file are subject to the terms of the
5.\" Common Development and Distribution License (the "License").
6.\" You may not use this file except in compliance with the License.
7.\"
8.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
9.\" or http://www.opensolaris.org/os/licensing.
10.\" See the License for the specific language governing permissions
11.\" and limitations under the License.
12.\"
13.\" When distributing Covered Code, include this CDDL HEADER in each
14.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
15.\" If applicable, add the following below this CDDL HEADER, with the
16.\" fields enclosed by brackets "[]" replaced with your own identifying
17.\" information: Portions Copyright [yyyy] [name of copyright owner]
18.\"
19.\" CDDL HEADER END
20.\"
21.\"
058ac9ba 22.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
a448a255 23.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved.
df831108 24.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
bec1067d 25.\" Copyright (c) 2017 Datto Inc.
eb201f50 26.\" Copyright (c) 2018 George Melikov. All Rights Reserved.
d7323e79 27.\" Copyright 2017 Nexenta Systems, Inc.
d3f2cd7e 28.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved.
9ae529ec 29.\"
7c9a4292 30.Dd November 29, 2018
cda0317e
GM
31.Dt ZPOOL 8 SMM
32.Os Linux
33.Sh NAME
34.Nm zpool
35.Nd configure ZFS storage pools
36.Sh SYNOPSIS
37.Nm
38.Fl ?
39.Nm
40.Cm add
41.Op Fl fgLnP
42.Oo Fl o Ar property Ns = Ns Ar value Oc
43.Ar pool vdev Ns ...
44.Nm
45.Cm attach
46.Op Fl f
47.Oo Fl o Ar property Ns = Ns Ar value Oc
48.Ar pool device new_device
49.Nm
d2734cce
SD
50.Cm checkpoint
51.Op Fl d, -discard
52.Ar pool
53.Nm
cda0317e
GM
54.Cm clear
55.Ar pool
56.Op Ar device
57.Nm
58.Cm create
59.Op Fl dfn
60.Op Fl m Ar mountpoint
61.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
62.Oo Fl o Ar feature@feature Ns = Ns Ar value Oc
63.Oo Fl O Ar file-system-property Ns = Ns Ar value Oc Ns ...
64.Op Fl R Ar root
65.Ar pool vdev Ns ...
66.Nm
67.Cm destroy
68.Op Fl f
69.Ar pool
70.Nm
71.Cm detach
72.Ar pool device
73.Nm
74.Cm events
88f9c939 75.Op Fl vHf Oo Ar pool Oc | Fl c
cda0317e
GM
76.Nm
77.Cm export
78.Op Fl a
79.Op Fl f
80.Ar pool Ns ...
81.Nm
82.Cm get
83.Op Fl Hp
84.Op Fl o Ar field Ns Oo , Ns Ar field Oc Ns ...
85.Sy all Ns | Ns Ar property Ns Oo , Ns Ar property Oc Ns ...
48b0b649 86.Oo Ar pool Oc Ns ...
cda0317e
GM
87.Nm
88.Cm history
89.Op Fl il
90.Oo Ar pool Oc Ns ...
91.Nm
92.Cm import
93.Op Fl D
522db292 94.Op Fl d Ar dir Ns | Ns device
cda0317e
GM
95.Nm
96.Cm import
97.Fl a
b5256303 98.Op Fl DflmN
cda0317e 99.Op Fl F Oo Fl n Oc Oo Fl T Oc Oo Fl X Oc
d2734cce 100.Op Fl -rewind-to-checkpoint
522db292 101.Op Fl c Ar cachefile Ns | Ns Fl d Ar dir Ns | Ns device
cda0317e
GM
102.Op Fl o Ar mntopts
103.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
104.Op Fl R Ar root
105.Nm
106.Cm import
b5256303 107.Op Fl Dflm
cda0317e 108.Op Fl F Oo Fl n Oc Oo Fl T Oc Oo Fl X Oc
d2734cce 109.Op Fl -rewind-to-checkpoint
522db292 110.Op Fl c Ar cachefile Ns | Ns Fl d Ar dir Ns | Ns device
cda0317e
GM
111.Op Fl o Ar mntopts
112.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
113.Op Fl R Ar root
114.Op Fl s
115.Ar pool Ns | Ns Ar id
116.Op Ar newpool Oo Fl t Oc
117.Nm
619f0976 118.Cm initialize
a769fb53 119.Op Fl c | Fl s
619f0976
GW
120.Ar pool
121.Op Ar device Ns ...
122.Nm
cda0317e
GM
123.Cm iostat
124.Op Oo Oo Fl c Ar SCRIPT Oc Oo Fl lq Oc Oc Ns | Ns Fl rw
125.Op Fl T Sy u Ns | Ns Sy d
8fccfa8e 126.Op Fl ghHLnpPvy
cda0317e
GM
127.Oo Oo Ar pool Ns ... Oc Ns | Ns Oo Ar pool vdev Ns ... Oc Ns | Ns Oo Ar vdev Ns ... Oc Oc
128.Op Ar interval Op Ar count
129.Nm
130.Cm labelclear
131.Op Fl f
132.Ar device
133.Nm
134.Cm list
135.Op Fl HgLpPv
136.Op Fl o Ar property Ns Oo , Ns Ar property Oc Ns ...
137.Op Fl T Sy u Ns | Ns Sy d
138.Oo Ar pool Oc Ns ...
139.Op Ar interval Op Ar count
140.Nm
141.Cm offline
142.Op Fl f
143.Op Fl t
144.Ar pool Ar device Ns ...
145.Nm
146.Cm online
147.Op Fl e
148.Ar pool Ar device Ns ...
149.Nm
150.Cm reguid
151.Ar pool
152.Nm
153.Cm reopen
d3f2cd7e 154.Op Fl n
cda0317e
GM
155.Ar pool
156.Nm
157.Cm remove
a1d477c2 158.Op Fl np
cda0317e
GM
159.Ar pool Ar device Ns ...
160.Nm
a1d477c2
MA
161.Cm remove
162.Fl s
163.Ar pool
164.Nm
cda0317e
GM
165.Cm replace
166.Op Fl f
167.Oo Fl o Ar property Ns = Ns Ar value Oc
168.Ar pool Ar device Op Ar new_device
169.Nm
80a91e74
TC
170.Cm resilver
171.Ar pool Ns ...
172.Nm
cda0317e 173.Cm scrub
0ea05c64 174.Op Fl s | Fl p
cda0317e
GM
175.Ar pool Ns ...
176.Nm
177.Cm set
178.Ar property Ns = Ns Ar value
179.Ar pool
180.Nm
181.Cm split
b5256303 182.Op Fl gLlnP
cda0317e
GM
183.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
184.Op Fl R Ar root
185.Ar pool newpool
186.Oo Ar device Oc Ns ...
187.Nm
188.Cm status
189.Oo Fl c Ar SCRIPT Oc
a769fb53 190.Op Fl DigLpPsvx
cda0317e
GM
191.Op Fl T Sy u Ns | Ns Sy d
192.Oo Ar pool Oc Ns ...
193.Op Ar interval Op Ar count
194.Nm
195.Cm sync
196.Oo Ar pool Oc Ns ...
197.Nm
198.Cm upgrade
199.Nm
200.Cm upgrade
201.Fl v
202.Nm
203.Cm upgrade
204.Op Fl V Ar version
205.Fl a Ns | Ns Ar pool Ns ...
206.Sh DESCRIPTION
207The
208.Nm
209command configures ZFS storage pools.
210A storage pool is a collection of devices that provides physical storage and
211data replication for ZFS datasets.
212All datasets within a storage pool share the same space.
213See
214.Xr zfs 8
215for information on managing datasets.
216.Ss Virtual Devices (vdevs)
217A "virtual device" describes a single device or a collection of devices
218organized according to certain performance and fault characteristics.
219The following virtual devices are supported:
220.Bl -tag -width Ds
221.It Sy disk
222A block device, typically located under
223.Pa /dev .
224ZFS can use individual slices or partitions, though the recommended mode of
225operation is to use whole disks.
226A disk can be specified by a full path, or it can be a shorthand name
227.Po the relative portion of the path under
228.Pa /dev
229.Pc .
230A whole disk can be specified by omitting the slice or partition designation.
231For example,
232.Pa sda
233is equivalent to
234.Pa /dev/sda .
235When given a whole disk, ZFS automatically labels the disk, if necessary.
236.It Sy file
237A regular file.
238The use of files as a backing store is strongly discouraged.
239It is designed primarily for experimental purposes, as the fault tolerance of a
240file is only as good as the file system of which it is a part.
241A file must be specified by a full path.
242.It Sy mirror
243A mirror of two or more devices.
244Data is replicated in an identical fashion across all components of a mirror.
245A mirror with N disks of size X can hold X bytes and can withstand (N-1) devices
246failing before data integrity is compromised.
247.It Sy raidz , raidz1 , raidz2 , raidz3
248A variation on RAID-5 that allows for better distribution of parity and
249eliminates the RAID-5
250.Qq write hole
251.Pq in which data and parity become inconsistent after a power loss .
252Data and parity is striped across all disks within a raidz group.
253.Pp
254A raidz group can have single-, double-, or triple-parity, meaning that the
255raidz group can sustain one, two, or three failures, respectively, without
256losing any data.
257The
258.Sy raidz1
259vdev type specifies a single-parity raidz group; the
260.Sy raidz2
261vdev type specifies a double-parity raidz group; and the
262.Sy raidz3
263vdev type specifies a triple-parity raidz group.
264The
265.Sy raidz
266vdev type is an alias for
267.Sy raidz1 .
268.Pp
269A raidz group with N disks of size X with P parity disks can hold approximately
270(N-P)*X bytes and can withstand P device(s) failing before data integrity is
271compromised.
272The minimum number of devices in a raidz group is one more than the number of
273parity disks.
274The recommended number is between 3 and 9 to help increase performance.
275.It Sy spare
276A special pseudo-vdev which keeps track of available hot spares for a pool.
277For more information, see the
278.Sx Hot Spares
279section.
280.It Sy log
281A separate intent log device.
282If more than one log device is specified, then writes are load-balanced between
283devices.
284Log devices can be mirrored.
285However, raidz vdev types are not supported for the intent log.
286For more information, see the
287.Sx Intent Log
288section.
cc99f275
DB
289.It Sy dedup
290A device dedicated solely for allocating dedup data.
291The redundancy of this device should match the redundancy of the other normal
292devices in the pool. If more than one dedup device is specified, then
293allocations are load-balanced between devices.
294.It Sy special
295A device dedicated solely for allocating various kinds of internal metadata,
296and optionally small file data.
297The redundancy of this device should match the redundancy of the other normal
298devices in the pool. If more than one special device is specified, then
299allocations are load-balanced between devices.
300.Pp
301For more information on special allocations, see the
302.Sx Special Allocation Class
303section.
cda0317e
GM
304.It Sy cache
305A device used to cache storage pool data.
306A cache device cannot be configured as a mirror or raidz group.
307For more information, see the
308.Sx Cache Devices
309section.
310.El
311.Pp
312Virtual devices cannot be nested, so a mirror or raidz virtual device can only
313contain files or disks.
314Mirrors of mirrors
315.Pq or other combinations
316are not allowed.
317.Pp
318A pool can have any number of virtual devices at the top of the configuration
319.Po known as
320.Qq root vdevs
321.Pc .
322Data is dynamically distributed across all top-level devices to balance data
323among devices.
324As new virtual devices are added, ZFS automatically places data on the newly
325available devices.
326.Pp
327Virtual devices are specified one at a time on the command line, separated by
328whitespace.
329The keywords
330.Sy mirror
331and
332.Sy raidz
333are used to distinguish where a group ends and another begins.
334For example, the following creates two root vdevs, each a mirror of two disks:
335.Bd -literal
336# zpool create mypool mirror sda sdb mirror sdc sdd
337.Ed
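.Pp
Similarly, the following creates a single raidz2 root vdev spanning four
disks, which can tolerate the loss of any two of them (device names are
illustrative):
.Bd -literal
# zpool create mypool raidz2 sda sdb sdc sdd
.Ed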
.Ss Device Failure and Recovery
ZFS supports a rich set of mechanisms for handling device failure and data
corruption.
All metadata and data is checksummed, and ZFS automatically repairs bad data
from a good copy when corruption is detected.
.Pp
In order to take advantage of these features, a pool must make use of some form
of redundancy, using either mirrored or raidz groups.
While ZFS supports running in a non-redundant configuration, where each root
vdev is simply a disk or file, this is strongly discouraged.
A single case of bit corruption can render some or all of your data unavailable.
.Pp
A pool's health status is described by one of three states: online, degraded,
or faulted.
An online pool has all devices operating normally.
A degraded pool is one in which one or more devices have failed, but the data is
still available due to a redundant configuration.
A faulted pool has corrupted metadata, or one or more faulted devices, and
insufficient replicas to continue functioning.
.Pp
The health of the top-level vdev, such as a mirror or raidz device, is
potentially impacted by the state of its associated vdevs, or component
devices.
A top-level vdev or component device is in one of the following states:
.Bl -tag -width "DEGRADED"
.It Sy DEGRADED
One or more top-level vdevs is in the degraded state because one or more
component devices are offline.
Sufficient replicas exist to continue functioning.
.Pp
One or more component devices is in the degraded or faulted state, but
sufficient replicas exist to continue functioning.
The underlying conditions are as follows:
.Bl -bullet
.It
The number of checksum errors exceeds acceptable levels and the device is
degraded as an indication that something may be wrong.
ZFS continues to use the device as necessary.
.It
The number of I/O errors exceeds acceptable levels.
The device could not be marked as faulted because there are insufficient
replicas to continue functioning.
.El
.It Sy FAULTED
One or more top-level vdevs is in the faulted state because one or more
component devices are offline.
Insufficient replicas exist to continue functioning.
.Pp
One or more component devices is in the faulted state, and insufficient
replicas exist to continue functioning.
The underlying conditions are as follows:
.Bl -bullet
.It
The device could be opened, but the contents did not match expected values.
.It
The number of I/O errors exceeds acceptable levels and the device is faulted to
prevent further use of the device.
.El
.It Sy OFFLINE
The device was explicitly taken offline by the
.Nm zpool Cm offline
command.
.It Sy ONLINE
The device is online and functioning.
.It Sy REMOVED
The device was physically removed while the system was running.
Device removal detection is hardware-dependent and may not be supported on all
platforms.
.It Sy UNAVAIL
The device could not be opened.
If a pool is imported when a device was unavailable, then the device will be
identified by a unique identifier instead of its path since the path was never
correct in the first place.
.El
.Pp
If a device is removed and later re-attached to the system, ZFS attempts
to put the device online automatically.
Device attach detection is hardware-dependent and might not be supported on all
platforms.
.Ss Hot Spares
ZFS allows devices to be associated with pools as
.Qq hot spares .
These devices are not actively used in the pool, but when an active device
fails, it is automatically replaced by a hot spare.
To create a pool with hot spares, specify a
.Sy spare
vdev with any number of devices.
For example,
.Bd -literal
# zpool create pool mirror sda sdb spare sdc sdd
.Ed
.Pp
Spares can be shared across multiple pools, and can be added with the
.Nm zpool Cm add
command and removed with the
.Nm zpool Cm remove
command.
Once a spare replacement is initiated, a new
.Sy spare
vdev is created within the configuration that will remain there until the
original device is replaced.
At this point, the hot spare becomes available again if another device fails.
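.Pp
For example, a spare can later be added to, and again removed from, an
existing pool (device names are illustrative):
.Bd -literal
# zpool add pool spare sde
# zpool remove pool sde
.Ed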
.Pp
If a pool has a shared spare that is currently being used, the pool cannot be
exported since other pools may use this shared spare, which may lead to
potential data corruption.
.Pp
Shared spares add some risk. If the pools are imported on different hosts, and
both pools suffer a device failure at the same time, both could attempt to use
the spare at the same time. This may not be detected, resulting in data
corruption.
.Pp
An in-progress spare replacement can be cancelled by detaching the hot spare.
If the original faulted device is detached, then the hot spare assumes its
place in the configuration, and is removed from the spare list of all active
pools.
.Pp
Spares cannot replace log devices.
.Ss Intent Log
The ZFS Intent Log (ZIL) satisfies POSIX requirements for synchronous
transactions.
For instance, databases often require their transactions to be on stable storage
devices when returning from a system call.
NFS and other applications can also use
.Xr fsync 2
to ensure data stability.
By default, the intent log is allocated from blocks within the main pool.
However, it might be possible to get better performance using separate intent
log devices such as NVRAM or a dedicated disk.
For example:
.Bd -literal
# zpool create pool sda sdb log sdc
.Ed
.Pp
Multiple log devices can also be specified, and they can be mirrored.
See the
.Sx EXAMPLES
section for an example of mirroring multiple log devices.
.Pp
Log devices can be added, replaced, attached, detached and removed. In
addition, log devices are imported and exported as part of the pool
that contains them.
Mirrored devices can be removed by specifying the top-level mirror vdev.
.Ss Cache Devices
Devices can be added to a storage pool as
.Qq cache devices .
These devices provide an additional layer of caching between main memory and
disk.
For read-heavy workloads, where the working set size is much larger than what
can be cached in main memory, using cache devices allows much more of this
working set to be served from low latency media.
Using cache devices provides the greatest performance improvement for random
read-workloads of mostly static content.
.Pp
To create a pool with cache devices, specify a
.Sy cache
vdev with any number of devices.
For example:
.Bd -literal
# zpool create pool sda sdb cache sdc sdd
.Ed
.Pp
Cache devices cannot be mirrored or part of a raidz configuration.
If a read error is encountered on a cache device, that read I/O is reissued to
the original storage pool device, which might be part of a mirrored or raidz
configuration.
.Pp
The content of the cache devices is considered volatile, as is the case with
other system caches.
.Ss Pool checkpoint
Before starting critical procedures that include destructive actions (e.g.
.Nm zfs Cm destroy
), an administrator can checkpoint the pool's state and, in the case of a
mistake or failure, rewind the entire pool back to the checkpoint.
Otherwise, the checkpoint can be discarded when the procedure has completed
successfully.
.Pp
A pool checkpoint can be thought of as a pool-wide snapshot and should be used
with care as it contains every part of the pool's state, from properties to vdev
configuration.
Thus, while a pool has a checkpoint, certain operations are not allowed.
Specifically, vdev removal/attach/detach, mirror splitting, and
changing the pool's guid.
Adding a new vdev is supported, but in the case of a rewind it will have to be
added again.
Finally, users of this feature should keep in mind that scrubs in a pool that
has a checkpoint do not repair checkpointed data.
.Pp
To create a checkpoint for a pool:
.Bd -literal
# zpool checkpoint pool
.Ed
.Pp
To later rewind to its checkpointed state, you need to first export it and
then rewind it during import:
.Bd -literal
# zpool export pool
# zpool import --rewind-to-checkpoint pool
.Ed
.Pp
To discard the checkpoint from a pool:
.Bd -literal
# zpool checkpoint -d pool
.Ed
.Pp
Dataset reservations (controlled by the
.Nm reservation
or
.Nm refreservation
zfs properties) may be unenforceable while a checkpoint exists, because the
checkpoint is allowed to consume the dataset's reservation.
Finally, data that is part of the checkpoint but has been freed in the
current state of the pool won't be scanned during a scrub.
.Ss Special Allocation Class
The allocations in the special class are dedicated to specific block types.
By default this includes all metadata, the indirect blocks of user data, and
any dedup data. The class can also be provisioned to accept a limited
percentage of small file data blocks.
.Pp
A pool must always have at least one general (non-specified) vdev before
other devices can be assigned to the special class. If the special class
becomes full, then allocations intended for it will spill back into the
normal class.
.Pp
Dedup data can be excluded from the special class by setting the
.Sy zfs_ddt_data_is_special
zfs module parameter to false (0).
.Pp
Inclusion of small file blocks in the special class is opt-in. Each dataset
can control the size of small file blocks allowed in the special class by
setting the
.Sy special_small_blocks
dataset property. It defaults to zero, so you must opt in by setting it to a
non-zero value. See
.Xr zfs 8
for more info on setting this property.
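.Pp
For example, the following sketch creates a pool with a mirrored special
vdev and then opts an existing dataset into storing small file blocks in
the special class (the pool layout and dataset name are illustrative):
.Bd -literal
# zpool create pool raidz sda sdb sdc special mirror sdd sde
# zfs set special_small_blocks=32K pool/fs
.Ed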
.Ss Properties
Each pool has several properties associated with it.
Some properties are read-only statistics while others are configurable and
change the behavior of the pool.
.Pp
The following are read-only properties:
.Bl -tag -width Ds
.It Cm allocated
Amount of storage used within the pool.
.It Sy capacity
Percentage of pool space used.
This property can also be referred to by its shortened column name,
.Sy cap .
.It Sy expandsize
Amount of uninitialized space within the pool or device that can be used to
increase the total capacity of the pool.
Uninitialized space consists of any space on an EFI labeled vdev which has not
been brought online
.Po e.g., using
.Nm zpool Cm online Fl e
.Pc .
This space occurs when a LUN is dynamically expanded.
.It Sy fragmentation
The amount of fragmentation in the pool.
.It Sy free
The amount of free space available in the pool.
.It Sy freeing
After a file system or snapshot is destroyed, the space it was using is
returned to the pool asynchronously.
.Sy freeing
is the amount of space remaining to be reclaimed.
Over time
.Sy freeing
will decrease while
.Sy free
increases.
.It Sy health
The current health of the pool.
Health can be one of
.Sy ONLINE , DEGRADED , FAULTED , OFFLINE , REMOVED , UNAVAIL .
.It Sy guid
A unique identifier for the pool.
.It Sy load_guid
A unique identifier for the pool.
Unlike the
.Sy guid
property, this identifier is generated every time the pool is loaded (i.e. it
does not persist across imports/exports) and never changes while the pool is
loaded (even if a
.Sy reguid
operation takes place).
.It Sy size
Total size of the storage pool.
.It Sy unsupported@ Ns Em feature_guid
Information about unsupported features that are enabled on the pool.
See
.Xr zpool-features 5
for details.
.El
.Pp
The space usage properties report actual physical space available to the
storage pool.
The physical space can be different from the total amount of space that any
contained datasets can actually use.
The amount of space used in a raidz configuration depends on the characteristics
of the data being written.
In addition, ZFS reserves some space for internal accounting that the
.Xr zfs 8
command takes into account, but the
.Nm
command does not.
For non-full pools of a reasonable size, these effects should be invisible.
For small pools, or pools that are close to being completely full, these
discrepancies may become more noticeable.
.Pp
The following property can be set at creation time and import time:
.Bl -tag -width Ds
.It Sy altroot
Alternate root directory.
If set, this directory is prepended to any mount points within the pool.
This can be used when examining an unknown pool where the mount points cannot be
trusted, or in an alternate boot environment, where the typical paths are not
valid.
.Sy altroot
is not a persistent property.
It is valid only while the system is up.
Setting
.Sy altroot
defaults to using
.Sy cachefile Ns = Ns Sy none ,
though this may be overridden using an explicit setting.
.El
.Pp
The following property can be set only at import time:
.Bl -tag -width Ds
.It Sy readonly Ns = Ns Sy on Ns | Ns Sy off
If set to
.Sy on ,
the pool will be imported in read-only mode.
This property can also be referred to by its shortened column name,
.Sy rdonly .
.El
.Pp
The following properties can be set at creation time and import time, and later
changed with the
.Nm zpool Cm set
command:
.Bl -tag -width Ds
.It Sy ashift Ns = Ns Sy ashift
Pool sector size exponent, to the power of
.Sy 2
(internally referred to as
.Sy ashift
). Values from 9 to 16, inclusive, are valid; also, the special
value 0 (the default) means to auto-detect using the kernel's block
layer and a ZFS internal exception list. I/O operations will be aligned
to the specified size boundaries. Additionally, the minimum (disk)
write size will be set to the specified size, so this represents a
space vs. performance trade-off. For optimal performance, the pool
sector size should be greater than or equal to the sector size of the
underlying disks. The typical case for setting this property is when
performance is important and the underlying disks use 4KiB sectors but
report 512B sectors to the OS (for compatibility reasons); in that
case, set
.Sy ashift=12
(which is 1<<12 = 4096). When set, this property is
used as the default hint value in subsequent vdev operations (add,
attach and replace). Changing this value will not modify any existing
vdev, not even on disk replacement; however it can be used, for
instance, to replace a dying 512B-sector disk with a newer 4KiB-sector
device: this will probably result in bad performance but at the same
time could prevent loss of data.
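.Pp
For example, to create a pool whose vdevs are aligned to 4KiB boundaries
regardless of the sector size the disks report (device names are
illustrative):
.Bd -literal
# zpool create -o ashift=12 pool mirror sda sdb
.Ed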
.It Sy autoexpand Ns = Ns Sy on Ns | Ns Sy off
Controls automatic pool expansion when the underlying LUN is grown.
If set to
.Sy on ,
the pool will be resized according to the size of the expanded device.
If the device is part of a mirror or raidz then all devices within that
mirror/raidz group must be expanded before the new space is made available to
the pool.
The default behavior is
.Sy off .
This property can also be referred to by its shortened column name,
.Sy expand .
.It Sy autoreplace Ns = Ns Sy on Ns | Ns Sy off
Controls automatic device replacement.
If set to
.Sy off ,
device replacement must be initiated by the administrator by using the
.Nm zpool Cm replace
command.
If set to
.Sy on ,
any new device, found in the same physical location as a device that previously
belonged to the pool, is automatically formatted and replaced.
The default behavior is
.Sy off .
This property can also be referred to by its shortened column name,
.Sy replace .
Autoreplace can also be used with virtual disks (like device
mapper) provided that you use the /dev/disk/by-vdev paths set up by
vdev_id.conf. See the
.Xr vdev_id 8
man page for more details.
Autoreplace and autoonline require that the ZFS Event Daemon be configured and
running. See the
.Xr zed 8
man page for more details.
.It Sy bootfs Ns = Ns Sy (unset) Ns | Ns Ar pool Ns / Ns Ar dataset
Identifies the default bootable dataset for the root pool. This property is
expected to be set mainly by the installation and upgrade programs.
Not all Linux distribution boot processes use the bootfs property.
.It Sy cachefile Ns = Ns Ar path Ns | Ns Sy none
Controls the location where the pool configuration is cached.
Discovering all pools on system startup requires a cached copy of the
configuration data that is stored on the root file system.
All pools in this cache are automatically imported when the system boots.
Some environments, such as install and clustering, need to cache this
information in a different location so that pools are not automatically
imported.
Setting this property caches the pool configuration in a different location that
can later be imported with
.Nm zpool Cm import Fl c .
Setting it to the special value
.Sy none
creates a temporary pool that is never cached, and the special value
.Qq
.Pq empty string
uses the default location.
.Pp
Multiple pools can share the same cache file.
Because the kernel destroys and recreates this file when pools are added and
removed, care should be taken when attempting to access this file.
When the last pool using a
.Sy cachefile
is exported or destroyed, the file will be empty.
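.Pp
For example, a pool can be created with an alternate cache file and, after
an export or reboot, imported from that file (the path is illustrative):
.Bd -literal
# zpool create -o cachefile=/etc/zfs/alt.cache pool mirror sda sdb
# zpool import -c /etc/zfs/alt.cache pool
.Ed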
.It Sy comment Ns = Ns Ar text
A text string consisting of printable ASCII characters that will be stored
such that it is available even if the pool becomes faulted.
An administrator can provide additional information about a pool using this
property.
.It Sy dedupditto Ns = Ns Ar number
Threshold for the number of block ditto copies.
If the reference count for a deduplicated block increases above this number, a
new ditto copy of this block is automatically stored.
The default setting is
.Sy 0
which causes no ditto copies to be created for deduplicated blocks.
The minimum legal nonzero setting is
.Sy 100 .
.It Sy delegation Ns = Ns Sy on Ns | Ns Sy off
Controls whether a non-privileged user is granted access based on the dataset
permissions defined on the dataset.
See
.Xr zfs 8
for more information on ZFS delegated administration.
.It Sy failmode Ns = Ns Sy wait Ns | Ns Sy continue Ns | Ns Sy panic
Controls the system behavior in the event of catastrophic pool failure.
This condition is typically a result of a loss of connectivity to the underlying
storage device(s) or a failure of all devices within the pool.
The behavior of such an event is determined as follows:
.Bl -tag -width "continue"
.It Sy wait
Blocks all I/O access until the device connectivity is recovered and the errors
are cleared.
This is the default behavior.
.It Sy continue
Returns
.Er EIO
to any new write I/O requests but allows reads to any of the remaining healthy
devices.
Any write requests that have yet to be committed to disk would be blocked.
.It Sy panic
Prints out a message to the console and generates a system crash dump.
.El
.It Sy feature@ Ns Ar feature_name Ns = Ns Sy enabled
The value of this property is the current state of
.Ar feature_name .
The only valid value when setting this property is
.Sy enabled
which moves
.Ar feature_name
to the enabled state.
See
.Xr zpool-features 5
for details on feature states.
.It Sy listsnapshots Ns = Ns Sy on Ns | Ns Sy off
Controls whether information about snapshots associated with this pool is
output when
.Nm zfs Cm list
is run without the
.Fl t
option.
The default value is
.Sy off .
This property can also be referred to by its shortened name,
.Sy listsnaps .
.It Sy multihost Ns = Ns Sy on Ns | Ns Sy off
Controls whether a pool activity check should be performed during
.Nm zpool Cm import .
When a pool is determined to be active it cannot be imported, even with the
.Fl f
option. This property is intended to be used in failover configurations
where multiple hosts have access to a pool on shared storage.
.Pp
Multihost provides protection on import only. It does not protect against an
individual device being used in multiple pools, regardless of the type of vdev.
See the discussion under
.Sy zpool create .
.Pp
When this property is on, periodic writes to storage occur to show the pool is
in use. See
.Sy zfs_multihost_interval
in the
.Xr zfs-module-parameters 5
man page. In order to enable this property, each host must set a unique hostid.
See
.Xr genhostid 1 ,
.Xr zgenhostid 8 ,
and
.Xr spl-module-parameters 5
for additional details. The default value is
.Sy off .
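.Pp
For example, to generate a hostid and then enable
.Sy multihost
on an existing pool:
.Bd -literal
# zgenhostid
# zpool set multihost=on pool
.Ed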
.It Sy version Ns = Ns Ar version
The current on-disk version of the pool.
This can be increased, but never decreased.
The preferred method of updating pools is with the
.Nm zpool Cm upgrade
command, though this property can be used when a specific version is needed for
backwards compatibility.
Once feature flags are enabled on a pool this property will no longer have a
value.
.El
.Ss Subcommands
All subcommands that modify state are logged persistently to the pool in their
original form.
.Pp
The
.Nm
command provides subcommands to create and destroy storage pools, add capacity
to storage pools, and provide information about the storage pools.
The following subcommands are supported:
.Bl -tag -width Ds
.It Xo
.Nm
.Fl ?
.Xc
Displays a help message.
.It Xo
.Nm
.Cm add
.Op Fl fgLnP
.Oo Fl o Ar property Ns = Ns Ar value Oc
.Ar pool vdev Ns ...
.Xc
Adds the specified virtual devices to the given pool.
The
.Ar vdev
specification is described in the
.Sx Virtual Devices
section.
The behavior of the
.Fl f
option, and the device checks performed, are described in the
.Nm zpool Cm create
subcommand.
.Bl -tag -width Ds
.It Fl f
Forces use of
.Ar vdev Ns s ,
even if they appear in use or specify a conflicting replication level.
Not all devices can be overridden in this manner.
.It Fl g
Display
.Ar vdev
GUIDs instead of the normal device names. These GUIDs can be used in place of
device names for the zpool detach/offline/remove/replace commands.
.It Fl L
Display real paths for
.Ar vdev Ns s
resolving all symbolic links. This can be used to look up the current block
device name regardless of the /dev/disk/ path used to open it.
.It Fl n
Displays the configuration that would be used without actually adding the
.Ar vdev Ns s .
The actual pool creation can still fail due to insufficient privileges or
device sharing.
.It Fl P
Display real paths for
.Ar vdev Ns s
instead of only the last component of the path. This can be used in
conjunction with the
.Fl L
flag.
.It Fl o Ar property Ns = Ns Ar value
Sets the given pool properties. See the
.Sx Properties
section for a list of valid properties that can be set. The only property
supported at the moment is ashift.
.El
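.Pp
For example, the following previews and then performs the addition of a
mirrored vdev to an existing pool (device names are illustrative):
.Bd -literal
# zpool add -n pool mirror sde sdf
# zpool add pool mirror sde sdf
.Ed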
.It Xo
.Nm
.Cm attach
.Op Fl f
.Oo Fl o Ar property Ns = Ns Ar value Oc
.Ar pool device new_device
.Xc
Attaches
.Ar new_device
to the existing
.Ar device .
The existing device cannot be part of a raidz configuration.
If
.Ar device
is not currently part of a mirrored configuration,
.Ar device
automatically transforms into a two-way mirror of
.Ar device
and
.Ar new_device .
If
.Ar device
is part of a two-way mirror, attaching
.Ar new_device
creates a three-way mirror, and so on.
In either case,
.Ar new_device
begins to resilver immediately.
.Bl -tag -width Ds
.It Fl f
Forces use of
.Ar new_device ,
even if it appears to be in use.
Not all devices can be overridden in this manner.
.It Fl o Ar property Ns = Ns Ar value
Sets the given pool properties. See the
.Sx Properties
section for a list of valid properties that can be set. The only property
supported at the moment is ashift.
.El
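.Pp
For example, to turn a single-disk pool into a two-way mirror (device
names are illustrative):
.Bd -literal
# zpool attach pool sda sdb
.Ed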
.It Xo
.Nm
.Cm checkpoint
.Op Fl d, -discard
.Ar pool
.Xc
Checkpoints the current state of
.Ar pool ,
which can later be restored by
.Nm zpool Cm import --rewind-to-checkpoint .
The existence of a checkpoint in a pool prohibits the following
.Nm zpool
commands:
.Cm remove ,
.Cm attach ,
.Cm detach ,
.Cm split ,
and
.Cm reguid .
In addition, it may break reservation boundaries if the pool lacks free
space.
The
.Nm zpool Cm status
command indicates the existence of a checkpoint or the progress of discarding a
checkpoint from a pool.
The
.Nm zpool Cm list
command reports how much space the checkpoint takes from the pool.
.Bl -tag -width Ds
.It Fl d, -discard
Discards an existing checkpoint from
.Ar pool .
.El
.It Xo
.Nm
.Cm clear
.Ar pool
.Op Ar device
.Xc
Clears device errors in a pool.
If no arguments are specified, all device errors within the pool are cleared.
If one or more devices is specified, only those errors associated with the
specified device or devices are cleared.
If multihost is enabled, and the pool has been suspended, this will not
resume I/O. While the pool was suspended, it may have been imported on
another host, and resuming I/O could result in pool damage.
.It Xo
.Nm
.Cm create
.Op Fl dfn
.Op Fl m Ar mountpoint
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Oo Fl o Ar feature@feature Ns = Ns Ar value Oc Ns ...
.Oo Fl O Ar file-system-property Ns = Ns Ar value Oc Ns ...
.Op Fl R Ar root
.Op Fl t Ar tname
.Ar pool vdev Ns ...
.Xc
Creates a new storage pool containing the virtual devices specified on the
command line.
The pool name must begin with a letter, and can only contain
alphanumeric characters as well as underscore
.Pq Qq Sy _ ,
dash
.Pq Qq Sy \&- ,
colon
.Pq Qq Sy \&: ,
space
.Pq Qq Sy \&\ ,
and period
.Pq Qq Sy \&. .
The pool names
.Sy mirror ,
.Sy raidz ,
.Sy spare
and
.Sy log
are reserved, as are names beginning with
.Sy mirror ,
.Sy raidz ,
.Sy spare ,
and the pattern
.Sy c[0-9] .
The
.Ar vdev
specification is described in the
.Sx Virtual Devices
section.
.Pp
The command attempts to verify that each device specified is accessible and not
currently in use by another subsystem. However, this check is not robust enough
to detect simultaneous attempts to use a new device in different pools, even if
.Sy multihost
is
.Sy enabled .
The
administrator must ensure that simultaneous invocations of any combination of
.Sy zpool replace ,
.Sy zpool create ,
.Sy zpool add ,
or
.Sy zpool labelclear
do not refer to the same device. Using the same device in two pools will
result in pool corruption.
.Pp
There are some uses, such as being currently mounted, or specified as the
dedicated dump device, that prevent a device from ever being used by ZFS.
Other uses, such as having a preexisting UFS file system, can be overridden with
the
.Fl f
option.
.Pp
The command also checks that the replication strategy for the pool is
consistent.
An attempt to combine redundant and non-redundant storage in a single pool, or
to mix disks and files, results in an error unless
.Fl f
is specified.
The use of differently sized devices within a single raidz or mirror group is
also flagged as an error unless
.Fl f
is specified.
.Pp
Unless the
.Fl R
option is specified, the default mount point is
.Pa / Ns Ar pool .
The mount point must not exist or must be empty, or else the root dataset
cannot be mounted.
This can be overridden with the
.Fl m
option.
.Pp
By default all supported features are enabled on the new pool unless the
.Fl d
option is specified.
.Bl -tag -width Ds
.It Fl d
Do not enable any features on the new pool.
Individual features can be enabled by setting their corresponding properties to
.Sy enabled
with the
.Fl o
option.
See
.Xr zpool-features 5
for details about feature properties.
.It Fl f
Forces use of
.Ar vdev Ns s ,
even if they appear in use or specify a conflicting replication level.
Not all devices can be overridden in this manner.
.It Fl m Ar mountpoint
Sets the mount point for the root dataset.
The default mount point is
.Pa /pool
or
.Pa altroot/pool
if
.Ar altroot
is specified.
The mount point must be an absolute path,
.Sy legacy ,
or
.Sy none .
For more information on dataset mount points, see
.Xr zfs 8 .
.It Fl n
Displays the configuration that would be used without actually creating the
pool.
The actual pool creation can still fail due to insufficient privileges or
device sharing.
.It Fl o Ar property Ns = Ns Ar value
Sets the given pool properties.
See the
.Sx Properties
section for a list of valid properties that can be set.
.It Fl o Ar feature@feature Ns = Ns Ar value
Sets the given pool feature. See
.Xr zpool-features 5
for a list of valid features that can be set.
Value can be either disabled or enabled.
.It Fl O Ar file-system-property Ns = Ns Ar value
Sets the given file system properties in the root file system of the pool.
See the
.Sx Properties
section of
.Xr zfs 8
for a list of valid properties that can be set.
.It Fl R Ar root
Equivalent to
.Fl o Sy cachefile Ns = Ns Sy none Fl o Sy altroot Ns = Ns Ar root
.It Fl t Ar tname
Sets the in-core pool name to
.Sy tname
while the on-disk name will be the name specified as the pool name
.Sy pool .
This will set the default cachefile property to none. This is intended
to handle name space collisions when creating pools for other systems,
such as virtual machines or physical machines whose pools live on network
block devices.
.El
.It Xo
.Nm
.Cm destroy
.Op Fl f
.Ar pool
.Xc
Destroys the given pool, freeing up any devices for other use.
This command tries to unmount any active datasets before destroying the pool.
.Bl -tag -width Ds
.It Fl f
Forces any active datasets contained within the pool to be unmounted.
.El
.It Xo
.Nm
.Cm detach
.Ar pool device
.Xc
Detaches
.Ar device
from a mirror.
The operation is refused if there are no other valid replicas of the data.
If
.Ar device
may be re-added to the pool later on, then consider the
.Sy zpool offline
command instead.
.It Xo
.Nm
.Cm events
.Op Fl vHf Oo Ar pool Oc | Fl c
.Xc
Lists all recent events generated by the ZFS kernel modules. These events
are consumed by the
.Xr zed 8
daemon and used to automate administrative tasks such as replacing a failed
device with a hot spare. For more information about the subclasses and event
payloads that can be generated, see the
.Xr zfs-events 5
man page.
.Bl -tag -width Ds
.It Fl c
Clear all previous events.
.It Fl f
Follow mode.
.It Fl H
Scripted mode. Do not display headers, and separate fields by a
single tab instead of arbitrary space.
.It Fl v
Print the entire payload for each event.
.El
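.Pp
For example, to follow the event stream and print the full payload of each
event as it is generated:
.Bd -literal
# zpool events -vf
.Ed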
.It Xo
.Nm
.Cm export
.Op Fl a
.Op Fl f
.Ar pool Ns ...
.Xc
Exports the given pools from the system.
All devices are marked as exported, but are still considered in use by other
subsystems.
The devices can be moved between systems
.Pq even those of different endianness
and imported as long as a sufficient number of devices are present.
.Pp
Before exporting the pool, all datasets within the pool are unmounted.
A pool cannot be exported if it has a shared spare that is currently being
used.
.Pp
For pools to be portable, you must give the
.Nm
command whole disks, not just partitions, so that ZFS can label the disks with
portable EFI labels.
Otherwise, disk drivers on platforms of different endianness will not recognize
the disks.
.Bl -tag -width Ds
.It Fl a
Exports all pools imported on the system.
.It Fl f
Forcefully unmount all datasets, using the
.Nm unmount Fl f
command.
.Pp
This command will forcefully export the pool even if it has a shared spare that
is currently being used.
This may lead to potential data corruption.
.El
.It Xo
.Nm
.Cm get
.Op Fl Hp
.Op Fl o Ar field Ns Oo , Ns Ar field Oc Ns ...
.Sy all Ns | Ns Ar property Ns Oo , Ns Ar property Oc Ns ...
.Oo Ar pool Oc Ns ...
.Xc
Retrieves the given list of properties
.Po
or all properties if
.Sy all
is used
.Pc
for the specified storage pool(s).
These properties are displayed with the following fields:
.Bd -literal
        name         Name of storage pool
        property     Property name
        value        Property value
        source       Property source, either 'default' or 'local'.
.Ed
.Pp
See the
.Sx Properties
section for more information on the available pool properties.
.Bl -tag -width Ds
.It Fl H
Scripted mode.
Do not display headers, and separate fields by a single tab instead of arbitrary
space.
.It Fl o Ar field
A comma-separated list of columns to display.
.Sy name Ns \&, Ns Sy property Ns \&, Ns Sy value Ns \&, Ns Sy source
is the default value.
.It Fl p
Display numbers in parsable (exact) values.
.El
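.Pp
For example, to print the capacity and total size of a pool in a
script-friendly form:
.Bd -literal
# zpool get -Hp -o name,value capacity,size pool
.Ed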
.It Xo
.Nm
.Cm history
.Op Fl il
.Oo Ar pool Oc Ns ...
.Xc
Displays the command history of the specified pool(s) or all pools if no pool is
specified.
.Bl -tag -width Ds
.It Fl i
Displays internally logged ZFS events in addition to user initiated events.
.It Fl l
Displays log records in long format, which in addition to standard format
includes the user name, the hostname, and the zone in which the operation was
performed.
.El
.It Xo
.Nm
.Cm import
.Op Fl D
.Op Fl d Ar dir Ns | Ns device
.Xc
Lists pools available to import.
If the
.Fl d
option is not specified, this command searches for devices in
.Pa /dev .
The
.Fl d
option can be specified multiple times, and all directories are searched.
If the device appears to be part of an exported pool, this command displays a
summary of the pool with the name of the pool, a numeric identifier, as well as
the vdev layout and current health of the device for each device or file.
Destroyed pools, pools that were previously destroyed with the
.Nm zpool Cm destroy
command, are not listed unless the
.Fl D
option is specified.
.Pp
The numeric identifier is unique, and can be used instead of the pool name when
multiple exported pools of the same name are available.
.Bl -tag -width Ds
.It Fl c Ar cachefile
Reads configuration from the given
.Ar cachefile
that was created with the
.Sy cachefile
pool property.
This
.Ar cachefile
is used instead of searching for devices.
.It Fl d Ar dir Ns | Ns Ar device
Uses
.Ar device
or searches for devices or files in
.Ar dir .
The
.Fl d
option can be specified multiple times.
.It Fl D
Lists destroyed pools only.
.El
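.Pp
For example, to list importable pools whose devices are visible in a
specific directory (the path is illustrative):
.Bd -literal
# zpool import -d /dev/disk/by-id
.Ed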
1359.It Xo
1360.Nm
1361.Cm import
1362.Fl a
b5256303 1363.Op Fl DflmN
cda0317e 1364.Op Fl F Oo Fl n Oc Oo Fl T Oc Oo Fl X Oc
522db292 1365.Op Fl c Ar cachefile Ns | Ns Fl d Ar dir Ns | Ns device
cda0317e
GM
1366.Op Fl o Ar mntopts
1367.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
1368.Op Fl R Ar root
1369.Op Fl s
1370.Xc
1371Imports all pools found in the search directories.
1372Identical to the previous command, except that all pools with a sufficient
1373number of devices available are imported.
1374Destroyed pools, pools that were previously destroyed with the
1375.Nm zpool Cm destroy
1376command, will not be imported unless the
1377.Fl D
1378option is specified.
1379.Bl -tag -width Ds
1380.It Fl a
6b4e21c6 1381Searches for and imports all pools found.
cda0317e
GM
1382.It Fl c Ar cachefile
1383Reads configuration from the given
1384.Ar cachefile
1385that was created with the
1386.Sy cachefile
1387pool property.
1388This
1389.Ar cachefile
1390is used instead of searching for devices.
522db292
CC
1391.It Fl d Ar dir Ns | Ns Ar device
1392Uses
1393.Ar device
1394or searches for devices or files in
cda0317e
GM
1395.Ar dir .
1396The
1397.Fl d
1398option can be specified multiple times.
1399This option is incompatible with the
1400.Fl c
1401option.
1402.It Fl D
1403Imports destroyed pools only.
1404The
1405.Fl f
1406option is also required.
1407.It Fl f
1408Forces import, even if the pool appears to be potentially active.
1409.It Fl F
1410Recovery mode for a non-importable pool.
1411Attempt to return the pool to an importable state by discarding the last few
1412transactions.
1413Not all damaged pools can be recovered by using this option.
1414If successful, the data from the discarded transactions is irretrievably lost.
1415This option is ignored if the pool is importable or already imported.
b5256303
TC
1416.It Fl l
1417Indicates that this command will request encryption keys for all encrypted
1418datasets it attempts to mount as it is bringing the pool online. Note that if
1419any datasets have a
1420.Sy keylocation
1421of
1422.Sy prompt
1423this command will block waiting for the keys to be entered. Without this flag
1424encrypted datasets will be left unavailable until the keys are loaded.
cda0317e 1425.It Fl m
7f9d9946 1426Allows a pool to import when there is a missing log device.
cda0317e
GM
1427Recent transactions can be lost because the log device will be discarded.
1428.It Fl n
1429Used with the
1430.Fl F
1431recovery option.
1432Determines whether a non-importable pool can be made importable again, but does
1433not actually perform the pool recovery.
1434For more details about pool recovery mode, see the
1435.Fl F
1436option, above.
1437.It Fl N
7f9d9946 1438Import the pool without mounting any file systems.
cda0317e
GM
1439.It Fl o Ar mntopts
1440Comma-separated list of mount options to use when mounting datasets within the
1441pool.
1442See
1443.Xr zfs 8
1444for a description of dataset properties and mount options.
1445.It Fl o Ar property Ns = Ns Ar value
1446Sets the specified property on the imported pool.
1447See the
1448.Sx Properties
1449section for more information on the available pool properties.
1450.It Fl R Ar root
1451Sets the
1452.Sy cachefile
1453property to
1454.Sy none
1455and the
1456.Sy altroot
1457property to
1458.Ar root .
d2734cce
SD
1459.It Fl -rewind-to-checkpoint
1460Rewinds pool to the checkpointed state.
1461Once the pool is imported with this flag there is no way to undo the rewind.
1462All changes and data that were written after the checkpoint are lost!
1463The only exception is when the
1464.Sy readonly
1465mounting option is enabled.
1466In this case, the checkpointed state of the pool is opened and an
1467administrator can see how the pool would look like if they were
1468to fully rewind.
cda0317e
GM
1469.It Fl s
1470Scan using the default search path, the libblkid cache will not be
1471consulted. A custom search path may be specified by setting the
1472ZPOOL_IMPORT_PATH environment variable.
1473.It Fl X
1474Used with the
1475.Fl F
1476recovery option. Determines whether extreme
1477measures to find a valid txg should take place. This allows the pool to
1478be rolled back to a txg which is no longer guaranteed to be consistent.
1479Pools imported at an inconsistent txg may contain uncorrectable
1480checksum errors. For more details about pool recovery mode, see the
1481.Fl F
1482option, above. WARNING: This option can be extremely hazardous to the
1483health of your pool and should only be used as a last resort.
1484.It Fl T
1485Specify the txg to use for rollback. Implies
1486.Fl FX .
1487For more details
1488about pool recovery mode, see the
1489.Fl X
1490option, above. WARNING: This option can be extremely hazardous to the
1491health of your pool and should only be used as a last resort.
1492.El
1493.It Xo
1494.Nm
1495.Cm import
b5256303 1496.Op Fl Dflm
cda0317e 1497.Op Fl F Oo Fl n Oc Oo Fl t Oc Oo Fl T Oc Oo Fl X Oc
522db292 1498.Op Fl c Ar cachefile Ns | Ns Fl d Ar dir Ns | Ns device
cda0317e
GM
1499.Op Fl o Ar mntopts
1500.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
1501.Op Fl R Ar root
1502.Op Fl s
1503.Ar pool Ns | Ns Ar id
1504.Op Ar newpool
1505.Xc
1506Imports a specific pool.
1507A pool can be identified by its name or the numeric identifier.
1508If
1509.Ar newpool
1510is specified, the pool is imported using the name
1511.Ar newpool .
1512Otherwise, it is imported with the same name as its exported name.
1513.Pp
1514If a device is removed from a system without running
1515.Nm zpool Cm export
1516first, the device appears as potentially active.
1517It cannot be determined if this was a failed export, or whether the device is
1518really in use from another host.
1519To import a pool in this state, the
1520.Fl f
1521option is required.
1522.Bl -tag -width Ds
1523.It Fl c Ar cachefile
1524Reads configuration from the given
1525.Ar cachefile
1526that was created with the
1527.Sy cachefile
1528pool property.
1529This
1530.Ar cachefile
1531is used instead of searching for devices.
522db292
CC
1532.It Fl d Ar dir Ns | Ns Ar device
1533Uses
1534.Ar device
1535or searches for devices or files in
cda0317e
GM
1536.Ar dir .
1537The
1538.Fl d
1539option can be specified multiple times.
1540This option is incompatible with the
1541.Fl c
1542option.
1543.It Fl D
1544Imports destroyed pool.
1545The
1546.Fl f
1547option is also required.
1548.It Fl f
058ac9ba 1549Forces import, even if the pool appears to be potentially active.
cda0317e
GM
1550.It Fl F
1551Recovery mode for a non-importable pool.
1552Attempt to return the pool to an importable state by discarding the last few
1553transactions.
1554Not all damaged pools can be recovered by using this option.
1555If successful, the data from the discarded transactions is irretrievably lost.
1556This option is ignored if the pool is importable or already imported.
b5256303
TC
1557.It Fl l
1558Indicates that this command will request encryption keys for all encrypted
1559datasets it attempts to mount as it is bringing the pool online. Note that if
1560any datasets have a
1561.Sy keylocation
1562of
1563.Sy prompt
1564this command will block waiting for the keys to be entered. Without this flag
1565encrypted datasets will be left unavailable until the keys are loaded.
cda0317e 1566.It Fl m
7f9d9946 1567Allows a pool to import when there is a missing log device.
cda0317e
GM
1568Recent transactions can be lost because the log device will be discarded.
1569.It Fl n
1570Used with the
1571.Fl F
1572recovery option.
1573Determines whether a non-importable pool can be made importable again, but does
1574not actually perform the pool recovery.
1575For more details about pool recovery mode, see the
1576.Fl F
1577option, above.
1578.It Fl o Ar mntopts
1579Comma-separated list of mount options to use when mounting datasets within the
1580pool.
1581See
1582.Xr zfs 8
1583for a description of dataset properties and mount options.
1584.It Fl o Ar property Ns = Ns Ar value
1585Sets the specified property on the imported pool.
1586See the
1587.Sx Properties
1588section for more information on the available pool properties.
1589.It Fl R Ar root
1590Sets the
1591.Sy cachefile
1592property to
1593.Sy none
1594and the
1595.Sy altroot
1596property to
1597.Ar root .
.It Fl s
Scan using the default search path; the libblkid cache will not be consulted.
A custom search path may be specified by setting the
.Sy ZPOOL_IMPORT_PATH
environment variable.
.It Fl X
Used with the
.Fl F
recovery option.
Determines whether extreme measures should be taken to find a valid txg.
This allows the pool to be rolled back to a txg which is no longer guaranteed
to be consistent.
Pools imported at an inconsistent txg may contain uncorrectable checksum
errors.
For more details about pool recovery mode, see the
.Fl F
option, above.
WARNING: This option can be extremely hazardous to the health of your pool and
should only be used as a last resort.
.It Fl T
Specify the txg to use for rollback.
Implies
.Fl FX .
For more details about pool recovery mode, see the
.Fl X
option, above.
WARNING: This option can be extremely hazardous to the health of your pool and
should only be used as a last resort.
.It Fl t
Used with
.Sy newpool .
Specifies that
.Sy newpool
is temporary.
Temporary pool names last until export, ensuring that the original pool name
is used in all label updates and is therefore retained upon export.
Will also set
.Sy cachefile Ns = Ns Sy none
when not explicitly specified.
.El
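.Pp
For example, a pool could be imported by searching the stable
.Pa /dev/disk/by-id
names, and a damaged pool could be forcibly imported in recovery mode
.Pq pool name and path are illustrative :
.Bd -literal
# zpool import -d /dev/disk/by-id tank
# zpool import -fF tank
.Ed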
.It Xo
.Nm
.Cm initialize
.Op Fl c | Fl s
.Ar pool
.Op Ar device Ns ...
.Xc
Begins initializing by writing to all unallocated regions on the specified
devices, or all eligible devices in the pool if no individual devices are
specified.
Only leaf data or log devices may be initialized.
.Bl -tag -width Ds
.It Fl c, -cancel
Cancel initializing on the specified devices, or all eligible devices if none
are specified.
If one or more target devices are invalid or are not currently being
initialized, the command will fail and no cancellation will occur on any
device.
.It Fl s, -suspend
Suspend initializing on the specified devices, or all eligible devices if none
are specified.
If one or more target devices are invalid or are not currently being
initialized, the command will fail and no suspension will occur on any device.
Initializing can then be resumed by running
.Nm zpool Cm initialize
with no flags on the relevant target devices.
.El
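.Pp
For example, initialization of a single disk can be started, suspended, and
later resumed
.Pq pool and device names are illustrative :
.Bd -literal
# zpool initialize tank sdb
# zpool initialize -s tank sdb
# zpool initialize tank sdb
.Ed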
.It Xo
.Nm
.Cm iostat
.Op Oo Oo Fl c Ar SCRIPT Oc Oo Fl lq Oc Oc Ns | Ns Fl rw
.Op Fl T Sy u Ns | Ns Sy d
.Op Fl ghHLnpPvy
.Oo Oo Ar pool Ns ... Oc Ns | Ns Oo Ar pool vdev Ns ... Oc Ns | Ns Oo Ar vdev Ns ... Oc Oc
.Op Ar interval Op Ar count
.Xc
Displays logical I/O statistics for the given pools/vdevs.
Physical I/Os may be observed via
.Xr iostat 1 .
If writes are located nearby, they may be merged into a single larger
operation.
Additional I/O may be generated depending on the level of vdev redundancy.
To filter output, you may pass in a list of pools, a pool and list of vdevs in
that pool, or a list of any vdevs from any pool.
If no items are specified, statistics for every pool in the system are shown.
When given an
.Ar interval ,
the statistics are printed every
.Ar interval
seconds until ^C is pressed.
If the
.Fl n
flag is specified, the headers are displayed only once, otherwise they are
displayed periodically.
If
.Ar count
is specified, the command exits after
.Ar count
reports are printed.
The first report printed is always the statistics since boot regardless of
whether
.Ar interval
and
.Ar count
are passed.
However, this behavior can be suppressed with the
.Fl y
flag.
Also note that the units of
.Sy K ,
.Sy M ,
.Sy G ...
that are printed in the report are in base 1024.
To get the raw values, use the
.Fl p
flag.
.Bl -tag -width Ds
.It Fl c Op Ar SCRIPT1 Ns Oo , Ns Ar SCRIPT2 Oc Ns ...
Run a script (or scripts) on each vdev and include the output as a new column
in the
.Nm zpool Cm iostat
output.
Users can run any script found in their
.Pa ~/.zpool.d
directory or from the system
.Pa /etc/zfs/zpool.d
directory.
Script names containing the slash (/) character are not allowed.
The default search path can be overridden by setting the
.Sy ZPOOL_SCRIPTS_PATH
environment variable.
A privileged user can run
.Fl c
if they have the
.Sy ZPOOL_SCRIPTS_AS_ROOT
environment variable set.
If a script requires the use of a privileged command, like
.Xr smartctl 8 ,
then it's recommended you allow the user access to it in
.Pa /etc/sudoers
or add the user to the
.Pa /etc/sudoers.d/zfs
file.
.Pp
If
.Fl c
is passed without a script name, it prints a list of all scripts.
.Fl c
also sets verbose mode
.No \&( Ns Fl v Ns No \&).
.Pp
Script output should be in the form of "name=value".
The column name is set to "name" and the value is set to "value".
Multiple lines can be used to output multiple columns.
The first line of output not in the "name=value" format is displayed without a
column title, and no more output after that is displayed.
This can be useful for printing error messages.
Blank or NULL values are printed as a '-' to make output awk-able.
.Pp
The following environment variables are set before running each script:
.Bl -tag -width "VDEV_ENC_SYSFS_PATH"
.It Sy VDEV_PATH
Full path to the vdev.
.It Sy VDEV_UPATH
Underlying path to the vdev (/dev/sd*).
For use with device mapper, multipath, or partitioned vdevs.
.It Sy VDEV_ENC_SYSFS_PATH
The sysfs path to the enclosure for the vdev (if any).
.El
.It Fl T Sy u Ns | Ns Sy d
Display a time stamp.
Specify
.Sy u
for a printed representation of the internal representation of time.
See
.Xr time 2 .
Specify
.Sy d
for standard date format.
See
.Xr date 1 .
.It Fl g
Display vdev GUIDs instead of the normal device names.
These GUIDs can be used in place of device names for the zpool
detach/offline/remove/replace commands.
.It Fl H
Scripted mode.
Do not display headers, and separate fields by a single tab instead of
arbitrary space.
.It Fl L
Display real paths for vdevs resolving all symbolic links.
This can be used to look up the current block device name regardless of the
.Pa /dev/disk/
path used to open it.
.It Fl n
Print headers only once, rather than at every interval.
.It Fl p
Display numbers in parsable (exact) values.
Time values are in nanoseconds.
.It Fl P
Display full paths for vdevs instead of only the last component of the path.
This can be used in conjunction with the
.Fl L
flag.
.It Fl r
Print request size histograms for the leaf ZIOs.
This includes histograms of individual ZIOs
.Pq Ar ind
and aggregate ZIOs
.Pq Ar agg .
These stats can be useful for seeing how well the ZFS IO aggregator is
working.
Do not confuse these request size stats with the block layer requests; it's
possible ZIOs can be broken up before being sent to the block device.
.It Fl v
Verbose statistics.
Reports usage statistics for individual vdevs within the pool, in addition to
the pool-wide statistics.
.It Fl y
Omit statistics since boot.
Normally the first line of output reports the statistics since boot.
This option suppresses that first line of output; when an
.Ar interval
is given, the first report instead covers the first interval.
.It Fl w
Display latency histograms:
.Pp
.Ar total_wait :
Total IO time (queuing + disk IO time).
.Ar disk_wait :
Disk IO time (time reading/writing the disk).
.Ar syncq_wait :
Amount of time IO spent in synchronous priority queues.
Does not include disk time.
.Ar asyncq_wait :
Amount of time IO spent in asynchronous priority queues.
Does not include disk time.
.Ar scrub :
Amount of time IO spent in scrub queue.
Does not include disk time.
.It Fl l
Include average latency statistics:
.Pp
.Ar total_wait :
Average total IO time (queuing + disk IO time).
.Ar disk_wait :
Average disk IO time (time reading/writing the disk).
.Ar syncq_wait :
Average amount of time IO spent in synchronous priority queues.
Does not include disk time.
.Ar asyncq_wait :
Average amount of time IO spent in asynchronous priority queues.
Does not include disk time.
.Ar scrub :
Average queuing time in scrub queue.
Does not include disk time.
.It Fl q
Include active queue statistics.
Each priority queue has both pending
.Pq Ar pend
and active
.Pq Ar activ
IOs.
Pending IOs are waiting to be issued to the disk, and active IOs have been
issued to disk and are waiting for completion.
These stats are broken out by priority queue:
.Pp
.Ar syncq_read/write :
Current number of entries in synchronous priority queues.
.Ar asyncq_read/write :
Current number of entries in asynchronous priority queues.
.Ar scrubq_read :
Current number of entries in scrub queue.
.Pp
All queue statistics are instantaneous measurements of the number of entries
in the queues.
If you specify an interval, the measurements will be sampled from the end of
the interval.
.El
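.Pp
As a sketch of how the
.Fl c
machinery fits together, the following hypothetical script, saved as
.Pa ~/.zpool.d/upath ,
would emit one extra column using the
.Sy VDEV_UPATH
variable described above:
.Bd -literal
#!/bin/sh
# Print the underlying device path as a column named "upath".
echo "upath=$VDEV_UPATH"
.Ed
.Pp
It could then be invoked as
.Nm zpool Cm iostat Fl c Ar upath .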
.It Xo
.Nm
.Cm labelclear
.Op Fl f
.Ar device
.Xc
Removes ZFS label information from the specified
.Ar device .
The
.Ar device
must not be part of an active pool configuration.
.Bl -tag -width Ds
.It Fl f
Treat exported or foreign devices as inactive.
.El
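.Pp
For example, to clear the label from a disk that once belonged to an exported
pool
.Pq device name is illustrative :
.Bd -literal
# zpool labelclear -f /dev/sdc
.Ed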
.It Xo
.Nm
.Cm list
.Op Fl HgLpPv
.Op Fl o Ar property Ns Oo , Ns Ar property Oc Ns ...
.Op Fl T Sy u Ns | Ns Sy d
.Oo Ar pool Oc Ns ...
.Op Ar interval Op Ar count
.Xc
Lists the given pools along with a health status and space usage.
If no
.Ar pool Ns s
are specified, all pools in the system are listed.
When given an
.Ar interval ,
the information is printed every
.Ar interval
seconds until ^C is pressed.
If
.Ar count
is specified, the command exits after
.Ar count
reports are printed.
.Bl -tag -width Ds
.It Fl g
Display vdev GUIDs instead of the normal device names.
These GUIDs can be used in place of device names for the zpool
detach/offline/remove/replace commands.
.It Fl H
Scripted mode.
Do not display headers, and separate fields by a single tab instead of
arbitrary space.
.It Fl o Ar property
Comma-separated list of properties to display.
See the
.Sx Properties
section for a list of valid properties.
The default list is
.Cm name , size , allocated , free , checkpoint , expandsize , fragmentation ,
.Cm capacity , dedupratio , health , altroot .
.It Fl L
Display real paths for vdevs resolving all symbolic links.
This can be used to look up the current block device name regardless of the
.Pa /dev/disk/
path used to open it.
.It Fl p
Display numbers in parsable
.Pq exact
values.
.It Fl P
Display full paths for vdevs instead of only the last component of the path.
This can be used in conjunction with the
.Fl L
flag.
.It Fl T Sy u Ns | Ns Sy d
Display a time stamp.
Specify
.Sy u
for a printed representation of the internal representation of time.
See
.Xr time 2 .
Specify
.Sy d
for standard date format.
See
.Xr date 1 .
.It Fl v
Verbose statistics.
Reports usage statistics for individual vdevs within the pool, in addition to
the pool-wide statistics.
.El
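.Pp
For example, the
.Fl H
and
.Fl p
flags combine to produce tab-separated, exact output that is convenient for
scripts
.Pq pool name is illustrative :
.Bd -literal
# zpool list -Hp -o name,size,free tank
.Ed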
.It Xo
.Nm
.Cm offline
.Op Fl f
.Op Fl t
.Ar pool Ar device Ns ...
.Xc
Takes the specified physical device offline.
While the
.Ar device
is offline, no attempt is made to read or write to the device.
This command is not applicable to spares.
.Bl -tag -width Ds
.It Fl f
Force fault.
Instead of offlining the disk, put it into a faulted state.
The fault will persist across imports unless the
.Fl t
flag was specified.
.It Fl t
Temporary.
Upon reboot, the specified physical device reverts to its previous state.
.El
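.Pp
For example, to take a disk offline only until the next reboot
.Pq pool and device names are illustrative :
.Bd -literal
# zpool offline -t tank sdb
.Ed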
.It Xo
.Nm
.Cm online
.Op Fl e
.Ar pool Ar device Ns ...
.Xc
Brings the specified physical device online.
This command is not applicable to spares.
.Bl -tag -width Ds
.It Fl e
Expand the device to use all available space.
If the device is part of a mirror or raidz then all devices must be expanded
before the new space will become available to the pool.
.El
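.Pp
For example, after every disk in a mirror has been replaced with a larger one,
the new space might be claimed as follows
.Pq pool and device names are illustrative :
.Bd -literal
# zpool online -e tank sda sdb
.Ed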
.It Xo
.Nm
.Cm reguid
.Ar pool
.Xc
Generates a new unique identifier for the pool.
You must ensure that all devices in this pool are online and healthy before
performing this action.
.It Xo
.Nm
.Cm reopen
.Op Fl n
.Ar pool
.Xc
Reopen all the vdevs associated with the pool.
.Bl -tag -width Ds
.It Fl n
Do not restart an in-progress scrub operation.
This is not recommended and can result in partially resilvered devices unless
a second scrub is performed.
.El
.It Xo
.Nm
.Cm remove
.Op Fl np
.Ar pool Ar device Ns ...
.Xc
Removes the specified device from the pool.
This command supports removing hot spare, cache, log, and both mirrored and
non-redundant primary top-level vdevs, including dedup and special vdevs.
When the primary pool storage includes a top-level raidz vdev, only hot spare,
cache, and log devices can be removed.
.Pp
Removing a top-level vdev reduces the total amount of space in the storage
pool.
The specified device will be evacuated by copying all allocated space from it
to the other devices in the pool.
In this case, the
.Nm zpool Cm remove
command initiates the removal and returns, while the evacuation continues in
the background.
The removal progress can be monitored with
.Nm zpool Cm status .
If an IO error is encountered during the removal process, it will be
cancelled.
The
.Sy device_removal
feature flag must be enabled to remove a top-level vdev, see
.Xr zpool-features 5 .
.Pp
A mirrored top-level device (log or data) can be removed by specifying the
name of the top-level mirror.
Non-log devices or data devices that are part of a mirrored configuration can
be removed using the
.Nm zpool Cm detach
command.
.Bl -tag -width Ds
.It Fl n
Do not actually perform the removal ("no-op").
Instead, print the estimated amount of memory that will be used by the
mapping table after the removal completes.
This is nonzero only for top-level vdevs.
.It Fl p
Used in conjunction with the
.Fl n
flag, displays numbers as parsable (exact) values.
.El
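.Pp
For example, the expected memory cost of removing a top-level data vdev can be
estimated before committing to the removal
.Pq pool and vdev names are illustrative :
.Bd -literal
# zpool remove -np tank mirror-1
.Ed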
.It Xo
.Nm
.Cm remove
.Fl s
.Ar pool
.Xc
Stops and cancels an in-progress removal of a top-level vdev.
.It Xo
.Nm
.Cm replace
.Op Fl f
.Op Fl o Ar property Ns = Ns Ar value
.Ar pool Ar old_device Op Ar new_device
.Xc
Replaces
.Ar old_device
with
.Ar new_device .
This is equivalent to attaching
.Ar new_device ,
waiting for it to resilver, and then detaching
.Ar old_device .
.Pp
The size of
.Ar new_device
must be greater than or equal to the minimum size of all the devices in a
mirror or raidz configuration.
.Pp
.Ar new_device
is required if the pool is not redundant.
If
.Ar new_device
is not specified, it defaults to
.Ar old_device .
This form of replacement is useful after an existing disk has failed and has
been physically replaced.
In this case, the new disk may have the same
.Pa /dev
path as the old device, even though it is actually a different disk.
ZFS recognizes this.
.Bl -tag -width Ds
.It Fl f
Forces use of
.Ar new_device ,
even if it appears to be in use.
Not all devices can be overridden in this manner.
.It Fl o Ar property Ns = Ns Ar value
Sets the given pool properties.
See the
.Sx Properties
section for a list of valid properties that can be set.
The only property supported at the moment is
.Sy ashift .
.El
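.Pp
For example, to replace a failed disk with one in a different slot while
requesting 4096-byte sectors on the new device
.Pq pool and device names are illustrative :
.Bd -literal
# zpool replace -o ashift=12 tank sda sde
.Ed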
.It Xo
.Nm
.Cm scrub
.Op Fl s | Fl p
.Ar pool Ns ...
.Xc
Begins a scrub or resumes a paused scrub.
The scrub examines all data in the specified pools to verify that it checksums
correctly.
For replicated
.Pq mirror or raidz
devices, ZFS automatically repairs any damage discovered during the scrub.
The
.Nm zpool Cm status
command reports the progress of the scrub and summarizes the results of the
scrub upon completion.
.Pp
Scrubbing and resilvering are very similar operations.
The difference is that resilvering only examines data that ZFS knows to be out
of date
.Po
for example, when attaching a new device to a mirror or replacing an existing
device
.Pc ,
whereas scrubbing examines all data to discover silent errors due to hardware
faults or disk failure.
.Pp
Because scrubbing and resilvering are I/O-intensive operations, ZFS only allows
one at a time.
If a scrub is paused, the
.Nm zpool Cm scrub
resumes it.
If a resilver is in progress, ZFS does not allow a scrub to be started until the
resilver completes.
.Bl -tag -width Ds
.It Fl s
Stop scrubbing.
.It Fl p
Pause scrubbing.
Scrub pause state and progress are periodically synced to disk.
If the system is restarted or the pool is exported during a paused scrub,
even after import, scrub will remain paused until it is resumed.
Once resumed the scrub will pick up from the place where it was last
checkpointed to disk.
To resume a paused scrub, issue
.Nm zpool Cm scrub
again.
.El
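.Pp
For example, a scrub can be paused and later resumed by issuing the command
again
.Pq pool name is illustrative :
.Bd -literal
# zpool scrub tank
# zpool scrub -p tank
# zpool scrub tank
.Ed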
.It Xo
.Nm
.Cm resilver
.Ar pool Ns ...
.Xc
Starts a resilver.
If an existing resilver is already running, it will be restarted from the
beginning.
Any drives that were scheduled for a deferred resilver will be added to the
new one.
.It Xo
.Nm
.Cm set
.Ar property Ns = Ns Ar value
.Ar pool
.Xc
Sets the given property on the specified pool.
See the
.Sx Properties
section for more information on what properties can be set and acceptable
values.
.It Xo
.Nm
.Cm split
.Op Fl gLlnP
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Op Fl R Ar root
.Ar pool newpool
.Op Ar device ...
.Xc
Splits devices off
.Ar pool
creating
.Ar newpool .
All vdevs in
.Ar pool
must be mirrors and the pool must not be in the process of resilvering.
At the time of the split,
.Ar newpool
will be a replica of
.Ar pool .
By default, the last device in each mirror is split from
.Ar pool
to create
.Ar newpool .
.Pp
The optional device specification causes the specified device(s) to be
included in the new
.Ar pool
and, should any devices remain unspecified, the last device in each mirror is
used, as it would be by default.
.Bl -tag -width Ds
.It Fl g
Display vdev GUIDs instead of the normal device names.
These GUIDs can be used in place of device names for the zpool
detach/offline/remove/replace commands.
.It Fl L
Display real paths for vdevs resolving all symbolic links.
This can be used to look up the current block device name regardless of the
.Pa /dev/disk/
path used to open it.
.It Fl l
Indicates that this command will request encryption keys for all encrypted
datasets it attempts to mount as it is bringing the new pool online.
Note that if any datasets have a
.Sy keylocation
of
.Sy prompt
this command will block waiting for the keys to be entered.
Without this flag, encrypted datasets will be left unavailable until the keys
are loaded.
.It Fl n
Do a dry run; do not actually perform the split.
Print out the expected configuration of
.Ar newpool .
.It Fl P
Display full paths for vdevs instead of only the last component of the path.
This can be used in conjunction with the
.Fl L
flag.
.It Fl o Ar property Ns = Ns Ar value
Sets the specified property for
.Ar newpool .
See the
.Sx Properties
section for more information on the available pool properties.
.It Fl R Ar root
Set
.Sy altroot
for
.Ar newpool
to
.Ar root
and automatically import it.
.El
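.Pp
For example, a mirrored pool might first be split as a dry run, then split for
real and imported under
.Pa /mnt
.Pq pool names and path are illustrative :
.Bd -literal
# zpool split -n tank newtank
# zpool split -R /mnt tank newtank
.Ed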
.It Xo
.Nm
.Cm status
.Op Fl c Op Ar SCRIPT1 Ns Oo , Ns Ar SCRIPT2 Oc Ns ...
.Op Fl DigLpPsvx
.Op Fl T Sy u Ns | Ns Sy d
.Oo Ar pool Oc Ns ...
.Op Ar interval Op Ar count
.Xc
Displays the detailed health status for the given pools.
If no
.Ar pool
is specified, then the status of each pool in the system is displayed.
For more information on pool and device health, see the
.Sx Device Failure and Recovery
section.
.Pp
If a scrub or resilver is in progress, this command reports the percentage done
and the estimated time to completion.
Both of these are only approximate, because the amount of data in the pool and
the other workloads on the system can change.
.Bl -tag -width Ds
.It Fl c Op Ar SCRIPT1 Ns Oo , Ns Ar SCRIPT2 Oc Ns ...
Run a script (or scripts) on each vdev and include the output as a new column
in the
.Nm zpool Cm status
output.
See the
.Fl c
option of
.Nm zpool Cm iostat
for complete details.
.It Fl i
Display vdev initialization status.
.It Fl g
Display vdev GUIDs instead of the normal device names.
These GUIDs can be used in place of device names for the zpool
detach/offline/remove/replace commands.
.It Fl L
Display real paths for vdevs resolving all symbolic links.
This can be used to look up the current block device name regardless of the
.Pa /dev/disk/
path used to open it.
.It Fl p
Display numbers in parsable (exact) values.
.It Fl P
Display full paths for vdevs instead of only the last component of the path.
This can be used in conjunction with the
.Fl L
flag.
.It Fl D
Display a histogram of deduplication statistics, showing the allocated
.Pq physically present on disk
and referenced
.Pq logically referenced in the pool
block counts and sizes by reference count.
.It Fl s
Display the number of leaf vdev slow IOs.
This is the number of IOs that didn't complete in
.Sy zio_slow_io_ms
milliseconds (default 30 seconds).
This does not necessarily mean the IOs failed to complete, just took an
unreasonably long amount of time.
This may indicate a problem with the underlying storage.
.It Fl T Sy u Ns | Ns Sy d
Display a time stamp.
Specify
.Sy u
for a printed representation of the internal representation of time.
See
.Xr time 2 .
Specify
.Sy d
for standard date format.
See
.Xr date 1 .
.It Fl v
Displays verbose data error information, printing out a complete list of all
data errors since the last complete pool scrub.
.It Fl x
Only display status for pools that are exhibiting errors or are otherwise
unavailable.
Warnings about pools not using the latest on-disk format will not be included.
.El
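.Pp
For example, a quick health check that only prints troubled pools and also
surfaces slow IO counts might be:
.Bd -literal
# zpool status -sx
.Ed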
.It Xo
.Nm
.Cm sync
.Op Ar pool ...
.Xc
This command forces all in-core dirty data to be written to the primary
pool storage and not the ZIL.
It will also update administrative information including quota reporting.
Without arguments,
.Nm zpool Cm sync
will sync all pools on the system.
Otherwise, it will sync only the specified pool(s).
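.Pp
For example, to flush dirty data for a single pool
.Pq pool name is illustrative :
.Bd -literal
# zpool sync tank
.Ed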
.It Xo
.Nm
.Cm upgrade
.Xc
Displays pools which do not have all supported features enabled and pools
formatted using a legacy ZFS version number.
These pools can continue to be used, but some features may not be available.
Use
.Nm zpool Cm upgrade Fl a
to enable all features on all pools.
.It Xo
.Nm
.Cm upgrade
.Fl v
.Xc
Displays legacy ZFS versions supported by the current software.
See
.Xr zpool-features 5
for a description of the feature flags supported by the current software.
.It Xo
.Nm
.Cm upgrade
.Op Fl V Ar version
.Fl a Ns | Ns Ar pool Ns ...
.Xc
Enables all supported features on the given pool.
Once this is done, the pool will no longer be accessible on systems that do
not support feature flags.
See
.Xr zpool-features 5
for details on compatibility with systems that support feature flags, but do
not support all features enabled on the pool.
.Bl -tag -width Ds
.It Fl a
Enables all supported features on all pools.
.It Fl V Ar version
Upgrade to the specified legacy version.
If the
.Fl V
flag is specified, no features will be enabled on the pool.
This option can only be used to increase the version number up to the last
supported legacy version number.
.El
.El
.Sh EXIT STATUS
The following exit values are returned:
.Bl -tag -width Ds
.It Sy 0
Successful completion.
.It Sy 1
An error occurred.
.It Sy 2
Invalid command line options were specified.
.El
.Sh EXAMPLES
.Bl -tag -width Ds
.It Sy Example 1 No Creating a RAID-Z Storage Pool
The following command creates a pool with a single raidz root vdev that
consists of six disks.
.Bd -literal
# zpool create tank raidz sda sdb sdc sdd sde sdf
.Ed
.It Sy Example 2 No Creating a Mirrored Storage Pool
The following command creates a pool with two mirrors, where each mirror
contains two disks.
.Bd -literal
# zpool create tank mirror sda sdb mirror sdc sdd
.Ed
.It Sy Example 3 No Creating a ZFS Storage Pool by Using Partitions
The following command creates an unmirrored pool using two disk partitions.
.Bd -literal
# zpool create tank sda1 sdb2
.Ed
.It Sy Example 4 No Creating a ZFS Storage Pool by Using Files
The following command creates an unmirrored pool using files.
While not recommended, a pool based on files can be useful for experimental
purposes.
.Bd -literal
# zpool create tank /path/to/file/a /path/to/file/b
.Ed
.It Sy Example 5 No Adding a Mirror to a ZFS Storage Pool
The following command adds two mirrored disks to the pool
.Em tank ,
assuming the pool is already made up of two-way mirrors.
The additional space is immediately available to any datasets within the pool.
.Bd -literal
# zpool add tank mirror sda sdb
.Ed
.It Sy Example 6 No Listing Available ZFS Storage Pools
The following command lists all available pools on the system.
In this case, the pool
.Em zion
is faulted due to a missing device.
The results from this command are similar to the following:
.Bd -literal
# zpool list
NAME    SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
rpool  19.9G  8.43G  11.4G         -    33%    42%  1.00x  ONLINE  -
tank   61.5G  20.0G  41.5G         -    48%    32%  1.00x  ONLINE  -
zion       -      -      -         -      -      -      -  FAULTED -
.Ed
.It Sy Example 7 No Destroying a ZFS Storage Pool
The following command destroys the pool
.Em tank
and any datasets contained within.
.Bd -literal
# zpool destroy -f tank
.Ed
.It Sy Example 8 No Exporting a ZFS Storage Pool
The following command exports the devices in pool
.Em tank
so that they can be relocated or later imported.
.Bd -literal
# zpool export tank
.Ed
.It Sy Example 9 No Importing a ZFS Storage Pool
The following command displays available pools, and then imports the pool
.Em tank
for use on the system.
The results from this command are similar to the following:
.Bd -literal
# zpool import
  pool: tank
    id: 15451357997522795478
 state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:

        tank        ONLINE
          mirror    ONLINE
            sda     ONLINE
            sdb     ONLINE

# zpool import tank
.Ed
.It Sy Example 10 No Upgrading All ZFS Storage Pools to the Current Version
The following command upgrades all ZFS Storage pools to the current version of
the software.
.Bd -literal
# zpool upgrade -a
This system is currently running ZFS version 2.
.Ed
.It Sy Example 11 No Managing Hot Spares
The following command creates a new pool with an available hot spare:
.Bd -literal
# zpool create tank mirror sda sdb spare sdc
.Ed
.Pp
If one of the disks were to fail, the pool would be reduced to the degraded
state.
The failed device can be replaced using the following command:
.Bd -literal
# zpool replace tank sda sdd
.Ed
.Pp
Once the data has been resilvered, the spare is automatically removed and is
made available for use should another device fail.
The hot spare can be permanently removed from the pool using the following
command:
.Bd -literal
# zpool remove tank sdc
.Ed
.It Sy Example 12 No Creating a ZFS Pool with Mirrored Separate Intent Logs
The following command creates a ZFS storage pool consisting of two two-way
mirrors and mirrored log devices:
.Bd -literal
# zpool create pool mirror sda sdb mirror sdc sdd log mirror \\
    sde sdf
.Ed
.It Sy Example 13 No Adding Cache Devices to a ZFS Pool
The following command adds two disks for use as cache devices to a ZFS storage
pool:
.Bd -literal
# zpool add pool cache sdc sdd
.Ed
.Pp
Once added, the cache devices gradually fill with content from main memory.
Depending on the size of your cache devices, it could take over an hour for
them to fill.
Capacity and reads can be monitored using the
.Cm iostat
option as follows:
.Bd -literal
# zpool iostat -v pool 5
.Ed
.It Sy Example 14 No Removing a Mirrored top-level (Log or Data) Device
The following commands remove the mirrored log device
.Sy mirror-2
and mirrored top-level data device
.Sy mirror-1 .
.Pp
Given this configuration:
.Bd -literal
  pool: tank
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            sda     ONLINE       0     0     0
            sdb     ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            sdc     ONLINE       0     0     0
            sdd     ONLINE       0     0     0
        logs
          mirror-2  ONLINE       0     0     0
            sde     ONLINE       0     0     0
            sdf     ONLINE       0     0     0
.Ed
.Pp
The command to remove the mirrored log
.Sy mirror-2
is:
.Bd -literal
# zpool remove tank mirror-2
.Ed
.Pp
The command to remove the mirrored data
.Sy mirror-1
is:
.Bd -literal
# zpool remove tank mirror-1
.Ed
.It Sy Example 15 No Displaying expanded space on a device
The following command displays the detailed information for the pool
.Em data .
This pool is composed of a single raidz vdev where one of its devices
increased its capacity by 10GB.
In this example, the pool will not be able to utilize this extra capacity until
all the devices under the raidz vdev have been expanded.
.Bd -literal
# zpool list -v data
NAME         SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
data        23.9G  14.6G  9.30G         -    48%    61%  1.00x  ONLINE  -
  raidz1    23.9G  14.6G  9.30G         -    48%
    sda         -      -      -         -      -
    sdb         -      -      -       10G      -
    sdc         -      -      -         -      -
.Ed
.It Sy Example 16 No Adding output columns
Additional columns can be added to the
.Nm zpool Cm status
and
.Nm zpool Cm iostat
output with the
.Fl c
option.
.Bd -literal
# zpool status -c vendor,model,size
   NAME     STATE  READ WRITE CKSUM vendor  model        size
   tank     ONLINE 0    0     0
   mirror-0 ONLINE 0    0     0
   U1       ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
   U10      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
   U11      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
   U12      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
   U13      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
   U14      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T

# zpool iostat -vc slaves
   capacity operations bandwidth
   pool       alloc free  read  write read  write slaves
   ---------- ----- ----- ----- ----- ----- ----- ---------
   tank       20.4G 7.23T 26    152   20.7M 21.6M
   mirror     20.4G 7.23T 26    152   20.7M 21.6M
   U1         -     -     0     31    1.46K 20.6M sdb sdff
   U10        -     -     0     1     3.77K 13.3K sdas sdgw
   U11        -     -     0     1     288K  13.3K sdat sdgx
   U12        -     -     0     1     78.4K 13.3K sdau sdgy
   U13        -     -     0     1     128K  13.3K sdav sdgz
   U14        -     -     0     1     63.2K 13.3K sdfk sdg
.Ed
.El
.Sh ENVIRONMENT VARIABLES
.Bl -tag -width "ZFS_ABORT"
.It Ev ZFS_ABORT
Cause
.Nm zpool
to dump core on exit for the purposes of running
.Sy ::findleaks .
.El
.Bl -tag -width "ZPOOL_IMPORT_PATH"
.It Ev ZPOOL_IMPORT_PATH
The search path for devices or files to use with the pool.
This is a colon-separated list of directories in which
.Nm zpool
looks for device nodes and files.
Similar to the
.Fl d
option in
.Nm zpool Cm import .
.El
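.Pp
For example, to search only the stable by-id names during an import
.Pq path is illustrative :
.Bd -literal
# ZPOOL_IMPORT_PATH=/dev/disk/by-id zpool import
.Ed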
.Bl -tag -width "ZPOOL_VDEV_NAME_GUID"
.It Ev ZPOOL_VDEV_NAME_GUID
Cause
.Nm zpool
subcommands to output vdev GUIDs by default.
This behavior is identical to the
.Nm zpool Cm status Fl g
command line option.
.El
.Bl -tag -width "ZPOOL_VDEV_NAME_FOLLOW_LINKS"
.It Ev ZPOOL_VDEV_NAME_FOLLOW_LINKS
Cause
.Nm zpool
subcommands to follow links for vdev names by default.
This behavior is identical to the
.Nm zpool Cm status Fl L
command line option.
.El
.Bl -tag -width "ZPOOL_VDEV_NAME_PATH"
.It Ev ZPOOL_VDEV_NAME_PATH
Cause
.Nm zpool
subcommands to output full vdev path names by default.
This behavior is identical to the
.Nm zpool Cm status Fl P
command line option.
.El
.Bl -tag -width "ZFS_VDEV_DEVID_OPT_OUT"
.It Ev ZFS_VDEV_DEVID_OPT_OUT
Older ZFS on Linux implementations had issues when attempting to display pool
config vdev names if a
.Sy devid
NVP value is present in the pool's config.
.Pp
For example, a pool that originated on an illumos platform would have a devid
value in the config and
.Nm zpool Cm status
would fail when listing the config.
This would also be true for future Linux-based pools.
.Pp
A pool can be stripped of any
.Sy devid
values on import or prevented from adding them on
.Nm zpool Cm create
or
.Nm zpool Cm add
by setting
.Sy ZFS_VDEV_DEVID_OPT_OUT .
.El
.Bl -tag -width "ZPOOL_SCRIPTS_AS_ROOT"
.It Ev ZPOOL_SCRIPTS_AS_ROOT
Allow a privileged user to run
.Nm zpool status/iostat
with the
.Fl c
option.
Normally, only unprivileged users are allowed to run
.Fl c .
.El
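.Pp
For example, a privileged user could run a script named
.Ar smart
.Pq assuming such a script exists in the search path
as follows:
.Bd -literal
# ZPOOL_SCRIPTS_AS_ROOT=1 zpool iostat -c smart
.Ed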
.Bl -tag -width "ZPOOL_SCRIPTS_PATH"
.It Ev ZPOOL_SCRIPTS_PATH
The search path for scripts when running
.Nm zpool status/iostat
with the
.Fl c
option.
This is a colon-separated list of directories and overrides the default
.Pa ~/.zpool.d
and
.Pa /etc/zfs/zpool.d
search paths.
.El
.Bl -tag -width "ZPOOL_SCRIPTS_ENABLED"
.It Ev ZPOOL_SCRIPTS_ENABLED
Allow a user to run
.Nm zpool status/iostat
with the
.Fl c
option.
If
.Sy ZPOOL_SCRIPTS_ENABLED
is not set, it is assumed that the user is allowed to run
.Nm zpool status/iostat -c .
.El
.Sh INTERFACE STABILITY
.Sy Evolving
.Sh SEE ALSO
.Xr zfs-events 5 ,
.Xr zfs-module-parameters 5 ,
.Xr zpool-features 5 ,
.Xr zed 8 ,
.Xr zfs 8