.\"
.\" CDDL HEADER START
.\"
.\" The contents of this file are subject to the terms of the
.\" Common Development and Distribution License (the "License").
.\" You may not use this file except in compliance with the License.
.\"
.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
.\" or http://www.opensolaris.org/os/licensing.
.\" See the License for the specific language governing permissions
.\" and limitations under the License.
.\"
.\" When distributing Covered Code, include this CDDL HEADER in each
.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
.\" If applicable, add the following below this CDDL HEADER, with the
.\" fields enclosed by brackets "[]" replaced with your own identifying
.\" information: Portions Copyright [yyyy] [name of copyright owner]
.\"
.\" CDDL HEADER END
.\"
.\"
.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
.\" Copyright (c) 2012, 2017 by Delphix. All rights reserved.
.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
.\" Copyright (c) 2017 Datto Inc.
.\" Copyright (c) 2017 George Melikov. All Rights Reserved.
.\" Copyright 2017 Nexenta Systems, Inc.
.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved.
.\"
.Dd March 9, 2018
.Dt ZPOOL 8 SMM
.Os Linux
.Sh NAME
.Nm zpool
.Nd configure ZFS storage pools
.Sh SYNOPSIS
.Nm
.Fl ?
.Nm
.Cm add
.Op Fl fgLnP
.Oo Fl o Ar property Ns = Ns Ar value Oc
.Ar pool vdev Ns ...
.Nm
.Cm attach
.Op Fl f
.Oo Fl o Ar property Ns = Ns Ar value Oc
.Ar pool device new_device
.Nm
.Cm clear
.Ar pool
.Op Ar device
.Nm
.Cm create
.Op Fl dfn
.Op Fl m Ar mountpoint
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Oo Fl o Ar feature@feature Ns = Ns Ar value Oc
.Oo Fl O Ar file-system-property Ns = Ns Ar value Oc Ns ...
.Op Fl R Ar root
.Ar pool vdev Ns ...
.Nm
.Cm destroy
.Op Fl f
.Ar pool
.Nm
.Cm detach
.Ar pool device
.Nm
.Cm events
.Op Fl vHf Oo Ar pool Oc | Fl c
.Nm
.Cm export
.Op Fl a
.Op Fl f
.Ar pool Ns ...
.Nm
.Cm get
.Op Fl Hp
.Op Fl o Ar field Ns Oo , Ns Ar field Oc Ns ...
.Sy all Ns | Ns Ar property Ns Oo , Ns Ar property Oc Ns ...
.Ar pool Ns ...
.Nm
.Cm history
.Op Fl il
.Oo Ar pool Oc Ns ...
.Nm
.Cm import
.Op Fl D
.Op Fl d Ar dir Ns | Ns device
.Nm
.Cm import
.Fl a
.Op Fl DflmN
.Op Fl F Oo Fl n Oc Oo Fl T Oc Oo Fl X Oc
.Op Fl c Ar cachefile Ns | Ns Fl d Ar dir Ns | Ns device
.Op Fl o Ar mntopts
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Op Fl R Ar root
.Nm
.Cm import
.Op Fl Dflm
.Op Fl F Oo Fl n Oc Oo Fl T Oc Oo Fl X Oc
.Op Fl c Ar cachefile Ns | Ns Fl d Ar dir Ns | Ns device
.Op Fl o Ar mntopts
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Op Fl R Ar root
.Op Fl s
.Ar pool Ns | Ns Ar id
.Op Ar newpool Oo Fl t Oc
.Nm
.Cm iostat
.Op Oo Oo Fl c Ar SCRIPT Oc Oo Fl lq Oc Oc Ns | Ns Fl rw
.Op Fl T Sy u Ns | Ns Sy d
.Op Fl ghHLpPvy
.Oo Oo Ar pool Ns ... Oc Ns | Ns Oo Ar pool vdev Ns ... Oc Ns | Ns Oo Ar vdev Ns ... Oc Oc
.Op Ar interval Op Ar count
.Nm
.Cm labelclear
.Op Fl f
.Ar device
.Nm
.Cm list
.Op Fl HgLpPv
.Op Fl o Ar property Ns Oo , Ns Ar property Oc Ns ...
.Op Fl T Sy u Ns | Ns Sy d
.Oo Ar pool Oc Ns ...
.Op Ar interval Op Ar count
.Nm
.Cm offline
.Op Fl f
.Op Fl t
.Ar pool Ar device Ns ...
.Nm
.Cm online
.Op Fl e
.Ar pool Ar device Ns ...
.Nm
.Cm reguid
.Ar pool
.Nm
.Cm reopen
.Op Fl n
.Ar pool
.Nm
.Cm remove
.Op Fl np
.Ar pool Ar device Ns ...
.Nm
.Cm remove
.Fl s
.Ar pool
.Nm
.Cm replace
.Op Fl f
.Oo Fl o Ar property Ns = Ns Ar value Oc
.Ar pool Ar device Op Ar new_device
.Nm
.Cm scrub
.Op Fl s | Fl p
.Ar pool Ns ...
.Nm
.Cm set
.Ar property Ns = Ns Ar value
.Ar pool
.Nm
.Cm split
.Op Fl gLlnP
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Op Fl R Ar root
.Ar pool newpool
.Oo Ar device Oc Ns ...
.Nm
.Cm status
.Oo Fl c Ar SCRIPT Oc
.Op Fl gLPvxD
.Op Fl T Sy u Ns | Ns Sy d
.Oo Ar pool Oc Ns ...
.Op Ar interval Op Ar count
.Nm
.Cm sync
.Oo Ar pool Oc Ns ...
.Nm
.Cm upgrade
.Nm
.Cm upgrade
.Fl v
.Nm
.Cm upgrade
.Op Fl V Ar version
.Fl a Ns | Ns Ar pool Ns ...
.Sh DESCRIPTION
The
.Nm
command configures ZFS storage pools.
A storage pool is a collection of devices that provides physical storage and
data replication for ZFS datasets.
All datasets within a storage pool share the same space.
See
.Xr zfs 8
for information on managing datasets.
.Ss Virtual Devices (vdevs)
A "virtual device" describes a single device or a collection of devices
organized according to certain performance and fault characteristics.
The following virtual devices are supported:
.Bl -tag -width Ds
.It Sy disk
A block device, typically located under
.Pa /dev .
ZFS can use individual slices or partitions, though the recommended mode of
operation is to use whole disks.
A disk can be specified by a full path, or it can be a shorthand name
.Po the relative portion of the path under
.Pa /dev
.Pc .
A whole disk can be specified by omitting the slice or partition designation.
For example,
.Pa sda
is equivalent to
.Pa /dev/sda .
When given a whole disk, ZFS automatically labels the disk, if necessary.
.It Sy file
A regular file.
The use of files as a backing store is strongly discouraged.
It is designed primarily for experimental purposes, as the fault tolerance of a
file is only as good as the file system of which it is a part.
A file must be specified by a full path.
.It Sy mirror
A mirror of two or more devices.
Data is replicated in an identical fashion across all components of a mirror.
A mirror with N disks of size X can hold X bytes and can withstand (N-1)
devices failing before data integrity is compromised.
.It Sy raidz , raidz1 , raidz2 , raidz3
A variation on RAID-5 that allows for better distribution of parity and
eliminates the RAID-5
.Qq write hole
.Pq in which data and parity become inconsistent after a power loss .
Data and parity is striped across all disks within a raidz group.
.Pp
A raidz group can have single-, double-, or triple-parity, meaning that the
raidz group can sustain one, two, or three failures, respectively, without
losing any data.
The
.Sy raidz1
vdev type specifies a single-parity raidz group; the
.Sy raidz2
vdev type specifies a double-parity raidz group; and the
.Sy raidz3
vdev type specifies a triple-parity raidz group.
The
.Sy raidz
vdev type is an alias for
.Sy raidz1 .
.Pp
A raidz group with N disks of size X with P parity disks can hold approximately
(N-P)*X bytes and can withstand P device(s) failing before data integrity is
compromised.
The minimum number of devices in a raidz group is one more than the number of
parity disks.
The recommended number is between 3 and 9 to help increase performance.
.It Sy spare
A special pseudo-vdev which keeps track of available hot spares for a pool.
For more information, see the
.Sx Hot Spares
section.
.It Sy log
A separate intent log device.
If more than one log device is specified, then writes are load-balanced between
devices.
Log devices can be mirrored.
However, raidz vdev types are not supported for the intent log.
For more information, see the
.Sx Intent Log
section.
.It Sy cache
A device used to cache storage pool data.
A cache device cannot be configured as a mirror or raidz group.
For more information, see the
.Sx Cache Devices
section.
.El
.Pp
Virtual devices cannot be nested, so a mirror or raidz virtual device can only
contain files or disks.
Mirrors of mirrors
.Pq or other combinations
are not allowed.
.Pp
A pool can have any number of virtual devices at the top of the configuration
.Po known as
.Qq root vdevs
.Pc .
Data is dynamically distributed across all top-level devices to balance data
among devices.
As new virtual devices are added, ZFS automatically places data on the newly
available devices.
.Pp
Virtual devices are specified one at a time on the command line, separated by
whitespace.
The keywords
.Sy mirror
and
.Sy raidz
are used to distinguish where a group ends and another begins.
For example, the following creates two root vdevs, each a mirror of two disks:
.Bd -literal
# zpool create mypool mirror sda sdb mirror sdc sdd
.Ed
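.Pp
The same grouping applies to raidz vdevs.
As a sketch
.Pq using hypothetical disk names ,
the following creates a pool backed by a single double-parity raidz group of
four disks:
.Bd -literal
# zpool create mypool raidz2 sda sdb sdc sdd
.Ed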
.Ss Device Failure and Recovery
ZFS supports a rich set of mechanisms for handling device failure and data
corruption.
All metadata and data is checksummed, and ZFS automatically repairs bad data
from a good copy when corruption is detected.
.Pp
In order to take advantage of these features, a pool must make use of some form
of redundancy, using either mirrored or raidz groups.
While ZFS supports running in a non-redundant configuration, where each root
vdev is simply a disk or file, this is strongly discouraged.
A single case of bit corruption can render some or all of your data unavailable.
.Pp
A pool's health status is described by one of three states: online, degraded,
or faulted.
An online pool has all devices operating normally.
A degraded pool is one in which one or more devices have failed, but the data is
still available due to a redundant configuration.
A faulted pool has corrupted metadata, or one or more faulted devices, and
insufficient replicas to continue functioning.
.Pp
The health of the top-level vdev, such as a mirror or raidz device, is
potentially impacted by the state of its associated vdevs, or component
devices.
A top-level vdev or component device is in one of the following states:
.Bl -tag -width "DEGRADED"
.It Sy DEGRADED
One or more top-level vdevs is in the degraded state because one or more
component devices are offline.
Sufficient replicas exist to continue functioning.
.Pp
One or more component devices is in the degraded or faulted state, but
sufficient replicas exist to continue functioning.
The underlying conditions are as follows:
.Bl -bullet
.It
The number of checksum errors exceeds acceptable levels and the device is
degraded as an indication that something may be wrong.
ZFS continues to use the device as necessary.
.It
The number of I/O errors exceeds acceptable levels.
The device could not be marked as faulted because there are insufficient
replicas to continue functioning.
.El
.It Sy FAULTED
One or more top-level vdevs is in the faulted state because one or more
component devices are offline.
Insufficient replicas exist to continue functioning.
.Pp
One or more component devices is in the faulted state, and insufficient
replicas exist to continue functioning.
The underlying conditions are as follows:
.Bl -bullet
.It
The device could be opened, but the contents did not match expected values.
.It
The number of I/O errors exceeds acceptable levels and the device is faulted to
prevent further use of the device.
.El
.It Sy OFFLINE
The device was explicitly taken offline by the
.Nm zpool Cm offline
command.
.It Sy ONLINE
The device is online and functioning.
.It Sy REMOVED
The device was physically removed while the system was running.
Device removal detection is hardware-dependent and may not be supported on all
platforms.
.It Sy UNAVAIL
The device could not be opened.
If a pool is imported when a device was unavailable, then the device will be
identified by a unique identifier instead of its path since the path was never
correct in the first place.
.El
.Pp
If a device is removed and later re-attached to the system, ZFS attempts
to put the device online automatically.
Device attach detection is hardware-dependent and might not be supported on all
platforms.
.Ss Hot Spares
ZFS allows devices to be associated with pools as
.Qq hot spares .
These devices are not actively used in the pool, but when an active device
fails, it is automatically replaced by a hot spare.
To create a pool with hot spares, specify a
.Sy spare
vdev with any number of devices.
For example,
.Bd -literal
# zpool create pool mirror sda sdb spare sdc sdd
.Ed
.Pp
Spares can be shared across multiple pools, and can be added with the
.Nm zpool Cm add
command and removed with the
.Nm zpool Cm remove
command.
Once a spare replacement is initiated, a new
.Sy spare
vdev is created within the configuration that will remain there until the
original device is replaced.
At this point, the hot spare becomes available again if another device fails.
.Pp
If a pool has a shared spare that is currently being used, the pool cannot be
exported, since other pools may use this shared spare, which may lead to
potential data corruption.
.Pp
An in-progress spare replacement can be cancelled by detaching the hot spare.
If the original faulted device is detached, then the hot spare assumes its
place in the configuration, and is removed from the spare list of all active
pools.
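.Pp
As a sketch
.Pq assuming the pool and hypothetical spare device from the example above ,
an active spare replacement could be cancelled with:
.Bd -literal
# zpool detach pool sdc
.Ed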
.Pp
Spares cannot replace log devices.
.Ss Intent Log
The ZFS Intent Log (ZIL) satisfies POSIX requirements for synchronous
transactions.
For instance, databases often require their transactions to be on stable storage
devices when returning from a system call.
NFS and other applications can also use
.Xr fsync 2
to ensure data stability.
By default, the intent log is allocated from blocks within the main pool.
However, it might be possible to get better performance using separate intent
log devices such as NVRAM or a dedicated disk.
For example:
.Bd -literal
# zpool create pool sda sdb log sdc
.Ed
.Pp
Multiple log devices can also be specified, and they can be mirrored.
See the
.Sx EXAMPLES
section for an example of mirroring multiple log devices.
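.Pp
A minimal sketch of a mirrored log
.Pq using hypothetical device names
looks like:
.Bd -literal
# zpool create pool mirror sda sdb log mirror sdc sdd
.Ed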
.Pp
Log devices can be added, replaced, attached, detached and removed.
In addition, log devices are imported and exported as part of the pool
that contains them.
Mirrored devices can be removed by specifying the top-level mirror vdev.
.Ss Cache Devices
Devices can be added to a storage pool as
.Qq cache devices .
These devices provide an additional layer of caching between main memory and
disk.
For read-heavy workloads, where the working set size is much larger than what
can be cached in main memory, using cache devices allows much more of this
working set to be served from low latency media.
Using cache devices provides the greatest performance improvement for random
read-workloads of mostly static content.
.Pp
To create a pool with cache devices, specify a
.Sy cache
vdev with any number of devices.
For example:
.Bd -literal
# zpool create pool sda sdb cache sdc sdd
.Ed
.Pp
Cache devices cannot be mirrored or part of a raidz configuration.
If a read error is encountered on a cache device, that read I/O is reissued to
the original storage pool device, which might be part of a mirrored or raidz
configuration.
.Pp
The content of the cache devices is considered volatile, as is the case with
other system caches.
.Ss Properties
Each pool has several properties associated with it.
Some properties are read-only statistics while others are configurable and
change the behavior of the pool.
.Pp
The following are read-only properties:
.Bl -tag -width Ds
.It Cm allocated
Amount of storage used within the pool.
.It Sy capacity
Percentage of pool space used.
This property can also be referred to by its shortened column name,
.Sy cap .
.It Sy expandsize
Amount of uninitialized space within the pool or device that can be used to
increase the total capacity of the pool.
Uninitialized space consists of any space on an EFI labeled vdev which has not
been brought online
.Po e.g., using
.Nm zpool Cm online Fl e
.Pc .
This space occurs when a LUN is dynamically expanded.
.It Sy fragmentation
The amount of fragmentation in the pool.
.It Sy free
The amount of free space available in the pool.
.It Sy freeing
After a file system or snapshot is destroyed, the space it was using is
returned to the pool asynchronously.
.Sy freeing
is the amount of space remaining to be reclaimed.
Over time
.Sy freeing
will decrease while
.Sy free
increases.
.It Sy health
The current health of the pool.
Health can be one of
.Sy ONLINE , DEGRADED , FAULTED , OFFLINE , REMOVED , UNAVAIL .
.It Sy guid
A unique identifier for the pool.
.It Sy size
Total size of the storage pool.
.It Sy unsupported@ Ns Em feature_guid
Information about unsupported features that are enabled on the pool.
See
.Xr zpool-features 5
for details.
.El
.Pp
The space usage properties report actual physical space available to the
storage pool.
The physical space can be different from the total amount of space that any
contained datasets can actually use.
The amount of space used in a raidz configuration depends on the characteristics
of the data being written.
In addition, ZFS reserves some space for internal accounting that the
.Xr zfs 8
command takes into account, but the
.Nm
command does not.
For non-full pools of a reasonable size, these effects should be invisible.
For small pools, or pools that are close to being completely full, these
discrepancies may become more noticeable.
.Pp
The following property can be set at creation time and import time:
.Bl -tag -width Ds
.It Sy altroot
Alternate root directory.
If set, this directory is prepended to any mount points within the pool.
This can be used when examining an unknown pool where the mount points cannot be
trusted, or in an alternate boot environment, where the typical paths are not
valid.
.Sy altroot
is not a persistent property.
It is valid only while the system is up.
Setting
.Sy altroot
defaults to using
.Sy cachefile Ns = Ns Sy none ,
though this may be overridden using an explicit setting.
.El
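.Pp
For example, a sketch of importing an unknown pool under an alternate root
.Pq the path is purely illustrative :
.Bd -literal
# zpool import -R /mnt/recovery pool
.Ed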
.Pp
The following property can be set only at import time:
.Bl -tag -width Ds
.It Sy readonly Ns = Ns Sy on Ns | Ns Sy off
If set to
.Sy on ,
the pool will be imported in read-only mode.
This property can also be referred to by its shortened column name,
.Sy rdonly .
.El
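.Pp
As a sketch, a pool could be imported read-only
.Pq e.g., to inspect its data without risking writes
with:
.Bd -literal
# zpool import -o readonly=on pool
.Ed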
.Pp
The following properties can be set at creation time and import time, and later
changed with the
.Nm zpool Cm set
command:
.Bl -tag -width Ds
.It Sy ashift Ns = Ns Sy ashift
Pool sector size exponent, to the power of
.Sy 2
(internally referred to as
.Sy ashift
). Values from 9 to 16, inclusive, are valid; also, the special
value 0 (the default) means to auto-detect using the kernel's block
layer and a ZFS internal exception list. I/O operations will be aligned
to the specified size boundaries. Additionally, the minimum (disk)
write size will be set to the specified size, so this represents a
space vs. performance trade-off. For optimal performance, the pool
sector size should be greater than or equal to the sector size of the
underlying disks. The typical case for setting this property is when
performance is important and the underlying disks use 4KiB sectors but
report 512B sectors to the OS (for compatibility reasons); in that
case, set
.Sy ashift=12
(which is 1<<12 = 4096). When set, this property is
used as the default hint value in subsequent vdev operations (add,
attach and replace). Changing this value will not modify any existing
vdev, not even on disk replacement; however it can be used, for
instance, to replace a dying 512B sector disk with a newer 4KiB
sector device: this will probably result in bad performance but at the
same time could prevent loss of data.
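.Pp
For example, a sketch
.Pq with hypothetical disk names
of creating a pool aligned to 4KiB sectors:
.Bd -literal
# zpool create -o ashift=12 pool mirror sda sdb
.Ed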
.It Sy autoexpand Ns = Ns Sy on Ns | Ns Sy off
Controls automatic pool expansion when the underlying LUN is grown.
If set to
.Sy on ,
the pool will be resized according to the size of the expanded device.
If the device is part of a mirror or raidz then all devices within that
mirror/raidz group must be expanded before the new space is made available to
the pool.
The default behavior is
.Sy off .
This property can also be referred to by its shortened column name,
.Sy expand .
.It Sy autoreplace Ns = Ns Sy on Ns | Ns Sy off
Controls automatic device replacement.
If set to
.Sy off ,
device replacement must be initiated by the administrator by using the
.Nm zpool Cm replace
command.
If set to
.Sy on ,
any new device, found in the same physical location as a device that previously
belonged to the pool, is automatically formatted and replaced.
The default behavior is
.Sy off .
This property can also be referred to by its shortened column name,
.Sy replace .
Autoreplace can also be used with virtual disks (like device
mapper) provided that you use the /dev/disk/by-vdev paths set up by
vdev_id.conf. See the
.Xr vdev_id 8
man page for more details.
Autoreplace and autoonline require the ZFS Event Daemon to be configured and
running. See the
.Xr zed 8
man page for more details.
.It Sy bootfs Ns = Ns Sy (unset) Ns | Ns Ar pool Ns / Ns Ar dataset
Identifies the default bootable dataset for the root pool. This property is
expected to be set mainly by the installation and upgrade programs.
Not all Linux distribution boot processes use the bootfs property.
.It Sy cachefile Ns = Ns Ar path Ns | Ns Sy none
Controls the location of where the pool configuration is cached.
Discovering all pools on system startup requires a cached copy of the
configuration data that is stored on the root file system.
All pools in this cache are automatically imported when the system boots.
Some environments, such as install and clustering, need to cache this
information in a different location so that pools are not automatically
imported.
Setting this property caches the pool configuration in a different location that
can later be imported with
.Nm zpool Cm import Fl c .
Setting it to the special value
.Sy none
creates a temporary pool that is never cached, and the special value
.Qq
.Pq empty string
uses the default location.
.Pp
Multiple pools can share the same cache file.
Because the kernel destroys and recreates this file when pools are added and
removed, care should be taken when attempting to access this file.
When the last pool using a
.Sy cachefile
is exported or destroyed, the file will be empty.
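.Pp
As a sketch
.Pq the cache file path is purely illustrative ,
a pool can be created with an alternate cache file, which
.Nm zpool Cm import Fl c
can read later
.Pq e.g., after a reboot :
.Bd -literal
# zpool create -o cachefile=/tmp/alt.cache pool sda
# zpool import -c /tmp/alt.cache pool
.Ed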
.It Sy comment Ns = Ns Ar text
A text string consisting of printable ASCII characters that will be stored
such that it is available even if the pool becomes faulted.
An administrator can provide additional information about a pool using this
property.
.It Sy dedupditto Ns = Ns Ar number
Threshold for the number of block ditto copies.
If the reference count for a deduplicated block increases above this number, a
new ditto copy of this block is automatically stored.
The default setting is
.Sy 0
which causes no ditto copies to be created for deduplicated blocks.
The minimum legal nonzero setting is
.Sy 100 .
.It Sy delegation Ns = Ns Sy on Ns | Ns Sy off
Controls whether a non-privileged user is granted access based on the dataset
permissions defined on the dataset.
See
.Xr zfs 8
for more information on ZFS delegated administration.
.It Sy failmode Ns = Ns Sy wait Ns | Ns Sy continue Ns | Ns Sy panic
Controls the system behavior in the event of catastrophic pool failure.
This condition is typically a result of a loss of connectivity to the underlying
storage device(s) or a failure of all devices within the pool.
The behavior of such an event is determined as follows:
.Bl -tag -width "continue"
.It Sy wait
Blocks all I/O access until the device connectivity is recovered and the errors
are cleared.
This is the default behavior.
.It Sy continue
Returns
.Er EIO
to any new write I/O requests but allows reads to any of the remaining healthy
devices.
Any write requests that have yet to be committed to disk would be blocked.
.It Sy panic
Prints out a message to the console and generates a system crash dump.
.El
.It Sy feature@ Ns Ar feature_name Ns = Ns Sy enabled
The value of this property is the current state of
.Ar feature_name .
The only valid value when setting this property is
.Sy enabled
which moves
.Ar feature_name
to the enabled state.
See
.Xr zpool-features 5
for details on feature states.
.It Sy listsnapshots Ns = Ns Sy on Ns | Ns Sy off
Controls whether information about snapshots associated with this pool is
output when
.Nm zfs Cm list
is run without the
.Fl t
option.
The default value is
.Sy off .
This property can also be referred to by its shortened name,
.Sy listsnaps .
.It Sy multihost Ns = Ns Sy on Ns | Ns Sy off
Controls whether a pool activity check should be performed during
.Nm zpool Cm import .
When a pool is determined to be active it cannot be imported, even with the
.Fl f
option. This property is intended to be used in failover configurations
where multiple hosts have access to a pool on shared storage. When this
property is on, periodic writes to storage occur to show the pool is in use.
See
.Sy zfs_multihost_interval
in the
.Xr zfs-module-parameters 5
man page. In order to enable this property each host must set a unique hostid.
See
.Xr genhostid 1 ,
.Xr zgenhostid 8
and
.Xr spl-module-parameters 5
for additional details. The default value is
.Sy off .
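.Pp
A sketch of enabling multihost on a shared pool, after generating a hostid on
each host that can access it:
.Bd -literal
# zgenhostid
# zpool set multihost=on pool
.Ed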
.It Sy version Ns = Ns Ar version
The current on-disk version of the pool.
This can be increased, but never decreased.
The preferred method of updating pools is with the
.Nm zpool Cm upgrade
command, though this property can be used when a specific version is needed for
backwards compatibility.
Once feature flags are enabled on a pool this property will no longer have a
value.
.El
.Ss Subcommands
All subcommands that modify state are logged persistently to the pool in their
original form.
.Pp
The
.Nm
command provides subcommands to create and destroy storage pools, add capacity
to storage pools, and provide information about the storage pools.
The following subcommands are supported:
.Bl -tag -width Ds
.It Xo
.Nm
.Fl ?
.Xc
Displays a help message.
.It Xo
.Nm
.Cm add
.Op Fl fgLnP
.Oo Fl o Ar property Ns = Ns Ar value Oc
.Ar pool vdev Ns ...
.Xc
Adds the specified virtual devices to the given pool.
The
.Ar vdev
specification is described in the
.Sx Virtual Devices
section.
The behavior of the
.Fl f
option, and the device checks performed, are described in the
.Nm zpool Cm create
subcommand.
.Bl -tag -width Ds
.It Fl f
Forces use of
.Ar vdev Ns s ,
even if they appear in use or specify a conflicting replication level.
Not all devices can be overridden in this manner.
.It Fl g
Display
.Ar vdev
GUIDs instead of the normal device names.
These GUIDs can be used in place of device names for the zpool
detach/offline/remove/replace commands.
.It Fl L
Display real paths for
.Ar vdev Ns s
resolving all symbolic links.
This can be used to look up the current block device name regardless of the
/dev/disk/ path used to open it.
.It Fl n
Displays the configuration that would be used without actually adding the
.Ar vdev Ns s .
The actual pool creation can still fail due to insufficient privileges or
device sharing.
.It Fl P
Display real paths for
.Ar vdev Ns s
instead of only the last component of the path.
This can be used in conjunction with the
.Fl L
flag.
.It Fl o Ar property Ns = Ns Ar value
Sets the given pool properties.
See the
.Sx Properties
section for a list of valid properties that can be set.
The only property supported at the moment is
.Sy ashift .
.El
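.Pp
For example, a sketch
.Pq hypothetical disk names
of growing a pool by one more mirrored top-level vdev:
.Bd -literal
# zpool add pool mirror sde sdf
.Ed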
.It Xo
.Nm
.Cm attach
.Op Fl f
.Oo Fl o Ar property Ns = Ns Ar value Oc
.Ar pool device new_device
.Xc
Attaches
.Ar new_device
to the existing
.Ar device .
The existing device cannot be part of a raidz configuration.
If
.Ar device
is not currently part of a mirrored configuration,
.Ar device
automatically transforms into a two-way mirror of
.Ar device
and
.Ar new_device .
If
.Ar device
is part of a two-way mirror, attaching
.Ar new_device
creates a three-way mirror, and so on.
In either case,
.Ar new_device
begins to resilver immediately.
.Bl -tag -width Ds
.It Fl f
Forces use of
.Ar new_device ,
even if it appears to be in use.
Not all devices can be overridden in this manner.
.It Fl o Ar property Ns = Ns Ar value
Sets the given pool properties.
See the
.Sx Properties
section for a list of valid properties that can be set.
The only property supported at the moment is
.Sy ashift .
.El
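.Pp
A sketch
.Pq hypothetical device names
of turning a single-disk pool into a two-way mirror:
.Bd -literal
# zpool attach pool sda sdb
.Ed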
855.It Xo
856.Nm
857.Cm clear
858.Ar pool
859.Op Ar device
860.Xc
861Clears device errors in a pool.
862If no arguments are specified, all device errors within the pool are cleared.
863If one or more devices is specified, only those errors associated with the
864specified device or devices are cleared.
.It Xo
.Nm
.Cm create
.Op Fl dfn
.Op Fl m Ar mountpoint
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Oo Fl o Ar feature@feature Ns = Ns Ar value Oc Ns ...
.Oo Fl O Ar file-system-property Ns = Ns Ar value Oc Ns ...
.Op Fl R Ar root
.Op Fl t Ar tname
.Ar pool vdev Ns ...
.Xc
Creates a new storage pool containing the virtual devices specified on the
command line.
The pool name must begin with a letter, and can only contain
alphanumeric characters as well as underscore
.Pq Qq Sy _ ,
dash
.Pq Qq Sy \&- ,
colon
.Pq Qq Sy \&: ,
space
.Pq Qq Sy \&\ ,
and period
.Pq Qq Sy \&. .
The pool names
.Sy mirror ,
.Sy raidz ,
.Sy spare
and
.Sy log
are reserved, as are names beginning with
.Sy mirror ,
.Sy raidz ,
.Sy spare ,
and the pattern
.Sy c[0-9] .
The
.Ar vdev
specification is described in the
.Sx Virtual Devices
section.
.Pp
The command verifies that each device specified is accessible and not currently
in use by another subsystem.
There are some uses, such as being currently mounted, or specified as the
dedicated dump device, that prevent a device from ever being used by ZFS.
Other uses, such as having a preexisting UFS file system, can be overridden with
the
.Fl f
option.
.Pp
The command also checks that the replication strategy for the pool is
consistent.
An attempt to combine redundant and non-redundant storage in a single pool, or
to mix disks and files, results in an error unless
.Fl f
is specified.
The use of differently sized devices within a single raidz or mirror group is
also flagged as an error unless
.Fl f
is specified.
.Pp
Unless the
.Fl R
option is specified, the default mount point is
.Pa / Ns Ar pool .
The mount point must not exist or must be empty, or else the root dataset
cannot be mounted.
This can be overridden with the
.Fl m
option.
.Pp
By default all supported features are enabled on the new pool unless the
.Fl d
option is specified.
.Bl -tag -width Ds
.It Fl d
Do not enable any features on the new pool.
Individual features can be enabled by setting their corresponding properties to
.Sy enabled
with the
.Fl o
option.
See
.Xr zpool-features 5
for details about feature properties.
.It Fl f
Forces use of
.Ar vdev Ns s ,
even if they appear in use or specify a conflicting replication level.
Not all devices can be overridden in this manner.
.It Fl m Ar mountpoint
Sets the mount point for the root dataset.
The default mount point is
.Pa /pool
or
.Pa altroot/pool
if
.Ar altroot
is specified.
The mount point must be an absolute path,
.Sy legacy ,
or
.Sy none .
For more information on dataset mount points, see
.Xr zfs 8 .
.It Fl n
Displays the configuration that would be used without actually creating the
pool.
The actual pool creation can still fail due to insufficient privileges or
device sharing.
.It Fl o Ar property Ns = Ns Ar value
Sets the given pool properties.
See the
.Sx Properties
section for a list of valid properties that can be set.
.It Fl o Ar feature@feature Ns = Ns Ar value
Sets the given pool feature.
See
.Xr zpool-features 5
for a list of valid features that can be set.
Value can be either
.Sy enabled
or
.Sy disabled .
.It Fl O Ar file-system-property Ns = Ns Ar value
Sets the given file system properties in the root file system of the pool.
See the
.Sx Properties
section of
.Xr zfs 8
for a list of valid properties that can be set.
.It Fl R Ar root
Equivalent to
.Fl o Sy cachefile Ns = Ns Sy none Fl o Sy altroot Ns = Ns Ar root .
.It Fl t Ar tname
Sets the in-core pool name to
.Ar tname
while the on-disk name will be the name specified as the pool name
.Ar pool .
This will set the default
.Sy cachefile
property to
.Sy none .
This is intended to handle name space collisions when creating pools for other
systems, such as virtual machines or physical machines whose pools live on
network block devices.
.El
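.Pp
For example, the following creates a pool with one mirrored top-level vdev and
an explicit mount point (the pool and device names are illustrative):
.Bd -literal
# zpool create -m /export/tank tank mirror sda sdb
.Ed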
.It Xo
.Nm
.Cm destroy
.Op Fl f
.Ar pool
.Xc
Destroys the given pool, freeing up any devices for other use.
This command tries to unmount any active datasets before destroying the pool.
.Bl -tag -width Ds
.It Fl f
Forces any active datasets contained within the pool to be unmounted.
.El
.It Xo
.Nm
.Cm detach
.Ar pool device
.Xc
Detaches
.Ar device
from a mirror.
The operation is refused if there are no other valid replicas of the data.
If the device may be re-added to the pool later on, then consider the
.Nm zpool Cm offline
command instead.
.It Xo
.Nm
.Cm events
.Op Fl vHf Oo Ar pool Oc | Fl c
.Xc
Lists all recent events generated by the ZFS kernel modules.
These events are consumed by the
.Xr zed 8
daemon and used to automate administrative tasks such as replacing a failed
device with a hot spare.
For more information about the subclasses and event payloads that can be
generated see the
.Xr zfs-events 5
man page.
.Bl -tag -width Ds
.It Fl c
Clear all previous events.
.It Fl f
Follow mode.
.It Fl H
Scripted mode.
Do not display headers, and separate fields by a single tab instead of
arbitrary space.
.It Fl v
Print the entire payload for each event.
.El
.It Xo
.Nm
.Cm export
.Op Fl a
.Op Fl f
.Ar pool Ns ...
.Xc
Exports the given pools from the system.
All devices are marked as exported, but are still considered in use by other
subsystems.
The devices can be moved between systems
.Pq even those of different endianness
and imported as long as a sufficient number of devices are present.
.Pp
Before exporting the pool, all datasets within the pool are unmounted.
A pool cannot be exported if it has a shared spare that is currently being
used.
.Pp
For pools to be portable, you must give the
.Nm
command whole disks, not just partitions, so that ZFS can label the disks with
portable EFI labels.
Otherwise, disk drivers on platforms of different endianness will not recognize
the disks.
.Bl -tag -width Ds
.It Fl a
Exports all pools imported on the system.
.It Fl f
Forcefully unmount all datasets, using the
.Nm unmount Fl f
command.
.Pp
This command will forcefully export the pool even if it has a shared spare that
is currently being used.
This may lead to potential data corruption.
.El
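.Pp
For example, the following unmounts all datasets in an illustrative pool named
.Em tank
and exports it so its disks can be moved to another system:
.Bd -literal
# zpool export tank
.Ed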
.It Xo
.Nm
.Cm get
.Op Fl Hp
.Op Fl o Ar field Ns Oo , Ns Ar field Oc Ns ...
.Sy all Ns | Ns Ar property Ns Oo , Ns Ar property Oc Ns ...
.Ar pool Ns ...
.Xc
Retrieves the given list of properties
.Po
or all properties if
.Sy all
is used
.Pc
for the specified storage pool(s).
These properties are displayed with the following fields:
.Bd -literal
        name          Name of storage pool
        property      Property name
        value         Property value
        source        Property source, either 'default' or 'local'.
.Ed
.Pp
See the
.Sx Properties
section for more information on the available pool properties.
.Bl -tag -width Ds
.It Fl H
Scripted mode.
Do not display headers, and separate fields by a single tab instead of
arbitrary space.
.It Fl o Ar field
A comma-separated list of columns to display.
.Sy name Ns \&, Ns Sy property Ns \&, Ns Sy value Ns \&, Ns Sy source
is the default value.
.It Fl p
Display numbers in parsable (exact) values.
.El
.It Xo
.Nm
.Cm history
.Op Fl il
.Oo Ar pool Oc Ns ...
.Xc
Displays the command history of the specified pool(s) or all pools if no pool is
specified.
.Bl -tag -width Ds
.It Fl i
Displays internally logged ZFS events in addition to user initiated events.
.It Fl l
Displays log records in long format, which in addition to standard format
includes the user name, the hostname, and the zone in which the operation was
performed.
.El
.It Xo
.Nm
.Cm import
.Op Fl D
.Op Fl d Ar dir Ns | Ns Ar device
.Xc
Lists pools available to import.
If the
.Fl d
option is not specified, this command searches for devices in
.Pa /dev .
The
.Fl d
option can be specified multiple times, and all directories are searched.
If the device appears to be part of an exported pool, this command displays a
summary of the pool with the name of the pool, a numeric identifier, as well as
the vdev layout and current health of the device for each device or file.
Destroyed pools, pools that were previously destroyed with the
.Nm zpool Cm destroy
command, are not listed unless the
.Fl D
option is specified.
.Pp
The numeric identifier is unique, and can be used instead of the pool name when
multiple exported pools of the same name are available.
.Bl -tag -width Ds
.It Fl c Ar cachefile
Reads configuration from the given
.Ar cachefile
that was created with the
.Sy cachefile
pool property.
This
.Ar cachefile
is used instead of searching for devices.
.It Fl d Ar dir Ns | Ns Ar device
Uses
.Ar device
or searches for devices or files in
.Ar dir .
The
.Fl d
option can be specified multiple times.
.It Fl D
Lists destroyed pools only.
.El
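.Pp
For example, the following searches
.Pa /dev/disk/by-id
for devices belonging to exported pools and lists any pools found:
.Bd -literal
# zpool import -d /dev/disk/by-id
.Ed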
.It Xo
.Nm
.Cm import
.Fl a
.Op Fl DflmN
.Op Fl F Oo Fl n Oc Oo Fl T Oc Oo Fl X Oc
.Op Fl c Ar cachefile Ns | Ns Fl d Ar dir Ns | Ns Ar device
.Op Fl o Ar mntopts
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Op Fl R Ar root
.Op Fl s
.Xc
Imports all pools found in the search directories.
Identical to the previous command, except that all pools with a sufficient
number of devices available are imported.
Destroyed pools, pools that were previously destroyed with the
.Nm zpool Cm destroy
command, will not be imported unless the
.Fl D
option is specified.
.Bl -tag -width Ds
.It Fl a
Searches for and imports all pools found.
.It Fl c Ar cachefile
Reads configuration from the given
.Ar cachefile
that was created with the
.Sy cachefile
pool property.
This
.Ar cachefile
is used instead of searching for devices.
.It Fl d Ar dir Ns | Ns Ar device
Uses
.Ar device
or searches for devices or files in
.Ar dir .
The
.Fl d
option can be specified multiple times.
This option is incompatible with the
.Fl c
option.
.It Fl D
Imports destroyed pools only.
The
.Fl f
option is also required.
.It Fl f
Forces import, even if the pool appears to be potentially active.
.It Fl F
Recovery mode for a non-importable pool.
Attempt to return the pool to an importable state by discarding the last few
transactions.
Not all damaged pools can be recovered by using this option.
If successful, the data from the discarded transactions is irretrievably lost.
This option is ignored if the pool is importable or already imported.
.It Fl l
Indicates that this command will request encryption keys for all encrypted
datasets it attempts to mount as it is bringing the pool online.
Note that if any datasets have a
.Sy keylocation
of
.Sy prompt
this command will block waiting for the keys to be entered.
Without this flag encrypted datasets will be left unavailable until the keys
are loaded.
.It Fl m
Allows a pool to import when there is a missing log device.
Recent transactions can be lost because the log device will be discarded.
.It Fl n
Used with the
.Fl F
recovery option.
Determines whether a non-importable pool can be made importable again, but does
not actually perform the pool recovery.
For more details about pool recovery mode, see the
.Fl F
option, above.
.It Fl N
Import the pool without mounting any file systems.
.It Fl o Ar mntopts
Comma-separated list of mount options to use when mounting datasets within the
pool.
See
.Xr zfs 8
for a description of dataset properties and mount options.
.It Fl o Ar property Ns = Ns Ar value
Sets the specified property on the imported pool.
See the
.Sx Properties
section for more information on the available pool properties.
.It Fl R Ar root
Sets the
.Sy cachefile
property to
.Sy none
and the
.Sy altroot
property to
.Ar root .
.It Fl s
Scan using the default search path; the libblkid cache will not be consulted.
A custom search path may be specified by setting the
.Sy ZPOOL_IMPORT_PATH
environment variable.
.It Fl X
Used with the
.Fl F
recovery option.
Determines whether extreme measures to find a valid txg should take place.
This allows the pool to be rolled back to a txg which is no longer guaranteed
to be consistent.
Pools imported at an inconsistent txg may contain uncorrectable checksum
errors.
For more details about pool recovery mode, see the
.Fl F
option, above.
WARNING: This option can be extremely hazardous to the health of your pool and
should only be used as a last resort.
.It Fl T
Specify the txg to use for rollback.
Implies
.Fl FX .
For more details about pool recovery mode, see the
.Fl X
option, above.
WARNING: This option can be extremely hazardous to the health of your pool and
should only be used as a last resort.
.El
.It Xo
.Nm
.Cm import
.Op Fl Dflm
.Op Fl F Oo Fl n Oc Oo Fl t Oc Oo Fl T Oc Oo Fl X Oc
.Op Fl c Ar cachefile Ns | Ns Fl d Ar dir Ns | Ns Ar device
.Op Fl o Ar mntopts
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Op Fl R Ar root
.Op Fl s
.Ar pool Ns | Ns Ar id
.Op Ar newpool
.Xc
Imports a specific pool.
A pool can be identified by its name or the numeric identifier.
If
.Ar newpool
is specified, the pool is imported using the name
.Ar newpool .
Otherwise, it is imported with the same name as its exported name.
.Pp
If a device is removed from a system without running
.Nm zpool Cm export
first, the device appears as potentially active.
It cannot be determined if this was a failed export, or whether the device is
really in use from another host.
To import a pool in this state, the
.Fl f
option is required.
.Bl -tag -width Ds
.It Fl c Ar cachefile
Reads configuration from the given
.Ar cachefile
that was created with the
.Sy cachefile
pool property.
This
.Ar cachefile
is used instead of searching for devices.
.It Fl d Ar dir Ns | Ns Ar device
Uses
.Ar device
or searches for devices or files in
.Ar dir .
The
.Fl d
option can be specified multiple times.
This option is incompatible with the
.Fl c
option.
.It Fl D
Imports destroyed pool.
The
.Fl f
option is also required.
.It Fl f
Forces import, even if the pool appears to be potentially active.
.It Fl F
Recovery mode for a non-importable pool.
Attempt to return the pool to an importable state by discarding the last few
transactions.
Not all damaged pools can be recovered by using this option.
If successful, the data from the discarded transactions is irretrievably lost.
This option is ignored if the pool is importable or already imported.
.It Fl l
Indicates that this command will request encryption keys for all encrypted
datasets it attempts to mount as it is bringing the pool online.
Note that if any datasets have a
.Sy keylocation
of
.Sy prompt
this command will block waiting for the keys to be entered.
Without this flag encrypted datasets will be left unavailable until the keys
are loaded.
.It Fl m
Allows a pool to import when there is a missing log device.
Recent transactions can be lost because the log device will be discarded.
.It Fl n
Used with the
.Fl F
recovery option.
Determines whether a non-importable pool can be made importable again, but does
not actually perform the pool recovery.
For more details about pool recovery mode, see the
.Fl F
option, above.
.It Fl o Ar mntopts
Comma-separated list of mount options to use when mounting datasets within the
pool.
See
.Xr zfs 8
for a description of dataset properties and mount options.
.It Fl o Ar property Ns = Ns Ar value
Sets the specified property on the imported pool.
See the
.Sx Properties
section for more information on the available pool properties.
.It Fl R Ar root
Sets the
.Sy cachefile
property to
.Sy none
and the
.Sy altroot
property to
.Ar root .
.It Fl s
Scan using the default search path; the libblkid cache will not be consulted.
A custom search path may be specified by setting the
.Sy ZPOOL_IMPORT_PATH
environment variable.
.It Fl X
Used with the
.Fl F
recovery option.
Determines whether extreme measures to find a valid txg should take place.
This allows the pool to be rolled back to a txg which is no longer guaranteed
to be consistent.
Pools imported at an inconsistent txg may contain uncorrectable checksum
errors.
For more details about pool recovery mode, see the
.Fl F
option, above.
WARNING: This option can be extremely hazardous to the health of your pool and
should only be used as a last resort.
.It Fl T
Specify the txg to use for rollback.
Implies
.Fl FX .
For more details about pool recovery mode, see the
.Fl X
option, above.
WARNING: This option can be extremely hazardous to the health of your pool and
should only be used as a last resort.
.It Fl t
Used with
.Ar newpool .
Specifies that
.Ar newpool
is temporary.
Temporary pool names last until export.
Ensures that the original pool name will be used in all label updates and
therefore is retained upon export.
Will also set
.Fl o Sy cachefile Ns = Ns Sy none
when not explicitly specified.
.El
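.Pp
For example, the following imports the exported pool
.Em tank
under the new name
.Em storage
(both pool names are illustrative):
.Bd -literal
# zpool import tank storage
.Ed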
.It Xo
.Nm
.Cm iostat
.Op Oo Oo Fl c Ar SCRIPT Oc Oo Fl lq Oc Oc Ns | Ns Fl rw
.Op Fl T Sy u Ns | Ns Sy d
.Op Fl ghHLpPvy
.Oo Oo Ar pool Ns ... Oc Ns | Ns Oo Ar pool vdev Ns ... Oc Ns | Ns Oo Ar vdev Ns ... Oc Oc
.Op Ar interval Op Ar count
.Xc
Displays I/O statistics for the given pools/vdevs.
You can pass in a list of pools, a pool and list of vdevs in that pool, or a
list of any vdevs from any pool.
If no items are specified, statistics for every pool in the system are shown.
When given an
.Ar interval ,
the statistics are printed every
.Ar interval
seconds until ^C is pressed.
If
.Ar count
is specified, the command exits after
.Ar count
reports are printed.
The first report printed is always the statistics since boot regardless of
whether
.Ar interval
and
.Ar count
are passed.
However, this behavior can be suppressed with the
.Fl y
flag.
Also note that the units of
.Sy K ,
.Sy M ,
.Sy G ...
that are printed in the report are in base 1024.
To get the raw values, use the
.Fl p
flag.
.Bl -tag -width Ds
.It Fl c Op Ar SCRIPT1 Ns Oo , Ns Ar SCRIPT2 Oc Ns ...
Run a script (or scripts) on each vdev and include the output as a new column
in the
.Nm zpool Cm iostat
output.
Users can run any script found in their
.Pa ~/.zpool.d
directory or from the system
.Pa /etc/zfs/zpool.d
directory.
Script names containing the slash (/) character are not allowed.
The default search path can be overridden by setting the
.Sy ZPOOL_SCRIPTS_PATH
environment variable.
A privileged user can run
.Fl c
if they have the
.Sy ZPOOL_SCRIPTS_AS_ROOT
environment variable set.
If a script requires the use of a privileged command, like
.Xr smartctl 8 ,
then it's recommended you allow the user access to it in
.Pa /etc/sudoers
or add the user to the
.Pa /etc/sudoers.d/zfs
file.
.Pp
If
.Fl c
is passed without a script name, it prints a list of all scripts.
.Fl c
also sets verbose mode
.No \&( Ns Fl v Ns No \&).
.Pp
Script output should be in the form of "name=value".
The column name is set to "name" and the value is set to "value".
Multiple lines can be used to output multiple columns.
The first line of output not in the "name=value" format is displayed without a
column title, and no more output after that is displayed.
This can be useful for printing error messages.
Blank or NULL values are printed as a '-' to make output awk-able.
.Pp
The following environment variables are set before running each script:
.Bl -tag -width "VDEV_PATH"
.It Sy VDEV_PATH
Full path to the vdev
.El
.Bl -tag -width "VDEV_UPATH"
.It Sy VDEV_UPATH
Underlying path to the vdev (/dev/sd*).
For use with device mapper, multipath, or partitioned vdevs.
.El
.Bl -tag -width "VDEV_ENC_SYSFS_PATH"
.It Sy VDEV_ENC_SYSFS_PATH
The sysfs path to the enclosure for the vdev (if any).
.El
.It Fl T Sy u Ns | Ns Sy d
Display a time stamp.
Specify
.Sy u
for a printed representation of the internal representation of time.
See
.Xr time 2 .
Specify
.Sy d
for standard date format.
See
.Xr date 1 .
.It Fl g
Display vdev GUIDs instead of the normal device names.
These GUIDs can be used in place of device names for the zpool
detach/offline/remove/replace commands.
.It Fl H
Scripted mode.
Do not display headers, and separate fields by a single tab instead of
arbitrary space.
.It Fl L
Display real paths for vdevs resolving all symbolic links.
This can be used to look up the current block device name regardless of the
.Pa /dev/disk/
path used to open it.
.It Fl p
Display numbers in parsable (exact) values.
Time values are in nanoseconds.
.It Fl P
Display full paths for vdevs instead of only the last component of the path.
This can be used in conjunction with the
.Fl L
flag.
.It Fl r
Print request size histograms for the leaf ZIOs.
This includes histograms of individual ZIOs (
.Ar ind )
and aggregate ZIOs (
.Ar agg ).
These stats can be useful for seeing how well the ZFS IO aggregator is
working.
Do not confuse these request size stats with the block layer requests; it's
possible ZIOs can be broken up before being sent to the block device.
.It Fl v
Verbose statistics.
Reports usage statistics for individual vdevs within the pool, in addition to
the pool-wide statistics.
.It Fl y
Omit statistics since boot.
Normally the first report printed shows the statistics since boot; this option
suppresses that report.
.It Fl w
Display latency histograms rather than average latencies.
.It Fl l
Include average latency statistics:
.Pp
.Ar total_wait :
Average total IO time (queuing + disk IO time).
.Ar disk_wait :
Average disk IO time (time reading/writing the disk).
.Ar syncq_wait :
Average amount of time IO spent in synchronous priority queues.
Does not include disk time.
.Ar asyncq_wait :
Average amount of time IO spent in asynchronous priority queues.
Does not include disk time.
.Ar scrub :
Average queuing time in scrub queue.
Does not include disk time.
.It Fl q
Include active queue statistics.
Each priority queue has both pending (
.Ar pend )
and active (
.Ar activ )
IOs.
Pending IOs are waiting to be issued to the disk, and active IOs have been
issued to disk and are waiting for completion.
These stats are broken out by priority queue:
.Pp
.Ar syncq_read/write :
Current number of entries in synchronous priority queues.
.Ar asyncq_read/write :
Current number of entries in asynchronous priority queues.
.Ar scrubq_read :
Current number of entries in scrub queue.
.Pp
All queue statistics are instantaneous measurements of the number of entries
in the queues.
If you specify an interval, the measurements will be sampled from the end of
the interval.
.El
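.Pp
For example, the following prints per-vdev statistics with average latencies
for an illustrative pool named
.Em tank
every 5 seconds:
.Bd -literal
# zpool iostat -v -l tank 5
.Ed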
.It Xo
.Nm
.Cm labelclear
.Op Fl f
.Ar device
.Xc
Removes ZFS label information from the specified
.Ar device .
The
.Ar device
must not be part of an active pool configuration.
.Bl -tag -width Ds
.It Fl f
Treat exported or foreign devices as inactive.
.El
.It Xo
.Nm
.Cm list
.Op Fl HgLpPv
.Op Fl o Ar property Ns Oo , Ns Ar property Oc Ns ...
.Op Fl T Sy u Ns | Ns Sy d
.Oo Ar pool Oc Ns ...
.Op Ar interval Op Ar count
.Xc
Lists the given pools along with a health status and space usage.
If no
.Ar pool Ns s
are specified, all pools in the system are listed.
When given an
.Ar interval ,
the information is printed every
.Ar interval
seconds until ^C is pressed.
If
.Ar count
is specified, the command exits after
.Ar count
reports are printed.
.Bl -tag -width Ds
.It Fl g
Display vdev GUIDs instead of the normal device names.
These GUIDs can be used in place of device names for the zpool
detach/offline/remove/replace commands.
.It Fl H
Scripted mode.
Do not display headers, and separate fields by a single tab instead of
arbitrary space.
.It Fl o Ar property
Comma-separated list of properties to display.
See the
.Sx Properties
section for a list of valid properties.
The default list is
.Cm name , size , allocated , free , expandsize , fragmentation , capacity ,
.Cm dedupratio , health , altroot .
.It Fl L
Display real paths for vdevs resolving all symbolic links.
This can be used to look up the current block device name regardless of the
.Pa /dev/disk/
path used to open it.
.It Fl p
Display numbers in parsable
.Pq exact
values.
.It Fl P
Display full paths for vdevs instead of only the last component of the path.
This can be used in conjunction with the
.Fl L
flag.
.It Fl T Sy u Ns | Ns Sy d
Display a time stamp.
Specify
.Sy u
for a printed representation of the internal representation of time.
See
.Xr time 2 .
Specify
.Sy d
for standard date format.
See
.Xr date 1 .
.It Fl v
Verbose statistics.
Reports usage statistics for individual vdevs within the pool, in addition to
the pool-wide statistics.
.El
.It Xo
.Nm
.Cm offline
.Op Fl f
.Op Fl t
.Ar pool Ar device Ns ...
.Xc
Takes the specified physical device offline.
While the
.Ar device
is offline, no attempt is made to read or write to the device.
This command is not applicable to spares.
.Bl -tag -width Ds
.It Fl f
Force fault.
Instead of offlining the disk, put it into a faulted state.
The fault will persist across imports unless the
.Fl t
flag was specified.
.It Fl t
Temporary.
Upon reboot, the specified physical device reverts to its previous state.
.El
.It Xo
.Nm
.Cm online
.Op Fl e
.Ar pool Ar device Ns ...
.Xc
Brings the specified physical device online.
This command is not applicable to spares.
.Bl -tag -width Ds
.It Fl e
Expand the device to use all available space.
If the device is part of a mirror or raidz then all devices must be expanded
before the new space will become available to the pool.
.El
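.Pp
For example, the following temporarily takes a device offline for maintenance
and later brings it back online (the pool and device names are illustrative):
.Bd -literal
# zpool offline -t tank sda
# zpool online tank sda
.Ed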
.It Xo
.Nm
.Cm reguid
.Ar pool
.Xc
Generates a new unique identifier for the pool.
You must ensure that all devices in this pool are online and healthy before
performing this action.
.It Xo
.Nm
.Cm reopen
.Op Fl n
.Ar pool
.Xc
Reopen all the vdevs associated with the pool.
.Bl -tag -width Ds
.It Fl n
Do not restart an in-progress scrub operation.
This is not recommended and can result in partially resilvered devices unless
a second scrub is performed.
.El
.It Xo
.Nm
.Cm remove
.Op Fl np
.Ar pool Ar device Ns ...
.Xc
Removes the specified device from the pool.
This command currently only supports removing hot spares, cache, log
devices and mirrored top-level vdevs (mirror of leaf devices); but not raidz.
.Pp
Removing a top-level vdev reduces the total amount of space in the storage
pool.
The specified device will be evacuated by copying all allocated space from it
to the other devices in the pool.
In this case, the
.Nm zpool Cm remove
command initiates the removal and returns, while the evacuation continues in
the background.
The removal progress can be monitored with
.Nm zpool Cm status .
This feature must be enabled to be used, see
.Xr zpool-features 5 .
.Pp
A mirrored top-level device (log or data) can be removed by specifying the
top-level mirror itself.
Non-log devices or data devices that are part of a mirrored configuration can
be removed using the
.Nm zpool Cm detach
command.
.Bl -tag -width Ds
.It Fl n
Do not actually perform the removal ("no-op").
Instead, print the estimated amount of memory that will be used by the
mapping table after the removal completes.
This is nonzero only for top-level vdevs.
.It Fl p
Used in conjunction with the
.Fl n
flag, displays numbers as parsable (exact) values.
.El
.It Xo
.Nm
.Cm remove
.Fl s
.Ar pool
.Xc
Stops and cancels an in-progress removal of a top-level vdev.
.It Xo
.Nm
.Cm replace
.Op Fl f
.Op Fl o Ar property Ns = Ns Ar value
.Ar pool Ar old_device Op Ar new_device
.Xc
Replaces
.Ar old_device
with
.Ar new_device .
This is equivalent to attaching
.Ar new_device ,
waiting for it to resilver, and then detaching
.Ar old_device .
.Pp
The size of
.Ar new_device
must be greater than or equal to the minimum size of all the devices in a
mirror or raidz configuration.
.Pp
.Ar new_device
is required if the pool is not redundant.
If
.Ar new_device
is not specified, it defaults to
.Ar old_device .
This form of replacement is useful after an existing disk has failed and has
been physically replaced.
In this case, the new disk may have the same
.Pa /dev
path as the old device, even though it is actually a different disk.
ZFS recognizes this.
.Bl -tag -width Ds
.It Fl f
Forces use of
.Ar new_device ,
even if it appears to be in use.
Not all devices can be overridden in this manner.
.It Fl o Ar property Ns = Ns Ar value
Sets the given pool properties.
See the
.Sx Properties
section for a list of valid properties that can be set.
The only property supported at the moment is
.Sy ashift .
.El
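.Pp
For example, the following replaces a failed device with a new one and waits
for resilvering to run in the background (the pool and device names are
illustrative):
.Bd -literal
# zpool replace tank sda sdb
.Ed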
.It Xo
.Nm
.Cm scrub
.Op Fl s | Fl p
.Ar pool Ns ...
.Xc
Begins a scrub or resumes a paused scrub.
The scrub examines all data in the specified pools to verify that it checksums
correctly.
For replicated
.Pq mirror or raidz
devices, ZFS automatically repairs any damage discovered during the scrub.
The
.Nm zpool Cm status
command reports the progress of the scrub and summarizes the results of the
scrub upon completion.
.Pp
Scrubbing and resilvering are very similar operations.
The difference is that resilvering only examines data that ZFS knows to be out
of date
.Po
for example, when attaching a new device to a mirror or replacing an existing
device
.Pc ,
whereas scrubbing examines all data to discover silent errors due to hardware
faults or disk failure.
.Pp
Because scrubbing and resilvering are I/O-intensive operations, ZFS only allows
one at a time.
If a scrub is paused, the
.Nm zpool Cm scrub
command resumes it.
If a resilver is in progress, ZFS does not allow a scrub to be started until the
resilver completes.
.Bl -tag -width Ds
.It Fl s
Stop scrubbing.
.It Fl p
Pause scrubbing.
Scrub pause state and progress are periodically synced to disk.
If the system is restarted or the pool is exported during a paused scrub,
even after import, the scrub will remain paused until it is resumed.
Once resumed, the scrub will pick up from the place where it was last
checkpointed to disk.
To resume a paused scrub, issue
.Nm zpool Cm scrub
again.
.El
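.Pp
For example, a scrub of the pool
.Em tank
can be started, paused, and later resumed from its checkpoint:
.Bd -literal
# zpool scrub tank
# zpool scrub -p tank
# zpool scrub tank
.Ed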
.It Xo
.Nm
.Cm set
.Ar property Ns = Ns Ar value
.Ar pool
.Xc
Sets the given property on the specified pool.
See the
.Sx Properties
section for more information on what properties can be set and acceptable
values.
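.Pp
For example, to enable the
.Sy autoexpand
property on a hypothetical pool named
.Em tank :
.Bd -literal
# zpool set autoexpand=on tank
.Ed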
.It Xo
.Nm
.Cm split
.Op Fl gLlnP
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Op Fl R Ar root
.Ar pool newpool
.Op Ar device ...
.Xc
Splits devices off
.Ar pool
creating
.Ar newpool .
All vdevs in
.Ar pool
must be mirrors and the pool must not be in the process of resilvering.
At the time of the split,
.Ar newpool
will be a replica of
.Ar pool .
By default, the last device in each mirror is split from
.Ar pool
to create
.Ar newpool .
.Pp
The optional device specification causes the specified device(s) to be
included in the new
.Ar pool
and, should any devices remain unspecified,
the last device in each mirror is used, as it would be by default.
.Bl -tag -width Ds
.It Fl g
Display vdev GUIDs instead of the normal device names.
These GUIDs can be used in place of device names for the zpool
detach/offline/remove/replace commands.
.It Fl L
Display real paths for vdevs resolving all symbolic links.
This can be used to look up the current block device name regardless of the
.Pa /dev/disk/
path used to open it.
.It Fl l
Indicates that this command will request encryption keys for all encrypted
datasets it attempts to mount as it is bringing the new pool online.
Note that if any datasets have a
.Sy keylocation
of
.Sy prompt
this command will block waiting for the keys to be entered.
Without this flag encrypted datasets will be left unavailable until the keys
are loaded.
.It Fl n
Do a dry run, without actually performing the split.
Print out the expected configuration of
.Ar newpool .
.It Fl P
Display full paths for vdevs instead of only the last component of
the path.
This can be used in conjunction with the
.Fl L
flag.
.It Fl o Ar property Ns = Ns Ar value
Sets the specified property for
.Ar newpool .
See the
.Sx Properties
section for more information on the available pool properties.
.It Fl R Ar root
Set
.Sy altroot
for
.Ar newpool
to
.Ar root
and automatically import it.
.El
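.Pp
For example, assuming
.Em tank
is made up of two-way mirrors, the last device in each mirror can be split off
to form a new pool
.Em tank2 :
.Bd -literal
# zpool split tank tank2
.Ed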
.It Xo
.Nm
.Cm status
.Op Fl c Op Ar SCRIPT1 Ns Oo , Ns Ar SCRIPT2 Oc Ns ...
.Op Fl gLPvxD
.Op Fl T Sy u Ns | Ns Sy d
.Oo Ar pool Oc Ns ...
.Op Ar interval Op Ar count
.Xc
Displays the detailed health status for the given pools.
If no
.Ar pool
is specified, then the status of each pool in the system is displayed.
For more information on pool and device health, see the
.Sx Device Failure and Recovery
section.
.Pp
If a scrub or resilver is in progress, this command reports the percentage done
and the estimated time to completion.
Both of these are only approximate, because the amount of data in the pool and
the other workloads on the system can change.
.Bl -tag -width Ds
.It Fl c Op Ar SCRIPT1 Ns Oo , Ns Ar SCRIPT2 Oc Ns ...
Run a script (or scripts) on each vdev and include the output as a new column
in the
.Nm zpool Cm status
output.
See the
.Fl c
option of
.Nm zpool Cm iostat
for complete details.
.It Fl g
Display vdev GUIDs instead of the normal device names.
These GUIDs can be used in place of device names for the zpool
detach/offline/remove/replace commands.
.It Fl L
Display real paths for vdevs resolving all symbolic links.
This can be used to look up the current block device name regardless of the
.Pa /dev/disk/
path used to open it.
.It Fl P
Display full paths for vdevs instead of only the last component of
the path.
This can be used in conjunction with the
.Fl L
flag.
.It Fl D
Display a histogram of deduplication statistics, showing the allocated
.Pq physically present on disk
and referenced
.Pq logically referenced in the pool
block counts and sizes by reference count.
.It Fl T Sy u Ns | Ns Sy d
Display a time stamp.
Specify
.Fl u
for a printed representation of the internal representation of time.
See
.Xr time 2 .
Specify
.Fl d
for standard date format.
See
.Xr date 1 .
.It Fl v
Displays verbose data error information, printing out a complete list of all
data errors since the last complete pool scrub.
.It Fl x
Only display status for pools that are exhibiting errors or are otherwise
unavailable.
Warnings about pools not using the latest on-disk format will not be included.
.El
.It Xo
.Nm
.Cm sync
.Op Ar pool ...
.Xc
This command forces all in-core dirty data to be written to the primary
pool storage and not the ZIL.
It will also update administrative information including quota reporting.
Without arguments,
.Sy zpool sync
will sync all pools on the system.
Otherwise, it will sync only the specified pool(s).
.It Xo
.Nm
.Cm upgrade
.Xc
Displays pools which do not have all supported features enabled and pools
formatted using a legacy ZFS version number.
These pools can continue to be used, but some features may not be available.
Use
.Nm zpool Cm upgrade Fl a
to enable all features on all pools.
.It Xo
.Nm
.Cm upgrade
.Fl v
.Xc
Displays legacy ZFS versions supported by the current software.
See
.Xr zpool-features 5
for a description of the feature flags supported by the current software.
.It Xo
.Nm
.Cm upgrade
.Op Fl V Ar version
.Fl a Ns | Ns Ar pool Ns ...
.Xc
Enables all supported features on the given pool.
Once this is done, the pool will no longer be accessible on systems that do not
support feature flags.
See
.Xr zpool-features 5
for details on compatibility with systems that support feature flags, but do not
support all features enabled on the pool.
.Bl -tag -width Ds
.It Fl a
Enables all supported features on all pools.
.It Fl V Ar version
Upgrade to the specified legacy version.
If the
.Fl V
flag is specified, no features will be enabled on the pool.
This option can only be used to increase the version number up to the last
supported legacy version number.
.El
.El
.Sh EXIT STATUS
The following exit values are returned:
.Bl -tag -width Ds
.It Sy 0
Successful completion.
.It Sy 1
An error occurred.
.It Sy 2
Invalid command line options were specified.
.El
.Sh EXAMPLES
.Bl -tag -width Ds
.It Sy Example 1 No Creating a RAID-Z Storage Pool
The following command creates a pool with a single raidz root vdev that
consists of six disks.
.Bd -literal
# zpool create tank raidz sda sdb sdc sdd sde sdf
.Ed
.It Sy Example 2 No Creating a Mirrored Storage Pool
The following command creates a pool with two mirrors, where each mirror
contains two disks.
.Bd -literal
# zpool create tank mirror sda sdb mirror sdc sdd
.Ed
.It Sy Example 3 No Creating a ZFS Storage Pool by Using Partitions
The following command creates an unmirrored pool using two disk partitions.
.Bd -literal
# zpool create tank sda1 sdb2
.Ed
.It Sy Example 4 No Creating a ZFS Storage Pool by Using Files
The following command creates an unmirrored pool using files.
While not recommended, a pool based on files can be useful for experimental
purposes.
.Bd -literal
# zpool create tank /path/to/file/a /path/to/file/b
.Ed
.It Sy Example 5 No Adding a Mirror to a ZFS Storage Pool
The following command adds two mirrored disks to the pool
.Em tank ,
assuming the pool is already made up of two-way mirrors.
The additional space is immediately available to any datasets within the pool.
.Bd -literal
# zpool add tank mirror sda sdb
.Ed
.It Sy Example 6 No Listing Available ZFS Storage Pools
The following command lists all available pools on the system.
In this case, the pool
.Em zion
is faulted due to a missing device.
The results from this command are similar to the following:
.Bd -literal
# zpool list
NAME    SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
rpool  19.9G  8.43G  11.4G         -    33%    42%  1.00x  ONLINE  -
tank   61.5G  20.0G  41.5G         -    48%    32%  1.00x  ONLINE  -
zion       -      -      -         -      -      -      -  FAULTED -
.Ed
.It Sy Example 7 No Destroying a ZFS Storage Pool
The following command destroys the pool
.Em tank
and any datasets contained within.
.Bd -literal
# zpool destroy -f tank
.Ed
.It Sy Example 8 No Exporting a ZFS Storage Pool
The following command exports the devices in pool
.Em tank
so that they can be relocated or later imported.
.Bd -literal
# zpool export tank
.Ed
.It Sy Example 9 No Importing a ZFS Storage Pool
The following command displays available pools, and then imports the pool
.Em tank
for use on the system.
The results from this command are similar to the following:
.Bd -literal
# zpool import
  pool: tank
    id: 15451357997522795478
 state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:

        tank        ONLINE
          mirror    ONLINE
            sda     ONLINE
            sdb     ONLINE

# zpool import tank
.Ed
.It Sy Example 10 No Upgrading All ZFS Storage Pools to the Current Version
The following command upgrades all ZFS Storage pools to the current version of
the software.
.Bd -literal
# zpool upgrade -a
This system is currently running ZFS version 2.
.Ed
.It Sy Example 11 No Managing Hot Spares
The following command creates a new pool with an available hot spare:
.Bd -literal
# zpool create tank mirror sda sdb spare sdc
.Ed
.Pp
If one of the disks were to fail, the pool would be reduced to the degraded
state.
The failed device can be replaced using the following command:
.Bd -literal
# zpool replace tank sda sdd
.Ed
.Pp
Once the data has been resilvered, the spare is automatically removed and is
made available for use should another device fail.
The hot spare can be permanently removed from the pool using the following
command:
.Bd -literal
# zpool remove tank sdc
.Ed
.It Sy Example 12 No Creating a ZFS Pool with Mirrored Separate Intent Logs
The following command creates a ZFS storage pool consisting of two, two-way
mirrors and mirrored log devices:
.Bd -literal
# zpool create pool mirror sda sdb mirror sdc sdd log mirror \\
  sde sdf
.Ed
.It Sy Example 13 No Adding Cache Devices to a ZFS Pool
The following command adds two disks for use as cache devices to a ZFS storage
pool:
.Bd -literal
# zpool add pool cache sdc sdd
.Ed
.Pp
Once added, the cache devices gradually fill with content from main memory.
Depending on the size of your cache devices, it could take over an hour for
them to fill.
Capacity and reads can be monitored using the
.Cm iostat
option as follows:
.Bd -literal
# zpool iostat -v pool 5
.Ed
.It Sy Example 14 No Removing a Mirrored top-level (Log or Data) Device
The following commands remove the mirrored log device
.Sy mirror-2
and mirrored top-level data device
.Sy mirror-1 .
.Pp
Given this configuration:
.Bd -literal
  pool: tank
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            sda     ONLINE       0     0     0
            sdb     ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            sdc     ONLINE       0     0     0
            sdd     ONLINE       0     0     0
        logs
          mirror-2  ONLINE       0     0     0
            sde     ONLINE       0     0     0
            sdf     ONLINE       0     0     0
.Ed
.Pp
The command to remove the mirrored log
.Sy mirror-2
is:
.Bd -literal
# zpool remove tank mirror-2
.Ed
.Pp
The command to remove the mirrored data
.Sy mirror-1
is:
.Bd -literal
# zpool remove tank mirror-1
.Ed
.It Sy Example 15 No Displaying expanded space on a device
The following command displays the detailed information for the pool
.Em data .
This pool is comprised of a single raidz vdev where one of its devices
increased its capacity by 10GB.
In this example, the pool will not be able to utilize this extra capacity until
all the devices under the raidz vdev have been expanded.
.Bd -literal
# zpool list -v data
NAME         SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
data        23.9G  14.6G  9.30G         -    48%    61%  1.00x  ONLINE  -
  raidz1    23.9G  14.6G  9.30G         -    48%
    sda         -      -      -         -      -
    sdb         -      -      -       10G      -
    sdc         -      -      -         -      -
.Ed
.It Sy Example 16 No Adding output columns
Additional columns can be added to the
.Nm zpool Cm status
and
.Nm zpool Cm iostat
output with the
.Fl c
option.
.Bd -literal
# zpool status -c vendor,model,size
   NAME     STATE  READ WRITE CKSUM vendor  model        size
   tank     ONLINE 0    0     0
   mirror-0 ONLINE 0    0     0
   U1       ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
   U10      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
   U11      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
   U12      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
   U13      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
   U14      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T

# zpool iostat -vc slaves
              capacity     operations     bandwidth
pool        alloc   free   read  write   read  write  slaves
----------  -----  -----  -----  -----  -----  -----  ---------
tank        20.4G  7.23T     26    152  20.7M  21.6M
  mirror    20.4G  7.23T     26    152  20.7M  21.6M
    U1          -      -      0     31  1.46K  20.6M  sdb sdff
    U10         -      -      0      1  3.77K  13.3K  sdas sdgw
    U11         -      -      0      1   288K  13.3K  sdat sdgx
    U12         -      -      0      1  78.4K  13.3K  sdau sdgy
    U13         -      -      0      1   128K  13.3K  sdav sdgz
    U14         -      -      0      1  63.2K  13.3K  sdfk sdg
.Ed
.El
.Sh ENVIRONMENT VARIABLES
.Bl -tag -width "ZFS_ABORT"
.It Ev ZFS_ABORT
Cause
.Nm zpool
to dump core on exit for the purposes of running
.Sy ::findleaks .
.El
.Bl -tag -width "ZPOOL_IMPORT_PATH"
.It Ev ZPOOL_IMPORT_PATH
The search path for devices or files to use with the pool.
This is a colon-separated list of directories in which
.Nm zpool
looks for device nodes and files.
Similar to the
.Fl d
option in
.Nm zpool import .
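.Pp
For example, to restrict the device search on import to persistent by-id names:
.Bd -literal
# ZPOOL_IMPORT_PATH=/dev/disk/by-id zpool import
.Ed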
.El
.Bl -tag -width "ZPOOL_VDEV_NAME_GUID"
.It Ev ZPOOL_VDEV_NAME_GUID
Cause
.Nm zpool
subcommands to output vdev GUIDs by default.
This behavior is identical to the
.Nm zpool status -g
command line option.
.El
.Bl -tag -width "ZPOOL_VDEV_NAME_FOLLOW_LINKS"
.It Ev ZPOOL_VDEV_NAME_FOLLOW_LINKS
Cause
.Nm zpool
subcommands to follow links for vdev names by default.
This behavior is identical to the
.Nm zpool status -L
command line option.
.El
.Bl -tag -width "ZPOOL_VDEV_NAME_PATH"
.It Ev ZPOOL_VDEV_NAME_PATH
Cause
.Nm zpool
subcommands to output full vdev path names by default.
This behavior is identical to the
.Nm zpool status -P
command line option.
.El
.Bl -tag -width "ZFS_VDEV_DEVID_OPT_OUT"
.It Ev ZFS_VDEV_DEVID_OPT_OUT
Older ZFS on Linux implementations had issues when attempting to display pool
config VDEV names if a
.Sy devid
NVP value is present in the pool's config.
.Pp
For example, a pool that originated on the illumos platform would have a devid
value in the config and
.Nm zpool status
would fail when listing the config.
This would also be true for future Linux-based pools.
.Pp
A pool can be stripped of any
.Sy devid
values on import or prevented from adding them on
.Nm zpool create
or
.Nm zpool add
by setting
.Sy ZFS_VDEV_DEVID_OPT_OUT .
.El
.Bl -tag -width "ZPOOL_SCRIPTS_AS_ROOT"
.It Ev ZPOOL_SCRIPTS_AS_ROOT
Allow a privileged user to run
.Nm zpool status/iostat
with the
.Fl c
option.
Normally, only unprivileged users are allowed to run
.Fl c .
.El
.Bl -tag -width "ZPOOL_SCRIPTS_PATH"
.It Ev ZPOOL_SCRIPTS_PATH
The search path for scripts when running
.Nm zpool status/iostat
with the
.Fl c
option.
This is a colon-separated list of directories and overrides the default
.Pa ~/.zpool.d
and
.Pa /etc/zfs/zpool.d
search paths.
.El
.Bl -tag -width "ZPOOL_SCRIPTS_ENABLED"
.It Ev ZPOOL_SCRIPTS_ENABLED
Allow a user to run
.Nm zpool status/iostat
with the
.Fl c
option.
If
.Sy ZPOOL_SCRIPTS_ENABLED
is not set, it is assumed that the user is allowed to run
.Nm zpool status/iostat -c .
.El
.Sh INTERFACE STABILITY
.Sy Evolving
.Sh SEE ALSO
.Xr zfs-events 5 ,
.Xr zfs-module-parameters 5 ,
.Xr zpool-features 5 ,
.Xr zed 8 ,
.Xr zfs 8