1 .\"
2 .\" CDDL HEADER START
3 .\"
4 .\" The contents of this file are subject to the terms of the
5 .\" Common Development and Distribution License (the "License").
6 .\" You may not use this file except in compliance with the License.
7 .\"
8 .\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
9 .\" or http://www.opensolaris.org/os/licensing.
10 .\" See the License for the specific language governing permissions
11 .\" and limitations under the License.
12 .\"
13 .\" When distributing Covered Code, include this CDDL HEADER in each
14 .\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
15 .\" If applicable, add the following below this CDDL HEADER, with the
16 .\" fields enclosed by brackets "[]" replaced with your own identifying
17 .\" information: Portions Copyright [yyyy] [name of copyright owner]
18 .\"
19 .\" CDDL HEADER END
20 .\"
21 .\"
22 .\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
23 .\" Copyright (c) 2013 by Delphix. All rights reserved.
24 .\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
25 .\" Copyright (c) 2017 Datto Inc.
26 .\" Copyright (c) 2017 George Melikov. All Rights Reserved.
27 .\" Copyright 2017 Nexenta Systems, Inc.
28 .\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved.
29 .\"
30 .Dd August 23, 2017
31 .Dt ZPOOL 8 SMM
32 .Os Linux
33 .Sh NAME
34 .Nm zpool
35 .Nd configure ZFS storage pools
36 .Sh SYNOPSIS
37 .Nm
38 .Fl ?
39 .Nm
40 .Cm add
41 .Op Fl fgLnP
42 .Oo Fl o Ar property Ns = Ns Ar value Oc
43 .Ar pool vdev Ns ...
44 .Nm
45 .Cm attach
46 .Op Fl f
47 .Oo Fl o Ar property Ns = Ns Ar value Oc
48 .Ar pool device new_device
49 .Nm
50 .Cm clear
51 .Ar pool
52 .Op Ar device
53 .Nm
54 .Cm create
55 .Op Fl dfn
56 .Op Fl m Ar mountpoint
57 .Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
58 .Oo Fl o Ar feature@feature Ns = Ns Ar value Oc
59 .Oo Fl O Ar file-system-property Ns = Ns Ar value Oc Ns ...
60 .Op Fl R Ar root
61 .Ar pool vdev Ns ...
.Nm
.Cm destroy
.Op Fl f
.Ar pool
.Nm
.Cm detach
.Ar pool device
.Nm
.Cm events
.Op Fl vHfc
.Op Ar pool
.Nm
.Cm export
.Op Fl a
.Op Fl f
.Ar pool Ns ...
.Nm
.Cm get
.Op Fl Hp
.Op Fl o Ar field Ns Oo , Ns Ar field Oc Ns ...
.Sy all Ns | Ns Ar property Ns Oo , Ns Ar property Oc Ns ...
.Ar pool Ns ...
.Nm
.Cm history
.Op Fl il
.Oo Ar pool Oc Ns ...
.Nm
.Cm import
.Op Fl D
.Op Fl d Ar dir
.Nm
.Cm import
.Fl a
.Op Fl DflmN
.Op Fl F Oo Fl n Oc Oo Fl T Oc Oo Fl X Oc
.Op Fl c Ar cachefile Ns | Ns Fl d Ar dir
.Op Fl o Ar mntopts
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Op Fl R Ar root
.Nm
.Cm import
.Op Fl Dflm
.Op Fl F Oo Fl n Oc Oo Fl T Oc Oo Fl X Oc
.Op Fl c Ar cachefile Ns | Ns Fl d Ar dir
.Op Fl o Ar mntopts
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Op Fl R Ar root
.Op Fl s
.Ar pool Ns | Ns Ar id
.Op Ar newpool Oo Fl t Oc
.Nm
.Cm iostat
.Op Oo Oo Fl c Ar SCRIPT Oc Oo Fl lq Oc Oc Ns | Ns Fl rw
.Op Fl T Sy u Ns | Ns Sy d
.Op Fl ghHLpPvy
.Oo Oo Ar pool Ns ... Oc Ns | Ns Oo Ar pool vdev Ns ... Oc Ns | Ns Oo Ar vdev Ns ... Oc Oc
.Op Ar interval Op Ar count
.Nm
.Cm labelclear
.Op Fl f
.Ar device
.Nm
.Cm list
.Op Fl HgLpPv
.Op Fl o Ar property Ns Oo , Ns Ar property Oc Ns ...
.Op Fl T Sy u Ns | Ns Sy d
.Oo Ar pool Oc Ns ...
.Op Ar interval Op Ar count
.Nm
.Cm offline
.Op Fl f
.Op Fl t
.Ar pool Ar device Ns ...
.Nm
.Cm online
.Op Fl e
.Ar pool Ar device Ns ...
.Nm
.Cm reguid
.Ar pool
.Nm
.Cm reopen
.Op Fl n
.Ar pool
.Nm
.Cm remove
.Ar pool Ar device Ns ...
.Nm
.Cm replace
.Op Fl f
.Oo Fl o Ar property Ns = Ns Ar value Oc
.Ar pool Ar device Op Ar new_device
.Nm
.Cm scrub
.Op Fl s | Fl p
.Ar pool Ns ...
.Nm
.Cm set
.Ar property Ns = Ns Ar value
.Ar pool
.Nm
.Cm split
.Op Fl gLlnP
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Op Fl R Ar root
.Ar pool newpool
.Oo Ar device Oc Ns ...
.Nm
.Cm status
.Oo Fl c Ar SCRIPT Oc
.Op Fl gLPvxD
.Op Fl T Sy u Ns | Ns Sy d
.Oo Ar pool Oc Ns ...
.Op Ar interval Op Ar count
.Nm
.Cm sync
.Oo Ar pool Oc Ns ...
.Nm
.Cm upgrade
.Nm
.Cm upgrade
.Fl v
.Nm
.Cm upgrade
.Op Fl V Ar version
.Fl a Ns | Ns Ar pool Ns ...
.Sh DESCRIPTION
The
.Nm
command configures ZFS storage pools.
A storage pool is a collection of devices that provides physical storage and
data replication for ZFS datasets.
All datasets within a storage pool share the same space.
See
.Xr zfs 8
for information on managing datasets.
.Ss Virtual Devices (vdevs)
A "virtual device" describes a single device or a collection of devices
organized according to certain performance and fault characteristics.
The following virtual devices are supported:
.Bl -tag -width Ds
.It Sy disk
A block device, typically located under
.Pa /dev .
ZFS can use individual slices or partitions, though the recommended mode of
operation is to use whole disks.
A disk can be specified by a full path, or it can be a shorthand name
.Po the relative portion of the path under
.Pa /dev
.Pc .
A whole disk can be specified by omitting the slice or partition designation.
For example,
.Pa sda
is equivalent to
.Pa /dev/sda .
When given a whole disk, ZFS automatically labels the disk, if necessary.
.It Sy file
A regular file.
The use of files as a backing store is strongly discouraged.
It is designed primarily for experimental purposes, as the fault tolerance of a
file is only as good as the file system of which it is a part.
A file must be specified by a full path.
.It Sy mirror
A mirror of two or more devices.
Data is replicated in an identical fashion across all components of a mirror.
A mirror with N disks of size X can hold X bytes and can withstand (N-1) devices
failing before data integrity is compromised.
.It Sy raidz , raidz1 , raidz2 , raidz3
A variation on RAID-5 that allows for better distribution of parity and
eliminates the RAID-5
.Qq write hole
.Pq in which data and parity become inconsistent after a power loss .
Data and parity are striped across all disks within a raidz group.
.Pp
A raidz group can have single-, double-, or triple-parity, meaning that the
raidz group can sustain one, two, or three failures, respectively, without
losing any data.
The
.Sy raidz1
vdev type specifies a single-parity raidz group; the
.Sy raidz2
vdev type specifies a double-parity raidz group; and the
.Sy raidz3
vdev type specifies a triple-parity raidz group.
The
.Sy raidz
vdev type is an alias for
.Sy raidz1 .
.Pp
A raidz group with N disks of size X with P parity disks can hold approximately
(N-P)*X bytes and can withstand P device(s) failing before data integrity is
compromised.
The minimum number of devices in a raidz group is one more than the number of
parity disks.
The recommended number is between 3 and 9 to help increase performance.
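.Pp
For example, the following creates a pool with a single double-parity raidz
group consisting of five disks:
.Bd -literal
# zpool create mypool raidz2 sda sdb sdc sdd sde
.Ed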
.It Sy spare
A special pseudo-vdev which keeps track of available hot spares for a pool.
For more information, see the
.Sx Hot Spares
section.
.It Sy log
A separate intent log device.
If more than one log device is specified, then writes are load-balanced between
devices.
Log devices can be mirrored.
However, raidz vdev types are not supported for the intent log.
For more information, see the
.Sx Intent Log
section.
.It Sy cache
A device used to cache storage pool data.
A cache device cannot be configured as a mirror or raidz group.
For more information, see the
.Sx Cache Devices
section.
.El
.Pp
Virtual devices cannot be nested, so a mirror or raidz virtual device can only
contain files or disks.
Mirrors of mirrors
.Pq or other combinations
are not allowed.
.Pp
A pool can have any number of virtual devices at the top of the configuration
.Po known as
.Qq root vdevs
.Pc .
Data is dynamically distributed across all top-level devices to balance data
among devices.
As new virtual devices are added, ZFS automatically places data on the newly
available devices.
.Pp
Virtual devices are specified one at a time on the command line, separated by
whitespace.
The keywords
.Sy mirror
and
.Sy raidz
are used to distinguish where a group ends and another begins.
For example, the following creates two root vdevs, each a mirror of two disks:
.Bd -literal
# zpool create mypool mirror sda sdb mirror sdc sdd
.Ed
.Ss Device Failure and Recovery
ZFS supports a rich set of mechanisms for handling device failure and data
corruption.
All metadata and data is checksummed, and ZFS automatically repairs bad data
from a good copy when corruption is detected.
.Pp
In order to take advantage of these features, a pool must make use of some form
of redundancy, using either mirrored or raidz groups.
While ZFS supports running in a non-redundant configuration, where each root
vdev is simply a disk or file, this is strongly discouraged.
A single case of bit corruption can render some or all of your data unavailable.
.Pp
A pool's health status is described by one of three states: online, degraded,
or faulted.
An online pool has all devices operating normally.
A degraded pool is one in which one or more devices have failed, but the data is
still available due to a redundant configuration.
A faulted pool has corrupted metadata, or one or more faulted devices, and
insufficient replicas to continue functioning.
.Pp
The health of the top-level vdev, such as a mirror or raidz device, is
potentially impacted by the state of its associated vdevs, or component
devices.
A top-level vdev or component device is in one of the following states:
.Bl -tag -width "DEGRADED"
.It Sy DEGRADED
One or more top-level vdevs is in the degraded state because one or more
component devices are offline.
Sufficient replicas exist to continue functioning.
.Pp
One or more component devices is in the degraded or faulted state, but
sufficient replicas exist to continue functioning.
The underlying conditions are as follows:
.Bl -bullet
.It
The number of checksum errors exceeds acceptable levels and the device is
degraded as an indication that something may be wrong.
ZFS continues to use the device as necessary.
.It
The number of I/O errors exceeds acceptable levels.
The device could not be marked as faulted because there are insufficient
replicas to continue functioning.
.El
.It Sy FAULTED
One or more top-level vdevs is in the faulted state because one or more
component devices are offline.
Insufficient replicas exist to continue functioning.
.Pp
One or more component devices is in the faulted state, and insufficient
replicas exist to continue functioning.
The underlying conditions are as follows:
.Bl -bullet
.It
The device could be opened, but the contents did not match expected values.
.It
The number of I/O errors exceeds acceptable levels and the device is faulted to
prevent further use of the device.
.El
.It Sy OFFLINE
The device was explicitly taken offline by the
.Nm zpool Cm offline
command.
.It Sy ONLINE
The device is online and functioning.
.It Sy REMOVED
The device was physically removed while the system was running.
Device removal detection is hardware-dependent and may not be supported on all
platforms.
.It Sy UNAVAIL
The device could not be opened.
If a pool is imported when a device was unavailable, then the device will be
identified by a unique identifier instead of its path since the path was never
correct in the first place.
.El
.Pp
If a device is removed and later re-attached to the system, ZFS attempts
to put the device online automatically.
Device attach detection is hardware-dependent and might not be supported on all
platforms.
.Ss Hot Spares
ZFS allows devices to be associated with pools as
.Qq hot spares .
These devices are not actively used in the pool, but when an active device
fails, it is automatically replaced by a hot spare.
To create a pool with hot spares, specify a
.Sy spare
vdev with any number of devices.
For example,
.Bd -literal
# zpool create pool mirror sda sdb spare sdc sdd
.Ed
.Pp
Spares can be shared across multiple pools, and can be added with the
.Nm zpool Cm add
command and removed with the
.Nm zpool Cm remove
command.
Once a spare replacement is initiated, a new
.Sy spare
vdev is created within the configuration that will remain there until the
original device is replaced.
At this point, the hot spare becomes available again if another device fails.
.Pp
If a pool has a shared spare that is currently being used, the pool can not be
exported since other pools may use this shared spare, which may lead to
potential data corruption.
.Pp
An in-progress spare replacement can be cancelled by detaching the hot spare.
If the original faulted device is detached, then the hot spare assumes its
place in the configuration, and is removed from the spare list of all active
pools.
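.Pp
For example, if the spare
.Pa sdc
from the pool above is currently replacing a failed device, the replacement
can be cancelled with:
.Bd -literal
# zpool detach pool sdc
.Ed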
.Pp
Spares cannot replace log devices.
.Ss Intent Log
The ZFS Intent Log (ZIL) satisfies POSIX requirements for synchronous
transactions.
For instance, databases often require their transactions to be on stable storage
devices when returning from a system call.
NFS and other applications can also use
.Xr fsync 2
to ensure data stability.
By default, the intent log is allocated from blocks within the main pool.
However, it might be possible to get better performance using separate intent
log devices such as NVRAM or a dedicated disk.
For example:
.Bd -literal
# zpool create pool sda sdb log sdc
.Ed
.Pp
Multiple log devices can also be specified, and they can be mirrored.
See the
.Sx EXAMPLES
section for an example of mirroring multiple log devices.
.Pp
Log devices can be added, replaced, attached, detached, and imported and
exported as part of the larger pool.
Mirrored log devices can be removed by specifying the top-level mirror for the
log.
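.Pp
For example, assuming
.Nm zpool Cm status
reports the mirrored log as the top-level vdev
.Sy mirror-1
.Pq the name is illustrative ,
it can be removed with:
.Bd -literal
# zpool remove pool mirror-1
.Ed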
.Ss Cache Devices
Devices can be added to a storage pool as
.Qq cache devices .
These devices provide an additional layer of caching between main memory and
disk.
For read-heavy workloads, where the working set size is much larger than what
can be cached in main memory, using cache devices allows much more of this
working set to be served from low latency media.
Using cache devices provides the greatest performance improvement for random
read workloads of mostly static content.
.Pp
To create a pool with cache devices, specify a
.Sy cache
vdev with any number of devices.
For example:
.Bd -literal
# zpool create pool sda sdb cache sdc sdd
.Ed
.Pp
Cache devices cannot be mirrored or part of a raidz configuration.
If a read error is encountered on a cache device, that read I/O is reissued to
the original storage pool device, which might be part of a mirrored or raidz
configuration.
.Pp
The content of the cache devices is considered volatile, as is the case with
other system caches.
.Ss Properties
Each pool has several properties associated with it.
Some properties are read-only statistics while others are configurable and
change the behavior of the pool.
.Pp
The following are read-only properties:
.Bl -tag -width Ds
.It Sy available
Amount of storage available within the pool.
This property can also be referred to by its shortened column name,
.Sy avail .
.It Sy capacity
Percentage of pool space used.
This property can also be referred to by its shortened column name,
.Sy cap .
.It Sy expandsize
Amount of uninitialized space within the pool or device that can be used to
increase the total capacity of the pool.
Uninitialized space consists of any space on an EFI labeled vdev which has not
been brought online
.Po e.g., using
.Nm zpool Cm online Fl e
.Pc .
This space occurs when a LUN is dynamically expanded.
.It Sy fragmentation
The amount of fragmentation in the pool.
.It Sy free
The amount of free space available in the pool.
.It Sy freeing
After a file system or snapshot is destroyed, the space it was using is
returned to the pool asynchronously.
.Sy freeing
is the amount of space remaining to be reclaimed.
Over time
.Sy freeing
will decrease while
.Sy free
increases.
.It Sy health
The current health of the pool.
Health can be one of
.Sy ONLINE , DEGRADED , FAULTED , OFFLINE , REMOVED , UNAVAIL .
.It Sy guid
A unique identifier for the pool.
.It Sy size
Total size of the storage pool.
.It Sy unsupported@ Ns Em feature_guid
Information about unsupported features that are enabled on the pool.
See
.Xr zpool-features 5
for details.
.It Sy used
Amount of storage space used within the pool.
.El
.Pp
The space usage properties report actual physical space available to the
storage pool.
The physical space can be different from the total amount of space that any
contained datasets can actually use.
The amount of space used in a raidz configuration depends on the characteristics
of the data being written.
In addition, ZFS reserves some space for internal accounting that the
.Xr zfs 8
command takes into account, but the
.Nm
command does not.
For non-full pools of a reasonable size, these effects should be invisible.
For small pools, or pools that are close to being completely full, these
discrepancies may become more noticeable.
.Pp
The following property can be set at creation time and import time:
.Bl -tag -width Ds
.It Sy altroot
Alternate root directory.
If set, this directory is prepended to any mount points within the pool.
This can be used when examining an unknown pool where the mount points cannot be
trusted, or in an alternate boot environment, where the typical paths are not
valid.
.Sy altroot
is not a persistent property.
It is valid only while the system is up.
Setting
.Sy altroot
defaults to using
.Sy cachefile Ns = Ns Sy none ,
though this may be overridden using an explicit setting.
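.Pp
For example, to examine an unknown pool with its mount points prepended with
.Pa /mnt :
.Bd -literal
# zpool import -o altroot=/mnt pool
.Ed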
.El
.Pp
The following property can be set only at import time:
.Bl -tag -width Ds
.It Sy readonly Ns = Ns Sy on Ns | Ns Sy off
If set to
.Sy on ,
the pool will be imported in read-only mode.
This property can also be referred to by its shortened column name,
.Sy rdonly .
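.Pp
For example:
.Bd -literal
# zpool import -o readonly=on pool
.Ed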
.El
.Pp
The following properties can be set at creation time and import time, and later
changed with the
.Nm zpool Cm set
command:
.Bl -tag -width Ds
.It Sy ashift Ns = Ns Sy ashift
Pool sector size exponent, to the power of
.Sy 2
(internally referred to as
.Sy ashift
). Values from 9 to 16, inclusive, are valid; also, the special
value 0 (the default) means to auto-detect using the kernel's block
layer and a ZFS internal exception list. I/O operations will be aligned
to the specified size boundaries. Additionally, the minimum (disk)
write size will be set to the specified size, so this represents a
space vs. performance trade-off. For optimal performance, the pool
sector size should be greater than or equal to the sector size of the
underlying disks. The typical case for setting this property is when
performance is important and the underlying disks use 4KiB sectors but
report 512B sectors to the OS (for compatibility reasons); in that
case, set
.Sy ashift=12
(which is 1<<12 = 4096). When set, this property is
used as the default hint value in subsequent vdev operations (add,
attach and replace). Changing this value will not modify any existing
vdev, not even on disk replacement; however, it can be used, for
instance, to replace a dying 512B-sector disk with a newer 4KiB-sector
device: this will probably result in bad performance but at the same
time could prevent loss of data.
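.Pp
For example, to create a pool aligned for disks with 4KiB sectors:
.Bd -literal
# zpool create -o ashift=12 tank mirror sda sdb
.Ed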
.It Sy autoexpand Ns = Ns Sy on Ns | Ns Sy off
Controls automatic pool expansion when the underlying LUN is grown.
If set to
.Sy on ,
the pool will be resized according to the size of the expanded device.
If the device is part of a mirror or raidz then all devices within that
mirror/raidz group must be expanded before the new space is made available to
the pool.
The default behavior is
.Sy off .
This property can also be referred to by its shortened column name,
.Sy expand .
.It Sy autoreplace Ns = Ns Sy on Ns | Ns Sy off
Controls automatic device replacement.
If set to
.Sy off ,
device replacement must be initiated by the administrator by using the
.Nm zpool Cm replace
command.
If set to
.Sy on ,
any new device, found in the same physical location as a device that previously
belonged to the pool, is automatically formatted and replaced.
The default behavior is
.Sy off .
This property can also be referred to by its shortened column name,
.Sy replace .
Autoreplace can also be used with virtual disks (like device
mapper) provided that you use the /dev/disk/by-vdev paths set up by
vdev_id.conf. See the
.Xr vdev_id 8
man page for more details.
Autoreplace and autoonline require the ZFS Event Daemon be configured and
running. See the
.Xr zed 8
man page for more details.
.It Sy bootfs Ns = Ns Sy (unset) Ns | Ns Ar pool Ns / Ns Ar dataset
Identifies the default bootable dataset for the root pool. This property is
expected to be set mainly by the installation and upgrade programs.
Not all Linux distribution boot processes use the bootfs property.
.It Sy cachefile Ns = Ns Ar path Ns | Ns Sy none
Controls the location of where the pool configuration is cached.
Discovering all pools on system startup requires a cached copy of the
configuration data that is stored on the root file system.
All pools in this cache are automatically imported when the system boots.
Some environments, such as install and clustering, need to cache this
information in a different location so that pools are not automatically
imported.
Setting this property caches the pool configuration in a different location that
can later be imported with
.Nm zpool Cm import Fl c .
Setting it to the special value
.Sy none
creates a temporary pool that is never cached, and the special value
.Qq
.Pq empty string
uses the default location.
.Pp
Multiple pools can share the same cache file.
Because the kernel destroys and recreates this file when pools are added and
removed, care should be taken when attempting to access this file.
When the last pool using a
.Sy cachefile
is exported or destroyed, the file will be empty.
.It Sy comment Ns = Ns Ar text
A text string consisting of printable ASCII characters that will be stored
such that it is available even if the pool becomes faulted.
An administrator can provide additional information about a pool using this
property.
.It Sy dedupditto Ns = Ns Ar number
Threshold for the number of block ditto copies.
If the reference count for a deduplicated block increases above this number, a
new ditto copy of this block is automatically stored.
The default setting is
.Sy 0
which causes no ditto copies to be created for deduplicated blocks.
The minimum legal nonzero setting is
.Sy 100 .
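.Pp
For example, to store an additional copy of each deduplicated block once its
reference count exceeds 100:
.Bd -literal
# zpool set dedupditto=100 pool
.Ed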
.It Sy delegation Ns = Ns Sy on Ns | Ns Sy off
Controls whether a non-privileged user is granted access based on the dataset
permissions defined on the dataset.
See
.Xr zfs 8
for more information on ZFS delegated administration.
.It Sy failmode Ns = Ns Sy wait Ns | Ns Sy continue Ns | Ns Sy panic
Controls the system behavior in the event of catastrophic pool failure.
This condition is typically a result of a loss of connectivity to the underlying
storage device(s) or a failure of all devices within the pool.
The behavior of such an event is determined as follows:
.Bl -tag -width "continue"
.It Sy wait
Blocks all I/O access until the device connectivity is recovered and the errors
are cleared.
This is the default behavior.
.It Sy continue
Returns
.Er EIO
to any new write I/O requests but allows reads to any of the remaining healthy
devices.
Any write requests that have yet to be committed to disk would be blocked.
.It Sy panic
Prints out a message to the console and generates a system crash dump.
.El
.It Sy feature@ Ns Ar feature_name Ns = Ns Sy enabled
The value of this property is the current state of
.Ar feature_name .
The only valid value when setting this property is
.Sy enabled
which moves
.Ar feature_name
to the enabled state.
See
.Xr zpool-features 5
for details on feature states.
.It Sy listsnapshots Ns = Ns Sy on Ns | Ns Sy off
Controls whether information about snapshots associated with this pool is
output when
.Nm zfs Cm list
is run without the
.Fl t
option.
The default value is
.Sy off .
This property can also be referred to by its shortened name,
.Sy listsnaps .
.It Sy multihost Ns = Ns Sy on Ns | Ns Sy off
Controls whether a pool activity check should be performed during
.Nm zpool Cm import .
When a pool is determined to be active it cannot be imported, even with the
.Fl f
option. This property is intended to be used in failover configurations
where multiple hosts have access to a pool on shared storage. When this
property is on, periodic writes to storage occur to show the pool is in use.
See
.Sy zfs_multihost_interval
in the
.Xr zfs-module-parameters 5
man page. In order to enable this property each host must set a unique hostid.
See
.Xr genhostid 1 ,
.Xr zgenhostid 8 ,
and
.Xr spl-module-parameters 5
for additional details. The default value is
.Sy off .
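.Pp
For example, once each host has a unique hostid configured, the activity check
can be enabled with:
.Bd -literal
# zpool set multihost=on pool
.Ed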
.It Sy version Ns = Ns Ar version
The current on-disk version of the pool.
This can be increased, but never decreased.
The preferred method of updating pools is with the
.Nm zpool Cm upgrade
command, though this property can be used when a specific version is needed for
backwards compatibility.
Once feature flags are enabled on a pool this property will no longer have a
value.
.El
.Ss Subcommands
All subcommands that modify state are logged persistently to the pool in their
original form.
.Pp
The
.Nm
command provides subcommands to create and destroy storage pools, add capacity
to storage pools, and provide information about the storage pools.
The following subcommands are supported:
.Bl -tag -width Ds
.It Xo
.Nm
.Fl ?
.Xc
Displays a help message.
.It Xo
.Nm
.Cm add
.Op Fl fgLnP
.Oo Fl o Ar property Ns = Ns Ar value Oc
.Ar pool vdev Ns ...
.Xc
Adds the specified virtual devices to the given pool.
The
.Ar vdev
specification is described in the
.Sx Virtual Devices
section.
The behavior of the
.Fl f
option, and the device checks performed, are described in the
.Nm zpool Cm create
subcommand.
.Bl -tag -width Ds
.It Fl f
Forces use of
.Ar vdev Ns s ,
even if they appear in use or specify a conflicting replication level.
Not all devices can be overridden in this manner.
.It Fl g
Display
.Ar vdev
GUIDs instead of the normal device names. These GUIDs can be used in place of
device names for the zpool detach/offline/remove/replace commands.
.It Fl L
Display real paths for
.Ar vdev Ns s
resolving all symbolic links. This can be used to look up the current block
device name regardless of the /dev/disk/ path used to open it.
.It Fl n
Displays the configuration that would be used without actually adding the
.Ar vdev Ns s .
The actual pool creation can still fail due to insufficient privileges or
device sharing.
.It Fl P
Display real paths for
.Ar vdev Ns s
instead of only the last component of the path. This can be used in
conjunction with the
.Fl L
flag.
.It Fl o Ar property Ns = Ns Ar value
Sets the given pool properties. See the
.Sx Properties
section for a list of valid properties that can be set. The only property
supported at the moment is
.Sy ashift .
.El
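.Pp
For example, the following adds a third mirror to the two-mirror pool created
earlier:
.Bd -literal
# zpool add mypool mirror sde sdf
.Ed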
.It Xo
.Nm
.Cm attach
.Op Fl f
.Oo Fl o Ar property Ns = Ns Ar value Oc
.Ar pool device new_device
.Xc
Attaches
.Ar new_device
to the existing
.Ar device .
The existing device cannot be part of a raidz configuration.
If
.Ar device
is not currently part of a mirrored configuration,
.Ar device
automatically transforms into a two-way mirror of
.Ar device
and
.Ar new_device .
If
.Ar device
is part of a two-way mirror, attaching
.Ar new_device
creates a three-way mirror, and so on.
In either case,
.Ar new_device
begins to resilver immediately.
.Bl -tag -width Ds
.It Fl f
Forces use of
.Ar new_device ,
even if it appears to be in use.
Not all devices can be overridden in this manner.
.It Fl o Ar property Ns = Ns Ar value
Sets the given pool properties. See the
.Sx Properties
section for a list of valid properties that can be set. The only property
supported at the moment is
.Sy ashift .
.El
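.Pp
For example, assuming a pool whose only vdev is the single disk
.Pa sda ,
the following converts it into a two-way mirror:
.Bd -literal
# zpool attach pool sda sdb
.Ed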
.It Xo
.Nm
.Cm clear
.Ar pool
.Op Ar device
.Xc
Clears device errors in a pool.
If no arguments are specified, all device errors within the pool are cleared.
If one or more devices are specified, only those errors associated with the
specified device or devices are cleared.
.It Xo
.Nm
.Cm create
.Op Fl dfn
.Op Fl m Ar mountpoint
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Oo Fl o Ar feature@feature Ns = Ns Ar value Oc Ns ...
.Oo Fl O Ar file-system-property Ns = Ns Ar value Oc Ns ...
.Op Fl R Ar root
.Op Fl t Ar tname
.Ar pool vdev Ns ...
.Xc
Creates a new storage pool containing the virtual devices specified on the
command line.
The pool name must begin with a letter, and can only contain
alphanumeric characters as well as underscore
.Pq Qq Sy _ ,
dash
.Pq Qq Sy \&- ,
colon
.Pq Qq Sy \&: ,
space
.Pq Qq Sy \&\ ,
and period
.Pq Qq Sy \&. .
The pool names
.Sy mirror ,
.Sy raidz ,
.Sy spare
and
.Sy log
are reserved, as are names beginning with the pattern
.Sy c[0-9] .
The
.Ar vdev
specification is described in the
.Sx Virtual Devices
section.
.Pp
The command verifies that each device specified is accessible and not currently
in use by another subsystem.
There are some uses, such as being currently mounted, or specified as the
dedicated dump device, that prevent a device from ever being used by ZFS.
Other uses, such as having a preexisting UFS file system, can be overridden with
the
.Fl f
option.
.Pp
The command also checks that the replication strategy for the pool is
consistent.
An attempt to combine redundant and non-redundant storage in a single pool, or
to mix disks and files, results in an error unless
.Fl f
is specified.
The use of differently sized devices within a single raidz or mirror group is
also flagged as an error unless
.Fl f
is specified.
.Pp
Unless the
.Fl R
option is specified, the default mount point is
.Pa / Ns Ar pool .
The mount point must not exist or must be empty, or else the root dataset
cannot be mounted.
This can be overridden with the
.Fl m
option.
.Pp
By default all supported features are enabled on the new pool unless the
.Fl d
option is specified.
.Bl -tag -width Ds
.It Fl d
Do not enable any features on the new pool.
Individual features can be enabled by setting their corresponding properties to
.Sy enabled
with the
.Fl o
option.
See
.Xr zpool-features 5
for details about feature properties.
.It Fl f
Forces use of
.Ar vdev Ns s ,
even if they appear in use or specify a conflicting replication level.
Not all devices can be overridden in this manner.
.It Fl m Ar mountpoint
Sets the mount point for the root dataset.
The default mount point is
.Pa /pool
or
.Pa altroot/pool
if
.Ar altroot
is specified.
The mount point must be an absolute path,
.Sy legacy ,
or
.Sy none .
For more information on dataset mount points, see
.Xr zfs 8 .
.It Fl n
Displays the configuration that would be used without actually creating the
pool.
The actual pool creation can still fail due to insufficient privileges or
device sharing.
.It Fl o Ar property Ns = Ns Ar value
Sets the given pool properties.
See the
.Sx Properties
section for a list of valid properties that can be set.
.It Fl o Ar feature@feature Ns = Ns Ar value
Sets the given pool feature.
See
.Xr zpool-features 5
for a list of valid features that can be set.
Value can be either disabled or enabled.
.It Fl O Ar file-system-property Ns = Ns Ar value
Sets the given file system properties in the root file system of the pool.
See the
.Sx Properties
section of
.Xr zfs 8
for a list of valid properties that can be set.
.It Fl R Ar root
Equivalent to
.Fl o Sy cachefile Ns = Ns Sy none Fl o Sy altroot Ns = Ns Ar root .
.It Fl t Ar tname
Sets the in-core pool name to
.Sy tname
while the on-disk name will be the name specified as the pool name
.Sy pool .
This will set the default cachefile property to none. This is intended
to handle namespace collisions when creating pools for other systems,
such as virtual machines or physical machines whose pools live on network
block devices.
.El
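.Pp
For example, the following creates a mirrored pool with compression enabled on
the root dataset and an alternate mount point:
.Bd -literal
# zpool create -O compression=on -m /export/tank tank mirror sda sdb
.Ed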
.It Xo
.Nm
.Cm destroy
.Op Fl f
.Ar pool
.Xc
Destroys the given pool, freeing up any devices for other use.
This command tries to unmount any active datasets before destroying the pool.
.Bl -tag -width Ds
.It Fl f
Forces any active datasets contained within the pool to be unmounted.
.El
.It Xo
.Nm
.Cm detach
.Ar pool device
.Xc
Detaches
.Ar device
from a mirror.
The operation is refused if there are no other valid replicas of the data.
If the device may be re-added to the pool later on then consider the
.Nm zpool Cm offline
command instead.
.It Xo
.Nm
.Cm events
.Op Fl cfHv
.Op Ar pool
.Xc
Lists all recent events generated by the ZFS kernel modules. These events
are consumed by
.Xr zed 8
and used to automate administrative tasks such as replacing a failed device
with a hot spare. For more information about the subclasses and event payloads
that can be generated see the
.Xr zfs-events 5
man page.
.Bl -tag -width Ds
.It Fl c
Clear all previous events.
.It Fl f
Follow mode.
.It Fl H
Scripted mode. Do not display headers, and separate fields by a
single tab instead of arbitrary space.
.It Fl v
Print the entire payload for each event.
.El
.It Xo
.Nm
.Cm export
.Op Fl a
.Op Fl f
.Ar pool Ns ...
.Xc
Exports the given pools from the system.
All devices are marked as exported, but are still considered in use by other
subsystems.
The devices can be moved between systems
.Pq even those of different endianness
and imported as long as a sufficient number of devices are present.
.Pp
Before exporting the pool, all datasets within the pool are unmounted.
A pool can not be exported if it has a shared spare that is currently being
used.
.Pp
For pools to be portable, you must give the
.Nm
command whole disks, not just partitions, so that ZFS can label the disks with
portable EFI labels.
Otherwise, disk drivers on platforms of different endianness will not recognize
the disks.
.Bl -tag -width Ds
.It Fl a
Exports all pools imported on the system.
.It Fl f
Forcefully unmount all datasets, using the
.Nm zfs Cm unmount Fl f
command.
.Pp
This command will forcefully export the pool even if it has a shared spare that
is currently being used.
This may lead to potential data corruption.
.El
.It Xo
.Nm
.Cm get
.Op Fl Hp
.Op Fl o Ar field Ns Oo , Ns Ar field Oc Ns ...
.Sy all Ns | Ns Ar property Ns Oo , Ns Ar property Oc Ns ...
.Ar pool Ns ...
.Xc
Retrieves the given list of properties
.Po
or all properties if
.Sy all
is used
.Pc
for the specified storage pool(s).
These properties are displayed with the following fields:
.Bd -literal
name          Name of storage pool
property      Property name
value         Property value
source        Property source, either 'default' or 'local'.
.Ed
.Pp
See the
.Sx Properties
section for more information on the available pool properties.
.Bl -tag -width Ds
.It Fl H
Scripted mode.
Do not display headers, and separate fields by a single tab instead of arbitrary
space.
.It Fl o Ar field
A comma-separated list of columns to display.
.Sy name Ns \&, Ns Sy property Ns \&, Ns Sy value Ns \&, Ns Sy source
is the default value.
.It Fl p
Display numbers in parsable (exact) values.
.El
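.Pp
For example, to print only the capacity of a pool in a script-friendly form:
.Bd -literal
# zpool get -H -o value capacity pool
.Ed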
.It Xo
.Nm
.Cm history
.Op Fl il
.Oo Ar pool Oc Ns ...
.Xc
Displays the command history of the specified pool(s) or all pools if no pool is
specified.
.Bl -tag -width Ds
.It Fl i
Displays internally logged ZFS events in addition to user initiated events.
.It Fl l
Displays log records in long format, which in addition to standard format
includes the user name, the hostname, and the zone in which the operation was
performed.
.El
.It Xo
.Nm
.Cm import
.Op Fl D
.Op Fl d Ar dir
.Xc
Lists pools available to import.
If the
.Fl d
option is not specified, this command searches for devices in
.Pa /dev .
The
.Fl d
option can be specified multiple times, and all directories are searched.
If the device appears to be part of an exported pool, this command displays a
summary of the pool with the name of the pool, a numeric identifier, as well as
the vdev layout and current health of the device for each device or file.
Destroyed pools, pools that were previously destroyed with the
.Nm zpool Cm destroy
command, are not listed unless the
.Fl D
option is specified.
.Pp
The numeric identifier is unique, and can be used instead of the pool name when
multiple exported pools of the same name are available.
.Bl -tag -width Ds
.It Fl c Ar cachefile
Reads configuration from the given
.Ar cachefile
that was created with the
.Sy cachefile
pool property.
This
.Ar cachefile
is used instead of searching for devices.
.It Fl d Ar dir
Searches for devices or files in
.Ar dir .
The
.Fl d
option can be specified multiple times.
.It Fl D
Lists destroyed pools only.
.El
.It Xo
.Nm
.Cm import
.Fl a
.Op Fl DflmN
.Op Fl F Oo Fl n Oc Oo Fl T Oc Oo Fl X Oc
.Op Fl c Ar cachefile Ns | Ns Fl d Ar dir
.Op Fl o Ar mntopts
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Op Fl R Ar root
.Op Fl s
.Xc
Imports all pools found in the search directories.
Identical to the previous command, except that all pools with a sufficient
number of devices available are imported.
Destroyed pools, pools that were previously destroyed with the
.Nm zpool Cm destroy
command, will not be imported unless the
.Fl D
option is specified.
.Bl -tag -width Ds
.It Fl a
Searches for and imports all pools found.
.It Fl c Ar cachefile
Reads configuration from the given
.Ar cachefile
that was created with the
.Sy cachefile
pool property.
This
.Ar cachefile
is used instead of searching for devices.
.It Fl d Ar dir
Searches for devices or files in
.Ar dir .
The
.Fl d
option can be specified multiple times.
This option is incompatible with the
.Fl c
option.
.It Fl D
Imports destroyed pools only.
The
.Fl f
option is also required.
.It Fl f
Forces import, even if the pool appears to be potentially active.
.It Fl F
Recovery mode for a non-importable pool.
Attempt to return the pool to an importable state by discarding the last few
transactions.
Not all damaged pools can be recovered by using this option.
If successful, the data from the discarded transactions is irretrievably lost.
This option is ignored if the pool is importable or already imported.
.It Fl l
Indicates that this command will request encryption keys for all encrypted
datasets it attempts to mount as it is bringing the pool online. Note that if
any datasets have a
.Sy keylocation
of
.Sy prompt
this command will block waiting for the keys to be entered. Without this flag
encrypted datasets will be left unavailable until the keys are loaded.
.It Fl m
Allows a pool to import when there is a missing log device.
Recent transactions can be lost because the log device will be discarded.
.It Fl n
Used with the
.Fl F
recovery option.
Determines whether a non-importable pool can be made importable again, but does
not actually perform the pool recovery.
For more details about pool recovery mode, see the
.Fl F
option, above.
.It Fl N
Import the pool without mounting any file systems.
.It Fl o Ar mntopts
Comma-separated list of mount options to use when mounting datasets within the
pool.
See
.Xr zfs 8
for a description of dataset properties and mount options.
.It Fl o Ar property Ns = Ns Ar value
Sets the specified property on the imported pool.
See the
.Sx Properties
section for more information on the available pool properties.
.It Fl R Ar root
Sets the
.Sy cachefile
property to
.Sy none
and the
.Sy altroot
property to
.Ar root .
.It Fl s
Scan using the default search path; the libblkid cache will not be
consulted. A custom search path may be specified by setting the
ZPOOL_IMPORT_PATH environment variable.
.It Fl X
Used with the
.Fl F
recovery option. Determines whether extreme
measures to find a valid txg should take place. This allows the pool to
be rolled back to a txg which is no longer guaranteed to be consistent.
Pools imported at an inconsistent txg may contain uncorrectable
checksum errors. For more details about pool recovery mode, see the
.Fl F
option, above. WARNING: This option can be extremely hazardous to the
health of your pool and should only be used as a last resort.
.It Fl T
Specify the txg to use for rollback. Implies
.Fl FX .
For more details
about pool recovery mode, see the
.Fl X
option, above. WARNING: This option can be extremely hazardous to the
health of your pool and should only be used as a last resort.
.El
.It Xo
.Nm
.Cm import
.Op Fl Dflm
.Op Fl F Oo Fl n Oc Oo Fl T Oc Oo Fl X Oc
.Op Fl c Ar cachefile Ns | Ns Fl d Ar dir
.Op Fl o Ar mntopts
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Op Fl R Ar root
.Op Fl s
.Ar pool Ns | Ns Ar id
.Op Ar newpool Oo Fl t Oc
.Xc
Imports a specific pool.
A pool can be identified by its name or the numeric identifier.
If
.Ar newpool
is specified, the pool is imported using the name
.Ar newpool .
Otherwise, it is imported with the same name as its exported name.
.Pp
If a device is removed from a system without running
.Nm zpool Cm export
first, the device appears as potentially active.
It cannot be determined if this was a failed export, or whether the device is
really in use from another host.
To import a pool in this state, the
.Fl f
option is required.
.Bl -tag -width Ds
.It Fl c Ar cachefile
Reads configuration from the given
.Ar cachefile
that was created with the
.Sy cachefile
pool property.
This
.Ar cachefile
is used instead of searching for devices.
.It Fl d Ar dir
Searches for devices or files in
.Ar dir .
The
.Fl d
option can be specified multiple times.
This option is incompatible with the
.Fl c
option.
.It Fl D
Imports destroyed pool.
The
.Fl f
option is also required.
.It Fl f
Forces import, even if the pool appears to be potentially active.
.It Fl F
Recovery mode for a non-importable pool.
Attempt to return the pool to an importable state by discarding the last few
transactions.
Not all damaged pools can be recovered by using this option.
If successful, the data from the discarded transactions is irretrievably lost.
This option is ignored if the pool is importable or already imported.
.It Fl l
Indicates that this command will request encryption keys for all encrypted
datasets it attempts to mount as it is bringing the pool online. Note that if
any datasets have a
.Sy keylocation
of
.Sy prompt
this command will block waiting for the keys to be entered. Without this flag
encrypted datasets will be left unavailable until the keys are loaded.
.It Fl m
Allows a pool to import when there is a missing log device.
Recent transactions can be lost because the log device will be discarded.
.It Fl n
Used with the
.Fl F
recovery option.
Determines whether a non-importable pool can be made importable again, but does
not actually perform the pool recovery.
For more details about pool recovery mode, see the
.Fl F
option, above.
.It Fl o Ar mntopts
Comma-separated list of mount options to use when mounting datasets within the
pool.
See
.Xr zfs 8
for a description of dataset properties and mount options.
.It Fl o Ar property Ns = Ns Ar value
Sets the specified property on the imported pool.
See the
.Sx Properties
section for more information on the available pool properties.
.It Fl R Ar root
Sets the
.Sy cachefile
property to
.Sy none
and the
.Sy altroot
property to
.Ar root .
.It Fl s
Scan using the default search path; the libblkid cache will not be
consulted. A custom search path may be specified by setting the
ZPOOL_IMPORT_PATH environment variable.
.It Fl X
Used with the
.Fl F
recovery option. Determines whether extreme
measures to find a valid txg should take place. This allows the pool to
be rolled back to a txg which is no longer guaranteed to be consistent.
Pools imported at an inconsistent txg may contain uncorrectable
checksum errors. For more details about pool recovery mode, see the
.Fl F
option, above. WARNING: This option can be extremely hazardous to the
health of your pool and should only be used as a last resort.
.It Fl T
Specify the txg to use for rollback. Implies
.Fl FX .
For more details
about pool recovery mode, see the
.Fl X
option, above. WARNING: This option can be extremely hazardous to the
health of your pool and should only be used as a last resort.
.It Fl t
Used with
.Sy newpool .
Specifies that
.Sy newpool
is temporary. Temporary pool names last until export. Ensures that
the original pool name will be used in all label updates and therefore
is retained upon export.
Will also set -o cachefile=none when not explicitly specified.
.El
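.Pp
For example, to import an exported pool by the numeric identifier reported by
.Nm zpool Cm import ,
renaming it in the process
.Pq the identifier shown is illustrative :
.Bd -literal
# zpool import 6223921996155991199 newpool
.Ed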
.It Xo
.Nm
.Cm iostat
.Op Oo Oo Fl c Ar SCRIPT Oc Oo Fl lq Oc Oc Ns | Ns Fl rw
.Op Fl T Sy u Ns | Ns Sy d
.Op Fl ghHLpPvy
.Oo Oo Ar pool Ns ... Oc Ns | Ns Oo Ar pool vdev Ns ... Oc Ns | Ns Oo Ar vdev Ns ... Oc Oc
.Op Ar interval Op Ar count
.Xc
Displays I/O statistics for the given pools/vdevs. You can pass in a
list of pools, a pool and list of vdevs in that pool, or a list of any
vdevs from any pool. If no items are specified, statistics for every
pool in the system are shown.
When given an
.Ar interval ,
the statistics are printed every
.Ar interval
seconds until ^C is pressed. If
.Ar count
is specified, the command exits after
.Ar count
reports are printed. The first report printed is always
the statistics since boot regardless of whether
.Ar interval
and
.Ar count
are passed. However, this behavior can be suppressed with the
.Fl y
flag. Also note that the units of
.Sy K ,
.Sy M ,
.Sy G ...
that are printed in the report are in base 1024. To get the raw
values, use the
.Fl p
flag.
.Bl -tag -width Ds
.It Fl c Op Ar SCRIPT1 Ns Oo , Ns Ar SCRIPT2 Oc Ns ...
Run a script (or scripts) on each vdev and include the output as a new column
in the
.Nm zpool Cm iostat
output. Users can run any script found in their
.Pa ~/.zpool.d
directory or from the system
.Pa /etc/zfs/zpool.d
directory. Script names containing the slash (/) character are not allowed.
The default search path can be overridden by setting the
ZPOOL_SCRIPTS_PATH environment variable. A privileged user can run
.Fl c
if they have the ZPOOL_SCRIPTS_AS_ROOT
environment variable set. If a script requires the use of a privileged
command, like
.Xr smartctl 8 ,
then it's recommended you allow the user access to it in
.Pa /etc/sudoers
or add the user to the
.Pa /etc/sudoers.d/zfs
file.
.Pp
If
.Fl c
is passed without a script name, it prints a list of all scripts.
.Fl c
also sets verbose mode
.No \&( Ns Fl v Ns No \&).
.Pp
Script output should be in the form of "name=value". The column name is
set to "name" and the value is set to "value". Multiple lines can be
used to output multiple columns. The first line of output not in the
"name=value" format is displayed without a column title, and no more
output after that is displayed. This can be useful for printing error
messages. Blank or NULL values are printed as a '-' to make output
awk-able.
.Pp
The following environment variables are set before running each script:
.Bl -tag -width "VDEV_ENC_SYSFS_PATH"
.It Sy VDEV_PATH
Full path to the vdev.
.It Sy VDEV_UPATH
Underlying path to the vdev (/dev/sd*). For use with device mapper,
multipath, or partitioned vdevs.
.It Sy VDEV_ENC_SYSFS_PATH
The sysfs path to the enclosure for the vdev (if any).
.El
.It Fl T Sy u Ns | Ns Sy d
Display a time stamp.
Specify
.Sy u
for a printed representation of the internal representation of time.
See
.Xr time 2 .
Specify
.Sy d
for standard date format.
See
.Xr date 1 .
.It Fl g
Display vdev GUIDs instead of the normal device names. These GUIDs
can be used in place of device names for the zpool
detach/offline/remove/replace commands.
.It Fl H
Scripted mode. Do not display headers, and separate fields by a
single tab instead of arbitrary space.
.It Fl L
Display real paths for vdevs resolving all symbolic links. This can
be used to look up the current block device name regardless of the
.Pa /dev/disk/
path used to open it.
.It Fl p
Display numbers in parsable (exact) values. Time values are in
nanoseconds.
.It Fl P
Display full paths for vdevs instead of only the last component of
the path. This can be used in conjunction with the
.Fl L
flag.
.It Fl r
Print request size histograms for the leaf ZIOs. This includes
histograms of individual ZIOs (
.Ar ind )
and aggregate ZIOs (
.Ar agg ).
These stats can be useful for seeing how well the ZFS IO aggregator is
working. Do not confuse these request size stats with the block layer
requests; it's possible ZIOs can be broken up before being sent to the
block device.
.It Fl v
Verbose statistics.
Reports usage statistics for individual vdevs within the
pool, in addition to the pool-wide statistics.
.It Fl y
Omit statistics since boot.
Normally the first report printed shows the statistics since boot.
This option suppresses that first report.
.It Fl w
Display latency histograms rather than the average latencies reported by
.Fl l .
The histograms cover the same measurements
.Pq Ar total_wait , disk_wait , syncq_wait , asyncq_wait
described below.
.It Fl l
Include average latency statistics:
.Pp
.Ar total_wait :
Average total IO time (queuing + disk IO time).
.Ar disk_wait :
Average disk IO time (time reading/writing the disk).
.Ar syncq_wait :
Average amount of time IO spent in synchronous priority queues. Does
not include disk time.
.Ar asyncq_wait :
Average amount of time IO spent in asynchronous priority queues.
Does not include disk time.
.Ar scrub :
Average queuing time in scrub queue. Does not include disk time.
.It Fl q
Include active queue statistics. Each priority queue has both
pending (
.Ar pend )
and active (
.Ar activ )
IOs. Pending IOs are waiting to
be issued to the disk, and active IOs have been issued to disk and are
waiting for completion. These stats are broken out by priority queue:
.Pp
.Ar syncq_read/write :
Current number of entries in synchronous priority
queues.
.Ar asyncq_read/write :
Current number of entries in asynchronous priority queues.
.Ar scrubq_read :
Current number of entries in scrub queue.
.Pp
All queue statistics are instantaneous measurements of the number of
entries in the queues. If you specify an interval, the measurements
will be sampled from the end of the interval.
.El
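.Pp
For example, to print pool-wide and per-vdev statistics for a pool every 5
seconds:
.Bd -literal
# zpool iostat -v pool 5
.Ed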
1613 .It Xo
1614 .Nm
1615 .Cm labelclear
1616 .Op Fl f
1617 .Ar device
1618 .Xc
1619 Removes ZFS label information from the specified
1620 .Ar device .
1621 The
1622 .Ar device
1623 must not be part of an active pool configuration.
1624 .Bl -tag -width Ds
1625 .It Fl f
1626 Treat exported or foreign devices as inactive.
1627 .El
1628 .It Xo
1629 .Nm
1630 .Cm list
1631 .Op Fl HgLpPv
1632 .Op Fl o Ar property Ns Oo , Ns Ar property Oc Ns ...
1633 .Op Fl T Sy u Ns | Ns Sy d
1634 .Oo Ar pool Oc Ns ...
1635 .Op Ar interval Op Ar count
1636 .Xc
1637 Lists the given pools along with a health status and space usage.
1638 If no
1639 .Ar pool Ns s
1640 are specified, all pools in the system are listed.
1641 When given an
1642 .Ar interval ,
1643 the information is printed every
1644 .Ar interval
1645 seconds until ^C is pressed.
1646 If
1647 .Ar count
1648 is specified, the command exits after
1649 .Ar count
1650 reports are printed.
1651 .Bl -tag -width Ds
1652 .It Fl g
1653 Display vdev GUIDs instead of the normal device names. These GUIDs
1654 can be used in place of device names for the zpool
1655 detach/offline/remove/replace commands.
1656 .It Fl H
1657 Scripted mode.
1658 Do not display headers, and separate fields by a single tab instead of arbitrary
1659 space.
1660 .It Fl o Ar property
1661 Comma-separated list of properties to display.
1662 See the
1663 .Sx Properties
1664 section for a list of valid properties.
1665 The default list is
1666 .Sy name, size, alloc, free, fragmentation, expandsize, capacity,
1667 .Sy dedupratio, health, altroot .
1668 .It Fl L
1669 Display real paths for vdevs resolving all symbolic links. This can
be used to look up the current block device name regardless of the
.Pa /dev/disk/
path used to open it.
1672 .It Fl p
1673 Display numbers in parsable
1674 .Pq exact
1675 values.
1676 .It Fl P
1677 Display full paths for vdevs instead of only the last component of
1678 the path. This can be used in conjunction with the
.Fl L
flag.
1680 .It Fl T Sy u Ns | Ns Sy d
1681 Display a time stamp.
1682 Specify
.Sy u
1684 for a printed representation of the internal representation of time.
1685 See
1686 .Xr time 2 .
1687 Specify
.Sy d
1689 for standard date format.
1690 See
1691 .Xr date 1 .
1692 .It Fl v
1693 Verbose statistics.
1694 Reports usage statistics for individual vdevs within the pool, in addition to
the pool-wide statistics.
1696 .El
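.Pp
For example, the following command prints a scripted, parsable report of a few
selected properties for a hypothetical pool named tank:
.Bd -literal
# zpool list -Hp -o name,size,alloc,free,health tank
.Ed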
1697 .It Xo
1698 .Nm
1699 .Cm offline
1700 .Op Fl f
1701 .Op Fl t
1702 .Ar pool Ar device Ns ...
1703 .Xc
1704 Takes the specified physical device offline.
1705 While the
1706 .Ar device
1707 is offline, no attempt is made to read or write to the device.
1708 This command is not applicable to spares.
1709 .Bl -tag -width Ds
1710 .It Fl f
1711 Force fault. Instead of offlining the disk, put it into a faulted
1712 state. The fault will persist across imports unless the
1713 .Fl t
1714 flag was specified.
1715 .It Fl t
1716 Temporary.
1717 Upon reboot, the specified physical device reverts to its previous state.
1718 .El
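.Pp
For example, the following command temporarily takes a device offline until
the next reboot (pool and device names are illustrative):
.Bd -literal
# zpool offline -t tank sda
.Ed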
1719 .It Xo
1720 .Nm
1721 .Cm online
1722 .Op Fl e
1723 .Ar pool Ar device Ns ...
1724 .Xc
1725 Brings the specified physical device online.
1726 This command is not applicable to spares.
1727 .Bl -tag -width Ds
1728 .It Fl e
1729 Expand the device to use all available space.
1730 If the device is part of a mirror or raidz then all devices must be expanded
1731 before the new space will become available to the pool.
1732 .El
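.Pp
For example, the following command brings a device back online and expands it
to use all available space (pool and device names are illustrative):
.Bd -literal
# zpool online -e tank sda
.Ed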
1733 .It Xo
1734 .Nm
1735 .Cm reguid
1736 .Ar pool
1737 .Xc
1738 Generates a new unique identifier for the pool.
1739 You must ensure that all devices in this pool are online and healthy before
1740 performing this action.
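.Pp
For example, to generate a new unique identifier for a hypothetical pool named
tank:
.Bd -literal
# zpool reguid tank
.Ed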
1741 .It Xo
1742 .Nm
1743 .Cm reopen
1744 .Op Fl n
1745 .Ar pool
1746 .Xc
1747 Reopen all the vdevs associated with the pool.
1748 .Bl -tag -width Ds
1749 .It Fl n
Do not restart an in-progress scrub operation.
This is not recommended and can result in partially resilvered devices unless
a second scrub is performed.
.El
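.Pp
For example, the following command reopens all vdevs of a hypothetical pool
named tank without restarting an in-progress scrub:
.Bd -literal
# zpool reopen -n tank
.Ed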
1752 .It Xo
1753 .Nm
1754 .Cm remove
1755 .Ar pool Ar device Ns ...
1756 .Xc
1757 Removes the specified device from the pool.
1758 This command currently only supports removing hot spares, cache, and log
1759 devices.
1760 A mirrored log device can be removed by specifying the top-level mirror for the
1761 log.
1762 Non-log devices that are part of a mirrored configuration can be removed using
1763 the
1764 .Nm zpool Cm detach
1765 command.
1766 Non-redundant and raidz devices cannot be removed from a pool.
1767 .It Xo
1768 .Nm
1769 .Cm replace
1770 .Op Fl f
1771 .Op Fl o Ar property Ns = Ns Ar value
.Ar pool Ar old_device Op Ar new_device
1773 .Xc
1774 Replaces
1775 .Ar old_device
1776 with
1777 .Ar new_device .
1778 This is equivalent to attaching
1779 .Ar new_device ,
1780 waiting for it to resilver, and then detaching
1781 .Ar old_device .
1782 .Pp
1783 The size of
1784 .Ar new_device
1785 must be greater than or equal to the minimum size of all the devices in a mirror
1786 or raidz configuration.
1787 .Pp
1788 .Ar new_device
1789 is required if the pool is not redundant.
1790 If
1791 .Ar new_device
1792 is not specified, it defaults to
1793 .Ar old_device .
1794 This form of replacement is useful after an existing disk has failed and has
1795 been physically replaced.
1796 In this case, the new disk may have the same
1797 .Pa /dev
1798 path as the old device, even though it is actually a different disk.
1799 ZFS recognizes this.
1800 .Bl -tag -width Ds
1801 .It Fl f
1802 Forces use of
1803 .Ar new_device ,
even if it appears to be in use.
1805 Not all devices can be overridden in this manner.
1806 .It Fl o Ar property Ns = Ns Ar value
1807 Sets the given pool properties. See the
1808 .Sx Properties
1809 section for a list of valid properties that can be set.
1810 The only property supported at the moment is
1811 .Sy ashift .
1812 .El
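.Pp
For example, the following command replaces one device with another and forces
the replacement vdev to use 4096-byte sectors; the pool and device names, and
the
.Sy ashift
value, are illustrative:
.Bd -literal
# zpool replace -o ashift=12 tank sda sdb
.Ed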
1813 .It Xo
1814 .Nm
1815 .Cm scrub
1816 .Op Fl s | Fl p
1817 .Ar pool Ns ...
1818 .Xc
1819 Begins a scrub or resumes a paused scrub.
1820 The scrub examines all data in the specified pools to verify that it checksums
1821 correctly.
1822 For replicated
1823 .Pq mirror or raidz
1824 devices, ZFS automatically repairs any damage discovered during the scrub.
1825 The
1826 .Nm zpool Cm status
1827 command reports the progress of the scrub and summarizes the results of the
1828 scrub upon completion.
1829 .Pp
1830 Scrubbing and resilvering are very similar operations.
1831 The difference is that resilvering only examines data that ZFS knows to be out
1832 of date
1833 .Po
1834 for example, when attaching a new device to a mirror or replacing an existing
1835 device
1836 .Pc ,
1837 whereas scrubbing examines all data to discover silent errors due to hardware
1838 faults or disk failure.
1839 .Pp
1840 Because scrubbing and resilvering are I/O-intensive operations, ZFS only allows
1841 one at a time.
1842 If a scrub is paused, the
1843 .Nm zpool Cm scrub
command resumes it.
1845 If a resilver is in progress, ZFS does not allow a scrub to be started until the
1846 resilver completes.
1847 .Bl -tag -width Ds
1848 .It Fl s
1849 Stop scrubbing.
1852 .It Fl p
1853 Pause scrubbing.
1854 Scrub pause state and progress are periodically synced to disk.
1855 If the system is restarted or pool is exported during a paused scrub,
1856 even after import, scrub will remain paused until it is resumed.
1857 Once resumed the scrub will pick up from the place where it was last
1858 checkpointed to disk.
To resume a paused scrub, issue
1860 .Nm zpool Cm scrub
1861 again.
1862 .El
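.Pp
For example, the following sequence starts a scrub of a hypothetical pool
named tank, pauses it, and later resumes it from the last on-disk checkpoint:
.Bd -literal
# zpool scrub tank
# zpool scrub -p tank
# zpool scrub tank
.Ed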
1863 .It Xo
1864 .Nm
1865 .Cm set
1866 .Ar property Ns = Ns Ar value
1867 .Ar pool
1868 .Xc
1869 Sets the given property on the specified pool.
1870 See the
1871 .Sx Properties
1872 section for more information on what properties can be set and acceptable
1873 values.
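.Pp
For example, the following command sets the
.Sy autoexpand
property (see the
.Sx Properties
section) on a hypothetical pool named tank:
.Bd -literal
# zpool set autoexpand=on tank
.Ed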
1874 .It Xo
1875 .Nm
1876 .Cm split
1877 .Op Fl gLlnP
1878 .Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
1879 .Op Fl R Ar root
1880 .Ar pool newpool
1881 .Op Ar device ...
1882 .Xc
1883 Splits devices off
1884 .Ar pool
1885 creating
1886 .Ar newpool .
1887 All vdevs in
1888 .Ar pool
1889 must be mirrors and the pool must not be in the process of resilvering.
1890 At the time of the split,
1891 .Ar newpool
1892 will be a replica of
1893 .Ar pool .
1894 By default, the
1895 last device in each mirror is split from
1896 .Ar pool
1897 to create
1898 .Ar newpool .
1899 .Pp
The optional device specification causes the specified device(s) to be
included in the new pool,
.Ar newpool ;
for any mirror whose device is left unspecified, the last device in that
mirror is used, as it would be by default.
1905 .Bl -tag -width Ds
1906 .It Fl g
1907 Display vdev GUIDs instead of the normal device names. These GUIDs
1908 can be used in place of device names for the zpool
1909 detach/offline/remove/replace commands.
1910 .It Fl L
1911 Display real paths for vdevs resolving all symbolic links. This can
1912 be used to look up the current block device name regardless of the
1913 .Pa /dev/disk/
1914 path used to open it.
1915 .It Fl l
1916 Indicates that this command will request encryption keys for all encrypted
1917 datasets it attempts to mount as it is bringing the new pool online. Note that
1918 if any datasets have a
1919 .Sy keylocation
1920 of
.Sy prompt ,
this command will block waiting for the keys to be entered.
Without this flag, encrypted datasets will be left unavailable until the keys
are loaded.
1924 .It Fl n
Do a dry run; do not actually perform the split.
1926 Print out the expected configuration of
1927 .Ar newpool .
1928 .It Fl P
1929 Display full paths for vdevs instead of only the last component of
1930 the path. This can be used in conjunction with the
.Fl L
flag.
1932 .It Fl o Ar property Ns = Ns Ar value
1933 Sets the specified property for
1934 .Ar newpool .
1935 See the
1936 .Sx Properties
1937 section for more information on the available pool properties.
1938 .It Fl R Ar root
1939 Set
1940 .Sy altroot
1941 for
1942 .Ar newpool
1943 to
1944 .Ar root
1945 and automatically import it.
1946 .El
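.Pp
For example, the following commands first print the expected configuration of
the new pool as a dry run and then perform the actual split (pool names are
illustrative):
.Bd -literal
# zpool split -n tank tank2
# zpool split tank tank2
.Ed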
1947 .It Xo
1948 .Nm
1949 .Cm status
1950 .Op Fl c Op Ar SCRIPT1 Ns Oo , Ns Ar SCRIPT2 Oc Ns ...
1951 .Op Fl gLPvxD
1952 .Op Fl T Sy u Ns | Ns Sy d
1953 .Oo Ar pool Oc Ns ...
1954 .Op Ar interval Op Ar count
1955 .Xc
1956 Displays the detailed health status for the given pools.
1957 If no
1958 .Ar pool
1959 is specified, then the status of each pool in the system is displayed.
1960 For more information on pool and device health, see the
1961 .Sx Device Failure and Recovery
1962 section.
1963 .Pp
1964 If a scrub or resilver is in progress, this command reports the percentage done
1965 and the estimated time to completion.
1966 Both of these are only approximate, because the amount of data in the pool and
1967 the other workloads on the system can change.
1968 .Bl -tag -width Ds
1969 .It Fl c Op Ar SCRIPT1 Ns Oo , Ns Ar SCRIPT2 Oc Ns ...
1970 Run a script (or scripts) on each vdev and include the output as a new column
1971 in the
1972 .Nm zpool Cm status
1973 output. See the
1974 .Fl c
1975 option of
1976 .Nm zpool Cm iostat
1977 for complete details.
1978 .It Fl g
1979 Display vdev GUIDs instead of the normal device names. These GUIDs
1980 can be used in place of device names for the zpool
1981 detach/offline/remove/replace commands.
1982 .It Fl L
1983 Display real paths for vdevs resolving all symbolic links. This can
1984 be used to look up the current block device name regardless of the
1985 .Pa /dev/disk/
1986 path used to open it.
1987 .It Fl p
1988 Display numbers in parsable (exact) values. Time values are in
1989 nanoseconds.
1990 .It Fl D
1991 Display a histogram of deduplication statistics, showing the allocated
1992 .Pq physically present on disk
1993 and referenced
1994 .Pq logically referenced in the pool
1995 block counts and sizes by reference count.
1996 .It Fl T Sy u Ns | Ns Sy d
1997 Display a time stamp.
1998 Specify
.Sy u
2000 for a printed representation of the internal representation of time.
2001 See
2002 .Xr time 2 .
2003 Specify
.Sy d
2005 for standard date format.
2006 See
2007 .Xr date 1 .
2008 .It Fl v
2009 Displays verbose data error information, printing out a complete list of all
2010 data errors since the last complete pool scrub.
2011 .It Fl x
2012 Only display status for pools that are exhibiting errors or are otherwise
2013 unavailable.
2014 Warnings about pools not using the latest on-disk format will not be included.
2015 .El
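.Pp
For example, the following command reports only pools with problems; on a
system where every pool is healthy, the output is similar to the following:
.Bd -literal
# zpool status -x
all pools are healthy
.Ed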
2016 .It Xo
2017 .Nm
2018 .Cm sync
2019 .Op Ar pool ...
2020 .Xc
2021 This command forces all in-core dirty data to be written to the primary
pool storage and not the ZIL.
It will also update administrative information, including quota reporting.
Without arguments,
2024 .Sy zpool sync
2025 will sync all pools on the system. Otherwise, it will sync only the
2026 specified pool(s).
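.Pp
For example, to force a hypothetical pool named tank to write out its dirty
data:
.Bd -literal
# zpool sync tank
.Ed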
2027 .It Xo
2028 .Nm
2029 .Cm upgrade
2030 .Xc
2031 Displays pools which do not have all supported features enabled and pools
2032 formatted using a legacy ZFS version number.
2033 These pools can continue to be used, but some features may not be available.
2034 Use
2035 .Nm zpool Cm upgrade Fl a
2036 to enable all features on all pools.
2037 .It Xo
2038 .Nm
2039 .Cm upgrade
2040 .Fl v
2041 .Xc
2042 Displays legacy ZFS versions supported by the current software.
2043 See
2044 .Xr zpool-features 5
for a description of the feature flags supported by the current software.
2046 .It Xo
2047 .Nm
2048 .Cm upgrade
2049 .Op Fl V Ar version
2050 .Fl a Ns | Ns Ar pool Ns ...
2051 .Xc
2052 Enables all supported features on the given pool.
2053 Once this is done, the pool will no longer be accessible on systems that do not
2054 support feature flags.
2055 See
.Xr zpool-features 5
2057 for details on compatibility with systems that support feature flags, but do not
2058 support all features enabled on the pool.
2059 .Bl -tag -width Ds
2060 .It Fl a
2061 Enables all supported features on all pools.
2062 .It Fl V Ar version
2063 Upgrade to the specified legacy version.
2064 If the
2065 .Fl V
2066 flag is specified, no features will be enabled on the pool.
2067 This option can only be used to increase the version number up to the last
2068 supported legacy version number.
2069 .El
2070 .El
2071 .Sh EXIT STATUS
2072 The following exit values are returned:
2073 .Bl -tag -width Ds
2074 .It Sy 0
2075 Successful completion.
2076 .It Sy 1
2077 An error occurred.
2078 .It Sy 2
2079 Invalid command line options were specified.
2080 .El
2081 .Sh EXAMPLES
2082 .Bl -tag -width Ds
2083 .It Sy Example 1 No Creating a RAID-Z Storage Pool
2084 The following command creates a pool with a single raidz root vdev that
2085 consists of six disks.
2086 .Bd -literal
2087 # zpool create tank raidz sda sdb sdc sdd sde sdf
2088 .Ed
2089 .It Sy Example 2 No Creating a Mirrored Storage Pool
2090 The following command creates a pool with two mirrors, where each mirror
2091 contains two disks.
2092 .Bd -literal
2093 # zpool create tank mirror sda sdb mirror sdc sdd
2094 .Ed
2095 .It Sy Example 3 No Creating a ZFS Storage Pool by Using Partitions
2096 The following command creates an unmirrored pool using two disk partitions.
2097 .Bd -literal
2098 # zpool create tank sda1 sdb2
2099 .Ed
2100 .It Sy Example 4 No Creating a ZFS Storage Pool by Using Files
2101 The following command creates an unmirrored pool using files.
2102 While not recommended, a pool based on files can be useful for experimental
2103 purposes.
2104 .Bd -literal
2105 # zpool create tank /path/to/file/a /path/to/file/b
2106 .Ed
2107 .It Sy Example 5 No Adding a Mirror to a ZFS Storage Pool
2108 The following command adds two mirrored disks to the pool
2109 .Em tank ,
2110 assuming the pool is already made up of two-way mirrors.
2111 The additional space is immediately available to any datasets within the pool.
2112 .Bd -literal
2113 # zpool add tank mirror sda sdb
2114 .Ed
2115 .It Sy Example 6 No Listing Available ZFS Storage Pools
2116 The following command lists all available pools on the system.
2117 In this case, the pool
2118 .Em zion
2119 is faulted due to a missing device.
2120 The results from this command are similar to the following:
2121 .Bd -literal
2122 # zpool list
NAME    SIZE  ALLOC   FREE  FRAG  EXPANDSZ    CAP  DEDUP  HEALTH  ALTROOT
rpool  19.9G  8.43G  11.4G   33%         -    42%  1.00x  ONLINE  -
tank   61.5G  20.0G  41.5G   48%         -    32%  1.00x  ONLINE  -
zion       -      -      -     -         -      -      -  FAULTED -
2127 .Ed
2128 .It Sy Example 7 No Destroying a ZFS Storage Pool
2129 The following command destroys the pool
2130 .Em tank
2131 and any datasets contained within.
2132 .Bd -literal
2133 # zpool destroy -f tank
2134 .Ed
2135 .It Sy Example 8 No Exporting a ZFS Storage Pool
2136 The following command exports the devices in pool
2137 .Em tank
2138 so that they can be relocated or later imported.
2139 .Bd -literal
2140 # zpool export tank
2141 .Ed
2142 .It Sy Example 9 No Importing a ZFS Storage Pool
2143 The following command displays available pools, and then imports the pool
2144 .Em tank
2145 for use on the system.
2146 The results from this command are similar to the following:
2147 .Bd -literal
2148 # zpool import
  pool: tank
    id: 15451357997522795478
 state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:

        tank        ONLINE
          mirror    ONLINE
            sda     ONLINE
            sdb     ONLINE
2159
2160 # zpool import tank
2161 .Ed
2162 .It Sy Example 10 No Upgrading All ZFS Storage Pools to the Current Version
2163 The following command upgrades all ZFS Storage pools to the current version of
2164 the software.
2165 .Bd -literal
2166 # zpool upgrade -a
2167 This system is currently running ZFS version 2.
2168 .Ed
2169 .It Sy Example 11 No Managing Hot Spares
2170 The following command creates a new pool with an available hot spare:
2171 .Bd -literal
2172 # zpool create tank mirror sda sdb spare sdc
2173 .Ed
2174 .Pp
2175 If one of the disks were to fail, the pool would be reduced to the degraded
2176 state.
2177 The failed device can be replaced using the following command:
2178 .Bd -literal
2179 # zpool replace tank sda sdd
2180 .Ed
2181 .Pp
2182 Once the data has been resilvered, the spare is automatically removed and is
2183 made available for use should another device fail.
2184 The hot spare can be permanently removed from the pool using the following
2185 command:
2186 .Bd -literal
2187 # zpool remove tank sdc
2188 .Ed
2189 .It Sy Example 12 No Creating a ZFS Pool with Mirrored Separate Intent Logs
2190 The following command creates a ZFS storage pool consisting of two, two-way
2191 mirrors and mirrored log devices:
2192 .Bd -literal
2193 # zpool create pool mirror sda sdb mirror sdc sdd log mirror \\
    sde sdf
2195 .Ed
2196 .It Sy Example 13 No Adding Cache Devices to a ZFS Pool
2197 The following command adds two disks for use as cache devices to a ZFS storage
2198 pool:
2199 .Bd -literal
2200 # zpool add pool cache sdc sdd
2201 .Ed
2202 .Pp
2203 Once added, the cache devices gradually fill with content from main memory.
2204 Depending on the size of your cache devices, it could take over an hour for
2205 them to fill.
2206 Capacity and reads can be monitored using the
2207 .Cm iostat
2208 option as follows:
2209 .Bd -literal
2210 # zpool iostat -v pool 5
2211 .Ed
2212 .It Sy Example 14 No Removing a Mirrored Log Device
2213 The following command removes the mirrored log device
2214 .Sy mirror-2 .
2215 Given this configuration:
2216 .Bd -literal
  pool: tank
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            sda     ONLINE       0     0     0
            sdb     ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            sdc     ONLINE       0     0     0
            sdd     ONLINE       0     0     0
        logs
          mirror-2  ONLINE       0     0     0
            sde     ONLINE       0     0     0
            sdf     ONLINE       0     0     0
2234 .Ed
2235 .Pp
2236 The command to remove the mirrored log
2237 .Sy mirror-2
2238 is:
2239 .Bd -literal
2240 # zpool remove tank mirror-2
2241 .Ed
2242 .It Sy Example 15 No Displaying expanded space on a device
2243 The following command displays the detailed information for the pool
2244 .Em data .
2245 This pool is comprised of a single raidz vdev where one of its devices
2246 increased its capacity by 10GB.
2247 In this example, the pool will not be able to utilize this extra capacity until
2248 all the devices under the raidz vdev have been expanded.
2249 .Bd -literal
2250 # zpool list -v data
NAME        SIZE  ALLOC   FREE  FRAG  EXPANDSZ    CAP  DEDUP  HEALTH  ALTROOT
data       23.9G  14.6G  9.30G   48%         -    61%  1.00x  ONLINE  -
  raidz1   23.9G  14.6G  9.30G   48%         -
    sda        -      -      -     -         -
    sdb        -      -      -     -       10G
    sdc        -      -      -     -         -
2257 .Ed
2258 .It Sy Example 16 No Adding output columns
2259 Additional columns can be added to the
2260 .Nm zpool Cm status
2261 and
2262 .Nm zpool Cm iostat
2263 output with
2264 .Fl c
2265 option.
2266 .Bd -literal
2267 # zpool status -c vendor,model,size
NAME          STATE   READ WRITE CKSUM  vendor   model         size
tank          ONLINE     0     0     0
  mirror-0    ONLINE     0     0     0
    U1        ONLINE     0     0     0  SEAGATE  ST8000NM0075  7.3T
    U10       ONLINE     0     0     0  SEAGATE  ST8000NM0075  7.3T
    U11       ONLINE     0     0     0  SEAGATE  ST8000NM0075  7.3T
    U12       ONLINE     0     0     0  SEAGATE  ST8000NM0075  7.3T
    U13       ONLINE     0     0     0  SEAGATE  ST8000NM0075  7.3T
    U14       ONLINE     0     0     0  SEAGATE  ST8000NM0075  7.3T
2277
2278 # zpool iostat -vc slaves
              capacity     operations     bandwidth
pool        alloc   free   read  write   read  write  slaves
----------  -----  -----  -----  -----  -----  -----  ---------
tank        20.4G  7.23T     26    152  20.7M  21.6M
  mirror    20.4G  7.23T     26    152  20.7M  21.6M
    U1          -      -      0     31  1.46K  20.6M  sdb sdff
    U10         -      -      0      1  3.77K  13.3K  sdas sdgw
    U11         -      -      0      1   288K  13.3K  sdat sdgx
    U12         -      -      0      1  78.4K  13.3K  sdau sdgy
    U13         -      -      0      1   128K  13.3K  sdav sdgz
    U14         -      -      0      1  63.2K  13.3K  sdfk sdg
2290 .Ed
2291 .El
2292 .Sh ENVIRONMENT VARIABLES
2293 .Bl -tag -width "ZFS_ABORT"
2294 .It Ev ZFS_ABORT
2295 Cause
2296 .Nm zpool
2297 to dump core on exit for the purposes of running
2298 .Sy ::findleaks .
2299 .El
2300 .Bl -tag -width "ZPOOL_IMPORT_PATH"
2301 .It Ev ZPOOL_IMPORT_PATH
The search path for devices or files to use with the pool.
This is a colon-separated list of directories in which
2303 .Nm zpool
2304 looks for device nodes and files.
2305 Similar to the
2306 .Fl d
2307 option in
2308 .Nm zpool import .
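.Pp
For example, to search only the by-id and by-path device directories during an
import (the directory list is illustrative):
.Bd -literal
# ZPOOL_IMPORT_PATH=/dev/disk/by-id:/dev/disk/by-path zpool import
.Ed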
2309 .El
2310 .Bl -tag -width "ZPOOL_VDEV_NAME_GUID"
2311 .It Ev ZPOOL_VDEV_NAME_GUID
2312 Cause
.Nm zpool
subcommands to output vdev GUIDs by default.
This behavior is identical to the
2315 .Nm zpool status -g
2316 command line option.
2317 .El
2318 .Bl -tag -width "ZPOOL_VDEV_NAME_FOLLOW_LINKS"
2319 .It Ev ZPOOL_VDEV_NAME_FOLLOW_LINKS
2320 Cause
2321 .Nm zpool
subcommands to follow links for vdev names by default.
This behavior is identical to the
2323 .Nm zpool status -L
2324 command line option.
2325 .El
2326 .Bl -tag -width "ZPOOL_VDEV_NAME_PATH"
2327 .It Ev ZPOOL_VDEV_NAME_PATH
2328 Cause
2329 .Nm zpool
subcommands to output full vdev path names by default.
This behavior is identical to the
.Nm zpool status -P
2333 command line option.
2334 .El
2335 .Bl -tag -width "ZFS_VDEV_DEVID_OPT_OUT"
2336 .It Ev ZFS_VDEV_DEVID_OPT_OUT
2337 Older ZFS on Linux implementations had issues when attempting to display pool
2338 config VDEV names if a
2339 .Sy devid
NVP value was present in the pool's config.
2341 .Pp
For example, a pool that originated on the illumos platform would have a devid
value in the config and
.Nm zpool status
would fail when listing the config.
This would also be true for future Linux-based pools.
2347 .Pp
2348 A pool can be stripped of any
2349 .Sy devid
2350 values on import or prevented from adding
2351 them on
2352 .Nm zpool create
2353 or
2354 .Nm zpool add
2355 by setting
2356 .Sy ZFS_VDEV_DEVID_OPT_OUT .
2357 .El
2358 .Bl -tag -width "ZPOOL_SCRIPTS_AS_ROOT"
2359 .It Ev ZPOOL_SCRIPTS_AS_ROOT
Allow a privileged user to run
2361 .Nm zpool status/iostat
2362 with the
2363 .Fl c
2364 option. Normally, only unprivileged users are allowed to run
2365 .Fl c .
2366 .El
2367 .Bl -tag -width "ZPOOL_SCRIPTS_PATH"
2368 .It Ev ZPOOL_SCRIPTS_PATH
2369 The search path for scripts when running
2370 .Nm zpool status/iostat
2371 with the
2372 .Fl c
2373 option. This is a colon-separated list of directories and overrides the default
2374 .Pa ~/.zpool.d
2375 and
2376 .Pa /etc/zfs/zpool.d
2377 search paths.
2378 .El
2379 .Bl -tag -width "ZPOOL_SCRIPTS_ENABLED"
2380 .It Ev ZPOOL_SCRIPTS_ENABLED
2381 Allow a user to run
2382 .Nm zpool status/iostat
2383 with the
2384 .Fl c
2385 option. If
2386 .Sy ZPOOL_SCRIPTS_ENABLED
2387 is not set, it is assumed that the user is allowed to run
2388 .Nm zpool status/iostat -c .
2389 .El
2390 .Sh INTERFACE STABILITY
2391 .Sy Evolving
2392 .Sh SEE ALSO
2393 .Xr zfs-events 5 ,
2394 .Xr zfs-module-parameters 5 ,
2395 .Xr zpool-features 5 ,
2396 .Xr zed 8 ,
2397 .Xr zfs 8