1 .\"
2 .\" CDDL HEADER START
3 .\"
4 .\" The contents of this file are subject to the terms of the
5 .\" Common Development and Distribution License (the "License").
6 .\" You may not use this file except in compliance with the License.
7 .\"
8 .\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
9 .\" or http://www.opensolaris.org/os/licensing.
10 .\" See the License for the specific language governing permissions
11 .\" and limitations under the License.
12 .\"
13 .\" When distributing Covered Code, include this CDDL HEADER in each
14 .\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
15 .\" If applicable, add the following below this CDDL HEADER, with the
16 .\" fields enclosed by brackets "[]" replaced with your own identifying
17 .\" information: Portions Copyright [yyyy] [name of copyright owner]
18 .\"
19 .\" CDDL HEADER END
20 .\"
21 .\"
22 .\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
23 .\" Copyright (c) 2013 by Delphix. All rights reserved.
24 .\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
25 .\" Copyright (c) 2017 Datto Inc.
26 .\" Copyright (c) 2017 George Melikov. All Rights Reserved.
27 .\" Copyright 2017 Nexenta Systems, Inc.
28 .\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved.
29 .\"
30 .Dd January 10, 2018
31 .Dt ZPOOL 8 SMM
32 .Os Linux
33 .Sh NAME
34 .Nm zpool
35 .Nd configure ZFS storage pools
36 .Sh SYNOPSIS
37 .Nm
38 .Fl ?
39 .Nm
40 .Cm add
41 .Op Fl fgLnP
42 .Oo Fl o Ar property Ns = Ns Ar value Oc
43 .Ar pool vdev Ns ...
44 .Nm
45 .Cm attach
46 .Op Fl f
47 .Oo Fl o Ar property Ns = Ns Ar value Oc
48 .Ar pool device new_device
49 .Nm
50 .Cm clear
51 .Ar pool
52 .Op Ar device
53 .Nm
54 .Cm create
55 .Op Fl dfn
56 .Op Fl m Ar mountpoint
57 .Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
58 .Oo Fl o Ar feature@feature Ns = Ns Ar value Oc Ns ...
59 .Oo Fl O Ar file-system-property Ns = Ns Ar value Oc Ns ...
60 .Op Fl R Ar root
.Op Fl t Ar tname
61 .Ar pool vdev Ns ...
62 .Nm
63 .Cm destroy
64 .Op Fl f
65 .Ar pool
66 .Nm
67 .Cm detach
68 .Ar pool device
69 .Nm
70 .Cm events
71 .Op Fl vHf Oo Ar pool Oc | Fl c
72 .Nm
73 .Cm export
74 .Op Fl a
75 .Op Fl f
76 .Ar pool Ns ...
77 .Nm
78 .Cm get
79 .Op Fl Hp
80 .Op Fl o Ar field Ns Oo , Ns Ar field Oc Ns ...
81 .Sy all Ns | Ns Ar property Ns Oo , Ns Ar property Oc Ns ...
82 .Ar pool Ns ...
83 .Nm
84 .Cm history
85 .Op Fl il
86 .Oo Ar pool Oc Ns ...
87 .Nm
88 .Cm import
89 .Op Fl D
90 .Op Fl d Ar dir
91 .Nm
92 .Cm import
93 .Fl a
94 .Op Fl DflmN
95 .Op Fl F Oo Fl n Oc Oo Fl T Oc Oo Fl X Oc
96 .Op Fl c Ar cachefile Ns | Ns Fl d Ar dir
97 .Op Fl o Ar mntopts
98 .Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
99 .Op Fl R Ar root
.Op Fl s
100 .Nm
101 .Cm import
102 .Op Fl Dflm
103 .Op Fl F Oo Fl n Oc Oo Fl T Oc Oo Fl X Oc
104 .Op Fl c Ar cachefile Ns | Ns Fl d Ar dir
105 .Op Fl o Ar mntopts
106 .Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
107 .Op Fl R Ar root
108 .Op Fl s
109 .Ar pool Ns | Ns Ar id
110 .Op Ar newpool Oo Fl t Oc
111 .Nm
112 .Cm iostat
113 .Op Oo Oo Fl c Ar SCRIPT Oc Oo Fl lq Oc Oc Ns | Ns Fl rw
114 .Op Fl T Sy u Ns | Ns Sy d
115 .Op Fl ghHLpPvy
116 .Oo Oo Ar pool Ns ... Oc Ns | Ns Oo Ar pool vdev Ns ... Oc Ns | Ns Oo Ar vdev Ns ... Oc Oc
117 .Op Ar interval Op Ar count
118 .Nm
119 .Cm labelclear
120 .Op Fl f
121 .Ar device
122 .Nm
123 .Cm list
124 .Op Fl HgLpPv
125 .Op Fl o Ar property Ns Oo , Ns Ar property Oc Ns ...
126 .Op Fl T Sy u Ns | Ns Sy d
127 .Oo Ar pool Oc Ns ...
128 .Op Ar interval Op Ar count
129 .Nm
130 .Cm offline
131 .Op Fl f
132 .Op Fl t
133 .Ar pool Ar device Ns ...
134 .Nm
135 .Cm online
136 .Op Fl e
137 .Ar pool Ar device Ns ...
138 .Nm
139 .Cm reguid
140 .Ar pool
141 .Nm
142 .Cm reopen
143 .Op Fl n
144 .Ar pool
145 .Nm
146 .Cm remove
147 .Ar pool Ar device Ns ...
148 .Nm
149 .Cm replace
150 .Op Fl f
151 .Oo Fl o Ar property Ns = Ns Ar value Oc
152 .Ar pool Ar device Op Ar new_device
153 .Nm
154 .Cm scrub
155 .Op Fl s | Fl p
156 .Ar pool Ns ...
157 .Nm
158 .Cm set
159 .Ar property Ns = Ns Ar value
160 .Ar pool
161 .Nm
162 .Cm split
163 .Op Fl gLlnP
164 .Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
165 .Op Fl R Ar root
166 .Ar pool newpool
167 .Oo Ar device Oc Ns ...
168 .Nm
169 .Cm status
170 .Oo Fl c Ar SCRIPT Oc
171 .Op Fl gLPvxD
172 .Op Fl T Sy u Ns | Ns Sy d
173 .Oo Ar pool Oc Ns ...
174 .Op Ar interval Op Ar count
175 .Nm
176 .Cm sync
177 .Oo Ar pool Oc Ns ...
178 .Nm
179 .Cm upgrade
180 .Nm
181 .Cm upgrade
182 .Fl v
183 .Nm
184 .Cm upgrade
185 .Op Fl V Ar version
186 .Fl a Ns | Ns Ar pool Ns ...
187 .Sh DESCRIPTION
188 The
189 .Nm
190 command configures ZFS storage pools.
191 A storage pool is a collection of devices that provides physical storage and
192 data replication for ZFS datasets.
193 All datasets within a storage pool share the same space.
194 See
195 .Xr zfs 8
196 for information on managing datasets.
197 .Ss Virtual Devices (vdevs)
198 A "virtual device" describes a single device or a collection of devices
199 organized according to certain performance and fault characteristics.
200 The following virtual devices are supported:
201 .Bl -tag -width Ds
202 .It Sy disk
203 A block device, typically located under
204 .Pa /dev .
205 ZFS can use individual slices or partitions, though the recommended mode of
206 operation is to use whole disks.
207 A disk can be specified by a full path, or it can be a shorthand name
208 .Po the relative portion of the path under
209 .Pa /dev
210 .Pc .
211 A whole disk can be specified by omitting the slice or partition designation.
212 For example,
213 .Pa sda
214 is equivalent to
215 .Pa /dev/sda .
216 When given a whole disk, ZFS automatically labels the disk, if necessary.
217 .It Sy file
218 A regular file.
219 The use of files as a backing store is strongly discouraged.
220 It is designed primarily for experimental purposes, as the fault tolerance of a
221 file is only as good as the file system of which it is a part.
222 A file must be specified by a full path.
223 .It Sy mirror
224 A mirror of two or more devices.
225 Data is replicated in an identical fashion across all components of a mirror.
226 A mirror with N disks of size X can hold X bytes and can withstand (N-1) devices
227 failing before data integrity is compromised.
228 .It Sy raidz , raidz1 , raidz2 , raidz3
229 A variation on RAID-5 that allows for better distribution of parity and
230 eliminates the RAID-5
231 .Qq write hole
232 .Pq in which data and parity become inconsistent after a power loss .
233 Data and parity are striped across all disks within a raidz group.
234 .Pp
235 A raidz group can have single-, double-, or triple-parity, meaning that the
236 raidz group can sustain one, two, or three failures, respectively, without
237 losing any data.
238 The
239 .Sy raidz1
240 vdev type specifies a single-parity raidz group; the
241 .Sy raidz2
242 vdev type specifies a double-parity raidz group; and the
243 .Sy raidz3
244 vdev type specifies a triple-parity raidz group.
245 The
246 .Sy raidz
247 vdev type is an alias for
248 .Sy raidz1 .
249 .Pp
250 A raidz group with N disks of size X with P parity disks can hold approximately
251 (N-P)*X bytes and can withstand P device(s) failing before data integrity is
252 compromised.
253 The minimum number of devices in a raidz group is one more than the number of
254 parity disks.
255 The recommended number is between 3 and 9 to help increase performance.
256 .It Sy spare
257 A special pseudo-vdev which keeps track of available hot spares for a pool.
258 For more information, see the
259 .Sx Hot Spares
260 section.
261 .It Sy log
262 A separate intent log device.
263 If more than one log device is specified, then writes are load-balanced between
264 devices.
265 Log devices can be mirrored.
266 However, raidz vdev types are not supported for the intent log.
267 For more information, see the
268 .Sx Intent Log
269 section.
270 .It Sy cache
271 A device used to cache storage pool data.
272 A cache device cannot be configured as a mirror or raidz group.
273 For more information, see the
274 .Sx Cache Devices
275 section.
276 .El
277 .Pp
278 Virtual devices cannot be nested, so a mirror or raidz virtual device can only
279 contain files or disks.
280 Mirrors of mirrors
281 .Pq or other combinations
282 are not allowed.
283 .Pp
284 A pool can have any number of virtual devices at the top of the configuration
285 .Po known as
286 .Qq root vdevs
287 .Pc .
288 Data is dynamically distributed across all top-level devices to balance data
289 among devices.
290 As new virtual devices are added, ZFS automatically places data on the newly
291 available devices.
292 .Pp
293 Virtual devices are specified one at a time on the command line, separated by
294 whitespace.
295 The keywords
296 .Sy mirror
297 and
298 .Sy raidz
299 are used to distinguish where a group ends and another begins.
300 For example, the following creates two root vdevs, each a mirror of two disks:
301 .Bd -literal
302 # zpool create mypool mirror sda sdb mirror sdc sdd
303 .Ed
304 .Ss Device Failure and Recovery
305 ZFS supports a rich set of mechanisms for handling device failure and data
306 corruption.
307 All metadata and data is checksummed, and ZFS automatically repairs bad data
308 from a good copy when corruption is detected.
309 .Pp
310 In order to take advantage of these features, a pool must make use of some form
311 of redundancy, using either mirrored or raidz groups.
312 While ZFS supports running in a non-redundant configuration, where each root
313 vdev is simply a disk or file, this is strongly discouraged.
314 A single case of bit corruption can render some or all of your data unavailable.
315 .Pp
316 A pool's health status is described by one of three states: online, degraded,
317 or faulted.
318 An online pool has all devices operating normally.
319 A degraded pool is one in which one or more devices have failed, but the data is
320 still available due to a redundant configuration.
321 A faulted pool has corrupted metadata, or one or more faulted devices, and
322 insufficient replicas to continue functioning.
323 .Pp
324 The health of the top-level vdev, such as mirror or raidz device, is
325 potentially impacted by the state of its associated vdevs, or component
326 devices.
327 A top-level vdev or component device is in one of the following states:
328 .Bl -tag -width "DEGRADED"
329 .It Sy DEGRADED
330 One or more top-level vdevs is in the degraded state because one or more
331 component devices are offline.
332 Sufficient replicas exist to continue functioning.
333 .Pp
334 One or more component devices is in the degraded or faulted state, but
335 sufficient replicas exist to continue functioning.
336 The underlying conditions are as follows:
337 .Bl -bullet
338 .It
339 The number of checksum errors exceeds acceptable levels and the device is
340 degraded as an indication that something may be wrong.
341 ZFS continues to use the device as necessary.
342 .It
343 The number of I/O errors exceeds acceptable levels.
344 The device could not be marked as faulted because there are insufficient
345 replicas to continue functioning.
346 .El
347 .It Sy FAULTED
348 One or more top-level vdevs is in the faulted state because one or more
349 component devices are offline.
350 Insufficient replicas exist to continue functioning.
351 .Pp
352 One or more component devices is in the faulted state, and insufficient
353 replicas exist to continue functioning.
354 The underlying conditions are as follows:
355 .Bl -bullet
356 .It
357 The device could be opened, but the contents did not match expected values.
358 .It
359 The number of I/O errors exceeds acceptable levels and the device is faulted to
360 prevent further use of the device.
361 .El
362 .It Sy OFFLINE
363 The device was explicitly taken offline by the
364 .Nm zpool Cm offline
365 command.
366 .It Sy ONLINE
367 The device is online and functioning.
368 .It Sy REMOVED
369 The device was physically removed while the system was running.
370 Device removal detection is hardware-dependent and may not be supported on all
371 platforms.
372 .It Sy UNAVAIL
373 The device could not be opened.
374 If a pool is imported when a device was unavailable, then the device will be
375 identified by a unique identifier instead of its path since the path was never
376 correct in the first place.
377 .El
378 .Pp
379 If a device is removed and later re-attached to the system, ZFS attempts
380 to put the device online automatically.
381 Device attach detection is hardware-dependent and might not be supported on all
382 platforms.
383 .Ss Hot Spares
384 ZFS allows devices to be associated with pools as
385 .Qq hot spares .
386 These devices are not actively used in the pool, but when an active device
387 fails, it is automatically replaced by a hot spare.
388 To create a pool with hot spares, specify a
389 .Sy spare
390 vdev with any number of devices.
391 For example,
392 .Bd -literal
393 # zpool create pool mirror sda sdb spare sdc sdd
394 .Ed
395 .Pp
396 Spares can be shared across multiple pools, and can be added with the
397 .Nm zpool Cm add
398 command and removed with the
399 .Nm zpool Cm remove
400 command.
401 Once a spare replacement is initiated, a new
402 .Sy spare
403 vdev is created within the configuration that will remain there until the
404 original device is replaced.
405 At this point, the hot spare becomes available again if another device fails.
406 .Pp
407 If a pool has a shared spare that is currently being used, the pool can not be
408 exported since other pools may use this shared spare, which may lead to
409 potential data corruption.
410 .Pp
411 An in-progress spare replacement can be cancelled by detaching the hot spare.
412 If the original faulted device is detached, then the hot spare assumes its
413 place in the configuration, and is removed from the spare list of all active
414 pools.
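.Pp
For example, using the hypothetical pool created above, an in-progress
replacement by the spare sdc could be cancelled with:
.Bd -literal
# zpool detach pool sdc
.Ed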
415 .Pp
416 Spares cannot replace log devices.
417 .Ss Intent Log
418 The ZFS Intent Log (ZIL) satisfies POSIX requirements for synchronous
419 transactions.
420 For instance, databases often require their transactions to be on stable storage
421 devices when returning from a system call.
422 NFS and other applications can also use
423 .Xr fsync 2
424 to ensure data stability.
425 By default, the intent log is allocated from blocks within the main pool.
426 However, it might be possible to get better performance using separate intent
427 log devices such as NVRAM or a dedicated disk.
428 For example:
429 .Bd -literal
430 # zpool create pool sda sdb log sdc
431 .Ed
432 .Pp
433 Multiple log devices can also be specified, and they can be mirrored.
434 See the
435 .Sx EXAMPLES
436 section for an example of mirroring multiple log devices.
437 .Pp
438 Log devices can be added, replaced, attached, detached, and imported and
439 exported as part of the larger pool.
440 Mirrored log devices can be removed by specifying the top-level mirror for the
441 log.
442 .Ss Cache Devices
443 Devices can be added to a storage pool as
444 .Qq cache devices .
445 These devices provide an additional layer of caching between main memory and
446 disk.
447 For read-heavy workloads, where the working set size is much larger than what
448 can be cached in main memory, using cache devices allows much more of this
449 working set to be served from low-latency media.
450 Using cache devices provides the greatest performance improvement for random
451 read-workloads of mostly static content.
452 .Pp
453 To create a pool with cache devices, specify a
454 .Sy cache
455 vdev with any number of devices.
456 For example:
457 .Bd -literal
458 # zpool create pool sda sdb cache sdc sdd
459 .Ed
460 .Pp
461 Cache devices cannot be mirrored or part of a raidz configuration.
462 If a read error is encountered on a cache device, that read I/O is reissued to
463 the original storage pool device, which might be part of a mirrored or raidz
464 configuration.
465 .Pp
466 The content of the cache devices is considered volatile, as is the case with
467 other system caches.
468 .Ss Properties
469 Each pool has several properties associated with it.
470 Some properties are read-only statistics while others are configurable and
471 change the behavior of the pool.
472 .Pp
473 The following are read-only properties:
474 .Bl -tag -width Ds
475 .It Cm allocated
476 Amount of storage used within the pool.
477 .It Sy capacity
478 Percentage of pool space used.
479 This property can also be referred to by its shortened column name,
480 .Sy cap .
481 .It Sy expandsize
482 Amount of uninitialized space within the pool or device that can be used to
483 increase the total capacity of the pool.
484 Uninitialized space consists of any space on an EFI labeled vdev which has not
485 been brought online
486 .Po e.g., using
487 .Nm zpool Cm online Fl e
488 .Pc .
489 This space occurs when a LUN is dynamically expanded.
490 .It Sy fragmentation
491 The amount of fragmentation in the pool.
492 .It Sy free
493 The amount of free space available in the pool.
494 .It Sy freeing
495 After a file system or snapshot is destroyed, the space it was using is
496 returned to the pool asynchronously.
497 .Sy freeing
498 is the amount of space remaining to be reclaimed.
499 Over time
500 .Sy freeing
501 will decrease while
502 .Sy free
503 increases.
504 .It Sy health
505 The current health of the pool.
506 Health can be one of
507 .Sy ONLINE , DEGRADED , FAULTED , OFFLINE , REMOVED , UNAVAIL .
508 .It Sy guid
509 A unique identifier for the pool.
510 .It Sy size
511 Total size of the storage pool.
512 .It Sy unsupported@ Ns Em feature_guid
513 Information about unsupported features that are enabled on the pool.
514 See
515 .Xr zpool-features 5
516 for details.
517 .El
518 .Pp
519 The space usage properties report actual physical space available to the
520 storage pool.
521 The physical space can be different from the total amount of space that any
522 contained datasets can actually use.
523 The amount of space used in a raidz configuration depends on the characteristics
524 of the data being written.
525 In addition, ZFS reserves some space for internal accounting that the
526 .Xr zfs 8
527 command takes into account, but the
528 .Nm
529 command does not.
530 For non-full pools of a reasonable size, these effects should be invisible.
531 For small pools, or pools that are close to being completely full, these
532 discrepancies may become more noticeable.
533 .Pp
534 The following property can be set at creation time and import time:
535 .Bl -tag -width Ds
536 .It Sy altroot
537 Alternate root directory.
538 If set, this directory is prepended to any mount points within the pool.
539 This can be used when examining an unknown pool where the mount points cannot be
540 trusted, or in an alternate boot environment, where the typical paths are not
541 valid.
542 .Sy altroot
543 is not a persistent property.
544 It is valid only while the system is up.
545 Setting
546 .Sy altroot
547 defaults to using
548 .Sy cachefile Ns = Ns Sy none ,
549 though this may be overridden using an explicit setting.
550 .El
551 .Pp
552 The following property can be set only at import time:
553 .Bl -tag -width Ds
554 .It Sy readonly Ns = Ns Sy on Ns | Ns Sy off
555 If set to
556 .Sy on ,
557 the pool will be imported in read-only mode.
558 This property can also be referred to by its shortened column name,
559 .Sy rdonly .
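.Pp
For example, a hypothetical pool tank could be imported read-only with:
.Bd -literal
# zpool import -o readonly=on tank
.Ed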
560 .El
561 .Pp
562 The following properties can be set at creation time and import time, and later
563 changed with the
564 .Nm zpool Cm set
565 command:
566 .Bl -tag -width Ds
567 .It Sy ashift Ns = Ns Sy ashift
568 Pool sector size exponent, to the power of
569 .Sy 2
570 (internally referred to as
571 .Sy ashift
572 ). Values from 9 to 16, inclusive, are valid; also, the special
573 value 0 (the default) means to auto-detect using the kernel's block
574 layer and a ZFS internal exception list. I/O operations will be aligned
575 to the specified size boundaries. Additionally, the minimum (disk)
576 write size will be set to the specified size, so this represents a
577 space vs. performance trade-off. For optimal performance, the pool
578 sector size should be greater than or equal to the sector size of the
579 underlying disks. The typical case for setting this property is when
580 performance is important and the underlying disks use 4KiB sectors but
581 report 512B sectors to the OS (for compatibility reasons); in that
582 case, set
583 .Sy ashift=12
584 (which is 1<<12 = 4096). When set, this property is
585 used as the default hint value in subsequent vdev operations (add,
586 attach and replace). Changing this value will not modify any existing
587 vdev, not even on disk replacement; however it can be used, for
588 instance, to replace a dying 512B sectors disk with a newer 4KiB
589 sectors device: this will probably result in bad performance but at the
590 same time could prevent loss of data.
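.Pp
For example, a hypothetical pool tank could be created on a pair of
4KiB-sector disks with:
.Bd -literal
# zpool create -o ashift=12 tank mirror sda sdb
.Ed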
591 .It Sy autoexpand Ns = Ns Sy on Ns | Ns Sy off
592 Controls automatic pool expansion when the underlying LUN is grown.
593 If set to
594 .Sy on ,
595 the pool will be resized according to the size of the expanded device.
596 If the device is part of a mirror or raidz then all devices within that
597 mirror/raidz group must be expanded before the new space is made available to
598 the pool.
599 The default behavior is
600 .Sy off .
601 This property can also be referred to by its shortened column name,
602 .Sy expand .
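.Pp
For example, expansion could be enabled on a hypothetical pool tank, and the
new space of an already-grown device made available, with:
.Bd -literal
# zpool set autoexpand=on tank
# zpool online -e tank sda
.Ed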
603 .It Sy autoreplace Ns = Ns Sy on Ns | Ns Sy off
604 Controls automatic device replacement.
605 If set to
606 .Sy off ,
607 device replacement must be initiated by the administrator by using the
608 .Nm zpool Cm replace
609 command.
610 If set to
611 .Sy on ,
612 any new device, found in the same physical location as a device that previously
613 belonged to the pool, is automatically formatted and replaced.
614 The default behavior is
615 .Sy off .
616 This property can also be referred to by its shortened column name,
617 .Sy replace .
618 Autoreplace can also be used with virtual disks (like device
619 mapper) provided that you use the /dev/disk/by-vdev paths setup by
620 vdev_id.conf. See the
621 .Xr vdev_id 8
622 man page for more details.
623 Autoreplace and autoonline require the ZFS Event Daemon be configured and
624 running. See the
625 .Xr zed 8
626 man page for more details.
627 .It Sy bootfs Ns = Ns Sy (unset) Ns | Ns Ar pool Ns / Ns Ar dataset
628 Identifies the default bootable dataset for the root pool. This property is
629 expected to be set mainly by the installation and upgrade programs.
630 Not all Linux distribution boot processes use the bootfs property.
631 .It Sy cachefile Ns = Ns Ar path Ns | Ns Sy none
632 Controls the location of where the pool configuration is cached.
633 Discovering all pools on system startup requires a cached copy of the
634 configuration data that is stored on the root file system.
635 All pools in this cache are automatically imported when the system boots.
636 Some environments, such as install and clustering, need to cache this
637 information in a different location so that pools are not automatically
638 imported.
639 Setting this property caches the pool configuration in a different location that
640 can later be imported with
641 .Nm zpool Cm import Fl c .
642 Setting it to the special value
643 .Sy none
644 creates a temporary pool that is never cached, and the special value
645 .Qq
646 .Pq empty string
647 uses the default location.
648 .Pp
649 Multiple pools can share the same cache file.
650 Because the kernel destroys and recreates this file when pools are added and
651 removed, care should be taken when attempting to access this file.
652 When the last pool using a
653 .Sy cachefile
654 is exported or destroyed, the file will be empty.
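.Pp
For example, a pool could be created with a hypothetical alternate cache file
and, after an export or reboot, imported from it:
.Bd -literal
# zpool create -o cachefile=/etc/zfs/alternate.cache tank mirror sda sdb
# zpool import -c /etc/zfs/alternate.cache tank
.Ed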
655 .It Sy comment Ns = Ns Ar text
656 A text string consisting of printable ASCII characters that will be stored
657 such that it is available even if the pool becomes faulted.
658 An administrator can provide additional information about a pool using this
659 property.
660 .It Sy dedupditto Ns = Ns Ar number
661 Threshold for the number of block ditto copies.
662 If the reference count for a deduplicated block increases above this number, a
663 new ditto copy of this block is automatically stored.
664 The default setting is
665 .Sy 0
666 which causes no ditto copies to be created for deduplicated blocks.
667 The minimum legal nonzero setting is
668 .Sy 100 .
669 .It Sy delegation Ns = Ns Sy on Ns | Ns Sy off
670 Controls whether a non-privileged user is granted access based on the dataset
671 permissions defined on the dataset.
672 See
673 .Xr zfs 8
674 for more information on ZFS delegated administration.
675 .It Sy failmode Ns = Ns Sy wait Ns | Ns Sy continue Ns | Ns Sy panic
676 Controls the system behavior in the event of catastrophic pool failure.
677 This condition is typically a result of a loss of connectivity to the underlying
678 storage device(s) or a failure of all devices within the pool.
679 The behavior of such an event is determined as follows:
680 .Bl -tag -width "continue"
681 .It Sy wait
682 Blocks all I/O access until the device connectivity is recovered and the errors
683 are cleared.
684 This is the default behavior.
685 .It Sy continue
686 Returns
687 .Er EIO
688 to any new write I/O requests but allows reads to any of the remaining healthy
689 devices.
690 Any write requests that have yet to be committed to disk would be blocked.
691 .It Sy panic
692 Prints out a message to the console and generates a system crash dump.
693 .El
694 .It Sy feature@ Ns Ar feature_name Ns = Ns Sy enabled
695 The value of this property is the current state of
696 .Ar feature_name .
697 The only valid value when setting this property is
698 .Sy enabled
699 which moves
700 .Ar feature_name
701 to the enabled state.
702 See
703 .Xr zpool-features 5
704 for details on feature states.
705 .It Sy listsnapshots Ns = Ns Sy on Ns | Ns Sy off
706 Controls whether information about snapshots associated with this pool is
707 output when
708 .Nm zfs Cm list
709 is run without the
710 .Fl t
711 option.
712 The default value is
713 .Sy off .
714 This property can also be referred to by its shortened name,
715 .Sy listsnaps .
716 .It Sy multihost Ns = Ns Sy on Ns | Ns Sy off
717 Controls whether a pool activity check should be performed during
718 .Nm zpool Cm import .
719 When a pool is determined to be active it cannot be imported, even with the
720 .Fl f
721 option. This property is intended to be used in failover configurations
722 where multiple hosts have access to a pool on shared storage. When this
723 property is on, periodic writes to storage occur to show the pool is in use.
724 See
725 .Sy zfs_multihost_interval
726 in the
727 .Xr zfs-module-parameters 5
728 man page. In order to enable this property each host must set a unique hostid.
729 See
730 .Xr genhostid 1 ,
731 .Xr zgenhostid 8 ,
732 .Xr spl-module-parameters 5
733 for additional details. The default value is
734 .Sy off .
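.Pp
For example, after generating a unique hostid on each host, the activity
check could be enabled on a hypothetical shared pool tank with:
.Bd -literal
# zgenhostid
# zpool set multihost=on tank
.Ed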
735 .It Sy version Ns = Ns Ar version
736 The current on-disk version of the pool.
737 This can be increased, but never decreased.
738 The preferred method of updating pools is with the
739 .Nm zpool Cm upgrade
740 command, though this property can be used when a specific version is needed for
741 backwards compatibility.
742 Once feature flags are enabled on a pool this property will no longer have a
743 value.
744 .El
745 .Ss Subcommands
746 All subcommands that modify state are logged persistently to the pool in their
747 original form.
748 .Pp
749 The
750 .Nm
751 command provides subcommands to create and destroy storage pools, add capacity
752 to storage pools, and provide information about the storage pools.
753 The following subcommands are supported:
754 .Bl -tag -width Ds
755 .It Xo
756 .Nm
757 .Fl ?
758 .Xc
759 Displays a help message.
760 .It Xo
761 .Nm
762 .Cm add
763 .Op Fl fgLnP
764 .Oo Fl o Ar property Ns = Ns Ar value Oc
765 .Ar pool vdev Ns ...
766 .Xc
767 Adds the specified virtual devices to the given pool.
768 The
769 .Ar vdev
770 specification is described in the
771 .Sx Virtual Devices
772 section.
773 The behavior of the
774 .Fl f
775 option, and the device checks performed are described in the
776 .Nm zpool Cm create
777 subcommand.
778 .Bl -tag -width Ds
779 .It Fl f
780 Forces use of
781 .Ar vdev Ns s ,
782 even if they appear in use or specify a conflicting replication level.
783 Not all devices can be overridden in this manner.
784 .It Fl g
785 Display
786 .Ar vdev
787 GUIDs instead of the normal device names. These GUIDs can be used in place of
788 device names for the zpool detach/offline/remove/replace commands.
789 .It Fl L
790 Display real paths for
791 .Ar vdev Ns s
792 resolving all symbolic links. This can be used to look up the current block
793 device name regardless of the /dev/disk/ path used to open it.
794 .It Fl n
795 Displays the configuration that would be used without actually adding the
796 .Ar vdev Ns s .
797 The actual pool creation can still fail due to insufficient privileges or
798 device sharing.
799 .It Fl P
800 Display real paths for
801 .Ar vdev Ns s
802 instead of only the last component of the path. This can be used in
803 conjunction with the -L flag.
804 .It Fl o Ar property Ns = Ns Ar value
805 Sets the given pool properties. See the
806 .Sx Properties
807 section for a list of valid properties that can be set. The only property
808 supported at the moment is ashift.
809 .El
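.Pp
For example, the following adds a new mirror of two hypothetical disks to a
pool named tank:
.Bd -literal
# zpool add tank mirror sdc sdd
.Ed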
810 .It Xo
811 .Nm
812 .Cm attach
813 .Op Fl f
814 .Oo Fl o Ar property Ns = Ns Ar value Oc
815 .Ar pool device new_device
816 .Xc
817 Attaches
818 .Ar new_device
819 to the existing
820 .Ar device .
821 The existing device cannot be part of a raidz configuration.
822 If
823 .Ar device
824 is not currently part of a mirrored configuration,
825 .Ar device
826 automatically transforms into a two-way mirror of
827 .Ar device
828 and
829 .Ar new_device .
830 If
831 .Ar device
832 is part of a two-way mirror, attaching
833 .Ar new_device
834 creates a three-way mirror, and so on.
835 In either case,
836 .Ar new_device
837 begins to resilver immediately.
838 .Bl -tag -width Ds
839 .It Fl f
840 Forces use of
841 .Ar new_device ,
842 even if it appears to be in use.
843 Not all devices can be overridden in this manner.
844 .It Fl o Ar property Ns = Ns Ar value
845 Sets the given pool properties. See the
846 .Sx Properties
847 section for a list of valid properties that can be set. The only property
848 supported at the moment is ashift.
849 .El
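.Pp
For example, the following converts a hypothetical single-disk pool tank into
a two-way mirror:
.Bd -literal
# zpool attach tank sda sdb
.Ed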
850 .It Xo
851 .Nm
852 .Cm clear
853 .Ar pool
854 .Op Ar device
855 .Xc
856 Clears device errors in a pool.
857 If no arguments are specified, all device errors within the pool are cleared.
858 If one or more devices is specified, only those errors associated with the
859 specified device or devices are cleared.
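.Pp
For example, the following clears only the errors associated with a
hypothetical device sda in the pool tank:
.Bd -literal
# zpool clear tank sda
.Ed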
860 .It Xo
861 .Nm
862 .Cm create
863 .Op Fl dfn
864 .Op Fl m Ar mountpoint
865 .Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
866 .Oo Fl o Ar feature@feature Ns = Ns Ar value Oc Ns ...
867 .Oo Fl O Ar file-system-property Ns = Ns Ar value Oc Ns ...
868 .Op Fl R Ar root
869 .Op Fl t Ar tname
870 .Ar pool vdev Ns ...
871 .Xc
872 Creates a new storage pool containing the virtual devices specified on the
873 command line.
874 The pool name must begin with a letter, and can only contain
875 alphanumeric characters as well as underscore
876 .Pq Qq Sy _ ,
877 dash
878 .Pq Qq Sy \&- ,
879 colon
880 .Pq Qq Sy \&: ,
881 space
882 .Pq Qq Sy \&\ ,
883 and period
884 .Pq Qq Sy \&. .
885 The pool names
886 .Sy mirror ,
887 .Sy raidz ,
888 .Sy spare
889 and
890 .Sy log
891 are reserved, as are names beginning with the pattern
892 .Sy c[0-9] .
893 The
894 .Ar vdev
895 specification is described in the
896 .Sx Virtual Devices
897 section.
898 .Pp
899 The command verifies that each device specified is accessible and not currently
900 in use by another subsystem.
901 There are some uses, such as being currently mounted, or specified as the
902 dedicated dump device, that prevent a device from ever being used by ZFS.
903 Other uses, such as having a preexisting UFS file system, can be overridden with
904 the
905 .Fl f
906 option.
907 .Pp
908 The command also checks that the replication strategy for the pool is
909 consistent.
910 An attempt to combine redundant and non-redundant storage in a single pool, or
911 to mix disks and files, results in an error unless
912 .Fl f
913 is specified.
914 The use of differently sized devices within a single raidz or mirror group is
915 also flagged as an error unless
916 .Fl f
917 is specified.
918 .Pp
919 Unless the
920 .Fl R
921 option is specified, the default mount point is
922 .Pa / Ns Ar pool .
923 The mount point must not exist or must be empty, or else the root dataset
924 cannot be mounted.
925 This can be overridden with the
926 .Fl m
927 option.
928 .Pp
929 By default all supported features are enabled on the new pool unless the
930 .Fl d
931 option is specified.
932 .Bl -tag -width Ds
933 .It Fl d
934 Do not enable any features on the new pool.
935 Individual features can be enabled by setting their corresponding properties to
936 .Sy enabled
937 with the
938 .Fl o
939 option.
940 See
941 .Xr zpool-features 5
942 for details about feature properties.
943 .It Fl f
944 Forces use of
945 .Ar vdev Ns s ,
946 even if they appear in use or specify a conflicting replication level.
947 Not all devices can be overridden in this manner.
948 .It Fl m Ar mountpoint
949 Sets the mount point for the root dataset.
950 The default mount point is
951 .Pa /pool
952 or
953 .Pa altroot/pool
954 if
955 .Ar altroot
956 is specified.
957 The mount point must be an absolute path,
958 .Sy legacy ,
959 or
960 .Sy none .
961 For more information on dataset mount points, see
962 .Xr zfs 8 .
963 .It Fl n
964 Displays the configuration that would be used without actually creating the
965 pool.
966 The actual pool creation can still fail due to insufficient privileges or
967 device sharing.
968 .It Fl o Ar property Ns = Ns Ar value
969 Sets the given pool properties.
970 See the
971 .Sx Properties
972 section for a list of valid properties that can be set.
973 .It Fl o Ar feature@feature Ns = Ns Ar value
974 Sets the given pool feature. See the
975 .Xr zpool-features 5
976 section for a list of valid features that can be set.
977 The value can be either disabled or enabled.
978 .It Fl O Ar file-system-property Ns = Ns Ar value
979 Sets the given file system properties in the root file system of the pool.
980 See the
981 .Sx Properties
982 section of
983 .Xr zfs 8
984 for a list of valid properties that can be set.
985 .It Fl R Ar root
986 Equivalent to
987 .Fl o Sy cachefile Ns = Ns Sy none Fl o Sy altroot Ns = Ns Ar root
988 .It Fl t Ar tname
989 Sets the in-core pool name to
990 .Sy tname
991 while the on-disk name will be the name specified as the pool name
992 .Sy pool .
993 This will set the default cachefile property to none. This is intended
994 to handle name space collisions when creating pools for other systems,
995 such as virtual machines or physical machines whose pools live on network
996 block devices.
997 .El
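.Pp
For example, the following creates a hypothetical mirrored pool with an
explicit mount point and compression enabled on the root dataset via the
.Xr zfs 8
.Sy compression
property:
.Bd -literal
# zpool create -m /export/tank -O compression=lz4 tank mirror sda sdb
.Ed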
998 .It Xo
999 .Nm
1000 .Cm destroy
1001 .Op Fl f
1002 .Ar pool
1003 .Xc
1004 Destroys the given pool, freeing up any devices for other use.
1005 This command tries to unmount any active datasets before destroying the pool.
1006 .Bl -tag -width Ds
1007 .It Fl f
1008 Forces any active datasets contained within the pool to be unmounted.
1009 .El
1010 .It Xo
1011 .Nm
1012 .Cm detach
1013 .Ar pool device
1014 .Xc
1015 Detaches
1016 .Ar device
1017 from a mirror.
1018 The operation is refused if there are no other valid replicas of the data.
1019 If the device may be re-added to the pool later on, then consider the
1020 .Nm zpool Cm offline
1021 command instead.
1022 .It Xo
1023 .Nm
1024 .Cm events
1025 .Op Fl vHf Oo Ar pool Oc | Fl c
1026 .Xc
1027 Lists all recent events generated by the ZFS kernel modules. These events
1028 are consumed by the
1029 .Xr zed 8
1030 and used to automate administrative tasks such as replacing a failed device
1031 with a hot spare. For more information about the subclasses and event payloads
1032 that can be generated see the
1033 .Xr zfs-events 5
1034 man page.
1035 .Bl -tag -width Ds
1036 .It Fl c
1037 Clear all previous events.
1038 .It Fl f
1039 Follow mode.
1040 .It Fl H
1041 Scripted mode. Do not display headers, and separate fields by a
1042 single tab instead of arbitrary space.
1043 .It Fl v
1044 Print the entire payload for each event.
1045 .El
1046 .It Xo
1047 .Nm
1048 .Cm export
1049 .Op Fl a
1050 .Op Fl f
1051 .Ar pool Ns ...
1052 .Xc
1053 Exports the given pools from the system.
1054 All devices are marked as exported, but are still considered in use by other
1055 subsystems.
1056 The devices can be moved between systems
1057 .Pq even those of different endianness
1058 and imported as long as a sufficient number of devices are present.
1059 .Pp
1060 Before exporting the pool, all datasets within the pool are unmounted.
1061 A pool can not be exported if it has a shared spare that is currently being
1062 used.
1063 .Pp
1064 For pools to be portable, you must give the
1065 .Nm
1066 command whole disks, not just partitions, so that ZFS can label the disks with
1067 portable EFI labels.
1068 Otherwise, disk drivers on platforms of different endianness will not recognize
1069 the disks.
1070 .Bl -tag -width Ds
1071 .It Fl a
1072 Exports all pools imported on the system.
1073 .It Fl f
1074 Forcefully unmount all datasets, using the
1075 .Nm unmount Fl f
1076 command.
1077 .Pp
1078 This command will forcefully export the pool even if it has a shared spare that
1079 is currently being used.
1080 This may lead to potential data corruption.
1081 .El
1082 .It Xo
1083 .Nm
1084 .Cm get
1085 .Op Fl Hp
1086 .Op Fl o Ar field Ns Oo , Ns Ar field Oc Ns ...
1087 .Sy all Ns | Ns Ar property Ns Oo , Ns Ar property Oc Ns ...
1088 .Ar pool Ns ...
1089 .Xc
1090 Retrieves the given list of properties
1091 .Po
1092 or all properties if
1093 .Sy all
1094 is used
1095 .Pc
1096 for the specified storage pool(s).
1097 These properties are displayed with the following fields:
1098 .Bd -literal
1099 name Name of storage pool
1100 property Property name
1101 value Property value
1102 source Property source, either 'default' or 'local'.
1103 .Ed
1104 .Pp
1105 See the
1106 .Sx Properties
1107 section for more information on the available pool properties.
1108 .Bl -tag -width Ds
1109 .It Fl H
1110 Scripted mode.
1111 Do not display headers, and separate fields by a single tab instead of arbitrary
1112 space.
1113 .It Fl o Ar field
1114 A comma-separated list of columns to display.
1115 .Sy name Ns \&, Ns Sy property Ns \&, Ns Sy value Ns \&, Ns Sy source
1116 is the default value.
1117 .It Fl p
1118 Display numbers in parsable (exact) values.
1119 .El
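.Pp
For example, the following retrieves two properties of a hypothetical pool
tank in a script-friendly form:
.Bd -literal
# zpool get -H -o value size,free tank
.Ed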
1120 .It Xo
1121 .Nm
1122 .Cm history
1123 .Op Fl il
1124 .Oo Ar pool Oc Ns ...
1125 .Xc
1126 Displays the command history of the specified pool(s) or all pools if no pool is
1127 specified.
1128 .Bl -tag -width Ds
1129 .It Fl i
1130 Displays internally logged ZFS events in addition to user-initiated events.
1131 .It Fl l
1132 Displays log records in long format, which in addition to standard format
1133 includes the user name, the hostname, and the zone in which the operation was
1134 performed.
1135 .El
1136 .It Xo
1137 .Nm
1138 .Cm import
1139 .Op Fl D
1140 .Op Fl d Ar dir
1141 .Xc
1142 Lists pools available to import.
1143 If the
1144 .Fl d
1145 option is not specified, this command searches for devices in
1146 .Pa /dev .
1147 The
1148 .Fl d
1149 option can be specified multiple times, and all directories are searched.
1150 If the device appears to be part of an exported pool, this command displays a
1151 summary of the pool with the name of the pool, a numeric identifier, as well as
1152 the vdev layout and current health of the device for each device or file.
1153 Destroyed pools, pools that were previously destroyed with the
1154 .Nm zpool Cm destroy
1155 command, are not listed unless the
1156 .Fl D
1157 option is specified.
1158 .Pp
1159 The numeric identifier is unique, and can be used instead of the pool name when
1160 multiple exported pools of the same name are available.
1161 .Bl -tag -width Ds
1162 .It Fl c Ar cachefile
1163 Reads configuration from the given
1164 .Ar cachefile
1165 that was created with the
1166 .Sy cachefile
1167 pool property.
1168 This
1169 .Ar cachefile
1170 is used instead of searching for devices.
1171 .It Fl d Ar dir
1172 Searches for devices or files in
1173 .Ar dir .
1174 The
1175 .Fl d
1176 option can be specified multiple times.
1177 .It Fl D
1178 Lists destroyed pools only.
1179 .El
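.Pp
For example, the following searches for importable pools using the
persistent device names under /dev/disk/by-id:
.Bd -literal
# zpool import -d /dev/disk/by-id
.Ed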
1180 .It Xo
1181 .Nm
1182 .Cm import
1183 .Fl a
1184 .Op Fl DflmN
1185 .Op Fl F Oo Fl n Oc Oo Fl T Oc Oo Fl X Oc
1186 .Op Fl c Ar cachefile Ns | Ns Fl d Ar dir
1187 .Op Fl o Ar mntopts
1188 .Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
1189 .Op Fl R Ar root
1190 .Op Fl s
1191 .Xc
1192 Imports all pools found in the search directories.
1193 Identical to the previous command, except that all pools with a sufficient
1194 number of devices available are imported.
1195 Destroyed pools, pools that were previously destroyed with the
1196 .Nm zpool Cm destroy
1197 command, will not be imported unless the
1198 .Fl D
1199 option is specified.
1200 .Bl -tag -width Ds
1201 .It Fl a
1202 Searches for and imports all pools found.
1203 .It Fl c Ar cachefile
1204 Reads configuration from the given
1205 .Ar cachefile
1206 that was created with the
1207 .Sy cachefile
1208 pool property.
1209 This
1210 .Ar cachefile
1211 is used instead of searching for devices.
1212 .It Fl d Ar dir
1213 Searches for devices or files in
1214 .Ar dir .
1215 The
1216 .Fl d
1217 option can be specified multiple times.
1218 This option is incompatible with the
1219 .Fl c
1220 option.
1221 .It Fl D
1222 Imports destroyed pools only.
1223 The
1224 .Fl f
1225 option is also required.
1226 .It Fl f
1227 Forces import, even if the pool appears to be potentially active.
1228 .It Fl F
1229 Recovery mode for a non-importable pool.
1230 Attempt to return the pool to an importable state by discarding the last few
1231 transactions.
1232 Not all damaged pools can be recovered by using this option.
1233 If successful, the data from the discarded transactions is irretrievably lost.
1234 This option is ignored if the pool is importable or already imported.
1235 .It Fl l
1236 Indicates that this command will request encryption keys for all encrypted
1237 datasets it attempts to mount as it is bringing the pool online. Note that if
1238 any datasets have a
1239 .Sy keylocation
1240 of
1241 .Sy prompt
1242 this command will block waiting for the keys to be entered. Without this flag
1243 encrypted datasets will be left unavailable until the keys are loaded.
1244 .It Fl m
1245 Allows a pool to import when there is a missing log device.
1246 Recent transactions can be lost because the log device will be discarded.
1247 .It Fl n
1248 Used with the
1249 .Fl F
1250 recovery option.
1251 Determines whether a non-importable pool can be made importable again, but does
1252 not actually perform the pool recovery.
1253 For more details about pool recovery mode, see the
1254 .Fl F
1255 option, above.
1256 .It Fl N
1257 Import the pool without mounting any file systems.
1258 .It Fl o Ar mntopts
1259 Comma-separated list of mount options to use when mounting datasets within the
1260 pool.
1261 See
1262 .Xr zfs 8
1263 for a description of dataset properties and mount options.
1264 .It Fl o Ar property Ns = Ns Ar value
1265 Sets the specified property on the imported pool.
1266 See the
1267 .Sx Properties
1268 section for more information on the available pool properties.
1269 .It Fl R Ar root
1270 Sets the
1271 .Sy cachefile
1272 property to
1273 .Sy none
1274 and the
1275 .Sy altroot
1276 property to
1277 .Ar root .
1278 .It Fl s
1279 Scan using the default search path; the libblkid cache will not be
1280 consulted. A custom search path may be specified by setting the
1281 ZPOOL_IMPORT_PATH environment variable.
1282 .It Fl X
1283 Used with the
1284 .Fl F
1285 recovery option. Determines whether extreme
1286 measures to find a valid txg should take place. This allows the pool to
1287 be rolled back to a txg which is no longer guaranteed to be consistent.
1288 Pools imported at an inconsistent txg may contain uncorrectable
1289 checksum errors. For more details about pool recovery mode, see the
1290 .Fl F
1291 option, above. WARNING: This option can be extremely hazardous to the
1292 health of your pool and should only be used as a last resort.
1293 .It Fl T
1294 Specify the txg to use for rollback. Implies
1295 .Fl FX .
1296 For more details
1297 about pool recovery mode, see the
1298 .Fl X
1299 option, above. WARNING: This option can be extremely hazardous to the
1300 health of your pool and should only be used as a last resort.
1301 .El
1302 .It Xo
1303 .Nm
1304 .Cm import
1305 .Op Fl Dflm
1306 .Op Fl F Oo Fl n Oc Oo Fl T Oc Oo Fl X Oc
1307 .Op Fl c Ar cachefile Ns | Ns Fl d Ar dir
1308 .Op Fl o Ar mntopts
1309 .Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
1310 .Op Fl R Ar root
1311 .Op Fl s
1312 .Ar pool Ns | Ns Ar id
1313 .Op Ar newpool Oo Fl t Oc
1314 .Xc
1315 Imports a specific pool.
1316 A pool can be identified by its name or the numeric identifier.
1317 If
1318 .Ar newpool
1319 is specified, the pool is imported using the name
1320 .Ar newpool .
1321 Otherwise, it is imported with the same name as its exported name.
1322 .Pp
1323 If a device is removed from a system without running
1324 .Nm zpool Cm export
1325 first, the device appears as potentially active.
1326 It cannot be determined if this was a failed export, or whether the device is
1327 really in use from another host.
1328 To import a pool in this state, the
1329 .Fl f
1330 option is required.
1331 .Bl -tag -width Ds
1332 .It Fl c Ar cachefile
1333 Reads configuration from the given
1334 .Ar cachefile
1335 that was created with the
1336 .Sy cachefile
1337 pool property.
1338 This
1339 .Ar cachefile
1340 is used instead of searching for devices.
1341 .It Fl d Ar dir
1342 Searches for devices or files in
1343 .Ar dir .
1344 The
1345 .Fl d
1346 option can be specified multiple times.
1347 This option is incompatible with the
1348 .Fl c
1349 option.
1350 .It Fl D
1351 Imports a destroyed pool.
1352 The
1353 .Fl f
1354 option is also required.
1355 .It Fl f
1356 Forces import, even if the pool appears to be potentially active.
1357 .It Fl F
1358 Recovery mode for a non-importable pool.
1359 Attempt to return the pool to an importable state by discarding the last few
1360 transactions.
1361 Not all damaged pools can be recovered by using this option.
1362 If successful, the data from the discarded transactions is irretrievably lost.
1363 This option is ignored if the pool is importable or already imported.
1364 .It Fl l
1365 Indicates that this command will request encryption keys for all encrypted
1366 datasets it attempts to mount as it is bringing the pool online. Note that if
1367 any datasets have a
1368 .Sy keylocation
1369 of
1370 .Sy prompt
1371 this command will block waiting for the keys to be entered. Without this flag
1372 encrypted datasets will be left unavailable until the keys are loaded.
1373 .It Fl m
1374 Allows a pool to import when there is a missing log device.
1375 Recent transactions can be lost because the log device will be discarded.
1376 .It Fl n
1377 Used with the
1378 .Fl F
1379 recovery option.
1380 Determines whether a non-importable pool can be made importable again, but does
1381 not actually perform the pool recovery.
1382 For more details about pool recovery mode, see the
1383 .Fl F
1384 option, above.
1385 .It Fl o Ar mntopts
1386 Comma-separated list of mount options to use when mounting datasets within the
1387 pool.
1388 See
1389 .Xr zfs 8
1390 for a description of dataset properties and mount options.
1391 .It Fl o Ar property Ns = Ns Ar value
1392 Sets the specified property on the imported pool.
1393 See the
1394 .Sx Properties
1395 section for more information on the available pool properties.
1396 .It Fl R Ar root
1397 Sets the
1398 .Sy cachefile
1399 property to
1400 .Sy none
1401 and the
1402 .Sy altroot
1403 property to
1404 .Ar root .
1405 .It Fl s
1406 Scan using the default search path; the libblkid cache will not be
1407 consulted. A custom search path may be specified by setting the
1408 ZPOOL_IMPORT_PATH environment variable.
1409 .It Fl X
1410 Used with the
1411 .Fl F
1412 recovery option. Determines whether extreme
1413 measures to find a valid txg should take place. This allows the pool to
1414 be rolled back to a txg which is no longer guaranteed to be consistent.
1415 Pools imported at an inconsistent txg may contain uncorrectable
1416 checksum errors. For more details about pool recovery mode, see the
1417 .Fl F
1418 option, above. WARNING: This option can be extremely hazardous to the
1419 health of your pool and should only be used as a last resort.
1420 .It Fl T
1421 Specify the txg to use for rollback. Implies
1422 .Fl FX .
1423 For more details
1424 about pool recovery mode, see the
1425 .Fl X
1426 option, above. WARNING: This option can be extremely hazardous to the
1427 health of your pool and should only be used as a last resort.
1428 .It Fl t
1429 Used with
1430 .Sy newpool .
1431 Specifies that
1432 .Sy newpool
1433 is temporary. Temporary pool names last until export. Ensures that
1434 the original pool name will be used in all label updates and therefore
1435 is retained upon export.
1436 Will also set -o cachefile=none when not explicitly specified.
1437 .El
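.Pp
For example, the following imports a hypothetical pool tank under an
alternate root, as might be done from a rescue environment:
.Bd -literal
# zpool import -R /mnt tank
.Ed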
1438 .It Xo
1439 .Nm
1440 .Cm iostat
1441 .Op Oo Oo Fl c Ar SCRIPT Oc Oo Fl lq Oc Oc Ns | Ns Fl rw
1442 .Op Fl T Sy u Ns | Ns Sy d
1443 .Op Fl ghHLpPvy
1444 .Oo Oo Ar pool Ns ... Oc Ns | Ns Oo Ar pool vdev Ns ... Oc Ns | Ns Oo Ar vdev Ns ... Oc Oc
1445 .Op Ar interval Op Ar count
1446 .Xc
1447 Displays I/O statistics for the given pools/vdevs. You can pass in a
1448 list of pools, a pool and list of vdevs in that pool, or a list of any
1449 vdevs from any pool. If no items are specified, statistics for every
1450 pool in the system are shown.
1451 When given an
1452 .Ar interval ,
1453 the statistics are printed every
1454 .Ar interval
1455 seconds until ^C is pressed. If count is specified, the command exits
1456 after count reports are printed. The first report printed is always
1457 the statistics since boot regardless of whether
1458 .Ar interval
1459 and
1460 .Ar count
1461 are passed. However, this behavior can be suppressed with the
1462 .Fl y
1463 flag. Also note that the units of
1464 .Sy K ,
1465 .Sy M ,
1466 .Sy G ...
1467 that are printed in the report are in base 1024. To get the raw
1468 values, use the
1469 .Fl p
1470 flag.
1471 .Bl -tag -width Ds
1472 .It Fl c Op Ar SCRIPT1 Ns Oo , Ns Ar SCRIPT2 Oc Ns ...
1473 Run a script (or scripts) on each vdev and include the output as a new column
1474 in the
1475 .Nm zpool Cm iostat
1476 output. Users can run any script found in their
1477 .Pa ~/.zpool.d
1478 directory or from the system
1479 .Pa /etc/zfs/zpool.d
1480 directory. Script names containing the slash (/) character are not allowed.
1481 The default search path can be overridden by setting the
1482 ZPOOL_SCRIPTS_PATH environment variable. A privileged user can run
1483 .Fl c
1484 if they have the ZPOOL_SCRIPTS_AS_ROOT
1485 environment variable set. If a script requires the use of a privileged
1486 command, like
1487 .Xr smartctl 8 ,
1488 then it's recommended you allow the user access to it in
1489 .Pa /etc/sudoers
1490 or add the user to the
1491 .Pa /etc/sudoers.d/zfs
1492 file.
1493 .Pp
1494 If
1495 .Fl c
1496 is passed without a script name, it prints a list of all scripts.
1497 .Fl c
1498 also sets verbose mode
1499 .No \&( Ns Fl v Ns No \&).
1500 .Pp
1501 Script output should be in the form of "name=value". The column name is
1502 set to "name" and the value is set to "value". Multiple lines can be
1503 used to output multiple columns. The first line of output not in the
1504 "name=value" format is displayed without a column title, and no more
1505 output after that is displayed. This can be useful for printing error
1506 messages. Blank or NULL values are printed as a '-' to make output
1507 awk-able.
1508 .Pp
1509 The following environment variables are set before running each script:
1510 .Bl -tag -width "VDEV_PATH"
1511 .It Sy VDEV_PATH
1512 Full path to the vdev
1513 .El
1514 .Bl -tag -width "VDEV_UPATH"
1515 .It Sy VDEV_UPATH
1516 Underlying path to the vdev (/dev/sd*). For use with device mapper,
1517 multipath, or partitioned vdevs.
1518 .El
1519 .Bl -tag -width "VDEV_ENC_SYSFS_PATH"
1520 .It Sy VDEV_ENC_SYSFS_PATH
1521 The sysfs path to the enclosure for the vdev (if any).
1522 .El
1523 .It Fl T Sy u Ns | Ns Sy d
1524 Display a time stamp.
1525 Specify
1526 .Sy u
1527 for a printed representation of the internal representation of time.
1528 See
1529 .Xr time 2 .
1530 Specify
1531 .Sy d
1532 for standard date format.
1533 See
1534 .Xr date 1 .
1535 .It Fl g
1536 Display vdev GUIDs instead of the normal device names. These GUIDs
1537 can be used in place of device names for the zpool
1538 detach/offline/remove/replace commands.
1539 .It Fl H
1540 Scripted mode. Do not display headers, and separate fields by a
1541 single tab instead of arbitrary space.
1542 .It Fl L
1543 Display real paths for vdevs resolving all symbolic links. This can
1544 be used to look up the current block device name regardless of the
1545 .Pa /dev/disk/
1546 path used to open it.
1547 .It Fl p
1548 Display numbers in parsable (exact) values. Time values are in
1549 nanoseconds.
1550 .It Fl P
1551 Display full paths for vdevs instead of only the last component of
1552 the path. This can be used in conjunction with the
1553 .Fl L
1554 flag.
1555 .It Fl r
1556 Print request size histograms for the leaf ZIOs. This includes
1557 histograms of individual ZIOs (
1558 .Ar ind )
1559 and aggregate ZIOs (
1560 .Ar agg ).
1561 These stats can be useful for seeing how well the ZFS IO aggregator is
1562 working. Do not confuse these request size stats with the block layer
1563 requests; it's possible ZIOs can be broken up before being sent to the
1564 block device.
1565 .It Fl v
1566 Verbose statistics. Reports usage statistics for individual vdevs within the
1567 pool, in addition to the pool-wide statistics.
1568 .It Fl y
Omit statistics since boot.
Normally the first line of output reports the statistics since boot; this
flag suppresses that first report so that only per-interval statistics are
shown.
1569 .It Fl w
Print request latency histograms for the pool and each vdev.
These cover the same latency types reported by the
.Fl l
flag.
1570 .It Fl l
1571 Include average latency statistics:
1572 .Pp
1573 .Ar total_wait :
1574 Average total IO time (queuing + disk IO time).
1575 .Ar disk_wait :
1576 Average disk IO time (time reading/writing the disk).
1577 .Ar syncq_wait :
1578 Average amount of time IO spent in synchronous priority queues. Does
1579 not include disk time.
1580 .Ar asyncq_wait :
1581 Average amount of time IO spent in asynchronous priority queues.
1582 Does not include disk time.
1583 .Ar scrub :
1584 Average queuing time in scrub queue. Does not include disk time.
1585 .It Fl q
1586 Include active queue statistics. Each priority queue has both
1587 pending (
1588 .Ar pend )
1589 and active (
1590 .Ar activ )
1591 IOs. Pending IOs are waiting to
1592 be issued to the disk, and active IOs have been issued to disk and are
1593 waiting for completion. These stats are broken out by priority queue:
1594 .Pp
1595 .Ar syncq_read/write :
1596 Current number of entries in synchronous priority
1597 queues.
1598 .Ar asyncq_read/write :
1599 Current number of entries in asynchronous priority queues.
1600 .Ar scrubq_read :
1601 Current number of entries in scrub queue.
1602 .Pp
1603 All queue statistics are instantaneous measurements of the number of
1604 entries in the queues. If you specify an interval, the measurements
1605 will be sampled from the end of the interval.
1606 .El
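.Pp
For example, the following prints per-vdev statistics for a hypothetical
pool tank every 5 seconds:
.Bd -literal
# zpool iostat -v tank 5
.Ed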
1607 .It Xo
1608 .Nm
1609 .Cm labelclear
1610 .Op Fl f
1611 .Ar device
1612 .Xc
1613 Removes ZFS label information from the specified
1614 .Ar device .
1615 The
1616 .Ar device
1617 must not be part of an active pool configuration.
1618 .Bl -tag -width Ds
1619 .It Fl f
1620 Treat exported or foreign devices as inactive.
1621 .El
1622 .It Xo
1623 .Nm
1624 .Cm list
1625 .Op Fl HgLpPv
1626 .Op Fl o Ar property Ns Oo , Ns Ar property Oc Ns ...
1627 .Op Fl T Sy u Ns | Ns Sy d
1628 .Oo Ar pool Oc Ns ...
1629 .Op Ar interval Op Ar count
1630 .Xc
1631 Lists the given pools along with a health status and space usage.
1632 If no
1633 .Ar pool Ns s
1634 are specified, all pools in the system are listed.
1635 When given an
1636 .Ar interval ,
1637 the information is printed every
1638 .Ar interval
1639 seconds until ^C is pressed.
1640 If
1641 .Ar count
1642 is specified, the command exits after
1643 .Ar count
1644 reports are printed.
1645 .Bl -tag -width Ds
1646 .It Fl g
1647 Display vdev GUIDs instead of the normal device names. These GUIDs
1648 can be used in place of device names for the zpool
1649 detach/offline/remove/replace commands.
1650 .It Fl H
1651 Scripted mode.
1652 Do not display headers, and separate fields by a single tab instead of arbitrary
1653 space.
1654 .It Fl o Ar property
1655 Comma-separated list of properties to display.
1656 See the
1657 .Sx Properties
1658 section for a list of valid properties.
1659 The default list is
1660 .Cm name , size , allocated , free , expandsize , fragmentation , capacity ,
1661 .Cm dedupratio , health , altroot .
.It Fl L
Display real paths for vdevs resolving all symbolic links.
This can be used to look up the current block device name regardless of the
.Pa /dev/disk/
path used to open it.
1666 .It Fl p
1667 Display numbers in parsable
1668 .Pq exact
1669 values.
1670 .It Fl P
1671 Display full paths for vdevs instead of only the last component of
1672 the path. This can be used in conjunction with the
.Fl L
flag.
1674 .It Fl T Sy u Ns | Ns Sy d
1675 Display a time stamp.
1676 Specify
1677 .Fl u
1678 for a printed representation of the internal representation of time.
1679 See
1680 .Xr time 2 .
1681 Specify
1682 .Fl d
1683 for standard date format.
1684 See
1685 .Xr date 1 .
1686 .It Fl v
1687 Verbose statistics.
1688 Reports usage statistics for individual vdevs within the pool, in addition to
the pool-wide statistics.
1690 .El
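.Pp
For example, the following command prints the name, size, and capacity of a
pool named
.Em tank
in a script-friendly form:
.Bd -literal
# zpool list -Hp -o name,size,capacity tank
.Ed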
1691 .It Xo
1692 .Nm
1693 .Cm offline
1694 .Op Fl f
1695 .Op Fl t
1696 .Ar pool Ar device Ns ...
1697 .Xc
1698 Takes the specified physical device offline.
1699 While the
1700 .Ar device
1701 is offline, no attempt is made to read or write to the device.
1702 This command is not applicable to spares.
1703 .Bl -tag -width Ds
1704 .It Fl f
1705 Force fault. Instead of offlining the disk, put it into a faulted
1706 state. The fault will persist across imports unless the
1707 .Fl t
1708 flag was specified.
1709 .It Fl t
1710 Temporary.
1711 Upon reboot, the specified physical device reverts to its previous state.
1712 .El
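.Pp
For example, the following command temporarily takes device
.Pa sda
offline in pool
.Em tank ;
the device reverts to its previous state after a reboot:
.Bd -literal
# zpool offline -t tank sda
.Ed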
1713 .It Xo
1714 .Nm
1715 .Cm online
1716 .Op Fl e
1717 .Ar pool Ar device Ns ...
1718 .Xc
1719 Brings the specified physical device online.
1720 This command is not applicable to spares.
1721 .Bl -tag -width Ds
1722 .It Fl e
1723 Expand the device to use all available space.
1724 If the device is part of a mirror or raidz then all devices must be expanded
1725 before the new space will become available to the pool.
1726 .El
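.Pp
For example, assuming device
.Pa sda
in pool
.Em tank
has grown, the following command brings it online and expands it to use all
available space:
.Bd -literal
# zpool online -e tank sda
.Ed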
1727 .It Xo
1728 .Nm
1729 .Cm reguid
1730 .Ar pool
1731 .Xc
1732 Generates a new unique identifier for the pool.
1733 You must ensure that all devices in this pool are online and healthy before
1734 performing this action.
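.Pp
For example, the following command assigns a new GUID to pool
.Em tank :
.Bd -literal
# zpool reguid tank
.Ed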
1735 .It Xo
1736 .Nm
1737 .Cm reopen
1738 .Op Fl n
1739 .Ar pool
1740 .Xc
1741 Reopen all the vdevs associated with the pool.
1742 .Bl -tag -width Ds
1743 .It Fl n
1744 Do not restart an in-progress scrub operation. This is not recommended and can
1745 result in partially resilvered devices unless a second scrub is performed.
1746 .El
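.Pp
For example, the following command reopens all the vdevs associated with pool
.Em tank :
.Bd -literal
# zpool reopen tank
.Ed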
1747 .It Xo
1748 .Nm
1749 .Cm remove
1750 .Ar pool Ar device Ns ...
1751 .Xc
1752 Removes the specified device from the pool.
1753 This command currently only supports removing hot spares, cache, and log
1754 devices.
1755 A mirrored log device can be removed by specifying the top-level mirror for the
1756 log.
1757 Non-log devices that are part of a mirrored configuration can be removed using
1758 the
1759 .Nm zpool Cm detach
1760 command.
1761 Non-redundant and raidz devices cannot be removed from a pool.
1762 .It Xo
1763 .Nm
1764 .Cm replace
1765 .Op Fl f
1766 .Op Fl o Ar property Ns = Ns Ar value
.Ar pool Ar old_device Op Ar new_device
1768 .Xc
1769 Replaces
1770 .Ar old_device
1771 with
1772 .Ar new_device .
1773 This is equivalent to attaching
1774 .Ar new_device ,
1775 waiting for it to resilver, and then detaching
1776 .Ar old_device .
1777 .Pp
1778 The size of
1779 .Ar new_device
1780 must be greater than or equal to the minimum size of all the devices in a mirror
1781 or raidz configuration.
1782 .Pp
1783 .Ar new_device
1784 is required if the pool is not redundant.
1785 If
1786 .Ar new_device
1787 is not specified, it defaults to
1788 .Ar old_device .
1789 This form of replacement is useful after an existing disk has failed and has
1790 been physically replaced.
1791 In this case, the new disk may have the same
1792 .Pa /dev
1793 path as the old device, even though it is actually a different disk.
1794 ZFS recognizes this.
1795 .Bl -tag -width Ds
1796 .It Fl f
1797 Forces use of
1798 .Ar new_device ,
even if it appears to be in use.
1800 Not all devices can be overridden in this manner.
1801 .It Fl o Ar property Ns = Ns Ar value
1802 Sets the given pool properties. See the
1803 .Sx Properties
1804 section for a list of valid properties that can be set.
1805 The only property supported at the moment is
1806 .Sy ashift .
1807 .El
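.Pp
For example, after physically replacing a failed disk that keeps its old
.Pa /dev
path, the following command starts the resilver onto the new disk:
.Bd -literal
# zpool replace tank sda
.Ed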
1808 .It Xo
1809 .Nm
1810 .Cm scrub
1811 .Op Fl s | Fl p
1812 .Ar pool Ns ...
1813 .Xc
1814 Begins a scrub or resumes a paused scrub.
1815 The scrub examines all data in the specified pools to verify that it checksums
1816 correctly.
1817 For replicated
1818 .Pq mirror or raidz
1819 devices, ZFS automatically repairs any damage discovered during the scrub.
1820 The
1821 .Nm zpool Cm status
1822 command reports the progress of the scrub and summarizes the results of the
1823 scrub upon completion.
1824 .Pp
1825 Scrubbing and resilvering are very similar operations.
1826 The difference is that resilvering only examines data that ZFS knows to be out
1827 of date
1828 .Po
1829 for example, when attaching a new device to a mirror or replacing an existing
1830 device
1831 .Pc ,
1832 whereas scrubbing examines all data to discover silent errors due to hardware
1833 faults or disk failure.
1834 .Pp
1835 Because scrubbing and resilvering are I/O-intensive operations, ZFS only allows
1836 one at a time.
1837 If a scrub is paused, the
1838 .Nm zpool Cm scrub
command resumes it.
1840 If a resilver is in progress, ZFS does not allow a scrub to be started until the
1841 resilver completes.
1842 .Bl -tag -width Ds
1843 .It Fl s
1844 Stop scrubbing.
1847 .It Fl p
1848 Pause scrubbing.
Scrub pause state and progress are periodically synced to disk.
If the system is restarted or the pool is exported during a paused scrub,
the scrub remains paused after import until it is resumed.
Once resumed, the scrub picks up from the place where it was last
checkpointed to disk.
To resume a paused scrub, issue
1855 .Nm zpool Cm scrub
1856 again.
1857 .El
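.Pp
For example, the following commands start a scrub of pool
.Em tank ,
pause it, and later resume it:
.Bd -literal
# zpool scrub tank
# zpool scrub -p tank
# zpool scrub tank
.Ed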
1858 .It Xo
1859 .Nm
1860 .Cm set
1861 .Ar property Ns = Ns Ar value
1862 .Ar pool
1863 .Xc
1864 Sets the given property on the specified pool.
1865 See the
1866 .Sx Properties
1867 section for more information on what properties can be set and acceptable
1868 values.
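.Pp
For example, the following command sets the
.Sy comment
property on pool
.Em tank :
.Bd -literal
# zpool set comment="main storage pool" tank
.Ed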
1869 .It Xo
1870 .Nm
1871 .Cm split
1872 .Op Fl gLlnP
1873 .Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
1874 .Op Fl R Ar root
1875 .Ar pool newpool
1876 .Op Ar device ...
1877 .Xc
1878 Splits devices off
1879 .Ar pool
1880 creating
1881 .Ar newpool .
1882 All vdevs in
1883 .Ar pool
1884 must be mirrors and the pool must not be in the process of resilvering.
1885 At the time of the split,
1886 .Ar newpool
1887 will be a replica of
1888 .Ar pool .
1889 By default, the
1890 last device in each mirror is split from
1891 .Ar pool
1892 to create
1893 .Ar newpool .
1894 .Pp
The optional device specification causes the specified device(s) to be
included in
.Ar newpool .
For any mirror whose devices remain unspecified, the last device is used,
as it would be by default.
1900 .Bl -tag -width Ds
1901 .It Fl g
1902 Display vdev GUIDs instead of the normal device names. These GUIDs
1903 can be used in place of device names for the zpool
1904 detach/offline/remove/replace commands.
1905 .It Fl L
1906 Display real paths for vdevs resolving all symbolic links. This can
1907 be used to look up the current block device name regardless of the
1908 .Pa /dev/disk/
1909 path used to open it.
1910 .It Fl l
1911 Indicates that this command will request encryption keys for all encrypted
1912 datasets it attempts to mount as it is bringing the new pool online. Note that
1913 if any datasets have a
1914 .Sy keylocation
1915 of
1916 .Sy prompt
1917 this command will block waiting for the keys to be entered. Without this flag
1918 encrypted datasets will be left unavailable until the keys are loaded.
1919 .It Fl n
Do a dry run; do not actually perform the split.
1921 Print out the expected configuration of
1922 .Ar newpool .
1923 .It Fl P
1924 Display full paths for vdevs instead of only the last component of
1925 the path. This can be used in conjunction with the
.Fl L
flag.
1927 .It Fl o Ar property Ns = Ns Ar value
1928 Sets the specified property for
1929 .Ar newpool .
1930 See the
1931 .Sx Properties
1932 section for more information on the available pool properties.
1933 .It Fl R Ar root
1934 Set
1935 .Sy altroot
1936 for
1937 .Ar newpool
1938 to
1939 .Ar root
1940 and automatically import it.
1941 .El
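.Pp
For example, assuming every top-level vdev in
.Em tank
is a mirror, the following commands preview and then perform a split into a
new pool named
.Em newtank :
.Bd -literal
# zpool split -n tank newtank
# zpool split tank newtank
.Ed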
1942 .It Xo
1943 .Nm
1944 .Cm status
1945 .Op Fl c Op Ar SCRIPT1 Ns Oo , Ns Ar SCRIPT2 Oc Ns ...
1946 .Op Fl gLPvxD
1947 .Op Fl T Sy u Ns | Ns Sy d
1948 .Oo Ar pool Oc Ns ...
1949 .Op Ar interval Op Ar count
1950 .Xc
1951 Displays the detailed health status for the given pools.
1952 If no
1953 .Ar pool
1954 is specified, then the status of each pool in the system is displayed.
1955 For more information on pool and device health, see the
1956 .Sx Device Failure and Recovery
1957 section.
1958 .Pp
1959 If a scrub or resilver is in progress, this command reports the percentage done
1960 and the estimated time to completion.
1961 Both of these are only approximate, because the amount of data in the pool and
1962 the other workloads on the system can change.
1963 .Bl -tag -width Ds
1964 .It Fl c Op Ar SCRIPT1 Ns Oo , Ns Ar SCRIPT2 Oc Ns ...
1965 Run a script (or scripts) on each vdev and include the output as a new column
1966 in the
1967 .Nm zpool Cm status
1968 output. See the
1969 .Fl c
1970 option of
1971 .Nm zpool Cm iostat
1972 for complete details.
1973 .It Fl g
1974 Display vdev GUIDs instead of the normal device names. These GUIDs
1975 can be used in place of device names for the zpool
1976 detach/offline/remove/replace commands.
1977 .It Fl L
1978 Display real paths for vdevs resolving all symbolic links. This can
1979 be used to look up the current block device name regardless of the
1980 .Pa /dev/disk/
1981 path used to open it.
1982 .It Fl P
1983 Display full paths for vdevs instead of only the last component of
1984 the path. This can be used in conjunction with the
.Fl L
flag.
1986 .It Fl D
1987 Display a histogram of deduplication statistics, showing the allocated
1988 .Pq physically present on disk
1989 and referenced
1990 .Pq logically referenced in the pool
1991 block counts and sizes by reference count.
1992 .It Fl T Sy u Ns | Ns Sy d
1993 Display a time stamp.
1994 Specify
1995 .Fl u
1996 for a printed representation of the internal representation of time.
1997 See
1998 .Xr time 2 .
1999 Specify
2000 .Fl d
2001 for standard date format.
2002 See
2003 .Xr date 1 .
2004 .It Fl v
2005 Displays verbose data error information, printing out a complete list of all
2006 data errors since the last complete pool scrub.
2007 .It Fl x
2008 Only display status for pools that are exhibiting errors or are otherwise
2009 unavailable.
2010 Warnings about pools not using the latest on-disk format will not be included.
2011 .El
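.Pp
For example, the following command reports only pools exhibiting errors; if
every pool is healthy, a single message to that effect is printed:
.Bd -literal
# zpool status -x
all pools are healthy
.Ed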
2012 .It Xo
2013 .Nm
2014 .Cm sync
2015 .Op Ar pool ...
2016 .Xc
This command forces all in-core dirty data to be written to the primary
pool storage and not the ZIL.
It will also update administrative information including quota reporting.
Without arguments,
.Nm zpool Cm sync
will sync all pools on the system.
Otherwise, it will sync only the specified pool(s).
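.Pp
For example, the following command syncs only the pool
.Em tank :
.Bd -literal
# zpool sync tank
.Ed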
2023 .It Xo
2024 .Nm
2025 .Cm upgrade
2026 .Xc
2027 Displays pools which do not have all supported features enabled and pools
2028 formatted using a legacy ZFS version number.
2029 These pools can continue to be used, but some features may not be available.
2030 Use
2031 .Nm zpool Cm upgrade Fl a
2032 to enable all features on all pools.
2033 .It Xo
2034 .Nm
2035 .Cm upgrade
2036 .Fl v
2037 .Xc
2038 Displays legacy ZFS versions supported by the current software.
2039 See
2040 .Xr zpool-features 5
for a description of the feature flags supported by the current software.
2042 .It Xo
2043 .Nm
2044 .Cm upgrade
2045 .Op Fl V Ar version
2046 .Fl a Ns | Ns Ar pool Ns ...
2047 .Xc
2048 Enables all supported features on the given pool.
2049 Once this is done, the pool will no longer be accessible on systems that do not
2050 support feature flags.
2051 See
.Xr zpool-features 5
2053 for details on compatibility with systems that support feature flags, but do not
2054 support all features enabled on the pool.
2055 .Bl -tag -width Ds
2056 .It Fl a
2057 Enables all supported features on all pools.
2058 .It Fl V Ar version
2059 Upgrade to the specified legacy version.
2060 If the
2061 .Fl V
2062 flag is specified, no features will be enabled on the pool.
2063 This option can only be used to increase the version number up to the last
2064 supported legacy version number.
2065 .El
2066 .El
2067 .Sh EXIT STATUS
2068 The following exit values are returned:
2069 .Bl -tag -width Ds
2070 .It Sy 0
2071 Successful completion.
2072 .It Sy 1
2073 An error occurred.
2074 .It Sy 2
2075 Invalid command line options were specified.
2076 .El
2077 .Sh EXAMPLES
2078 .Bl -tag -width Ds
2079 .It Sy Example 1 No Creating a RAID-Z Storage Pool
2080 The following command creates a pool with a single raidz root vdev that
2081 consists of six disks.
2082 .Bd -literal
2083 # zpool create tank raidz sda sdb sdc sdd sde sdf
2084 .Ed
2085 .It Sy Example 2 No Creating a Mirrored Storage Pool
2086 The following command creates a pool with two mirrors, where each mirror
2087 contains two disks.
2088 .Bd -literal
2089 # zpool create tank mirror sda sdb mirror sdc sdd
2090 .Ed
2091 .It Sy Example 3 No Creating a ZFS Storage Pool by Using Partitions
2092 The following command creates an unmirrored pool using two disk partitions.
2093 .Bd -literal
2094 # zpool create tank sda1 sdb2
2095 .Ed
2096 .It Sy Example 4 No Creating a ZFS Storage Pool by Using Files
2097 The following command creates an unmirrored pool using files.
2098 While not recommended, a pool based on files can be useful for experimental
2099 purposes.
2100 .Bd -literal
2101 # zpool create tank /path/to/file/a /path/to/file/b
2102 .Ed
2103 .It Sy Example 5 No Adding a Mirror to a ZFS Storage Pool
2104 The following command adds two mirrored disks to the pool
2105 .Em tank ,
2106 assuming the pool is already made up of two-way mirrors.
2107 The additional space is immediately available to any datasets within the pool.
2108 .Bd -literal
2109 # zpool add tank mirror sda sdb
2110 .Ed
2111 .It Sy Example 6 No Listing Available ZFS Storage Pools
2112 The following command lists all available pools on the system.
2113 In this case, the pool
2114 .Em zion
2115 is faulted due to a missing device.
2116 The results from this command are similar to the following:
2117 .Bd -literal
2118 # zpool list
2119 NAME SIZE ALLOC FREE FRAG EXPANDSZ CAP DEDUP HEALTH ALTROOT
2120 rpool 19.9G 8.43G 11.4G 33% - 42% 1.00x ONLINE -
2121 tank 61.5G 20.0G 41.5G 48% - 32% 1.00x ONLINE -
2122 zion - - - - - - - FAULTED -
2123 .Ed
2124 .It Sy Example 7 No Destroying a ZFS Storage Pool
2125 The following command destroys the pool
2126 .Em tank
2127 and any datasets contained within.
2128 .Bd -literal
2129 # zpool destroy -f tank
2130 .Ed
2131 .It Sy Example 8 No Exporting a ZFS Storage Pool
2132 The following command exports the devices in pool
2133 .Em tank
2134 so that they can be relocated or later imported.
2135 .Bd -literal
2136 # zpool export tank
2137 .Ed
2138 .It Sy Example 9 No Importing a ZFS Storage Pool
2139 The following command displays available pools, and then imports the pool
2140 .Em tank
2141 for use on the system.
2142 The results from this command are similar to the following:
2143 .Bd -literal
2144 # zpool import
2145 pool: tank
2146 id: 15451357997522795478
2147 state: ONLINE
2148 action: The pool can be imported using its name or numeric identifier.
2149 config:
2150
2151 tank ONLINE
2152 mirror ONLINE
2153 sda ONLINE
2154 sdb ONLINE
2155
2156 # zpool import tank
2157 .Ed
2158 .It Sy Example 10 No Upgrading All ZFS Storage Pools to the Current Version
The following command upgrades all ZFS storage pools to the current version of
2160 the software.
2161 .Bd -literal
2162 # zpool upgrade -a
2163 This system is currently running ZFS version 2.
2164 .Ed
2165 .It Sy Example 11 No Managing Hot Spares
2166 The following command creates a new pool with an available hot spare:
2167 .Bd -literal
2168 # zpool create tank mirror sda sdb spare sdc
2169 .Ed
2170 .Pp
2171 If one of the disks were to fail, the pool would be reduced to the degraded
2172 state.
2173 The failed device can be replaced using the following command:
2174 .Bd -literal
2175 # zpool replace tank sda sdd
2176 .Ed
2177 .Pp
2178 Once the data has been resilvered, the spare is automatically removed and is
2179 made available for use should another device fail.
2180 The hot spare can be permanently removed from the pool using the following
2181 command:
2182 .Bd -literal
2183 # zpool remove tank sdc
2184 .Ed
2185 .It Sy Example 12 No Creating a ZFS Pool with Mirrored Separate Intent Logs
The following command creates a ZFS storage pool consisting of two two-way
2187 mirrors and mirrored log devices:
2188 .Bd -literal
2189 # zpool create pool mirror sda sdb mirror sdc sdd log mirror \\
2190 sde sdf
2191 .Ed
2192 .It Sy Example 13 No Adding Cache Devices to a ZFS Pool
2193 The following command adds two disks for use as cache devices to a ZFS storage
2194 pool:
2195 .Bd -literal
2196 # zpool add pool cache sdc sdd
2197 .Ed
2198 .Pp
2199 Once added, the cache devices gradually fill with content from main memory.
2200 Depending on the size of your cache devices, it could take over an hour for
2201 them to fill.
2202 Capacity and reads can be monitored using the
2203 .Cm iostat
2204 option as follows:
2205 .Bd -literal
2206 # zpool iostat -v pool 5
2207 .Ed
2208 .It Sy Example 14 No Removing a Mirrored Log Device
2209 The following command removes the mirrored log device
2210 .Sy mirror-2 .
2211 Given this configuration:
2212 .Bd -literal
2213 pool: tank
2214 state: ONLINE
2215 scrub: none requested
2216 config:
2217
2218 NAME STATE READ WRITE CKSUM
2219 tank ONLINE 0 0 0
2220 mirror-0 ONLINE 0 0 0
2221 sda ONLINE 0 0 0
2222 sdb ONLINE 0 0 0
2223 mirror-1 ONLINE 0 0 0
2224 sdc ONLINE 0 0 0
2225 sdd ONLINE 0 0 0
2226 logs
2227 mirror-2 ONLINE 0 0 0
2228 sde ONLINE 0 0 0
2229 sdf ONLINE 0 0 0
2230 .Ed
2231 .Pp
2232 The command to remove the mirrored log
2233 .Sy mirror-2
2234 is:
2235 .Bd -literal
2236 # zpool remove tank mirror-2
2237 .Ed
2238 .It Sy Example 15 No Displaying expanded space on a device
2239 The following command displays the detailed information for the pool
2240 .Em data .
2241 This pool is comprised of a single raidz vdev where one of its devices
2242 increased its capacity by 10GB.
2243 In this example, the pool will not be able to utilize this extra capacity until
2244 all the devices under the raidz vdev have been expanded.
2245 .Bd -literal
2246 # zpool list -v data
2247 NAME SIZE ALLOC FREE FRAG EXPANDSZ CAP DEDUP HEALTH ALTROOT
2248 data 23.9G 14.6G 9.30G 48% - 61% 1.00x ONLINE -
2249 raidz1 23.9G 14.6G 9.30G 48% -
2250 sda - - - - -
2251 sdb - - - - 10G
2252 sdc - - - - -
2253 .Ed
2254 .It Sy Example 16 No Adding output columns
2255 Additional columns can be added to the
2256 .Nm zpool Cm status
2257 and
2258 .Nm zpool Cm iostat
2259 output with
2260 .Fl c
2261 option.
2262 .Bd -literal
2263 # zpool status -c vendor,model,size
2264 NAME STATE READ WRITE CKSUM vendor model size
2265 tank ONLINE 0 0 0
2266 mirror-0 ONLINE 0 0 0
2267 U1 ONLINE 0 0 0 SEAGATE ST8000NM0075 7.3T
2268 U10 ONLINE 0 0 0 SEAGATE ST8000NM0075 7.3T
2269 U11 ONLINE 0 0 0 SEAGATE ST8000NM0075 7.3T
2270 U12 ONLINE 0 0 0 SEAGATE ST8000NM0075 7.3T
2271 U13 ONLINE 0 0 0 SEAGATE ST8000NM0075 7.3T
2272 U14 ONLINE 0 0 0 SEAGATE ST8000NM0075 7.3T
2273
2274 # zpool iostat -vc slaves
2275 capacity operations bandwidth
2276 pool alloc free read write read write slaves
2277 ---------- ----- ----- ----- ----- ----- ----- ---------
2278 tank 20.4G 7.23T 26 152 20.7M 21.6M
2279 mirror 20.4G 7.23T 26 152 20.7M 21.6M
2280 U1 - - 0 31 1.46K 20.6M sdb sdff
2281 U10 - - 0 1 3.77K 13.3K sdas sdgw
2282 U11 - - 0 1 288K 13.3K sdat sdgx
2283 U12 - - 0 1 78.4K 13.3K sdau sdgy
2284 U13 - - 0 1 128K 13.3K sdav sdgz
2285 U14 - - 0 1 63.2K 13.3K sdfk sdg
2286 .Ed
2287 .El
2288 .Sh ENVIRONMENT VARIABLES
2289 .Bl -tag -width "ZFS_ABORT"
2290 .It Ev ZFS_ABORT
2291 Cause
2292 .Nm zpool
2293 to dump core on exit for the purposes of running
2294 .Sy ::findleaks .
2295 .El
2296 .Bl -tag -width "ZPOOL_IMPORT_PATH"
2297 .It Ev ZPOOL_IMPORT_PATH
The search path for devices or files to use with the pool.
This is a colon-separated list of directories in which
2299 .Nm zpool
2300 looks for device nodes and files.
2301 Similar to the
2302 .Fl d
2303 option in
2304 .Nm zpool import .
2305 .El
2306 .Bl -tag -width "ZPOOL_VDEV_NAME_GUID"
2307 .It Ev ZPOOL_VDEV_NAME_GUID
Cause
.Nm zpool
subcommands to output vdev GUIDs by default.
This behavior is identical to the
.Nm zpool status -g
command line option.
2313 .El
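.Pp
For example, the following invocation displays vdev GUIDs in the
.Nm zpool status
output for a single run:
.Bd -literal
# ZPOOL_VDEV_NAME_GUID=1 zpool status tank
.Ed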
2314 .Bl -tag -width "ZPOOL_VDEV_NAME_FOLLOW_LINKS"
2315 .It Ev ZPOOL_VDEV_NAME_FOLLOW_LINKS
2316 Cause
2317 .Nm zpool
2318 subcommands to follow links for vdev names by default. This behavior is identical to the
2319 .Nm zpool status -L
2320 command line option.
2321 .El
2322 .Bl -tag -width "ZPOOL_VDEV_NAME_PATH"
2323 .It Ev ZPOOL_VDEV_NAME_PATH
2324 Cause
2325 .Nm zpool
2326 subcommands to output full vdev path names by default. This
2327 behavior is identical to the
.Nm zpool status -P
2329 command line option.
2330 .El
2331 .Bl -tag -width "ZFS_VDEV_DEVID_OPT_OUT"
2332 .It Ev ZFS_VDEV_DEVID_OPT_OUT
Older ZFS on Linux implementations had issues when attempting to display
pool config VDEV names if a
.Sy devid
NVP value was present in the pool's config.
2337 .Pp
For example, a pool that originated on the illumos platform would have a devid
2339 value in the config and
2340 .Nm zpool status
2341 would fail when listing the config.
This would also be true for future Linux-based pools.
2343 .Pp
2344 A pool can be stripped of any
2345 .Sy devid
2346 values on import or prevented from adding
2347 them on
2348 .Nm zpool create
2349 or
2350 .Nm zpool add
2351 by setting
2352 .Sy ZFS_VDEV_DEVID_OPT_OUT .
2353 .El
2354 .Bl -tag -width "ZPOOL_SCRIPTS_AS_ROOT"
2355 .It Ev ZPOOL_SCRIPTS_AS_ROOT
Allow a privileged user to run
2357 .Nm zpool status/iostat
2358 with the
2359 .Fl c
2360 option. Normally, only unprivileged users are allowed to run
2361 .Fl c .
2362 .El
2363 .Bl -tag -width "ZPOOL_SCRIPTS_PATH"
2364 .It Ev ZPOOL_SCRIPTS_PATH
2365 The search path for scripts when running
2366 .Nm zpool status/iostat
2367 with the
2368 .Fl c
2369 option. This is a colon-separated list of directories and overrides the default
2370 .Pa ~/.zpool.d
2371 and
2372 .Pa /etc/zfs/zpool.d
2373 search paths.
2374 .El
2375 .Bl -tag -width "ZPOOL_SCRIPTS_ENABLED"
2376 .It Ev ZPOOL_SCRIPTS_ENABLED
2377 Allow a user to run
2378 .Nm zpool status/iostat
2379 with the
2380 .Fl c
2381 option. If
2382 .Sy ZPOOL_SCRIPTS_ENABLED
2383 is not set, it is assumed that the user is allowed to run
2384 .Nm zpool status/iostat -c .
2385 .El
2386 .Sh INTERFACE STABILITY
2387 .Sy Evolving
2388 .Sh SEE ALSO
2389 .Xr zfs-events 5 ,
2390 .Xr zfs-module-parameters 5 ,
2391 .Xr zpool-features 5 ,
2392 .Xr zed 8 ,
2393 .Xr zfs 8