.\"
.\" CDDL HEADER START
.\"
.\" The contents of this file are subject to the terms of the
.\" Common Development and Distribution License (the "License").
.\" You may not use this file except in compliance with the License.
.\"
.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
.\" or http://www.opensolaris.org/os/licensing.
.\" See the License for the specific language governing permissions
.\" and limitations under the License.
.\"
.\" When distributing Covered Code, include this CDDL HEADER in each
.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
.\" If applicable, add the following below this CDDL HEADER, with the
.\" fields enclosed by brackets "[]" replaced with your own identifying
.\" information: Portions Copyright [yyyy] [name of copyright owner]
.\"
.\" CDDL HEADER END
.\"
.\"
.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
.\" Copyright (c) 2013 by Delphix. All rights reserved.
.\" Copyright 2016 Nexenta Systems, Inc.
.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
.\" Copyright (c) 2017 Datto Inc.
.\" Copyright (c) 2017 George Melikov. All Rights Reserved.
.\"
.Dd June 22, 2017
.Dt ZPOOL 8 SMM
.Os Linux
.Sh NAME
.Nm zpool
.Nd configure ZFS storage pools
.Sh SYNOPSIS
.Nm
.Fl ?
.Nm
.Cm add
.Op Fl fgLnP
.Oo Fl o Ar property Ns = Ns Ar value Oc
.Ar pool vdev Ns ...
.Nm
.Cm attach
.Op Fl f
.Oo Fl o Ar property Ns = Ns Ar value Oc
.Ar pool device new_device
.Nm
.Cm clear
.Ar pool
.Op Ar device
.Nm
.Cm create
.Op Fl dfn
.Op Fl m Ar mountpoint
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Oo Fl o Ar feature@feature Ns = Ns Ar value Oc Ns ...
.Oo Fl O Ar file-system-property Ns = Ns Ar value Oc Ns ...
.Op Fl R Ar root
.Op Fl t Ar tname
.Ar pool vdev Ns ...
.Nm
.Cm destroy
.Op Fl f
.Ar pool
.Nm
.Cm detach
.Ar pool device
.Nm
.Cm events
.Op Fl vHfc
.Op Ar pool
.Nm
.Cm export
.Op Fl a
.Op Fl f
.Ar pool Ns ...
.Nm
.Cm get
.Op Fl Hp
.Op Fl o Ar field Ns Oo , Ns Ar field Oc Ns ...
.Sy all Ns | Ns Ar property Ns Oo , Ns Ar property Oc Ns ...
.Ar pool Ns ...
.Nm
.Cm history
.Op Fl il
.Oo Ar pool Oc Ns ...
.Nm
.Cm import
.Op Fl D
.Op Fl c Ar cachefile Ns | Ns Fl d Ar dir
.Nm
.Cm import
.Fl a
.Op Fl DfmN
.Op Fl F Oo Fl n Oc Oo Fl T Oc Oo Fl X Oc
.Op Fl c Ar cachefile Ns | Ns Fl d Ar dir
.Op Fl o Ar mntopts
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Op Fl R Ar root
.Nm
.Cm import
.Op Fl Dfm
.Op Fl F Oo Fl n Oc Oo Fl T Oc Oo Fl X Oc
.Op Fl c Ar cachefile Ns | Ns Fl d Ar dir
.Op Fl o Ar mntopts
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Op Fl R Ar root
.Op Fl s
.Ar pool Ns | Ns Ar id
.Op Ar newpool Oo Fl t Oc
.Nm
.Cm iostat
.Op Oo Oo Fl c Ar SCRIPT Oc Oo Fl lq Oc Oc Ns | Ns Fl rw
.Op Fl T Sy u Ns | Ns Sy d
.Op Fl ghHLpPvy
.Oo Oo Ar pool Ns ... Oc Ns | Ns Oo Ar pool vdev Ns ... Oc Ns | Ns Oo Ar vdev Ns ... Oc Oc
.Op Ar interval Op Ar count
.Nm
.Cm labelclear
.Op Fl f
.Ar device
.Nm
.Cm list
.Op Fl HgLpPv
.Op Fl o Ar property Ns Oo , Ns Ar property Oc Ns ...
.Op Fl T Sy u Ns | Ns Sy d
.Oo Ar pool Oc Ns ...
.Op Ar interval Op Ar count
.Nm
.Cm offline
.Op Fl f
.Op Fl t
.Ar pool Ar device Ns ...
.Nm
.Cm online
.Op Fl e
.Ar pool Ar device Ns ...
.Nm
.Cm reguid
.Ar pool
.Nm
.Cm reopen
.Ar pool
.Nm
.Cm remove
.Ar pool Ar device Ns ...
.Nm
.Cm replace
.Op Fl f
.Oo Fl o Ar property Ns = Ns Ar value Oc
.Ar pool Ar device Op Ar new_device
.Nm
.Cm scrub
.Op Fl s
.Ar pool Ns ...
.Nm
.Cm set
.Ar property Ns = Ns Ar value
.Ar pool
.Nm
.Cm split
.Op Fl gLnP
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Op Fl R Ar root
.Ar pool newpool
.Oo Ar device Oc Ns ...
.Nm
.Cm status
.Oo Fl c Ar SCRIPT Oc
.Op Fl gLPvxD
.Op Fl T Sy u Ns | Ns Sy d
.Oo Ar pool Oc Ns ...
.Op Ar interval Op Ar count
.Nm
.Cm sync
.Oo Ar pool Oc Ns ...
.Nm
.Cm upgrade
.Nm
.Cm upgrade
.Fl v
.Nm
.Cm upgrade
.Op Fl V Ar version
.Fl a Ns | Ns Ar pool Ns ...
.Sh DESCRIPTION
The
.Nm
command configures ZFS storage pools.
A storage pool is a collection of devices that provides physical storage and
data replication for ZFS datasets.
All datasets within a storage pool share the same space.
See
.Xr zfs 8
for information on managing datasets.
.Ss Virtual Devices (vdevs)
A "virtual device" describes a single device or a collection of devices
organized according to certain performance and fault characteristics.
The following virtual devices are supported:
.Bl -tag -width Ds
.It Sy disk
A block device, typically located under
.Pa /dev .
ZFS can use individual slices or partitions, though the recommended mode of
operation is to use whole disks.
A disk can be specified by a full path, or it can be a shorthand name
.Po the relative portion of the path under
.Pa /dev
.Pc .
A whole disk can be specified by omitting the slice or partition designation.
For example,
.Pa sda
is equivalent to
.Pa /dev/sda .
When given a whole disk, ZFS automatically labels the disk, if necessary.
.It Sy file
A regular file.
The use of files as a backing store is strongly discouraged.
It is designed primarily for experimental purposes, as the fault tolerance of a
file is only as good as the file system of which it is a part.
A file must be specified by a full path.
.It Sy mirror
A mirror of two or more devices.
Data is replicated in an identical fashion across all components of a mirror.
A mirror with N disks of size X can hold X bytes and can withstand (N-1) devices
failing before data integrity is compromised.
.It Sy raidz , raidz1 , raidz2 , raidz3
A variation on RAID-5 that allows for better distribution of parity and
eliminates the RAID-5
.Qq write hole
.Pq in which data and parity become inconsistent after a power loss .
Data and parity are striped across all disks within a raidz group.
.Pp
A raidz group can have single-, double-, or triple-parity, meaning that the
raidz group can sustain one, two, or three failures, respectively, without
losing any data.
The
.Sy raidz1
vdev type specifies a single-parity raidz group; the
.Sy raidz2
vdev type specifies a double-parity raidz group; and the
.Sy raidz3
vdev type specifies a triple-parity raidz group.
The
.Sy raidz
vdev type is an alias for
.Sy raidz1 .
.Pp
A raidz group with N disks of size X with P parity disks can hold approximately
(N-P)*X bytes and can withstand P device(s) failing before data integrity is
compromised.
The minimum number of devices in a raidz group is one more than the number of
parity disks.
The recommended number is between 3 and 9 to help increase performance.
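.Pp
For example, the following command creates a pool with a single raidz2
group, assuming six disks, sda through sdf, are available:
.Bd -literal
# zpool create pool raidz2 sda sdb sdc sdd sde sdf
.Ed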
.It Sy spare
A special pseudo-vdev which keeps track of available hot spares for a pool.
For more information, see the
.Sx Hot Spares
section.
.It Sy log
A separate intent log device.
If more than one log device is specified, then writes are load-balanced between
devices.
Log devices can be mirrored.
However, raidz vdev types are not supported for the intent log.
For more information, see the
.Sx Intent Log
section.
.It Sy cache
A device used to cache storage pool data.
A cache device cannot be configured as a mirror or raidz group.
For more information, see the
.Sx Cache Devices
section.
.El
.Pp
Virtual devices cannot be nested, so a mirror or raidz virtual device can only
contain files or disks.
Mirrors of mirrors
.Pq or other combinations
are not allowed.
.Pp
A pool can have any number of virtual devices at the top of the configuration
.Po known as
.Qq root vdevs
.Pc .
Data is dynamically distributed across all top-level devices to balance data
among devices.
As new virtual devices are added, ZFS automatically places data on the newly
available devices.
.Pp
Virtual devices are specified one at a time on the command line, separated by
whitespace.
The keywords
.Sy mirror
and
.Sy raidz
are used to distinguish where a group ends and another begins.
For example, the following creates two root vdevs, each a mirror of two disks:
.Bd -literal
# zpool create mypool mirror sda sdb mirror sdc sdd
.Ed
.Ss Device Failure and Recovery
ZFS supports a rich set of mechanisms for handling device failure and data
corruption.
All metadata and data are checksummed, and ZFS automatically repairs bad data
from a good copy when corruption is detected.
.Pp
In order to take advantage of these features, a pool must make use of some form
of redundancy, using either mirrored or raidz groups.
While ZFS supports running in a non-redundant configuration, where each root
vdev is simply a disk or file, this is strongly discouraged.
A single case of bit corruption can render some or all of your data unavailable.
.Pp
A pool's health status is described by one of three states: online, degraded,
or faulted.
An online pool has all devices operating normally.
A degraded pool is one in which one or more devices have failed, but the data is
still available due to a redundant configuration.
A faulted pool has corrupted metadata, or one or more faulted devices, and
insufficient replicas to continue functioning.
.Pp
The health of the top-level vdev, such as mirror or raidz device, is
potentially impacted by the state of its associated vdevs, or component
devices.
A top-level vdev or component device is in one of the following states:
.Bl -tag -width "DEGRADED"
.It Sy DEGRADED
One or more top-level vdevs is in the degraded state because one or more
component devices are offline.
Sufficient replicas exist to continue functioning.
.Pp
One or more component devices is in the degraded or faulted state, but
sufficient replicas exist to continue functioning.
The underlying conditions are as follows:
.Bl -bullet
.It
The number of checksum errors exceeds acceptable levels and the device is
degraded as an indication that something may be wrong.
ZFS continues to use the device as necessary.
.It
The number of I/O errors exceeds acceptable levels.
The device could not be marked as faulted because there are insufficient
replicas to continue functioning.
.El
.It Sy FAULTED
One or more top-level vdevs is in the faulted state because one or more
component devices are offline.
Insufficient replicas exist to continue functioning.
.Pp
One or more component devices is in the faulted state, and insufficient
replicas exist to continue functioning.
The underlying conditions are as follows:
.Bl -bullet
.It
The device could be opened, but the contents did not match expected values.
.It
The number of I/O errors exceeds acceptable levels and the device is faulted to
prevent further use of the device.
.El
.It Sy OFFLINE
The device was explicitly taken offline by the
.Nm zpool Cm offline
command.
.It Sy ONLINE
The device is online and functioning.
.It Sy REMOVED
The device was physically removed while the system was running.
Device removal detection is hardware-dependent and may not be supported on all
platforms.
.It Sy UNAVAIL
The device could not be opened.
If a pool is imported when a device was unavailable, then the device will be
identified by a unique identifier instead of its path since the path was never
correct in the first place.
.El
.Pp
If a device is removed and later re-attached to the system, ZFS attempts
to put the device online automatically.
Device attach detection is hardware-dependent and might not be supported on all
platforms.
.Ss Hot Spares
ZFS allows devices to be associated with pools as
.Qq hot spares .
These devices are not actively used in the pool, but when an active device
fails, it is automatically replaced by a hot spare.
To create a pool with hot spares, specify a
.Sy spare
vdev with any number of devices.
For example,
.Bd -literal
# zpool create pool mirror sda sdb spare sdc sdd
.Ed
.Pp
Spares can be shared across multiple pools, and can be added with the
.Nm zpool Cm add
command and removed with the
.Nm zpool Cm remove
command.
Once a spare replacement is initiated, a new
.Sy spare
vdev is created within the configuration that will remain there until the
original device is replaced.
At this point, the hot spare becomes available again if another device fails.
.Pp
If a pool has a shared spare that is currently being used, the pool can not be
exported since other pools may use this shared spare, which may lead to
potential data corruption.
.Pp
An in-progress spare replacement can be canceled by detaching the hot spare.
If the original faulted device is detached, then the hot spare assumes its
place in the configuration, and is removed from the spare list of all active
pools.
.Pp
Spares cannot replace log devices.
.Ss Intent Log
The ZFS Intent Log (ZIL) satisfies POSIX requirements for synchronous
transactions.
For instance, databases often require their transactions to be on stable storage
devices when returning from a system call.
NFS and other applications can also use
.Xr fsync 2
to ensure data stability.
By default, the intent log is allocated from blocks within the main pool.
However, it might be possible to get better performance using separate intent
log devices such as NVRAM or a dedicated disk.
For example:
.Bd -literal
# zpool create pool sda sdb log sdc
.Ed
.Pp
Multiple log devices can also be specified, and they can be mirrored.
See the
.Sx EXAMPLES
section for an example of mirroring multiple log devices.
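.Pp
For instance, assuming two dedicated log disks sdc and sdd, a mirrored
log can be requested at pool creation time:
.Bd -literal
# zpool create pool mirror sda sdb log mirror sdc sdd
.Ed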
.Pp
Log devices can be added, replaced, attached, detached, and imported and
exported as part of the larger pool.
Mirrored log devices can be removed by specifying the top-level mirror for the
log.
.Ss Cache Devices
Devices can be added to a storage pool as
.Qq cache devices .
These devices provide an additional layer of caching between main memory and
disk.
For read-heavy workloads, where the working set size is much larger than what
can be cached in main memory, using cache devices allows much more of this
working set to be served from low latency media.
Using cache devices provides the greatest performance improvement for random
read workloads of mostly static content.
.Pp
To create a pool with cache devices, specify a
.Sy cache
vdev with any number of devices.
For example:
.Bd -literal
# zpool create pool sda sdb cache sdc sdd
.Ed
.Pp
Cache devices cannot be mirrored or part of a raidz configuration.
If a read error is encountered on a cache device, that read I/O is reissued to
the original storage pool device, which might be part of a mirrored or raidz
configuration.
.Pp
The content of the cache devices is considered volatile, as is the case with
other system caches.
.Ss Properties
Each pool has several properties associated with it.
Some properties are read-only statistics while others are configurable and
change the behavior of the pool.
.Pp
The following are read-only properties:
.Bl -tag -width Ds
.It Sy available
Amount of storage available within the pool.
This property can also be referred to by its shortened column name,
.Sy avail .
.It Sy capacity
Percentage of pool space used.
This property can also be referred to by its shortened column name,
.Sy cap .
.It Sy expandsize
Amount of uninitialized space within the pool or device that can be used to
increase the total capacity of the pool.
Uninitialized space consists of any space on an EFI labeled vdev which has not
been brought online
.Po e.g., using
.Nm zpool Cm online Fl e
.Pc .
This space occurs when a LUN is dynamically expanded.
.It Sy fragmentation
The amount of fragmentation in the pool.
.It Sy free
The amount of free space available in the pool.
.It Sy freeing
After a file system or snapshot is destroyed, the space it was using is
returned to the pool asynchronously.
.Sy freeing
is the amount of space remaining to be reclaimed.
Over time
.Sy freeing
will decrease while
.Sy free
increases.
.It Sy health
The current health of the pool.
Health can be one of
.Sy ONLINE , DEGRADED , FAULTED , OFFLINE , REMOVED , UNAVAIL .
.It Sy guid
A unique identifier for the pool.
.It Sy size
Total size of the storage pool.
.It Sy unsupported@ Ns Em feature_guid
Information about unsupported features that are enabled on the pool.
See
.Xr zpool-features 5
for details.
.It Sy used
Amount of storage space used within the pool.
.El
.Pp
The space usage properties report actual physical space available to the
storage pool.
The physical space can be different from the total amount of space that any
contained datasets can actually use.
The amount of space used in a raidz configuration depends on the characteristics
of the data being written.
In addition, ZFS reserves some space for internal accounting that the
.Xr zfs 8
command takes into account, but the
.Nm
command does not.
For non-full pools of a reasonable size, these effects should be invisible.
For small pools, or pools that are close to being completely full, these
discrepancies may become more noticeable.
.Pp
The following property can be set at creation time and import time:
.Bl -tag -width Ds
.It Sy altroot
Alternate root directory.
If set, this directory is prepended to any mount points within the pool.
This can be used when examining an unknown pool where the mount points cannot be
trusted, or in an alternate boot environment, where the typical paths are not
valid.
.Sy altroot
is not a persistent property.
It is valid only while the system is up.
Setting
.Sy altroot
defaults to using
.Sy cachefile Ns = Ns Sy none ,
though this may be overridden using an explicit setting.
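.Pp
For example, an unknown pool can be imported under a temporary root
directory for inspection:
.Bd -literal
# zpool import -R /mnt pool
.Ed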
.El
.Pp
The following property can be set only at import time:
.Bl -tag -width Ds
.It Sy readonly Ns = Ns Sy on Ns | Ns Sy off
If set to
.Sy on ,
the pool will be imported in read-only mode.
This property can also be referred to by its shortened column name,
.Sy rdonly .
.El
.Pp
The following properties can be set at creation time and import time, and later
changed with the
.Nm zpool Cm set
command:
.Bl -tag -width Ds
.It Sy ashift Ns = Ns Sy ashift
Pool sector size exponent, to the power of
.Sy 2
(internally referred to as
.Sy ashift
). Values from 9 to 16, inclusive, are valid; also, the special
value 0 (the default) means to auto-detect using the kernel's block
layer and a ZFS internal exception list. I/O operations will be aligned
to the specified size boundaries. Additionally, the minimum (disk)
write size will be set to the specified size, so this represents a
space vs. performance trade-off. For optimal performance, the pool
sector size should be greater than or equal to the sector size of the
underlying disks. The typical case for setting this property is when
performance is important and the underlying disks use 4KiB sectors but
report 512B sectors to the OS (for compatibility reasons); in that
case, set
.Sy ashift=12
(which is 1<<12 = 4096). When set, this property is
used as the default hint value in subsequent vdev operations (add,
attach and replace). Changing this value will not modify any existing
vdev, not even on disk replacement; however it can be used, for
instance, to replace a dying 512B-sector disk with a newer 4KiB-sector
device: this will probably result in poor performance but could
prevent data loss.
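.Pp
For example, a pool aligned for disks with 4KiB sectors could be created
as follows:
.Bd -literal
# zpool create -o ashift=12 pool mirror sda sdb
.Ed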
.It Sy autoexpand Ns = Ns Sy on Ns | Ns Sy off
Controls automatic pool expansion when the underlying LUN is grown.
If set to
.Sy on ,
the pool will be resized according to the size of the expanded device.
If the device is part of a mirror or raidz then all devices within that
mirror/raidz group must be expanded before the new space is made available to
the pool.
The default behavior is
.Sy off .
This property can also be referred to by its shortened column name,
.Sy expand .
.It Sy autoreplace Ns = Ns Sy on Ns | Ns Sy off
Controls automatic device replacement.
If set to
.Sy off ,
device replacement must be initiated by the administrator by using the
.Nm zpool Cm replace
command.
If set to
.Sy on ,
any new device, found in the same physical location as a device that previously
belonged to the pool, is automatically formatted and replaced.
The default behavior is
.Sy off .
This property can also be referred to by its shortened column name,
.Sy replace .
Autoreplace can also be used with virtual disks (like device
mapper) provided that you use the /dev/disk/by-vdev paths set up by
vdev_id.conf. See the
.Xr vdev_id 8
man page for more details.
Autoreplace and autoonline require the ZFS Event Daemon be configured and
running. See the
.Xr zed 8
man page for more details.
.It Sy bootfs Ns = Ns Sy (unset) Ns | Ns Ar pool Ns / Ns Ar dataset
Identifies the default bootable dataset for the root pool. This property is
expected to be set mainly by the installation and upgrade programs.
Not all Linux distribution boot processes use the bootfs property.
.It Sy cachefile Ns = Ns Ar path Ns | Ns Sy none
Controls the location of where the pool configuration is cached.
Discovering all pools on system startup requires a cached copy of the
configuration data that is stored on the root file system.
All pools in this cache are automatically imported when the system boots.
Some environments, such as install and clustering, need to cache this
information in a different location so that pools are not automatically
imported.
Setting this property caches the pool configuration in a different location that
can later be imported with
.Nm zpool Cm import Fl c .
Setting it to the special value
.Sy none
creates a temporary pool that is never cached, and the special value
.Qq
.Pq empty string
uses the default location.
.Pp
Multiple pools can share the same cache file.
Because the kernel destroys and recreates this file when pools are added and
removed, care should be taken when attempting to access this file.
When the last pool using a
.Sy cachefile
is exported or destroyed, the file is removed.
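.Pp
For example, a pool can be created without adding it to any cache file:
.Bd -literal
# zpool create -o cachefile=none pool sda
.Ed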
.It Sy comment Ns = Ns Ar text
A text string consisting of printable ASCII characters that will be stored
such that it is available even if the pool becomes faulted.
An administrator can provide additional information about a pool using this
property.
.It Sy dedupditto Ns = Ns Ar number
Threshold for the number of block ditto copies.
If the reference count for a deduplicated block increases above this number, a
new ditto copy of this block is automatically stored.
The default setting is
.Sy 0
which causes no ditto copies to be created for deduplicated blocks.
The minimum legal nonzero setting is
.Sy 100 .
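.Pp
For example, to store an additional copy of each deduplicated block once
its reference count exceeds 100:
.Bd -literal
# zpool set dedupditto=100 pool
.Ed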
.It Sy delegation Ns = Ns Sy on Ns | Ns Sy off
Controls whether a non-privileged user is granted access based on the dataset
permissions defined on the dataset.
See
.Xr zfs 8
for more information on ZFS delegated administration.
.It Sy failmode Ns = Ns Sy wait Ns | Ns Sy continue Ns | Ns Sy panic
Controls the system behavior in the event of catastrophic pool failure.
This condition is typically a result of a loss of connectivity to the underlying
storage device(s) or a failure of all devices within the pool.
The behavior of such an event is determined as follows:
.Bl -tag -width "continue"
.It Sy wait
Blocks all I/O access until the device connectivity is recovered and the errors
are cleared.
This is the default behavior.
.It Sy continue
Returns
.Er EIO
to any new write I/O requests but allows reads to any of the remaining healthy
devices.
Any write requests that have yet to be committed to disk would be blocked.
.It Sy panic
Prints out a message to the console and generates a system crash dump.
.El
.It Sy feature@ Ns Ar feature_name Ns = Ns Sy enabled
The value of this property is the current state of
.Ar feature_name .
The only valid value when setting this property is
.Sy enabled
which moves
.Ar feature_name
to the enabled state.
See
.Xr zpool-features 5
for details on feature states.
.It Sy listsnapshots Ns = Ns Sy on Ns | Ns Sy off
Controls whether information about snapshots associated with this pool is
output when
.Nm zfs Cm list
is run without the
.Fl t
option.
The default value is
.Sy off .
This property can also be referred to by its shortened name,
.Sy listsnaps .
.It Sy version Ns = Ns Ar version
The current on-disk version of the pool.
This can be increased, but never decreased.
The preferred method of updating pools is with the
.Nm zpool Cm upgrade
command, though this property can be used when a specific version is needed for
backwards compatibility.
Once feature flags are enabled on a pool this property will no longer have a
value.
.El
.Ss Subcommands
All subcommands that modify state are logged persistently to the pool in their
original form.
.Pp
The
.Nm
command provides subcommands to create and destroy storage pools, add capacity
to storage pools, and provide information about the storage pools.
The following subcommands are supported:
.Bl -tag -width Ds
.It Xo
.Nm
.Fl ?
.Xc
Displays a help message.
.It Xo
.Nm
.Cm add
.Op Fl fgLnP
.Oo Fl o Ar property Ns = Ns Ar value Oc
.Ar pool vdev Ns ...
.Xc
Adds the specified virtual devices to the given pool.
The
.Ar vdev
specification is described in the
.Sx Virtual Devices
section.
The behavior of the
.Fl f
option, and the device checks performed are described in the
.Nm zpool Cm create
subcommand.
.Bl -tag -width Ds
.It Fl f
Forces use of
.Ar vdev Ns s ,
even if they appear in use or specify a conflicting replication level.
Not all devices can be overridden in this manner.
.It Fl g
Display
.Ar vdev
GUIDs instead of the normal device names. These GUIDs can be used in place of
device names for the zpool detach/offline/remove/replace commands.
.It Fl L
Display real paths for
.Ar vdev Ns s
resolving all symbolic links. This can be used to look up the current block
device name regardless of the /dev/disk/ path used to open it.
.It Fl n
Displays the configuration that would be used without actually adding the
.Ar vdev Ns s .
The actual pool creation can still fail due to insufficient privileges or
device sharing.
.It Fl P
Display full paths for
.Ar vdev Ns s
instead of only the last component of the path. This can be used in
conjunction with the
.Fl L
flag.
.It Fl o Ar property Ns = Ns Ar value
Sets the given pool properties. See the
.Sx Properties
section for a list of valid properties that can be set. The only property
supported at the moment is
.Sy ashift .
.El
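.Pp
For example, assuming two unused disks sde and sdf, an existing pool can
be grown by appending a new mirror:
.Bd -literal
# zpool add pool mirror sde sdf
.Ed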
.It Xo
.Nm
.Cm attach
.Op Fl f
.Oo Fl o Ar property Ns = Ns Ar value Oc
.Ar pool device new_device
.Xc
Attaches
.Ar new_device
to the existing
.Ar device .
The existing device cannot be part of a raidz configuration.
If
.Ar device
is not currently part of a mirrored configuration,
.Ar device
automatically transforms into a two-way mirror of
.Ar device
and
.Ar new_device .
If
.Ar device
is part of a two-way mirror, attaching
.Ar new_device
creates a three-way mirror, and so on.
In either case,
.Ar new_device
begins to resilver immediately.
.Bl -tag -width Ds
.It Fl f
Forces use of
.Ar new_device ,
even if it appears to be in use.
Not all devices can be overridden in this manner.
.It Fl o Ar property Ns = Ns Ar value
Sets the given pool properties. See the
.Sx Properties
section for a list of valid properties that can be set. The only property
supported at the moment is
.Sy ashift .
.El
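.Pp
For example, the following converts the single device sda into a two-way
mirror by attaching sdb:
.Bd -literal
# zpool attach pool sda sdb
.Ed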
.It Xo
.Nm
.Cm clear
.Ar pool
.Op Ar device
.Xc
Clears device errors in a pool.
If no arguments are specified, all device errors within the pool are cleared.
If one or more devices is specified, only those errors associated with the
specified device or devices are cleared.
.It Xo
.Nm
.Cm create
.Op Fl dfn
.Op Fl m Ar mountpoint
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Oo Fl o Ar feature@feature Ns = Ns Ar value Oc Ns ...
.Oo Fl O Ar file-system-property Ns = Ns Ar value Oc Ns ...
.Op Fl R Ar root
.Op Fl t Ar tname
.Ar pool vdev Ns ...
.Xc
Creates a new storage pool containing the virtual devices specified on the
command line.
The pool name must begin with a letter, and can only contain
alphanumeric characters as well as underscore
.Pq Qq Sy _ ,
dash
.Pq Qq Sy - ,
colon
.Pq Qq Sy \&: ,
space
.Pq Qq Sy \&\  ,
and period
.Pq Qq Sy \&. .
The pool names
.Sy mirror ,
.Sy raidz ,
.Sy spare
and
.Sy log
are reserved, as are names beginning with the pattern
.Sy c[0-9] .
The
.Ar vdev
specification is described in the
.Sx Virtual Devices
section.
.Pp
The command verifies that each device specified is accessible and not currently
in use by another subsystem.
There are some uses, such as being currently mounted, or specified as the
dedicated dump device, that prevent a device from ever being used by ZFS.
Other uses, such as having a preexisting UFS file system, can be overridden with
the
.Fl f
option.
.Pp
The command also checks that the replication strategy for the pool is
consistent.
An attempt to combine redundant and non-redundant storage in a single pool, or
to mix disks and files, results in an error unless
.Fl f
is specified.
The use of differently sized devices within a single raidz or mirror group is
also flagged as an error unless
.Fl f
is specified.
.Pp
Unless the
.Fl R
option is specified, the default mount point is
.Pa / Ns Ar pool .
The mount point must not exist or must be empty, or else the root dataset
cannot be mounted.
This can be overridden with the
.Fl m
option.
.Pp
By default all supported features are enabled on the new pool unless the
.Fl d
option is specified.
.Bl -tag -width Ds
.It Fl d
Do not enable any features on the new pool.
Individual features can be enabled by setting their corresponding properties to
.Sy enabled
with the
.Fl o
option.
See
.Xr zpool-features 5
for details about feature properties.
.It Fl f
Forces use of
.Ar vdev Ns s ,
even if they appear in use or specify a conflicting replication level.
Not all devices can be overridden in this manner.
.It Fl m Ar mountpoint
Sets the mount point for the root dataset.
The default mount point is
.Pa /pool
or
.Pa altroot/pool
if
.Ar altroot
is specified.
The mount point must be an absolute path,
.Sy legacy ,
or
.Sy none .
For more information on dataset mount points, see
.Xr zfs 8 .
.It Fl n
Displays the configuration that would be used without actually creating the
pool.
The actual pool creation can still fail due to insufficient privileges or
device sharing.
.It Fl o Ar property Ns = Ns Ar value
Sets the given pool properties.
See the
.Sx Properties
section for a list of valid properties that can be set.
.It Fl o Ar feature@feature Ns = Ns Ar value
Sets the given pool feature. See
.Xr zpool-features 5
for a list of valid features that can be set.
The value can be either
.Sy enabled
or
.Sy disabled .
.It Fl O Ar file-system-property Ns = Ns Ar value
Sets the given file system properties in the root file system of the pool.
See the
.Sx Properties
section of
.Xr zfs 8
for a list of valid properties that can be set.
.It Fl R Ar root
Equivalent to
.Fl o Sy cachefile Ns = Ns Sy none Fl o Sy altroot Ns = Ns Ar root
.It Fl t Ar tname
Sets the in-core pool name to
.Sy tname
while the on-disk name will be the name specified as the pool name
.Sy pool .
This will set the default cachefile property to none. This is intended
to handle name space collisions when creating pools for other systems,
such as virtual machines or physical machines whose pools live on network
block devices.
.El
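.Pp
For example, the following creates a mirrored pool whose root file system
has compression enabled:
.Bd -literal
# zpool create -O compression=on pool mirror sda sdb
.Ed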
.It Xo
.Nm
.Cm destroy
.Op Fl f
.Ar pool
.Xc
Destroys the given pool, freeing up any devices for other use.
This command tries to unmount any active datasets before destroying the pool.
.Bl -tag -width Ds
.It Fl f
Forces any active datasets contained within the pool to be unmounted.
.El
.It Xo
.Nm
.Cm detach
.Ar pool device
.Xc
Detaches
.Ar device
from a mirror.
The operation is refused if there are no other valid replicas of the data.
If the device may be re-added to the pool later on, then consider the
.Nm zpool Cm offline
command instead.
.It Xo
.Nm
.Cm events
.Op Fl cfHv
.Op Ar pool Ns ...
.Xc
Lists all recent events generated by the ZFS kernel modules. These events
are consumed by the
.Xr zed 8
and used to automate administrative tasks such as replacing a failed device
with a hot spare. For more information about the subclasses and event payloads
that can be generated see the
.Xr zfs-events 5
man page.
.Bl -tag -width Ds
.It Fl c
Clear all previous events.
.It Fl f
Follow mode.
.It Fl H
Scripted mode. Do not display headers, and separate fields by a
single tab instead of arbitrary space.
.It Fl v
Print the entire payload for each event.
.El
.It Xo
.Nm
.Cm export
.Op Fl a
.Op Fl f
.Ar pool Ns ...
.Xc
Exports the given pools from the system.
All devices are marked as exported, but are still considered in use by other
subsystems.
The devices can be moved between systems
.Pq even those of different endianness
and imported as long as a sufficient number of devices are present.
.Pp
Before exporting the pool, all datasets within the pool are unmounted.
A pool can not be exported if it has a shared spare that is currently being
used.
.Pp
For pools to be portable, you must give the
.Nm
command whole disks, not just partitions, so that ZFS can label the disks with
portable EFI labels.
Otherwise, disk drivers on platforms of different endianness will not recognize
the disks.
.Bl -tag -width Ds
.It Fl a
Exports all pools imported on the system.
.It Fl f
Forcefully unmount all datasets, using the
.Nm zfs Cm unmount Fl f
command.
.Pp
This command will forcefully export the pool even if it has a shared spare that
is currently being used.
This may lead to potential data corruption.
.El
.It Xo
.Nm
.Cm get
.Op Fl Hp
.Op Fl o Ar field Ns Oo , Ns Ar field Oc Ns ...
.Sy all Ns | Ns Ar property Ns Oo , Ns Ar property Oc Ns ...
.Ar pool Ns ...
.Xc
Retrieves the given list of properties
.Po
or all properties if
.Sy all
is used
.Pc
for the specified storage pool(s).
These properties are displayed with the following fields:
.Bd -literal
        name          Name of storage pool
        property      Property name
        value         Property value
        source        Property source, either 'default' or 'local'.
.Ed
.Pp
See the
.Sx Properties
section for more information on the available pool properties.
.Bl -tag -width Ds
.It Fl H
Scripted mode.
Do not display headers, and separate fields by a single tab instead of arbitrary
space.
.It Fl o Ar field
A comma-separated list of columns to display.
.Sy name Ns , Ns Sy property Ns , Ns Sy value Ns , Ns Sy source
is the default value.
.It Fl p
Display numbers in parsable (exact) values.
.El
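.Pp
For example, a single property value can be retrieved in a form suitable
for scripting:
.Bd -literal
# zpool get -Hp -o value free pool
.Ed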
.It Xo
.Nm
.Cm history
.Op Fl il
.Oo Ar pool Oc Ns ...
.Xc
Displays the command history of the specified pool(s) or all pools if no pool is
specified.
.Bl -tag -width Ds
.It Fl i
Displays internally logged ZFS events in addition to user initiated events.
.It Fl l
Displays log records in long format, which, in addition to the standard format,
includes the user name, the hostname, and the zone in which the operation was
performed.
.El
.It Xo
.Nm
.Cm import
.Op Fl D
.Op Fl c Ar cachefile Ns | Ns Fl d Ar dir
.Xc
Lists pools available to import.
If the
.Fl d
option is not specified, this command searches for devices in
.Pa /dev .
The
.Fl d
option can be specified multiple times, and all directories are searched.
If the device appears to be part of an exported pool, this command displays a
summary of the pool with the name of the pool, a numeric identifier, as well as
the vdev layout and current health of the device for each device or file.
Destroyed pools, pools that were previously destroyed with the
.Nm zpool Cm destroy
command, are not listed unless the
.Fl D
option is specified.
.Pp
The numeric identifier is unique, and can be used instead of the pool name when
multiple exported pools of the same name are available.
.Bl -tag -width Ds
.It Fl c Ar cachefile
Reads configuration from the given
.Ar cachefile
that was created with the
.Sy cachefile
pool property.
This
.Ar cachefile
is used instead of searching for devices.
.It Fl d Ar dir
Searches for devices or files in
.Ar dir .
The
.Fl d
option can be specified multiple times.
.It Fl D
Lists destroyed pools only.
.El
.It Xo
.Nm
.Cm import
.Fl a
.Op Fl DfmN
.Op Fl F Oo Fl n Oc Oo Fl T Oc Oo Fl X Oc
.Op Fl c Ar cachefile Ns | Ns Fl d Ar dir
.Op Fl o Ar mntopts
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Op Fl R Ar root
.Op Fl s
.Xc
Imports all pools found in the search directories.
Identical to the previous command, except that all pools with a sufficient
number of devices available are imported.
Destroyed pools, pools that were previously destroyed with the
.Nm zpool Cm destroy
command, will not be imported unless the
.Fl D
option is specified.
.Bl -tag -width Ds
.It Fl a
Searches for and imports all pools found.
.It Fl c Ar cachefile
Reads configuration from the given
.Ar cachefile
that was created with the
.Sy cachefile
pool property.
This
.Ar cachefile
is used instead of searching for devices.
.It Fl d Ar dir
Searches for devices or files in
.Ar dir .
The
.Fl d
option can be specified multiple times.
This option is incompatible with the
.Fl c
option.
.It Fl D
Imports destroyed pools only.
The
.Fl f
option is also required.
.It Fl f
Forces import, even if the pool appears to be potentially active.
.It Fl F
Recovery mode for a non-importable pool.
Attempt to return the pool to an importable state by discarding the last few
transactions.
Not all damaged pools can be recovered by using this option.
If successful, the data from the discarded transactions is irretrievably lost.
This option is ignored if the pool is importable or already imported.
.It Fl m
Allows a pool to import when there is a missing log device.
Recent transactions can be lost because the log device will be discarded.
.It Fl n
Used with the
.Fl F
recovery option.
Determines whether a non-importable pool can be made importable again, but does
not actually perform the pool recovery.
For more details about pool recovery mode, see the
.Fl F
option, above.
.It Fl N
Import the pool without mounting any file systems.
.It Fl o Ar mntopts
Comma-separated list of mount options to use when mounting datasets within the
pool.
See
.Xr zfs 8
for a description of dataset properties and mount options.
.It Fl o Ar property Ns = Ns Ar value
Sets the specified property on the imported pool.
See the
.Sx Properties
section for more information on the available pool properties.
.It Fl R Ar root
Sets the
.Sy cachefile
property to
.Sy none
and the
.Sy altroot
property to
.Ar root .
.It Fl s
Scan using the default search path; the libblkid cache will not be
consulted. A custom search path may be specified by setting the
ZPOOL_IMPORT_PATH environment variable.
.It Fl X
Used with the
.Fl F
recovery option. Determines whether extreme
measures to find a valid txg should take place. This allows the pool to
be rolled back to a txg which is no longer guaranteed to be consistent.
Pools imported at an inconsistent txg may contain uncorrectable
checksum errors. For more details about pool recovery mode, see the
.Fl F
option, above. WARNING: This option can be extremely hazardous to the
health of your pool and should only be used as a last resort.
.It Fl T
Specify the txg to use for rollback. Implies
.Fl FX .
For more details
about pool recovery mode, see the
.Fl X
option, above. WARNING: This option can be extremely hazardous to the
health of your pool and should only be used as a last resort.
.El
.It Xo
.Nm
.Cm import
.Op Fl Dfm
.Op Fl F Oo Fl n Oc Oo Fl t Oc Oo Fl T Oc Oo Fl X Oc
.Op Fl c Ar cachefile Ns | Ns Fl d Ar dir
.Op Fl o Ar mntopts
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Op Fl R Ar root
.Op Fl s
.Ar pool Ns | Ns Ar id
.Op Ar newpool
.Xc
Imports a specific pool.
A pool can be identified by its name or the numeric identifier.
If
.Ar newpool
is specified, the pool is imported using the name
.Ar newpool .
Otherwise, it is imported with the same name as its exported name.
.Pp
If a device is removed from a system without running
.Nm zpool Cm export
first, the device appears as potentially active.
It cannot be determined if this was a failed export, or whether the device is
really in use from another host.
To import a pool in this state, the
.Fl f
option is required.
.Bl -tag -width Ds
.It Fl c Ar cachefile
Reads configuration from the given
.Ar cachefile
that was created with the
.Sy cachefile
pool property.
This
.Ar cachefile
is used instead of searching for devices.
.It Fl d Ar dir
Searches for devices or files in
.Ar dir .
The
.Fl d
option can be specified multiple times.
This option is incompatible with the
.Fl c
option.
.It Fl D
Imports a destroyed pool.
The
.Fl f
option is also required.
.It Fl f
Forces import, even if the pool appears to be potentially active.
.It Fl F
Recovery mode for a non-importable pool.
Attempt to return the pool to an importable state by discarding the last few
transactions.
Not all damaged pools can be recovered by using this option.
If successful, the data from the discarded transactions is irretrievably lost.
This option is ignored if the pool is importable or already imported.
.It Fl m
Allows a pool to import when there is a missing log device.
Recent transactions can be lost because the log device will be discarded.
.It Fl n
Used with the
.Fl F
recovery option.
Determines whether a non-importable pool can be made importable again, but does
not actually perform the pool recovery.
For more details about pool recovery mode, see the
.Fl F
option, above.
.It Fl o Ar mntopts
Comma-separated list of mount options to use when mounting datasets within the
pool.
See
.Xr zfs 8
for a description of dataset properties and mount options.
.It Fl o Ar property Ns = Ns Ar value
Sets the specified property on the imported pool.
See the
.Sx Properties
section for more information on the available pool properties.
.It Fl R Ar root
Sets the
.Sy cachefile
property to
.Sy none
and the
.Sy altroot
property to
.Ar root .
.It Fl s
Scan using the default search path; the libblkid cache will not be
consulted. A custom search path may be specified by setting the
ZPOOL_IMPORT_PATH environment variable.
.It Fl X
Used with the
.Fl F
recovery option. Determines whether extreme
measures to find a valid txg should take place. This allows the pool to
be rolled back to a txg which is no longer guaranteed to be consistent.
Pools imported at an inconsistent txg may contain uncorrectable
checksum errors. For more details about pool recovery mode, see the
.Fl F
option, above. WARNING: This option can be extremely hazardous to the
health of your pool and should only be used as a last resort.
.It Fl T
Specify the txg to use for rollback. Implies
.Fl FX .
For more details
about pool recovery mode, see the
.Fl X
option, above. WARNING: This option can be extremely hazardous to the
health of your pool and should only be used as a last resort.
.It Fl t
Used with
.Sy newpool .
Specifies that
.Sy newpool
is temporary. Temporary pool names last until export. Ensures that
the original pool name will be used in all label updates and therefore
is retained upon export.
Will also set -o cachefile=none when not explicitly specified.
.El
.It Xo
.Nm
.Cm iostat
.Op Oo Oo Fl c Ar SCRIPT Oc Oo Fl lq Oc Oc Ns | Ns Fl rw
.Op Fl T Sy u Ns | Ns Sy d
.Op Fl ghHLpPvy
.Oo Oo Ar pool Ns ... Oc Ns | Ns Oo Ar pool vdev Ns ... Oc Ns | Ns Oo Ar vdev Ns ... Oc Oc
.Op Ar interval Op Ar count
.Xc
Displays I/O statistics for the given pools/vdevs. You can pass in a
list of pools, a pool and list of vdevs in that pool, or a list of any
vdevs from any pool. If no items are specified, statistics for every
pool in the system are shown.
When given an
.Ar interval ,
the statistics are printed every
.Ar interval
seconds until ^C is pressed. If count is specified, the command exits
after count reports are printed. The first report printed is always
the statistics since boot regardless of whether
.Ar interval
and
.Ar count
are passed. However, this behavior can be suppressed with the
.Fl y
flag. Also note that the units of
.Sy K ,
.Sy M ,
.Sy G ...
that are printed in the report are in base 1024. To get the raw
values, use the
.Fl p
flag.
.Bl -tag -width Ds
.It Fl c Op Ar SCRIPT1 , Ar SCRIPT2 ...
Run a script (or scripts) on each vdev and include the output as a new column
in the
.Nm zpool Cm iostat
output. Users can run any script found in their
.Pa ~/.zpool.d
directory or from the system
.Pa /etc/zfs/zpool.d
directory. The default search path can be overridden by setting the
ZPOOL_SCRIPTS_PATH environment variable. A privileged user can run
.Fl c
if they have the ZPOOL_SCRIPTS_AS_ROOT
environment variable set. If a script requires the use of a privileged
command, like
.Xr smartctl 8 ,
then it is recommended you allow the user access to it in
.Pa /etc/sudoers
or add the user to the
.Pa /etc/sudoers.d/zfs
file.
.Pp
If
.Fl c
is passed without a script name, it prints a list of all scripts.
.Fl c
also sets verbose mode
.Pq Fl v .
.Pp
Script output should be in the form of "name=value". The column name is
set to "name" and the value is set to "value". Multiple lines can be
used to output multiple columns. The first line of output not in the
"name=value" format is displayed without a column title, and no more
output after that is displayed. This can be useful for printing error
messages. Blank or NULL values are printed as a '-' to make output
awk-able.
.Pp
The following environment variables are set before running each script:
.Pp
.Bl -tag -width "VDEV_PATH"
.It Sy VDEV_PATH
Full path to the vdev
.El
.Bl -tag -width "VDEV_UPATH"
.It Sy VDEV_UPATH
Underlying path to the vdev (/dev/sd*). For use with device mapper,
multipath, or partitioned vdevs.
.El
.Bl -tag -width "VDEV_ENC_SYSFS_PATH"
.It Sy VDEV_ENC_SYSFS_PATH
The sysfs path to the enclosure for the vdev (if any).
.El
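.Pp
As a minimal sketch, a user script
.Pq here given the hypothetical name Pa upath
could emit one extra column from these variables:
.Bd -literal
#!/bin/sh
# Print the underlying device path as a column named "upath".
# VDEV_UPATH is set by zpool iostat -c before each invocation.
echo "upath=$VDEV_UPATH"
.Ed
.Pp
Placed in
.Pa ~/.zpool.d/upath
and made executable, it could then be run with
.Nm zpool Cm iostat Fl c Ar upath .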
.It Fl T Sy u Ns | Ns Sy d
Display a time stamp.
Specify
.Sy u
for a printed representation of the internal representation of time.
See
.Xr time 2 .
Specify
.Sy d
for standard date format.
See
.Xr date 1 .
.It Fl g
Display vdev GUIDs instead of the normal device names. These GUIDs
can be used in place of device names for the zpool
detach/offline/remove/replace commands.
.It Fl H
Scripted mode. Do not display headers, and separate fields by a
single tab instead of arbitrary space.
.It Fl L
Display real paths for vdevs resolving all symbolic links. This can
be used to look up the current block device name regardless of the
.Pa /dev/disk/
path used to open it.
.It Fl p
Display numbers in parsable (exact) values. Time values are in
nanoseconds.
.It Fl P
Display full paths for vdevs instead of only the last component of
the path. This can be used in conjunction with the
.Fl L
flag.
.It Fl r
Print request size histograms for the leaf ZIOs. This includes
histograms of individual ZIOs (
.Ar ind )
and aggregate ZIOs (
.Ar agg ).
These stats can be useful for seeing how well the ZFS IO aggregator is
working. Do not confuse these request size stats with the block layer
requests; it's possible ZIOs can be broken up before being sent to the
block device.
.It Fl v
Verbose statistics.
Reports usage statistics for individual vdevs within the
pool, in addition to the pool-wide statistics.
.It Fl y
Omit statistics since boot.
Normally the first report printed shows the statistics since boot; this option
suppresses that report, so only statistics for the requested intervals are
shown.
.It Fl w
Display latency histograms instead of the average latencies described under
the
.Fl l
flag below.
.It Fl l
Include average latency statistics:
.Pp
.Ar total_wait :
Average total IO time (queuing + disk IO time).
.Ar disk_wait :
Average disk IO time (time reading/writing the disk).
.Ar syncq_wait :
Average amount of time IO spent in synchronous priority queues. Does
not include disk time.
.Ar asyncq_wait :
Average amount of time IO spent in asynchronous priority queues.
Does not include disk time.
.Ar scrub :
Average queuing time in scrub queue. Does not include disk time.
.It Fl q
Include active queue statistics. Each priority queue has both
pending (
.Ar pend )
and active (
.Ar activ )
IOs. Pending IOs are waiting to
be issued to the disk, and active IOs have been issued to disk and are
waiting for completion. These stats are broken out by priority queue:
.Pp
.Ar syncq_read/write :
Current number of entries in synchronous priority
queues.
.Ar asyncq_read/write :
Current number of entries in asynchronous priority queues.
.Ar scrubq_read :
Current number of entries in scrub queue.
.Pp
All queue statistics are instantaneous measurements of the number of
entries in the queues. If you specify an interval, the measurements
will be sampled from the end of the interval.
.El
.It Xo
.Nm
.Cm labelclear
.Op Fl f
.Ar device
.Xc
Removes ZFS label information from the specified
.Ar device .
The
.Ar device
must not be part of an active pool configuration.
.Bl -tag -width Ds
.It Fl f
Treat exported or foreign devices as inactive.
.El
.It Xo
.Nm
.Cm list
.Op Fl HgLpPv
.Op Fl o Ar property Ns Oo , Ns Ar property Oc Ns ...
.Op Fl T Sy u Ns | Ns Sy d
.Oo Ar pool Oc Ns ...
.Op Ar interval Op Ar count
.Xc
Lists the given pools along with a health status and space usage.
If no
.Ar pool Ns s
are specified, all pools in the system are listed.
When given an
.Ar interval ,
the information is printed every
.Ar interval
seconds until ^C is pressed.
If
.Ar count
is specified, the command exits after
.Ar count
reports are printed.
.Bl -tag -width Ds
.It Fl g
Display vdev GUIDs instead of the normal device names. These GUIDs
can be used in place of device names for the zpool
detach/offline/remove/replace commands.
.It Fl H
Scripted mode.
Do not display headers, and separate fields by a single tab instead of arbitrary
space.
.It Fl o Ar property
Comma-separated list of properties to display.
See the
.Sx Properties
section for a list of valid properties.
The default list is
.Sy name, size, alloc, free, fragmentation, expandsize, capacity,
.Sy dedupratio, health, altroot .
.It Fl L
Display real paths for vdevs resolving all symbolic links. This can
be used to look up the current block device name regardless of the
/dev/disk/ path used to open it.
.It Fl p
Display numbers in parsable
.Pq exact
values.
.It Fl P
Display full paths for vdevs instead of only the last component of
the path. This can be used in conjunction with the
.Fl L
flag.
.It Fl T Sy u Ns | Ns Sy d
Display a time stamp.
Specify
.Sy u
for a printed representation of the internal representation of time.
See
.Xr time 2 .
Specify
.Sy d
for standard date format.
See
.Xr date 1 .
.It Fl v
Verbose statistics.
1655 Reports usage statistics for individual vdevs within the pool, in addition to
1656 the pool-wide statistics.
1657 .El
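.Pp
For example, the following command lists all pools every 10 seconds, showing
only the name, size, capacity, and health columns:
.Bd -literal
# zpool list -o name,size,capacity,health 10
.Ed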
1658 .It Xo
1659 .Nm
1660 .Cm offline
1661 .Op Fl f
1662 .Op Fl t
1663 .Ar pool Ar device Ns ...
1664 .Xc
1665 Takes the specified physical device offline.
1666 While the
1667 .Ar device
1668 is offline, no attempt is made to read or write to the device.
1669 This command is not applicable to spares.
1670 .Bl -tag -width Ds
1671 .It Fl f
1672 Force fault. Instead of offlining the disk, put it into a faulted
1673 state. The fault will persist across imports unless the
1674 .Fl t
1675 flag was specified.
1676 .It Fl t
1677 Temporary.
1678 Upon reboot, the specified physical device reverts to its previous state.
1679 .El
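.Pp
For example, the following command temporarily takes a disk offline in a
hypothetical pool
.Em tank ;
the device will return to its previous state after a reboot:
.Bd -literal
# zpool offline -t tank sda
.Ed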
1680 .It Xo
1681 .Nm
1682 .Cm online
1683 .Op Fl e
1684 .Ar pool Ar device Ns ...
1685 .Xc
1686 Brings the specified physical device online.
1687 This command is not applicable to spares or cache devices.
1688 .Bl -tag -width Ds
1689 .It Fl e
1690 Expand the device to use all available space.
1691 If the device is part of a mirror or raidz, then all devices must be expanded
1692 before the new space will become available to the pool.
1693 .El
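.Pp
For example, the following command brings a device back online and expands it
to use all available space
.Pq pool and device names hypothetical :
.Bd -literal
# zpool online -e tank sda
.Ed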
1694 .It Xo
1695 .Nm
1696 .Cm reguid
1697 .Ar pool
1698 .Xc
1699 Generates a new unique identifier for the pool.
1700 You must ensure that all devices in this pool are online and healthy before
1701 performing this action.
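.Pp
For example, assuming a healthy pool
.Em tank :
.Bd -literal
# zpool reguid tank
.Ed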
1702 .It Xo
1703 .Nm
1704 .Cm reopen
1705 .Ar pool
1706 .Xc
1707 Reopen all the vdevs associated with the pool.
1708 .It Xo
1709 .Nm
1710 .Cm remove
1711 .Ar pool Ar device Ns ...
1712 .Xc
1713 Removes the specified device from the pool.
1714 This command currently only supports removing hot spares, cache, and log
1715 devices.
1716 A mirrored log device can be removed by specifying the top-level mirror for the
1717 log.
1718 Non-log devices that are part of a mirrored configuration can be removed using
1719 the
1720 .Nm zpool Cm detach
1721 command.
1722 Non-redundant and raidz devices cannot be removed from a pool.
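.Pp
For example, the following commands remove a hot spare and a mirrored log
device from a hypothetical pool
.Em tank :
.Bd -literal
# zpool remove tank sdc
# zpool remove tank mirror-2
.Ed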
1723 .It Xo
1724 .Nm
1725 .Cm replace
1726 .Op Fl f
1727 .Op Fl o Ar property Ns = Ns Ar value
1728 .Ar pool Ar old_device Op Ar new_device
1729 .Xc
1730 Replaces
1731 .Ar old_device
1732 with
1733 .Ar new_device .
1734 This is equivalent to attaching
1735 .Ar new_device ,
1736 waiting for it to resilver, and then detaching
1737 .Ar old_device .
1738 .Pp
1739 The size of
1740 .Ar new_device
1741 must be greater than or equal to the minimum size of all the devices in a mirror
1742 or raidz configuration.
1743 .Pp
1744 .Ar new_device
1745 is required if the pool is not redundant.
1746 If
1747 .Ar new_device
1748 is not specified, it defaults to
1749 .Ar old_device .
1750 This form of replacement is useful after an existing disk has failed and has
1751 been physically replaced.
1752 In this case, the new disk may have the same
1753 .Pa /dev
1754 path as the old device, even though it is actually a different disk.
1755 ZFS recognizes this.
1756 .Bl -tag -width Ds
1757 .It Fl f
1758 Forces use of
1759 .Ar new_device ,
1760 even if it appears to be in use.
1761 Not all devices can be overridden in this manner.
1762 .It Fl o Ar property Ns = Ns Ar value
1763 Sets the given pool properties. See the
1764 .Sx Properties
1765 section for a list of valid properties that can be set.
1766 The only property supported at the moment is
1767 .Sy ashift .
1768 .El
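.Pp
For example, the following command replaces a failed disk, forcing the
replacement vdev to use 4096-byte sectors
.Pq pool and device names hypothetical :
.Bd -literal
# zpool replace -o ashift=12 tank sda sdb
.Ed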
1769 .It Xo
1770 .Nm
1771 .Cm scrub
1772 .Op Fl s
1773 .Ar pool Ns ...
1774 .Xc
1775 Begins a scrub.
1776 The scrub examines all data in the specified pools to verify that it checksums
1777 correctly.
1778 For replicated
1779 .Pq mirror or raidz
1780 devices, ZFS automatically repairs any damage discovered during the scrub.
1781 The
1782 .Nm zpool Cm status
1783 command reports the progress of the scrub and summarizes the results of the
1784 scrub upon completion.
1785 .Pp
1786 Scrubbing and resilvering are very similar operations.
1787 The difference is that resilvering only examines data that ZFS knows to be out
1788 of date
1789 .Po
1790 for example, when attaching a new device to a mirror or replacing an existing
1791 device
1792 .Pc ,
1793 whereas scrubbing examines all data to discover silent errors due to hardware
1794 faults or disk failure.
1795 .Pp
1796 Because scrubbing and resilvering are I/O-intensive operations, ZFS only allows
1797 one at a time.
1798 If a scrub is already in progress, the
1799 .Nm zpool Cm scrub
1800 command terminates it and starts a new scrub.
1801 If a resilver is in progress, ZFS does not allow a scrub to be started until the
1802 resilver completes.
1803 .Bl -tag -width Ds
1804 .It Fl s
1805 Stop scrubbing.
1806 .El
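.Pp
For example, the following commands start a scrub of the pool
.Em tank
and later stop it:
.Bd -literal
# zpool scrub tank
# zpool scrub -s tank
.Ed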
1807 .It Xo
1808 .Nm
1809 .Cm set
1810 .Ar property Ns = Ns Ar value
1811 .Ar pool
1812 .Xc
1813 Sets the given property on the specified pool.
1814 See the
1815 .Sx Properties
1816 section for more information on what properties can be set and acceptable
1817 values.
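.Pp
For example, the following command sets the
.Sy autoexpand
property on a hypothetical pool
.Em tank :
.Bd -literal
# zpool set autoexpand=on tank
.Ed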
1818 .It Xo
1819 .Nm
1820 .Cm split
1821 .Op Fl gLnP
1822 .Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
1823 .Op Fl R Ar root
1824 .Ar pool newpool
1825 .Op Ar device ...
1826 .Xc
1827 Splits devices off
1828 .Ar pool ,
1829 creating
1830 .Ar newpool .
1831 All vdevs in
1832 .Ar pool
1833 must be mirrors and the pool must not be in the process of resilvering.
1834 At the time of the split,
1835 .Ar newpool
1836 will be a replica of
1837 .Ar pool .
1838 By default, the
1839 last device in each mirror is split from
1840 .Ar pool
1841 to create
1842 .Ar newpool .
1843 .Pp
1844 The optional device specification causes the specified device(s) to be
1845 included in
1846 .Ar newpool ;
1847 for any mirror in which no device is specified,
1848 the last device in that mirror is used, as in the default behavior.
1849 .Bl -tag -width Ds
1850 .It Fl g
1851 Display vdev GUIDs instead of the normal device names. These GUIDs
1852 can be used in place of device names for the zpool
1853 detach/offline/remove/replace commands.
1854 .It Fl L
1855 Display real paths for vdevs resolving all symbolic links. This can
1856 be used to look up the current block device name regardless of the
1857 .Pa /dev/disk/
1858 path used to open it.
1859 .It Fl n
1860 Do a dry run; do not actually perform the split.
1861 Print out the expected configuration of
1862 .Ar newpool .
1863 .It Fl P
1864 Display full paths for vdevs instead of only the last component of
1865 the path. This can be used in conjunction with the
1866 .Fl L
flag.
1867 .It Fl o Ar property Ns = Ns Ar value
1868 Sets the specified property for
1869 .Ar newpool .
1870 See the
1871 .Sx Properties
1872 section for more information on the available pool properties.
1873 .It Fl R Ar root
1874 Set
1875 .Sy altroot
1876 for
1877 .Ar newpool
1878 to
1879 .Ar root
1880 and automatically import it.
1881 .El
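.Pp
For example, the following command splits a new pool
.Em newtank
off the mirrored pool
.Em tank
and imports it under the alternate root
.Pa /mnt
.Pq names hypothetical :
.Bd -literal
# zpool split -R /mnt tank newtank
.Ed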
1882 .It Xo
1883 .Nm
1884 .Cm status
1885 .Op Fl c Op Ar SCRIPT1 , Ar SCRIPT2 ...
1886 .Op Fl gLpPvxD
1887 .Op Fl T Sy u Ns | Ns Sy d
1888 .Oo Ar pool Oc Ns ...
1889 .Op Ar interval Op Ar count
1890 .Xc
1891 Displays the detailed health status for the given pools.
1892 If no
1893 .Ar pool
1894 is specified, then the status of each pool in the system is displayed.
1895 For more information on pool and device health, see the
1896 .Sx Device Failure and Recovery
1897 section.
1898 .Pp
1899 If a scrub or resilver is in progress, this command reports the percentage done
1900 and the estimated time to completion.
1901 Both of these are only approximate, because the amount of data in the pool and
1902 the other workloads on the system can change.
1903 .Bl -tag -width Ds
1904 .It Fl c Op Ar SCRIPT1 , Ar SCRIPT2 ...
1905 Run a script (or scripts) on each vdev and include the output as a new column
1906 in the
1907 .Nm zpool Cm status
1908 output. See the
1909 .Fl c
1910 option of
1911 .Nm zpool Cm iostat
1912 for complete details.
1913 .It Fl g
1914 Display vdev GUIDs instead of the normal device names. These GUIDs
1915 can be used in place of device names for the zpool
1916 detach/offline/remove/replace commands.
1917 .It Fl L
1918 Display real paths for vdevs resolving all symbolic links. This can
1919 be used to look up the current block device name regardless of the
1920 .Pa /dev/disk/
1921 path used to open it.
1922 .It Fl p
1923 Display numbers in parsable (exact) values. Time values are in
1924 nanoseconds.
.It Fl P
Display full paths for vdevs instead of only the last component of
the path. This can be used in conjunction with the
.Fl L
flag.
1925 .It Fl D
1926 Display a histogram of deduplication statistics, showing the allocated
1927 .Pq physically present on disk
1928 and referenced
1929 .Pq logically referenced in the pool
1930 block counts and sizes by reference count.
1931 .It Fl T Sy u Ns | Ns Sy d
1932 Display a time stamp.
1933 Specify
1934 .Fl u
1935 for the internal representation of time, printed as seconds since the Epoch.
1936 See
1937 .Xr time 2 .
1938 Specify
1939 .Fl d
1940 for standard date format.
1941 See
1942 .Xr date 1 .
1943 .It Fl v
1944 Displays verbose data error information, printing out a complete list of all
1945 data errors since the last complete pool scrub.
1946 .It Fl x
1947 Only display status for pools that are exhibiting errors or are otherwise
1948 unavailable.
1949 Warnings about pools not using the latest on-disk format will not be included.
1950 .El
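.Pp
For example, the following command displays verbose status only for pools
that are exhibiting errors or are otherwise unavailable:
.Bd -literal
# zpool status -xv
.Ed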
1951 .It Xo
1952 .Nm
1953 .Cm sync
1954 .Op Ar pool ...
1955 .Xc
1956 This command forces all in-core dirty data to be written to the primary
1957 pool storage and not the ZIL. It will also update administrative
1958 information including quota reporting. Without arguments,
1959 .Nm zpool Cm sync
1960 will sync all pools on the system. Otherwise, it will sync only the
1961 specified pool(s).
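.Pp
For example, the following commands sync all pools, and then only the
hypothetical pool
.Em tank :
.Bd -literal
# zpool sync
# zpool sync tank
.Ed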
1962 .It Xo
1963 .Nm
1964 .Cm upgrade
1965 .Xc
1966 Displays pools which do not have all supported features enabled and pools
1967 formatted using a legacy ZFS version number.
1968 These pools can continue to be used, but some features may not be available.
1969 Use
1970 .Nm zpool Cm upgrade Fl a
1971 to enable all features on all pools.
1972 .It Xo
1973 .Nm
1974 .Cm upgrade
1975 .Fl v
1976 .Xc
1977 Displays legacy ZFS versions supported by the current software.
1978 See
1979 .Xr zpool-features 5
1980 for a description of the feature flags supported by the current software.
1981 .It Xo
1982 .Nm
1983 .Cm upgrade
1984 .Op Fl V Ar version
1985 .Fl a Ns | Ns Ar pool Ns ...
1986 .Xc
1987 Enables all supported features on the given pool.
1988 Once this is done, the pool will no longer be accessible on systems that do not
1989 support feature flags.
1990 See
1991 .Xr zpool-features 5
1992 for details on compatibility with systems that support feature flags, but do not
1993 support all features enabled on the pool.
1994 .Bl -tag -width Ds
1995 .It Fl a
1996 Enables all supported features on all pools.
1997 .It Fl V Ar version
1998 Upgrade to the specified legacy version.
1999 If the
2000 .Fl V
2001 flag is specified, no features will be enabled on the pool.
2002 This option can only be used to increase the version number up to the last
2003 supported legacy version number.
2004 .El
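.Pp
For example, the following command enables all supported features on a
hypothetical pool
.Em tank :
.Bd -literal
# zpool upgrade tank
.Ed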
2005 .El
2006 .Sh EXIT STATUS
2007 The following exit values are returned:
2008 .Bl -tag -width Ds
2009 .It Sy 0
2010 Successful completion.
2011 .It Sy 1
2012 An error occurred.
2013 .It Sy 2
2014 Invalid command line options were specified.
2015 .El
2016 .Sh EXAMPLES
2017 .Bl -tag -width Ds
2018 .It Sy Example 1 No Creating a RAID-Z Storage Pool
2019 The following command creates a pool with a single raidz root vdev that
2020 consists of six disks.
2021 .Bd -literal
2022 # zpool create tank raidz sda sdb sdc sdd sde sdf
2023 .Ed
2024 .It Sy Example 2 No Creating a Mirrored Storage Pool
2025 The following command creates a pool with two mirrors, where each mirror
2026 contains two disks.
2027 .Bd -literal
2028 # zpool create tank mirror sda sdb mirror sdc sdd
2029 .Ed
2030 .It Sy Example 3 No Creating a ZFS Storage Pool by Using Partitions
2031 The following command creates an unmirrored pool using two disk partitions.
2032 .Bd -literal
2033 # zpool create tank sda1 sdb2
2034 .Ed
2035 .It Sy Example 4 No Creating a ZFS Storage Pool by Using Files
2036 The following command creates an unmirrored pool using files.
2037 While not recommended, a pool based on files can be useful for experimental
2038 purposes.
2039 .Bd -literal
2040 # zpool create tank /path/to/file/a /path/to/file/b
2041 .Ed
2042 .It Sy Example 5 No Adding a Mirror to a ZFS Storage Pool
2043 The following command adds two mirrored disks to the pool
2044 .Em tank ,
2045 assuming the pool is already made up of two-way mirrors.
2046 The additional space is immediately available to any datasets within the pool.
2047 .Bd -literal
2048 # zpool add tank mirror sda sdb
2049 .Ed
2050 .It Sy Example 6 No Listing Available ZFS Storage Pools
2051 The following command lists all available pools on the system.
2052 In this case, the pool
2053 .Em zion
2054 is faulted due to a missing device.
2055 The results from this command are similar to the following:
2056 .Bd -literal
2057 # zpool list
2058 NAME    SIZE  ALLOC   FREE  FRAG  EXPANDSZ    CAP  DEDUP  HEALTH  ALTROOT
2059 rpool  19.9G  8.43G  11.4G   33%         -    42%  1.00x  ONLINE  -
2060 tank   61.5G  20.0G  41.5G   48%         -    32%  1.00x  ONLINE  -
2061 zion       -      -      -     -         -      -      -  FAULTED  -
2062 .Ed
2063 .It Sy Example 7 No Destroying a ZFS Storage Pool
2064 The following command destroys the pool
2065 .Em tank
2066 and any datasets contained within.
2067 .Bd -literal
2068 # zpool destroy -f tank
2069 .Ed
2070 .It Sy Example 8 No Exporting a ZFS Storage Pool
2071 The following command exports the devices in pool
2072 .Em tank
2073 so that they can be relocated or later imported.
2074 .Bd -literal
2075 # zpool export tank
2076 .Ed
2077 .It Sy Example 9 No Importing a ZFS Storage Pool
2078 The following command displays available pools, and then imports the pool
2079 .Em tank
2080 for use on the system.
2081 The results from this command are similar to the following:
2082 .Bd -literal
2083 # zpool import
2084   pool: tank
2085     id: 15451357997522795478
2086  state: ONLINE
2087 action: The pool can be imported using its name or numeric identifier.
2088 config:
2089
2090         tank        ONLINE
2091           mirror    ONLINE
2092             sda     ONLINE
2093             sdb     ONLINE
2094
2095 # zpool import tank
2094
2095 # zpool import tank
2096 .Ed
2097 .It Sy Example 10 No Upgrading All ZFS Storage Pools to the Current Version
2098 The following command upgrades all ZFS storage pools to the current version of
2099 the software.
2100 .Bd -literal
2101 # zpool upgrade -a
2102 This system is currently running ZFS version 2.
2103 .Ed
2104 .It Sy Example 11 No Managing Hot Spares
2105 The following command creates a new pool with an available hot spare:
2106 .Bd -literal
2107 # zpool create tank mirror sda sdb spare sdc
2108 .Ed
2109 .Pp
2110 If one of the disks were to fail, the pool would be reduced to the degraded
2111 state.
2112 The failed device can be replaced using the following command:
2113 .Bd -literal
2114 # zpool replace tank sda sdd
2115 .Ed
2116 .Pp
2117 Once the data has been resilvered, the spare is automatically removed and is
2118 made available for use should another device fail.
2119 The hot spare can be permanently removed from the pool using the following
2120 command:
2121 .Bd -literal
2122 # zpool remove tank sdc
2123 .Ed
2124 .It Sy Example 12 No Creating a ZFS Pool with Mirrored Separate Intent Logs
2125 The following command creates a ZFS storage pool consisting of two two-way
2126 mirrors and mirrored log devices:
2127 .Bd -literal
2128 # zpool create pool mirror sda sdb mirror sdc sdd log mirror \\
2129     sde sdf
2130 .Ed
2131 .It Sy Example 13 No Adding Cache Devices to a ZFS Pool
2132 The following command adds two disks for use as cache devices to a ZFS storage
2133 pool:
2134 .Bd -literal
2135 # zpool add pool cache sdc sdd
2136 .Ed
2137 .Pp
2138 Once added, the cache devices gradually fill with content from main memory.
2139 Depending on the size of your cache devices, it could take over an hour for
2140 them to fill.
2141 Capacity and reads can be monitored using the
2142 .Cm iostat
2143 subcommand as follows:
2144 .Bd -literal
2145 # zpool iostat -v pool 5
2146 .Ed
2147 .It Sy Example 14 No Removing a Mirrored Log Device
2148 The following command removes the mirrored log device
2149 .Sy mirror-2 .
2150 Given this configuration:
2151 .Bd -literal
2152   pool: tank
2153  state: ONLINE
2154  scrub: none requested
2155 config:
2156
2157         NAME        STATE     READ WRITE CKSUM
2158         tank        ONLINE       0     0     0
2159           mirror-0  ONLINE       0     0     0
2160             sda     ONLINE       0     0     0
2161             sdb     ONLINE       0     0     0
2162           mirror-1  ONLINE       0     0     0
2163             sdc     ONLINE       0     0     0
2164             sdd     ONLINE       0     0     0
2165         logs
2166           mirror-2  ONLINE       0     0     0
2167             sde     ONLINE       0     0     0
2168             sdf     ONLINE       0     0     0
2169 .Ed
2170 .Pp
2171 The command to remove the mirrored log
2172 .Sy mirror-2
2173 is:
2174 .Bd -literal
2175 # zpool remove tank mirror-2
2176 .Ed
2177 .It Sy Example 15 No Displaying expanded space on a device
2178 The following command displays the detailed information for the pool
2179 .Em data .
2180 This pool is composed of a single raidz vdev where one of its devices
2181 increased its capacity by 10GB.
2182 In this example, the pool will not be able to utilize this extra capacity until
2183 all the devices under the raidz vdev have been expanded.
2184 .Bd -literal
2185 # zpool list -v data
2186 NAME       SIZE  ALLOC   FREE  FRAG  EXPANDSZ    CAP  DEDUP  HEALTH  ALTROOT
2187 data      23.9G  14.6G  9.30G   48%         -    61%  1.00x  ONLINE  -
2188   raidz1  23.9G  14.6G  9.30G   48%         -
2189     sda       -      -      -     -         -
2190     sdb       -      -      -     -       10G
2191     sdc       -      -      -     -         -
2192 .Ed
2193 .It Sy Example 16 No Adding output columns
2194 Additional columns can be added to the
2195 .Nm zpool Cm status
2196 and
2197 .Nm zpool Cm iostat
2198 output with the
2199 .Fl c
2200 option.
2201 .Bd -literal
2202 # zpool status -c vendor,model,size
2203 NAME        STATE  READ WRITE CKSUM  vendor   model         size
2204 tank        ONLINE    0     0     0
2205   mirror-0  ONLINE    0     0     0
2206     U1      ONLINE    0     0     0  SEAGATE  ST8000NM0075  7.3T
2207     U10     ONLINE    0     0     0  SEAGATE  ST8000NM0075  7.3T
2208     U11     ONLINE    0     0     0  SEAGATE  ST8000NM0075  7.3T
2209     U12     ONLINE    0     0     0  SEAGATE  ST8000NM0075  7.3T
2210     U13     ONLINE    0     0     0  SEAGATE  ST8000NM0075  7.3T
2211     U14     ONLINE    0     0     0  SEAGATE  ST8000NM0075  7.3T
2212
2213 # zpool iostat -vc slaves
2214               capacity     operations     bandwidth
2215 pool        alloc   free   read  write   read  write  slaves
2216 ----------  -----  -----  -----  -----  -----  -----  ---------
2217 tank        20.4G  7.23T     26    152  20.7M  21.6M
2218   mirror    20.4G  7.23T     26    152  20.7M  21.6M
2219     U1          -      -      0     31  1.46K  20.6M  sdb sdff
2220     U10         -      -      0      1  3.77K  13.3K  sdas sdgw
2221     U11         -      -      0      1   288K  13.3K  sdat sdgx
2222     U12         -      -      0      1  78.4K  13.3K  sdau sdgy
2223     U13         -      -      0      1   128K  13.3K  sdav sdgz
2224     U14         -      -      0      1  63.2K  13.3K  sdfk sdg
2225 .Ed
2226 .El
2227 .Sh ENVIRONMENT VARIABLES
2228 .Bl -tag -width "ZFS_ABORT"
2229 .It Ev ZFS_ABORT
2230 Cause
2231 .Nm zpool
2232 to dump core on exit for the purposes of running
2233 .Sy ::findleaks .
2234 .El
2235 .Bl -tag -width "ZPOOL_IMPORT_PATH"
2236 .It Ev ZPOOL_IMPORT_PATH
2237 The search path for devices or files to use with the pool. This is a colon-separated list of directories in which
2238 .Nm zpool
2239 looks for device nodes and files.
2240 Similar to the
2241 .Fl d
2242 option in
2243 .Nm zpool import .
2244 .El
2245 .Bl -tag -width "ZPOOL_VDEV_NAME_GUID"
2246 .It Ev ZPOOL_VDEV_NAME_GUID
2247 Cause
2248 .Nm zpool
subcommands to output vdev guids by default. This behavior
2249 is identical to the
2250 .Nm zpool status -g
2251 command line option.
2252 .El
2253 .Bl -tag -width "ZPOOL_VDEV_NAME_FOLLOW_LINKS"
2254 .It Ev ZPOOL_VDEV_NAME_FOLLOW_LINKS
2255 Cause
2256 .Nm zpool
2257 subcommands to follow links for vdev names by default. This behavior is identical to the
2258 .Nm zpool status -L
2259 command line option.
2260 .El
2261 .Bl -tag -width "ZPOOL_VDEV_NAME_PATH"
2262 .It Ev ZPOOL_VDEV_NAME_PATH
2263 Cause
2264 .Nm zpool
2265 subcommands to output full vdev path names by default. This
2266 behavior is identical to the
2267 .Nm zpool status -P
2268 command line option.
2269 .El
2270 .Bl -tag -width "ZFS_VDEV_DEVID_OPT_OUT"
2271 .It Ev ZFS_VDEV_DEVID_OPT_OUT
2272 Older ZFS on Linux implementations had issues when attempting to display pool
2273 config VDEV names if a
2274 .Sy devid
2275 NVP value was present in the pool's config.
2276 .Pp
2277 For example, a pool that originated on the illumos platform would have a devid
2278 value in the config and
2279 .Nm zpool status
2280 would fail when listing the config.
2281 This would also be true for future Linux-based pools.
2282 .Pp
2283 A pool can be stripped of any
2284 .Sy devid
2285 values on import or prevented from adding
2286 them on
2287 .Nm zpool create
2288 or
2289 .Nm zpool add
2290 by setting
2291 .Sy ZFS_VDEV_DEVID_OPT_OUT .
2292 .El
2293 .Bl -tag -width "ZPOOL_SCRIPTS_AS_ROOT"
2294 .It Ev ZPOOL_SCRIPTS_AS_ROOT
2295 Allow a privileged user to run the
2296 .Nm zpool status/iostat
2297 with the
2298 .Fl c
2299 option. Normally, only unprivileged users are allowed to run
2300 .Fl c .
2301 .El
2302 .Bl -tag -width "ZPOOL_SCRIPTS_PATH"
2303 .It Ev ZPOOL_SCRIPTS_PATH
2304 The search path for scripts when running
2305 .Nm zpool status/iostat
2306 with the
2307 .Fl c
2308 option. This is a colon-separated list of directories and overrides the default
2309 .Pa ~/.zpool.d
2310 and
2311 .Pa /etc/zfs/zpool.d
2312 search paths.
2313 .El
2314 .Bl -tag -width "ZPOOL_SCRIPTS_ENABLED"
2315 .It Ev ZPOOL_SCRIPTS_ENABLED
2316 Allow a user to run
2317 .Nm zpool status/iostat
2318 with the
2319 .Fl c
2320 option. If
2321 .Sy ZPOOL_SCRIPTS_ENABLED
2322 is not set, it is assumed that the user is allowed to run
2323 .Nm zpool status/iostat -c .
.El
2324 .Sh INTERFACE STABILITY
2325 .Sy Evolving
2326 .Sh SEE ALSO
2327 .Xr zed 8 ,
2328 .Xr zfs 8 ,
2329 .Xr zfs-events 5 ,
2330 .Xr zfs-module-parameters 5 ,
2331 .Xr zpool-features 5