.\"
.\" CDDL HEADER START
.\"
.\" The contents of this file are subject to the terms of the
.\" Common Development and Distribution License (the "License").
.\" You may not use this file except in compliance with the License.
.\"
.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
.\" or http://www.opensolaris.org/os/licensing.
.\" See the License for the specific language governing permissions
.\" and limitations under the License.
.\"
.\" When distributing Covered Code, include this CDDL HEADER in each
.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
.\" If applicable, add the following below this CDDL HEADER, with the
.\" fields enclosed by brackets "[]" replaced with your own identifying
.\" information: Portions Copyright [yyyy] [name of copyright owner]
.\"
.\" CDDL HEADER END
.\"
.\"
.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
.\" Copyright (c) 2012, 2017 by Delphix. All rights reserved.
.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
.\" Copyright (c) 2017 Datto Inc.
.\" Copyright (c) 2018 George Melikov. All Rights Reserved.
.\" Copyright 2017 Nexenta Systems, Inc.
.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved.
.\"
.Dd April 27, 2018
.Dt ZPOOL 8 SMM
.Os Linux
.Sh NAME
.Nm zpool
.Nd configure ZFS storage pools
.Sh SYNOPSIS
.Nm
.Fl ?
.Nm
.Cm add
.Op Fl fgLnP
.Oo Fl o Ar property Ns = Ns Ar value Oc
.Ar pool vdev Ns ...
.Nm
.Cm attach
.Op Fl f
.Oo Fl o Ar property Ns = Ns Ar value Oc
.Ar pool device new_device
.Nm
.Cm clear
.Ar pool
.Op Ar device
.Nm
.Cm create
.Op Fl dfn
.Op Fl m Ar mountpoint
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Oo Fl o Ar feature@feature Ns = Ns Ar value Oc Ns ...
.Oo Fl O Ar file-system-property Ns = Ns Ar value Oc Ns ...
.Op Fl R Ar root
.Ar pool vdev Ns ...
.Nm
.Cm destroy
.Op Fl f
.Ar pool
.Nm
.Cm detach
.Ar pool device
.Nm
.Cm events
.Op Fl vHf Oo Ar pool Oc | Fl c
.Nm
.Cm export
.Op Fl a
.Op Fl f
.Ar pool Ns ...
.Nm
.Cm get
.Op Fl Hp
.Op Fl o Ar field Ns Oo , Ns Ar field Oc Ns ...
.Sy all Ns | Ns Ar property Ns Oo , Ns Ar property Oc Ns ...
.Ar pool Ns ...
.Nm
.Cm history
.Op Fl il
.Oo Ar pool Oc Ns ...
.Nm
.Cm import
.Op Fl D
.Op Fl d Ar dir Ns | Ns device
.Nm
.Cm import
.Fl a
.Op Fl DflmN
.Op Fl F Oo Fl n Oc Oo Fl T Oc Oo Fl X Oc
.Op Fl c Ar cachefile Ns | Ns Fl d Ar dir Ns | Ns device
.Op Fl o Ar mntopts
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Op Fl R Ar root
.Nm
.Cm import
.Op Fl Dflm
.Op Fl F Oo Fl n Oc Oo Fl T Oc Oo Fl X Oc
.Op Fl c Ar cachefile Ns | Ns Fl d Ar dir Ns | Ns device
.Op Fl o Ar mntopts
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Op Fl R Ar root
.Op Fl s
.Ar pool Ns | Ns Ar id
.Op Ar newpool Oo Fl t Oc
.Nm
.Cm iostat
.Op Oo Oo Fl c Ar SCRIPT Oc Oo Fl lq Oc Oc Ns | Ns Fl rw
.Op Fl T Sy u Ns | Ns Sy d
.Op Fl ghHLpPvy
.Oo Oo Ar pool Ns ... Oc Ns | Ns Oo Ar pool vdev Ns ... Oc Ns | Ns Oo Ar vdev Ns ... Oc Oc
.Op Ar interval Op Ar count
.Nm
.Cm labelclear
.Op Fl f
.Ar device
.Nm
.Cm list
.Op Fl HgLpPv
.Op Fl o Ar property Ns Oo , Ns Ar property Oc Ns ...
.Op Fl T Sy u Ns | Ns Sy d
.Oo Ar pool Oc Ns ...
.Op Ar interval Op Ar count
.Nm
.Cm offline
.Op Fl f
.Op Fl t
.Ar pool Ar device Ns ...
.Nm
.Cm online
.Op Fl e
.Ar pool Ar device Ns ...
.Nm
.Cm reguid
.Ar pool
.Nm
.Cm reopen
.Op Fl n
.Ar pool
.Nm
.Cm remove
.Op Fl np
.Ar pool Ar device Ns ...
.Nm
.Cm remove
.Fl s
.Ar pool
.Nm
.Cm replace
.Op Fl f
.Oo Fl o Ar property Ns = Ns Ar value Oc
.Ar pool Ar device Op Ar new_device
.Nm
.Cm scrub
.Op Fl s | Fl p
.Ar pool Ns ...
.Nm
.Cm set
.Ar property Ns = Ns Ar value
.Ar pool
.Nm
.Cm split
.Op Fl gLlnP
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Op Fl R Ar root
.Ar pool newpool
.Oo Ar device Oc Ns ...
.Nm
.Cm status
.Oo Fl c Ar SCRIPT Oc
.Op Fl gLPvxD
.Op Fl T Sy u Ns | Ns Sy d
.Oo Ar pool Oc Ns ...
.Op Ar interval Op Ar count
.Nm
.Cm sync
.Oo Ar pool Oc Ns ...
.Nm
.Cm upgrade
.Nm
.Cm upgrade
.Fl v
.Nm
.Cm upgrade
.Op Fl V Ar version
.Fl a Ns | Ns Ar pool Ns ...
.Sh DESCRIPTION
The
.Nm
command configures ZFS storage pools.
A storage pool is a collection of devices that provides physical storage and
data replication for ZFS datasets.
All datasets within a storage pool share the same space.
See
.Xr zfs 8
for information on managing datasets.
.Ss Virtual Devices (vdevs)
A "virtual device" describes a single device or a collection of devices
organized according to certain performance and fault characteristics.
The following virtual devices are supported:
.Bl -tag -width Ds
.It Sy disk
A block device, typically located under
.Pa /dev .
ZFS can use individual slices or partitions, though the recommended mode of
operation is to use whole disks.
A disk can be specified by a full path, or it can be a shorthand name
.Po the relative portion of the path under
.Pa /dev
.Pc .
A whole disk can be specified by omitting the slice or partition designation.
For example,
.Pa sda
is equivalent to
.Pa /dev/sda .
When given a whole disk, ZFS automatically labels the disk, if necessary.
.It Sy file
A regular file.
The use of files as a backing store is strongly discouraged.
It is designed primarily for experimental purposes, as the fault tolerance of a
file is only as good as the file system of which it is a part.
A file must be specified by a full path.
.It Sy mirror
A mirror of two or more devices.
Data is replicated in an identical fashion across all components of a mirror.
A mirror with N disks of size X can hold X bytes and can withstand (N-1) devices
failing before data integrity is compromised.
.It Sy raidz , raidz1 , raidz2 , raidz3
A variation on RAID-5 that allows for better distribution of parity and
eliminates the RAID-5
.Qq write hole
.Pq in which data and parity become inconsistent after a power loss .
Data and parity are striped across all disks within a raidz group.
.Pp
A raidz group can have single-, double-, or triple-parity, meaning that the
raidz group can sustain one, two, or three failures, respectively, without
losing any data.
The
.Sy raidz1
vdev type specifies a single-parity raidz group; the
.Sy raidz2
vdev type specifies a double-parity raidz group; and the
.Sy raidz3
vdev type specifies a triple-parity raidz group.
The
.Sy raidz
vdev type is an alias for
.Sy raidz1 .
.Pp
A raidz group with N disks of size X with P parity disks can hold approximately
(N-P)*X bytes and can withstand P device(s) failing before data integrity is
compromised.
The minimum number of devices in a raidz group is one more than the number of
parity disks.
The recommended number is between 3 and 9 to help increase performance.
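.Pp
For example, the following creates a double-parity raidz group from four disks
.Pq device names are illustrative :
.Bd -literal
# zpool create pool raidz2 sda sdb sdc sdd
.Ed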
.It Sy spare
A special pseudo-vdev which keeps track of available hot spares for a pool.
For more information, see the
.Sx Hot Spares
section.
.It Sy log
A separate intent log device.
If more than one log device is specified, then writes are load-balanced between
devices.
Log devices can be mirrored.
However, raidz vdev types are not supported for the intent log.
For more information, see the
.Sx Intent Log
section.
.It Sy cache
A device used to cache storage pool data.
A cache device cannot be configured as a mirror or raidz group.
For more information, see the
.Sx Cache Devices
section.
.El
.Pp
Virtual devices cannot be nested, so a mirror or raidz virtual device can only
contain files or disks.
Mirrors of mirrors
.Pq or other combinations
are not allowed.
.Pp
A pool can have any number of virtual devices at the top of the configuration
.Po known as
.Qq root vdevs
.Pc .
Data is dynamically distributed across all top-level devices to balance data
among devices.
As new virtual devices are added, ZFS automatically places data on the newly
available devices.
.Pp
Virtual devices are specified one at a time on the command line, separated by
whitespace.
The keywords
.Sy mirror
and
.Sy raidz
are used to distinguish where a group ends and another begins.
For example, the following creates two root vdevs, each a mirror of two disks:
.Bd -literal
# zpool create mypool mirror sda sdb mirror sdc sdd
.Ed
.Ss Device Failure and Recovery
ZFS supports a rich set of mechanisms for handling device failure and data
corruption.
All metadata and data is checksummed, and ZFS automatically repairs bad data
from a good copy when corruption is detected.
.Pp
In order to take advantage of these features, a pool must make use of some form
of redundancy, using either mirrored or raidz groups.
While ZFS supports running in a non-redundant configuration, where each root
vdev is simply a disk or file, this is strongly discouraged.
A single case of bit corruption can render some or all of your data unavailable.
.Pp
A pool's health status is described by one of three states: online, degraded,
or faulted.
An online pool has all devices operating normally.
A degraded pool is one in which one or more devices have failed, but the data is
still available due to a redundant configuration.
A faulted pool has corrupted metadata, or one or more faulted devices, and
insufficient replicas to continue functioning.
.Pp
The health of the top-level vdev, such as mirror or raidz device, is
potentially impacted by the state of its associated vdevs, or component
devices.
A top-level vdev or component device is in one of the following states:
.Bl -tag -width "DEGRADED"
.It Sy DEGRADED
One or more top-level vdevs is in the degraded state because one or more
component devices are offline.
Sufficient replicas exist to continue functioning.
.Pp
One or more component devices is in the degraded or faulted state, but
sufficient replicas exist to continue functioning.
The underlying conditions are as follows:
.Bl -bullet
.It
The number of checksum errors exceeds acceptable levels and the device is
degraded as an indication that something may be wrong.
ZFS continues to use the device as necessary.
.It
The number of I/O errors exceeds acceptable levels.
The device could not be marked as faulted because there are insufficient
replicas to continue functioning.
.El
.It Sy FAULTED
One or more top-level vdevs is in the faulted state because one or more
component devices are offline.
Insufficient replicas exist to continue functioning.
.Pp
One or more component devices is in the faulted state, and insufficient
replicas exist to continue functioning.
The underlying conditions are as follows:
.Bl -bullet
.It
The device could be opened, but the contents did not match expected values.
.It
The number of I/O errors exceeds acceptable levels and the device is faulted to
prevent further use of the device.
.El
.It Sy OFFLINE
The device was explicitly taken offline by the
.Nm zpool Cm offline
command.
.It Sy ONLINE
The device is online and functioning.
.It Sy REMOVED
The device was physically removed while the system was running.
Device removal detection is hardware-dependent and may not be supported on all
platforms.
.It Sy UNAVAIL
The device could not be opened.
If a pool is imported when a device was unavailable, then the device will be
identified by a unique identifier instead of its path since the path was never
correct in the first place.
.El
.Pp
If a device is removed and later re-attached to the system, ZFS attempts
to put the device online automatically.
Device attach detection is hardware-dependent and might not be supported on all
platforms.
.Ss Hot Spares
ZFS allows devices to be associated with pools as
.Qq hot spares .
These devices are not actively used in the pool, but when an active device
fails, it is automatically replaced by a hot spare.
To create a pool with hot spares, specify a
.Sy spare
vdev with any number of devices.
For example,
.Bd -literal
# zpool create pool mirror sda sdb spare sdc sdd
.Ed
.Pp
Spares can be shared across multiple pools, and can be added with the
.Nm zpool Cm add
command and removed with the
.Nm zpool Cm remove
command.
Once a spare replacement is initiated, a new
.Sy spare
vdev is created within the configuration that will remain there until the
original device is replaced.
At this point, the hot spare becomes available again if another device fails.
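.Pp
For example, the following adds a spare to an existing pool and later removes
it again
.Pq the device name is illustrative :
.Bd -literal
# zpool add pool spare sde
# zpool remove pool sde
.Ed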
.Pp
If a pool has a shared spare that is currently being used, the pool cannot be
exported, since other pools may use this shared spare, which could lead to
data corruption.
.Pp
An in-progress spare replacement can be cancelled by detaching the hot spare.
If the original faulted device is detached, then the hot spare assumes its
place in the configuration, and is removed from the spare list of all active
pools.
.Pp
Spares cannot replace log devices.
.Ss Intent Log
The ZFS Intent Log (ZIL) satisfies POSIX requirements for synchronous
transactions.
For instance, databases often require their transactions to be on stable storage
devices when returning from a system call.
NFS and other applications can also use
.Xr fsync 2
to ensure data stability.
By default, the intent log is allocated from blocks within the main pool.
However, it might be possible to get better performance using separate intent
log devices such as NVRAM or a dedicated disk.
For example:
.Bd -literal
# zpool create pool sda sdb log sdc
.Ed
.Pp
Multiple log devices can also be specified, and they can be mirrored.
See the
.Sx EXAMPLES
section for an example of mirroring multiple log devices.
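For instance, a mirrored log can also be added to an existing pool
.Pq device names are illustrative :
.Bd -literal
# zpool add pool log mirror sdc sdd
.Ed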
.Pp
Log devices can be added, replaced, attached, detached and removed. In
addition, log devices are imported and exported as part of the pool
that contains them.
Mirrored log devices can be removed by specifying the top-level mirror vdev.
.Ss Cache Devices
Devices can be added to a storage pool as
.Qq cache devices .
These devices provide an additional layer of caching between main memory and
disk.
For read-heavy workloads, where the working set size is much larger than what
can be cached in main memory, using cache devices allows much more of this
working set to be served from low latency media.
Using cache devices provides the greatest performance improvement for random
read-workloads of mostly static content.
.Pp
To create a pool with cache devices, specify a
.Sy cache
vdev with any number of devices.
For example:
.Bd -literal
# zpool create pool sda sdb cache sdc sdd
.Ed
.Pp
Cache devices cannot be mirrored or part of a raidz configuration.
If a read error is encountered on a cache device, that read I/O is reissued to
the original storage pool device, which might be part of a mirrored or raidz
configuration.
.Pp
The content of the cache devices is considered volatile, as is the case with
other system caches.
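.Pp
Cache devices can also be added to an existing pool, for example
.Pq the device name is illustrative :
.Bd -literal
# zpool add pool cache sde
.Ed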
.Ss Properties
Each pool has several properties associated with it.
Some properties are read-only statistics while others are configurable and
change the behavior of the pool.
.Pp
The following are read-only properties:
.Bl -tag -width Ds
.It Cm allocated
Amount of storage used within the pool.
.It Sy capacity
Percentage of pool space used.
This property can also be referred to by its shortened column name,
.Sy cap .
.It Sy expandsize
Amount of uninitialized space within the pool or device that can be used to
increase the total capacity of the pool.
Uninitialized space consists of any space on an EFI labeled vdev which has not
been brought online
.Po e.g., using
.Nm zpool Cm online Fl e
.Pc .
This space occurs when a LUN is dynamically expanded.
.It Sy fragmentation
The amount of fragmentation in the pool.
.It Sy free
The amount of free space available in the pool.
.It Sy freeing
After a file system or snapshot is destroyed, the space it was using is
returned to the pool asynchronously.
.Sy freeing
is the amount of space remaining to be reclaimed.
Over time
.Sy freeing
will decrease while
.Sy free
increases.
.It Sy health
The current health of the pool.
Health can be one of
.Sy ONLINE , DEGRADED , FAULTED , OFFLINE , REMOVED , UNAVAIL .
.It Sy guid
A unique identifier for the pool.
.It Sy size
Total size of the storage pool.
.It Sy unsupported@ Ns Em feature_guid
Information about unsupported features that are enabled on the pool.
See
.Xr zpool-features 5
for details.
.El
.Pp
The space usage properties report actual physical space available to the
storage pool.
The physical space can be different from the total amount of space that any
contained datasets can actually use.
The amount of space used in a raidz configuration depends on the characteristics
of the data being written.
In addition, ZFS reserves some space for internal accounting that the
.Xr zfs 8
command takes into account, but the
.Nm
command does not.
For non-full pools of a reasonable size, these effects should be invisible.
For small pools, or pools that are close to being completely full, these
discrepancies may become more noticeable.
.Pp
The following property can be set at creation time and import time:
.Bl -tag -width Ds
.It Sy altroot
Alternate root directory.
If set, this directory is prepended to any mount points within the pool.
This can be used when examining an unknown pool where the mount points cannot be
trusted, or in an alternate boot environment, where the typical paths are not
valid.
.Sy altroot
is not a persistent property.
It is valid only while the system is up.
Setting
.Sy altroot
defaults to using
.Sy cachefile Ns = Ns Sy none ,
though this may be overridden using an explicit setting.
.El
.Pp
The following property can be set only at import time:
.Bl -tag -width Ds
.It Sy readonly Ns = Ns Sy on Ns | Ns Sy off
If set to
.Sy on ,
the pool will be imported in read-only mode.
This property can also be referred to by its shortened column name,
.Sy rdonly .
.El
.Pp
The following properties can be set at creation time and import time, and later
changed with the
.Nm zpool Cm set
command:
.Bl -tag -width Ds
.It Sy ashift Ns = Ns Sy ashift
Pool sector size exponent, to the power of
.Sy 2
(internally referred to as
.Sy ashift
). Values from 9 to 16, inclusive, are valid; also, the special
value 0 (the default) means to auto-detect using the kernel's block
layer and a ZFS internal exception list. I/O operations will be aligned
to the specified size boundaries. Additionally, the minimum (disk)
write size will be set to the specified size, so this represents a
space vs. performance trade-off. For optimal performance, the pool
sector size should be greater than or equal to the sector size of the
underlying disks. The typical case for setting this property is when
performance is important and the underlying disks use 4KiB sectors but
report 512B sectors to the OS (for compatibility reasons); in that
case, set
.Sy ashift=12
(which is 1<<12 = 4096). When set, this property is
used as the default hint value in subsequent vdev operations (add,
attach and replace). Changing this value will not modify any existing
vdev, not even on disk replacement; however, it can be used, for
instance, to replace a dying 512B-sector disk with a newer 4KiB-sector
device: this will probably result in bad performance but at the
same time could prevent loss of data.
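.Pp
For example, to create a pool whose I/O is aligned to 4KiB sectors
.Pq device names are illustrative :
.Bd -literal
# zpool create -o ashift=12 pool mirror sda sdb
.Ed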
.It Sy autoexpand Ns = Ns Sy on Ns | Ns Sy off
Controls automatic pool expansion when the underlying LUN is grown.
If set to
.Sy on ,
the pool will be resized according to the size of the expanded device.
If the device is part of a mirror or raidz then all devices within that
mirror/raidz group must be expanded before the new space is made available to
the pool.
The default behavior is
.Sy off .
This property can also be referred to by its shortened column name,
.Sy expand .
.It Sy autoreplace Ns = Ns Sy on Ns | Ns Sy off
Controls automatic device replacement.
If set to
.Sy off ,
device replacement must be initiated by the administrator by using the
.Nm zpool Cm replace
command.
If set to
.Sy on ,
any new device, found in the same physical location as a device that previously
belonged to the pool, is automatically formatted and replaced.
The default behavior is
.Sy off .
This property can also be referred to by its shortened column name,
.Sy replace .
Autoreplace can also be used with virtual disks (like device
mapper) provided that you use the /dev/disk/by-vdev paths set up by
vdev_id.conf. See the
.Xr vdev_id 8
man page for more details.
Autoreplace and autoonline require the ZFS Event Daemon be configured and
running. See the
.Xr zed 8
man page for more details.
.It Sy bootfs Ns = Ns Sy (unset) Ns | Ns Ar pool Ns / Ns Ar dataset
Identifies the default bootable dataset for the root pool. This property is
expected to be set mainly by the installation and upgrade programs.
Not all Linux distribution boot processes use the bootfs property.
.It Sy cachefile Ns = Ns Ar path Ns | Ns Sy none
Controls the location of where the pool configuration is cached.
Discovering all pools on system startup requires a cached copy of the
configuration data that is stored on the root file system.
All pools in this cache are automatically imported when the system boots.
Some environments, such as install and clustering, need to cache this
information in a different location so that pools are not automatically
imported.
Setting this property caches the pool configuration in a different location that
can later be imported with
.Nm zpool Cm import Fl c .
Setting it to the special value
.Sy none
creates a temporary pool that is never cached, and the special value
.Qq
.Pq empty string
uses the default location.
.Pp
Multiple pools can share the same cache file.
Because the kernel destroys and recreates this file when pools are added and
removed, care should be taken when attempting to access this file.
When the last pool using a
.Sy cachefile
is exported or destroyed, the file will be empty.
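.Pp
For example, to keep a pool out of the cache file, and later restore the
default location:
.Bd -literal
# zpool set cachefile=none pool
# zpool set cachefile="" pool
.Ed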
.It Sy comment Ns = Ns Ar text
A text string consisting of printable ASCII characters that will be stored
such that it is available even if the pool becomes faulted.
An administrator can provide additional information about a pool using this
property.
.It Sy dedupditto Ns = Ns Ar number
Threshold for the number of block ditto copies.
If the reference count for a deduplicated block increases above this number, a
new ditto copy of this block is automatically stored.
The default setting is
.Sy 0
which causes no ditto copies to be created for deduplicated blocks.
The minimum legal nonzero setting is
.Sy 100 .
.It Sy delegation Ns = Ns Sy on Ns | Ns Sy off
Controls whether a non-privileged user is granted access based on the dataset
permissions defined on the dataset.
See
.Xr zfs 8
for more information on ZFS delegated administration.
.It Sy failmode Ns = Ns Sy wait Ns | Ns Sy continue Ns | Ns Sy panic
Controls the system behavior in the event of catastrophic pool failure.
This condition is typically a result of a loss of connectivity to the underlying
storage device(s) or a failure of all devices within the pool.
The behavior of such an event is determined as follows:
.Bl -tag -width "continue"
.It Sy wait
Blocks all I/O access until the device connectivity is recovered and the errors
are cleared.
This is the default behavior.
.It Sy continue
Returns
.Er EIO
to any new write I/O requests but allows reads to any of the remaining healthy
devices.
Any write requests that have yet to be committed to disk would be blocked.
.It Sy panic
Prints out a message to the console and generates a system crash dump.
.El
.It Sy feature@ Ns Ar feature_name Ns = Ns Sy enabled
The value of this property is the current state of
.Ar feature_name .
The only valid value when setting this property is
.Sy enabled
which moves
.Ar feature_name
to the enabled state.
See
.Xr zpool-features 5
for details on feature states.
.It Sy listsnapshots Ns = Ns Sy on Ns | Ns Sy off
Controls whether information about snapshots associated with this pool is
output when
.Nm zfs Cm list
is run without the
.Fl t
option.
The default value is
.Sy off .
This property can also be referred to by its shortened name,
.Sy listsnaps .
.It Sy multihost Ns = Ns Sy on Ns | Ns Sy off
Controls whether a pool activity check should be performed during
.Nm zpool Cm import .
When a pool is determined to be active it cannot be imported, even with the
.Fl f
option. This property is intended to be used in failover configurations
where multiple hosts have access to a pool on shared storage. When this
property is on, periodic writes to storage occur to show the pool is in use.
See
.Sy zfs_multihost_interval
in the
.Xr zfs-module-parameters 5
man page. In order to enable this property, each host must set a unique hostid.
See
.Xr genhostid 1 ,
.Xr zgenhostid 8 ,
and
.Xr spl-module-parameters 5
for additional details. The default value is
.Sy off .
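.Pp
For example, a minimal sequence to enable it on a host that does not yet have a
hostid set might be:
.Bd -literal
# zgenhostid
# zpool set multihost=on pool
.Ed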
.It Sy version Ns = Ns Ar version
The current on-disk version of the pool.
This can be increased, but never decreased.
The preferred method of updating pools is with the
.Nm zpool Cm upgrade
command, though this property can be used when a specific version is needed for
backwards compatibility.
Once feature flags are enabled on a pool this property will no longer have a
value.
.El
.Ss Subcommands
All subcommands that modify state are logged persistently to the pool in their
original form.
.Pp
The
.Nm
command provides subcommands to create and destroy storage pools, add capacity
to storage pools, and provide information about the storage pools.
The following subcommands are supported:
.Bl -tag -width Ds
.It Xo
.Nm
.Fl ?
.Xc
Displays a help message.
.It Xo
.Nm
.Cm add
.Op Fl fgLnP
.Oo Fl o Ar property Ns = Ns Ar value Oc
.Ar pool vdev Ns ...
.Xc
Adds the specified virtual devices to the given pool.
The
.Ar vdev
specification is described in the
.Sx Virtual Devices
section.
The behavior of the
.Fl f
option, and the device checks performed are described in the
.Nm zpool Cm create
subcommand.
.Bl -tag -width Ds
.It Fl f
Forces use of
.Ar vdev Ns s ,
even if they appear in use or specify a conflicting replication level.
Not all devices can be overridden in this manner.
.It Fl g
Display
.Ar vdev
GUIDs instead of the normal device names. These GUIDs can be used in place of
device names for the zpool detach/offline/remove/replace commands.
.It Fl L
Display real paths for
.Ar vdev Ns s
resolving all symbolic links. This can be used to look up the current block
device name regardless of the /dev/disk/ path used to open it.
.It Fl n
Displays the configuration that would be used without actually adding the
.Ar vdev Ns s .
The actual pool add operation can still fail due to insufficient privileges or
device sharing.
.It Fl P
Display real paths for
.Ar vdev Ns s
instead of only the last component of the path. This can be used in
conjunction with the
.Fl L
flag.
.It Fl o Ar property Ns = Ns Ar value
Sets the given pool properties. See the
.Sx Properties
section for a list of valid properties that can be set. The only property
supported at the moment is ashift.
.El
.It Xo
.Nm
.Cm attach
.Op Fl f
.Oo Fl o Ar property Ns = Ns Ar value Oc
.Ar pool device new_device
.Xc
Attaches
.Ar new_device
to the existing
.Ar device .
The existing device cannot be part of a raidz configuration.
If
.Ar device
is not currently part of a mirrored configuration,
.Ar device
automatically transforms into a two-way mirror of
.Ar device
and
.Ar new_device .
If
.Ar device
is part of a two-way mirror, attaching
.Ar new_device
creates a three-way mirror, and so on.
In either case,
.Ar new_device
begins to resilver immediately.
.Bl -tag -width Ds
.It Fl f
Forces use of
.Ar new_device ,
even if it appears to be in use.
Not all devices can be overridden in this manner.
.It Fl o Ar property Ns = Ns Ar value
Sets the given pool properties. See the
.Sx Properties
section for a list of valid properties that can be set. The only property
supported at the moment is ashift.
.El
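.Pp
For example, to convert a single-disk pool into a two-way mirror
.Pq device names are illustrative :
.Bd -literal
# zpool attach pool sda sdb
.Ed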
.It Xo
.Nm
.Cm clear
.Ar pool
.Op Ar device
.Xc
Clears device errors in a pool.
If no arguments are specified, all device errors within the pool are cleared.
If one or more devices are specified, only those errors associated with the
specified device or devices are cleared.
.It Xo
.Nm
.Cm create
.Op Fl dfn
.Op Fl m Ar mountpoint
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Oo Fl o Ar feature@feature Ns = Ns Ar value Oc Ns ...
.Oo Fl O Ar file-system-property Ns = Ns Ar value Oc Ns ...
.Op Fl R Ar root
.Op Fl t Ar tname
.Ar pool vdev Ns ...
.Xc
Creates a new storage pool containing the virtual devices specified on the
command line.
The pool name must begin with a letter, and can only contain
alphanumeric characters as well as underscore
.Pq Qq Sy _ ,
dash
.Pq Qq Sy \&- ,
colon
.Pq Qq Sy \&: ,
space
.Pq Qq Sy \&\ ,
and period
.Pq Qq Sy \&. .
The pool names
.Sy mirror ,
.Sy raidz ,
.Sy spare
and
.Sy log
are reserved, as are names beginning with
.Sy mirror ,
.Sy raidz ,
.Sy spare ,
and the pattern
.Sy c[0-9] .
The
.Ar vdev
specification is described in the
.Sx Virtual Devices
section.
.Pp
The command verifies that each device specified is accessible and not currently
in use by another subsystem.
There are some uses, such as being currently mounted, or specified as the
dedicated dump device, that prevent a device from ever being used by ZFS.
Other uses, such as having a preexisting UFS file system, can be overridden with
the
.Fl f
option.
.Pp
The command also checks that the replication strategy for the pool is
consistent.
An attempt to combine redundant and non-redundant storage in a single pool, or
to mix disks and files, results in an error unless
.Fl f
is specified.
The use of differently sized devices within a single raidz or mirror group is
also flagged as an error unless
.Fl f
is specified.
.Pp
Unless the
.Fl R
option is specified, the default mount point is
.Pa / Ns Ar pool .
The mount point must not exist or must be empty, or else the root dataset
cannot be mounted.
This can be overridden with the
.Fl m
option.
.Pp
By default all supported features are enabled on the new pool unless the
.Fl d
option is specified.
.Bl -tag -width Ds
.It Fl d
Do not enable any features on the new pool.
Individual features can be enabled by setting their corresponding properties to
.Sy enabled
with the
.Fl o
option.
See
.Xr zpool-features 5
for details about feature properties.
.It Fl f
Forces use of
.Ar vdev Ns s ,
even if they appear in use or specify a conflicting replication level.
Not all devices can be overridden in this manner.
.It Fl m Ar mountpoint
Sets the mount point for the root dataset.
The default mount point is
.Pa /pool
or
.Pa altroot/pool
if
.Ar altroot
is specified.
The mount point must be an absolute path,
.Sy legacy ,
or
.Sy none .
For more information on dataset mount points, see
.Xr zfs 8 .
.It Fl n
Displays the configuration that would be used without actually creating the
pool.
The actual pool creation can still fail due to insufficient privileges or
device sharing.
.It Fl o Ar property Ns = Ns Ar value
Sets the given pool properties.
See the
.Sx Properties
section for a list of valid properties that can be set.
.It Fl o Ar feature@feature Ns = Ns Ar value
Sets the given pool feature. See
.Xr zpool-features 5
for a list of valid features that can be set.
Value can be either disabled or enabled.
.It Fl O Ar file-system-property Ns = Ns Ar value
Sets the given file system properties in the root file system of the pool.
See the
.Sx Properties
section of
.Xr zfs 8
for a list of valid properties that can be set.
.It Fl R Ar root
Equivalent to
.Fl o Sy cachefile Ns = Ns Sy none Fl o Sy altroot Ns = Ns Ar root
.It Fl t Ar tname
Sets the in-core pool name to
.Sy tname
while the on-disk name will be the name specified as the pool name
.Sy pool .
This will set the default cachefile property to none. This is intended
to handle name space collisions when creating pools for other systems,
such as virtual machines or physical machines whose pools live on network
block devices.
.El
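.Pp
For example, to create a pool with a specific mount point for its root dataset
.Pq names are illustrative :
.Bd -literal
# zpool create -m /export/pool pool mirror sda sdb
.Ed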
.It Xo
.Nm
.Cm destroy
.Op Fl f
.Ar pool
.Xc
Destroys the given pool, freeing up any devices for other use.
This command tries to unmount any active datasets before destroying the pool.
.Bl -tag -width Ds
.It Fl f
Forces any active datasets contained within the pool to be unmounted.
.El
.It Xo
.Nm
.Cm detach
.Ar pool device
.Xc
Detaches
.Ar device
from a mirror.
The operation is refused if there are no other valid replicas of the data.
If the device may be re-added to the pool later, consider the
.Nm zpool Cm offline
command instead.
.It Xo
.Nm
.Cm events
.Op Fl vHf Oo Ar pool Oc | Fl c
.Xc
Lists all recent events generated by the ZFS kernel modules. These events
are consumed by
.Xr zed 8
and used to automate administrative tasks such as replacing a failed device
with a hot spare. For more information about the subclasses and event payloads
that can be generated see the
.Xr zfs-events 5
man page.
.Bl -tag -width Ds
.It Fl c
Clear all previous events.
.It Fl f
Follow mode.
.It Fl H
Scripted mode. Do not display headers, and separate fields by a
single tab instead of arbitrary space.
.It Fl v
Print the entire payload for each event.
.El
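.Pp
For example, to watch new events and their full payloads as they are generated:
.Bd -literal
# zpool events -vf
.Ed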
.It Xo
.Nm
.Cm export
.Op Fl a
.Op Fl f
.Ar pool Ns ...
.Xc
Exports the given pools from the system.
All devices are marked as exported, but are still considered in use by other
subsystems.
The devices can be moved between systems
.Pq even those of different endianness
and imported as long as a sufficient number of devices are present.
.Pp
Before exporting the pool, all datasets within the pool are unmounted.
A pool cannot be exported if it has a shared spare that is currently being
used.
.Pp
For pools to be portable, you must give the
.Nm
command whole disks, not just partitions, so that ZFS can label the disks with
portable EFI labels.
Otherwise, disk drivers on platforms of different endianness will not recognize
the disks.
.Bl -tag -width Ds
.It Fl a
Exports all pools imported on the system.
.It Fl f
Forcefully unmount all datasets, using the
.Nm zfs Cm unmount Fl f
command.
.Pp
This command will forcefully export the pool even if it has a shared spare that
is currently being used.
This may lead to data corruption.
.El
.It Xo
.Nm
.Cm get
.Op Fl Hp
.Op Fl o Ar field Ns Oo , Ns Ar field Oc Ns ...
.Sy all Ns | Ns Ar property Ns Oo , Ns Ar property Oc Ns ...
.Ar pool Ns ...
.Xc
Retrieves the given list of properties
.Po
or all properties if
.Sy all
is used
.Pc
for the specified storage pool(s).
These properties are displayed with the following fields:
.Bd -literal
        name          Name of storage pool
        property      Property name
        value         Property value
        source        Property source, either 'default' or 'local'.
.Ed
.Pp
See the
.Sx Properties
section for more information on the available pool properties.
.Bl -tag -width Ds
.It Fl H
Scripted mode.
Do not display headers, and separate fields by a single tab instead of arbitrary
space.
.It Fl o Ar field
A comma-separated list of columns to display.
.Sy name Ns \&, Ns Sy property Ns \&, Ns Sy value Ns \&, Ns Sy source
is the default value.
.It Fl p
Display numbers in parsable (exact) values.
.El
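.Pp
For example, to print just the value of the
.Sy capacity
property for use in scripts:
.Bd -literal
# zpool get -H -o value capacity pool
.Ed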
.It Xo
.Nm
.Cm history
.Op Fl il
.Oo Ar pool Oc Ns ...
.Xc
Displays the command history of the specified pool(s) or all pools if no pool is
specified.
.Bl -tag -width Ds
.It Fl i
Displays internally logged ZFS events in addition to user initiated events.
.It Fl l
Displays log records in long format, which in addition to the standard format
includes the user name, the hostname, and the zone in which the operation was
performed.
.El
.It Xo
.Nm
.Cm import
.Op Fl D
.Op Fl d Ar dir Ns | Ns device
.Xc
Lists pools available to import.
If the
.Fl d
option is not specified, this command searches for devices in
.Pa /dev .
The
.Fl d
option can be specified multiple times, and all directories are searched.
If the device appears to be part of an exported pool, this command displays a
summary of the pool with the name of the pool, a numeric identifier, as well as
the vdev layout and current health of the device for each device or file.
Destroyed pools, pools that were previously destroyed with the
.Nm zpool Cm destroy
command, are not listed unless the
.Fl D
option is specified.
.Pp
The numeric identifier is unique, and can be used instead of the pool name when
multiple exported pools of the same name are available.
.Bl -tag -width Ds
.It Fl c Ar cachefile
Reads configuration from the given
.Ar cachefile
that was created with the
.Sy cachefile
pool property.
This
.Ar cachefile
is used instead of searching for devices.
.It Fl d Ar dir Ns | Ns Ar device
Uses
.Ar device
or searches for devices or files in
.Ar dir .
The
.Fl d
option can be specified multiple times.
.It Fl D
Lists destroyed pools only.
.El
.It Xo
.Nm
.Cm import
.Fl a
.Op Fl DflmN
.Op Fl F Oo Fl n Oc Oo Fl T Oc Oo Fl X Oc
.Op Fl c Ar cachefile Ns | Ns Fl d Ar dir Ns | Ns device
.Op Fl o Ar mntopts
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Op Fl R Ar root
.Op Fl s
.Xc
Imports all pools found in the search directories.
Identical to the previous command, except that all pools with a sufficient
number of devices available are imported.
Destroyed pools, pools that were previously destroyed with the
.Nm zpool Cm destroy
command, will not be imported unless the
.Fl D
option is specified.
.Bl -tag -width Ds
.It Fl a
Searches for and imports all pools found.
.It Fl c Ar cachefile
Reads configuration from the given
.Ar cachefile
that was created with the
.Sy cachefile
pool property.
This
.Ar cachefile
is used instead of searching for devices.
.It Fl d Ar dir Ns | Ns Ar device
Uses
.Ar device
or searches for devices or files in
.Ar dir .
The
.Fl d
option can be specified multiple times.
This option is incompatible with the
.Fl c
option.
.It Fl D
Imports destroyed pools only.
The
.Fl f
option is also required.
.It Fl f
Forces import, even if the pool appears to be potentially active.
.It Fl F
Recovery mode for a non-importable pool.
Attempt to return the pool to an importable state by discarding the last few
transactions.
Not all damaged pools can be recovered by using this option.
If successful, the data from the discarded transactions is irretrievably lost.
This option is ignored if the pool is importable or already imported.
.It Fl l
Indicates that this command will request encryption keys for all encrypted
datasets it attempts to mount as it is bringing the pool online. Note that if
any datasets have a
.Sy keylocation
of
.Sy prompt
this command will block waiting for the keys to be entered. Without this flag,
encrypted datasets will be left unavailable until the keys are loaded.
.It Fl m
Allows a pool to import when there is a missing log device.
Recent transactions can be lost because the log device will be discarded.
.It Fl n
Used with the
.Fl F
recovery option.
Determines whether a non-importable pool can be made importable again, but does
not actually perform the pool recovery.
For more details about pool recovery mode, see the
.Fl F
option, above.
.It Fl N
Import the pool without mounting any file systems.
.It Fl o Ar mntopts
Comma-separated list of mount options to use when mounting datasets within the
pool.
See
.Xr zfs 8
for a description of dataset properties and mount options.
.It Fl o Ar property Ns = Ns Ar value
Sets the specified property on the imported pool.
See the
.Sx Properties
section for more information on the available pool properties.
.It Fl R Ar root
Sets the
.Sy cachefile
property to
.Sy none
and the
.Sy altroot
property to
.Ar root .
.It Fl s
Scan using the default search path; the libblkid cache will not be
consulted. A custom search path may be specified by setting the
ZPOOL_IMPORT_PATH environment variable.
.It Fl X
Used with the
.Fl F
recovery option. Determines whether extreme
measures to find a valid txg should take place. This allows the pool to
be rolled back to a txg which is no longer guaranteed to be consistent.
Pools imported at an inconsistent txg may contain uncorrectable
checksum errors. For more details about pool recovery mode, see the
.Fl F
option, above. WARNING: This option can be extremely hazardous to the
health of your pool and should only be used as a last resort.
.It Fl T
Specify the txg to use for rollback. Implies
.Fl FX .
For more details
about pool recovery mode, see the
.Fl X
option, above. WARNING: This option can be extremely hazardous to the
health of your pool and should only be used as a last resort.
.El
.It Xo
.Nm
.Cm import
.Op Fl Dflm
.Op Fl F Oo Fl n Oc Oo Fl T Oc Oo Fl X Oc
.Op Fl c Ar cachefile Ns | Ns Fl d Ar dir Ns | Ns device
.Op Fl o Ar mntopts
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Op Fl R Ar root
.Op Fl s
.Ar pool Ns | Ns Ar id
.Op Ar newpool Oo Fl t Oc
.Xc
Imports a specific pool.
A pool can be identified by its name or the numeric identifier.
If
.Ar newpool
is specified, the pool is imported using the name
.Ar newpool .
Otherwise, it is imported with the same name as its exported name.
.Pp
If a device is removed from a system without running
.Nm zpool Cm export
first, the device appears as potentially active.
It cannot be determined if this was a failed export, or whether the device is
really in use from another host.
To import a pool in this state, the
.Fl f
option is required.
.Bl -tag -width Ds
.It Fl c Ar cachefile
Reads configuration from the given
.Ar cachefile
that was created with the
.Sy cachefile
pool property.
This
.Ar cachefile
is used instead of searching for devices.
.It Fl d Ar dir Ns | Ns Ar device
Uses
.Ar device
or searches for devices or files in
.Ar dir .
The
.Fl d
option can be specified multiple times.
This option is incompatible with the
.Fl c
option.
.It Fl D
Imports a destroyed pool.
The
.Fl f
option is also required.
.It Fl f
Forces import, even if the pool appears to be potentially active.
.It Fl F
Recovery mode for a non-importable pool.
Attempt to return the pool to an importable state by discarding the last few
transactions.
Not all damaged pools can be recovered by using this option.
If successful, the data from the discarded transactions is irretrievably lost.
This option is ignored if the pool is importable or already imported.
.It Fl l
Indicates that this command will request encryption keys for all encrypted
datasets it attempts to mount as it is bringing the pool online. Note that if
any datasets have a
.Sy keylocation
of
.Sy prompt
this command will block waiting for the keys to be entered. Without this flag,
encrypted datasets will be left unavailable until the keys are loaded.
.It Fl m
Allows a pool to import when there is a missing log device.
Recent transactions can be lost because the log device will be discarded.
.It Fl n
Used with the
.Fl F
recovery option.
Determines whether a non-importable pool can be made importable again, but does
not actually perform the pool recovery.
For more details about pool recovery mode, see the
.Fl F
option, above.
.It Fl o Ar mntopts
Comma-separated list of mount options to use when mounting datasets within the
pool.
See
.Xr zfs 8
for a description of dataset properties and mount options.
.It Fl o Ar property Ns = Ns Ar value
Sets the specified property on the imported pool.
See the
.Sx Properties
section for more information on the available pool properties.
.It Fl R Ar root
Sets the
.Sy cachefile
property to
.Sy none
and the
.Sy altroot
property to
.Ar root .
.It Fl s
Scan using the default search path; the libblkid cache will not be
consulted. A custom search path may be specified by setting the
ZPOOL_IMPORT_PATH environment variable.
.It Fl X
Used with the
.Fl F
recovery option. Determines whether extreme
measures to find a valid txg should take place. This allows the pool to
be rolled back to a txg which is no longer guaranteed to be consistent.
Pools imported at an inconsistent txg may contain uncorrectable
checksum errors. For more details about pool recovery mode, see the
.Fl F
option, above. WARNING: This option can be extremely hazardous to the
health of your pool and should only be used as a last resort.
.It Fl T
Specify the txg to use for rollback. Implies
.Fl FX .
For more details
about pool recovery mode, see the
.Fl X
option, above. WARNING: This option can be extremely hazardous to the
health of your pool and should only be used as a last resort.
.It Fl t
Used with
.Sy newpool .
Specifies that
.Sy newpool
is temporary. Temporary pool names last until export. Ensures that
the original pool name will be used in all label updates and therefore
is retained upon export.
Will also set
.Fl o Sy cachefile Ns = Ns Sy none
when not explicitly specified.
.El
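.Pp
For example, to search a specific directory and rename the pool on import
.Pq names are illustrative :
.Bd -literal
# zpool import -d /dev/disk/by-id pool newpool
.Ed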
.It Xo
.Nm
.Cm iostat
.Op Oo Oo Fl c Ar SCRIPT Oc Oo Fl lq Oc Oc Ns | Ns Fl rw
.Op Fl T Sy u Ns | Ns Sy d
.Op Fl ghHLpPvy
.Oo Oo Ar pool Ns ... Oc Ns | Ns Oo Ar pool vdev Ns ... Oc Ns | Ns Oo Ar vdev Ns ... Oc Oc
.Op Ar interval Op Ar count
.Xc
Displays I/O statistics for the given pools/vdevs. You can pass in a
list of pools, a pool and list of vdevs in that pool, or a list of any
vdevs from any pool. If no items are specified, statistics for every
pool in the system are shown.
When given an
.Ar interval ,
the statistics are printed every
.Ar interval
seconds until ^C is pressed. If
.Ar count
is specified, the command exits after
.Ar count
reports are printed. The first report printed is always
the statistics since boot regardless of whether
.Ar interval
and
.Ar count
are passed. However, this behavior can be suppressed with the
.Fl y
flag. Also note that the units of
.Sy K ,
.Sy M ,
.Sy G ...
that are printed in the report are in base 1024. To get the raw
values, use the
.Fl p
flag.
.Bl -tag -width Ds
.It Fl c Op Ar SCRIPT1 Ns Oo , Ns Ar SCRIPT2 Oc Ns ...
Run a script (or scripts) on each vdev and include the output as a new column
in the
.Nm zpool Cm iostat
output. Users can run any script found in their
.Pa ~/.zpool.d
directory or from the system
.Pa /etc/zfs/zpool.d
directory. Script names containing the slash (/) character are not allowed.
The default search path can be overridden by setting the
ZPOOL_SCRIPTS_PATH environment variable. A privileged user can run
.Fl c
if they have the ZPOOL_SCRIPTS_AS_ROOT
environment variable set. If a script requires the use of a privileged
command, like
.Xr smartctl 8 ,
then it's recommended you allow the user access to it in
.Pa /etc/sudoers
or add the user to the
.Pa /etc/sudoers.d/zfs
file.
.Pp
If
.Fl c
is passed without a script name, it prints a list of all scripts.
.Fl c
also sets verbose mode
.No \&( Ns Fl v Ns No \&).
.Pp
Script output should be in the form of "name=value". The column name is
set to "name" and the value is set to "value". Multiple lines can be
used to output multiple columns. The first line of output not in the
"name=value" format is displayed without a column title, and no more
output after that is displayed. This can be useful for printing error
messages. Blank or NULL values are printed as a '-' to make output
awk-able.
.Pp
The following environment variables are set before running each script:
.Bl -tag -width "VDEV_PATH"
.It Sy VDEV_PATH
Full path to the vdev
.El
.Bl -tag -width "VDEV_UPATH"
.It Sy VDEV_UPATH
Underlying path to the vdev (/dev/sd*). For use with device mapper,
multipath, or partitioned vdevs.
.El
.Bl -tag -width "VDEV_ENC_SYSFS_PATH"
.It Sy VDEV_ENC_SYSFS_PATH
The sysfs path to the enclosure for the vdev (if any).
.El
.It Fl T Sy u Ns | Ns Sy d
Display a time stamp.
Specify
.Sy u
for a printed representation of the internal representation of time.
See
.Xr time 2 .
Specify
.Sy d
for standard date format.
See
.Xr date 1 .
.It Fl g
Display vdev GUIDs instead of the normal device names. These GUIDs
can be used in place of device names for the zpool
detach/offline/remove/replace commands.
.It Fl H
Scripted mode. Do not display headers, and separate fields by a
single tab instead of arbitrary space.
.It Fl L
Display real paths for vdevs resolving all symbolic links. This can
be used to look up the current block device name regardless of the
.Pa /dev/disk/
path used to open it.
.It Fl p
Display numbers in parsable (exact) values. Time values are in
nanoseconds.
.It Fl P
Display full paths for vdevs instead of only the last component of
the path. This can be used in conjunction with the
.Fl L
flag.
.It Fl r
Print request size histograms for the leaf ZIOs. This includes
histograms of individual ZIOs (
.Ar ind )
and aggregate ZIOs (
.Ar agg ).
These stats can be useful for seeing how well the ZFS IO aggregator is
working. Do not confuse these request size stats with the block layer
requests; it's possible ZIOs can be broken up before being sent to the
block device.
.It Fl v
Verbose statistics. Reports usage statistics for individual vdevs within the
1584 pool, in addition to the pool-wide statistics.
1585 .It Fl y
1586 Omit statistics since boot.
1587 Normally the first line of output reports the statistics since boot.
1588 This option suppresses that first line of output.
1589 .It Fl w
1590 Display latency histograms:
1591 .Pp
1592 .Ar total_wait :
1593 Total IO time (queuing + disk IO time).
1594 .Ar disk_wait :
1595 Disk IO time (time reading/writing the disk).
1596 .Ar syncq_wait :
1597 Amount of time IO spent in synchronous priority queues. Does not include
1598 disk time.
1599 .Ar asyncq_wait :
1600 Amount of time IO spent in asynchronous priority queues. Does not include
1601 disk time.
1602 .Ar scrub :
1603 Amount of time IO spent in scrub queue. Does not include disk time.
1604 .It Fl l
1605 Include average latency statistics:
1606 .Pp
1607 .Ar total_wait :
1608 Average total IO time (queuing + disk IO time).
1609 .Ar disk_wait :
1610 Average disk IO time (time reading/writing the disk).
1611 .Ar syncq_wait :
1612 Average amount of time IO spent in synchronous priority queues. Does
1613 not include disk time.
1614 .Ar asyncq_wait :
1615 Average amount of time IO spent in asynchronous priority queues.
1616 Does not include disk time.
1617 .Ar scrub :
1618 Average queuing time in scrub queue. Does not include disk time.
1619 .It Fl q
1620 Include active queue statistics. Each priority queue has both
1621 pending (
1622 .Ar pend )
1623 and active (
1624 .Ar activ )
1625 IOs. Pending IOs are waiting to
1626 be issued to the disk, and active IOs have been issued to disk and are
1627 waiting for completion. These stats are broken out by priority queue:
1628 .Pp
1629 .Ar syncq_read/write :
1630 Current number of entries in synchronous priority
1631 queues.
1632 .Ar asyncq_read/write :
1633 Current number of entries in asynchronous priority queues.
1634 .Ar scrubq_read :
1635 Current number of entries in scrub queue.
1636 .Pp
1637 All queue statistics are instantaneous measurements of the number of
1638 entries in the queues. If you specify an interval, the measurements
1639 will be sampled from the end of the interval.
1640 .El
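.Pp
For example, to watch average latencies and queue depths for a pool every five
seconds (the pool name is illustrative):
.Bd -literal
# zpool iostat -lq tank 5
.Ed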
1641 .It Xo
1642 .Nm
1643 .Cm labelclear
1644 .Op Fl f
1645 .Ar device
1646 .Xc
1647 Removes ZFS label information from the specified
1648 .Ar device .
1649 The
1650 .Ar device
1651 must not be part of an active pool configuration.
1652 .Bl -tag -width Ds
1653 .It Fl f
1654 Treat exported or foreign devices as inactive.
1655 .El
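.Pp
For example, to clear stale labels from a disk that belonged to an exported
pool (the device name is illustrative):
.Bd -literal
# zpool labelclear -f /dev/sdb
.Ed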
1656 .It Xo
1657 .Nm
1658 .Cm list
1659 .Op Fl HgLpPv
1660 .Op Fl o Ar property Ns Oo , Ns Ar property Oc Ns ...
1661 .Op Fl T Sy u Ns | Ns Sy d
1662 .Oo Ar pool Oc Ns ...
1663 .Op Ar interval Op Ar count
1664 .Xc
1665 Lists the given pools along with a health status and space usage.
1666 If no
1667 .Ar pool Ns s
1668 are specified, all pools in the system are listed.
1669 When given an
1670 .Ar interval ,
1671 the information is printed every
1672 .Ar interval
1673 seconds until ^C is pressed.
1674 If
1675 .Ar count
1676 is specified, the command exits after
1677 .Ar count
1678 reports are printed.
1679 .Bl -tag -width Ds
1680 .It Fl g
1681 Display vdev GUIDs instead of the normal device names. These GUIDs
1682 can be used in place of device names for the zpool
1683 detach/offline/remove/replace commands.
1684 .It Fl H
1685 Scripted mode.
1686 Do not display headers, and separate fields by a single tab instead of arbitrary
1687 space.
1688 .It Fl o Ar property
1689 Comma-separated list of properties to display.
1690 See the
1691 .Sx Properties
1692 section for a list of valid properties.
1693 The default list is
1694 .Cm name , size , allocated , free , expandsize , fragmentation , capacity ,
1695 .Cm dedupratio , health , altroot .
1696 .It Fl L
Display real paths for vdevs resolving all symbolic links.
This can be used to look up the current block device name regardless of the
.Pa /dev/disk/
path used to open it.
1700 .It Fl p
1701 Display numbers in parsable
1702 .Pq exact
1703 values.
1704 .It Fl P
1705 Display full paths for vdevs instead of only the last component of
1706 the path. This can be used in conjunction with the
1707 .Fl L
1708 flag.
1709 .It Fl T Sy u Ns | Ns Sy d
1710 Display a time stamp.
1711 Specify
.Sy u
1713 for a printed representation of the internal representation of time.
1714 See
1715 .Xr time 2 .
1716 Specify
.Sy d
1718 for standard date format.
1719 See
1720 .Xr date 1 .
1721 .It Fl v
1722 Verbose statistics.
Reports usage statistics for individual vdevs within the pool, in addition to
the pool-wide statistics.
1725 .El
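.Pp
For example, to print only pool names and health in a script-friendly form
(the pool name is illustrative):
.Bd -literal
# zpool list -H -o name,health tank
.Ed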
1726 .It Xo
1727 .Nm
1728 .Cm offline
1729 .Op Fl f
1730 .Op Fl t
1731 .Ar pool Ar device Ns ...
1732 .Xc
1733 Takes the specified physical device offline.
1734 While the
1735 .Ar device
1736 is offline, no attempt is made to read or write to the device.
1737 This command is not applicable to spares.
1738 .Bl -tag -width Ds
1739 .It Fl f
1740 Force fault. Instead of offlining the disk, put it into a faulted
1741 state. The fault will persist across imports unless the
1742 .Fl t
1743 flag was specified.
1744 .It Fl t
1745 Temporary.
1746 Upon reboot, the specified physical device reverts to its previous state.
1747 .El
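.Pp
For example, to take a disk offline only until the next reboot (pool and
device names are illustrative):
.Bd -literal
# zpool offline -t tank sda
.Ed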
1748 .It Xo
1749 .Nm
1750 .Cm online
1751 .Op Fl e
1752 .Ar pool Ar device Ns ...
1753 .Xc
1754 Brings the specified physical device online.
1755 This command is not applicable to spares.
1756 .Bl -tag -width Ds
1757 .It Fl e
1758 Expand the device to use all available space.
1759 If the device is part of a mirror or raidz then all devices must be expanded
1760 before the new space will become available to the pool.
1761 .El
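.Pp
For example, to bring a disk back online and expand it to use all available
space (pool and device names are illustrative):
.Bd -literal
# zpool online -e tank sda
.Ed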
1762 .It Xo
1763 .Nm
1764 .Cm reguid
1765 .Ar pool
1766 .Xc
1767 Generates a new unique identifier for the pool.
1768 You must ensure that all devices in this pool are online and healthy before
1769 performing this action.
1770 .It Xo
1771 .Nm
1772 .Cm reopen
1773 .Op Fl n
1774 .Ar pool
1775 .Xc
1776 Reopen all the vdevs associated with the pool.
1777 .Bl -tag -width Ds
1778 .It Fl n
1779 Do not restart an in-progress scrub operation. This is not recommended and can
1780 result in partially resilvered devices unless a second scrub is performed.
1781 .El
1782 .It Xo
1783 .Nm
1784 .Cm remove
1785 .Op Fl np
1786 .Ar pool Ar device Ns ...
1787 .Xc
1788 Removes the specified device from the pool.
This command currently only supports removing hot spares, cache, log
devices, and mirrored top-level vdevs (mirrors of leaf devices), but not
raidz vdevs.
.Pp
1792 Removing a top-level vdev reduces the total amount of space in the storage pool.
1793 The specified device will be evacuated by copying all allocated space from it to
1794 the other devices in the pool.
1795 In this case, the
1796 .Nm zpool Cm remove
1797 command initiates the removal and returns, while the evacuation continues in
1798 the background.
1799 The removal progress can be monitored with
.Nm zpool Cm status .
This feature must be enabled to be used; see
.Xr zpool-features 5 .
1803 .Pp
A mirrored top-level device (log or data) can be removed by specifying the
top-level mirror itself.
Individual devices that are part of a mirrored configuration can be removed
using the
1808 .Nm zpool Cm detach
1809 command.
1810 .Bl -tag -width Ds
1811 .It Fl n
1812 Do not actually perform the removal ("no-op").
1813 Instead, print the estimated amount of memory that will be used by the
1814 mapping table after the removal completes.
1815 This is nonzero only for top-level vdevs.
1818 .It Fl p
1819 Used in conjunction with the
1820 .Fl n
1821 flag, displays numbers as parsable (exact) values.
1822 .El
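.Pp
For example, to estimate the mapping-table memory cost of removing a
top-level mirror without actually performing the removal (names are
illustrative):
.Bd -literal
# zpool remove -np tank mirror-1
.Ed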
1823 .It Xo
1824 .Nm
1825 .Cm remove
1826 .Fl s
1827 .Ar pool
1828 .Xc
1829 Stops and cancels an in-progress removal of a top-level vdev.
1830 .It Xo
1831 .Nm
1832 .Cm replace
1833 .Op Fl f
1834 .Op Fl o Ar property Ns = Ns Ar value
1835 .Ar pool Ar device Op Ar new_device
1836 .Xc
Replaces
.Ar device
with
.Ar new_device .
1841 This is equivalent to attaching
1842 .Ar new_device ,
waiting for it to resilver, and then detaching
.Ar device .
1845 .Pp
1846 The size of
1847 .Ar new_device
1848 must be greater than or equal to the minimum size of all the devices in a mirror
1849 or raidz configuration.
1850 .Pp
1851 .Ar new_device
1852 is required if the pool is not redundant.
If
.Ar new_device
is not specified, it defaults to
.Ar device .
1857 This form of replacement is useful after an existing disk has failed and has
1858 been physically replaced.
1859 In this case, the new disk may have the same
1860 .Pa /dev
1861 path as the old device, even though it is actually a different disk.
1862 ZFS recognizes this.
1863 .Bl -tag -width Ds
1864 .It Fl f
1865 Forces use of
1866 .Ar new_device ,
even if it appears to be in use.
1868 Not all devices can be overridden in this manner.
1869 .It Fl o Ar property Ns = Ns Ar value
1870 Sets the given pool properties. See the
1871 .Sx Properties
1872 section for a list of valid properties that can be set.
1873 The only property supported at the moment is
1874 .Sy ashift .
1875 .El
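.Pp
For example, to replace a failed disk with a new one while forcing a
4096-byte sector size (pool, device, and property values are illustrative):
.Bd -literal
# zpool replace -o ashift=12 tank sda sdd
.Ed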
1876 .It Xo
1877 .Nm
1878 .Cm scrub
1879 .Op Fl s | Fl p
1880 .Ar pool Ns ...
1881 .Xc
1882 Begins a scrub or resumes a paused scrub.
1883 The scrub examines all data in the specified pools to verify that it checksums
1884 correctly.
1885 For replicated
1886 .Pq mirror or raidz
1887 devices, ZFS automatically repairs any damage discovered during the scrub.
1888 The
1889 .Nm zpool Cm status
1890 command reports the progress of the scrub and summarizes the results of the
1891 scrub upon completion.
1892 .Pp
1893 Scrubbing and resilvering are very similar operations.
1894 The difference is that resilvering only examines data that ZFS knows to be out
1895 of date
1896 .Po
1897 for example, when attaching a new device to a mirror or replacing an existing
1898 device
1899 .Pc ,
1900 whereas scrubbing examines all data to discover silent errors due to hardware
1901 faults or disk failure.
1902 .Pp
1903 Because scrubbing and resilvering are I/O-intensive operations, ZFS only allows
1904 one at a time.
1905 If a scrub is paused, the
1906 .Nm zpool Cm scrub
command resumes it.
1908 If a resilver is in progress, ZFS does not allow a scrub to be started until the
1909 resilver completes.
1910 .Bl -tag -width Ds
1911 .It Fl s
1912 Stop scrubbing.
1915 .It Fl p
1916 Pause scrubbing.
1917 Scrub pause state and progress are periodically synced to disk.
If the system is restarted or the pool is exported during a paused scrub,
the scrub remains paused, even after import, until it is resumed.
Once resumed, the scrub picks up from the place where it was last
checkpointed to disk.
To resume a paused scrub, issue
1923 .Nm zpool Cm scrub
1924 again.
1925 .El
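.Pp
For example, to start a scrub, pause it, and later resume it from its
checkpoint (the pool name is illustrative):
.Bd -literal
# zpool scrub tank
# zpool scrub -p tank
# zpool scrub tank
.Ed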
1926 .It Xo
1927 .Nm
1928 .Cm set
1929 .Ar property Ns = Ns Ar value
1930 .Ar pool
1931 .Xc
1932 Sets the given property on the specified pool.
1933 See the
1934 .Sx Properties
1935 section for more information on what properties can be set and acceptable
1936 values.
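.Pp
For example, to enable automatic expansion on a pool (the pool name is
illustrative):
.Bd -literal
# zpool set autoexpand=on tank
.Ed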
1937 .It Xo
1938 .Nm
1939 .Cm split
1940 .Op Fl gLlnP
1941 .Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
1942 .Op Fl R Ar root
1943 .Ar pool newpool
1944 .Op Ar device ...
1945 .Xc
1946 Splits devices off
.Ar pool ,
1948 creating
1949 .Ar newpool .
1950 All vdevs in
1951 .Ar pool
1952 must be mirrors and the pool must not be in the process of resilvering.
1953 At the time of the split,
1954 .Ar newpool
1955 will be a replica of
1956 .Ar pool .
1957 By default, the
1958 last device in each mirror is split from
1959 .Ar pool
1960 to create
1961 .Ar newpool .
1962 .Pp
The optional device specification causes the specified device(s) to be
included in the new
.Ar newpool .
For any mirror left unspecified, the last device in that mirror is used,
as it would be by default.
1968 .Bl -tag -width Ds
1969 .It Fl g
1970 Display vdev GUIDs instead of the normal device names. These GUIDs
1971 can be used in place of device names for the zpool
1972 detach/offline/remove/replace commands.
1973 .It Fl L
1974 Display real paths for vdevs resolving all symbolic links. This can
1975 be used to look up the current block device name regardless of the
1976 .Pa /dev/disk/
1977 path used to open it.
1978 .It Fl l
1979 Indicates that this command will request encryption keys for all encrypted
1980 datasets it attempts to mount as it is bringing the new pool online. Note that
1981 if any datasets have a
1982 .Sy keylocation
1983 of
.Sy prompt ,
this command will block waiting for the keys to be entered.
Without this flag, encrypted datasets will be left unavailable until the keys
are loaded.
1987 .It Fl n
Do a dry run; do not actually perform the split.
1989 Print out the expected configuration of
1990 .Ar newpool .
1991 .It Fl P
1992 Display full paths for vdevs instead of only the last component of
1993 the path. This can be used in conjunction with the
1994 .Fl L
1995 flag.
1996 .It Fl o Ar property Ns = Ns Ar value
1997 Sets the specified property for
1998 .Ar newpool .
1999 See the
2000 .Sx Properties
2001 section for more information on the available pool properties.
2002 .It Fl R Ar root
2003 Set
2004 .Sy altroot
2005 for
2006 .Ar newpool
2007 to
2008 .Ar root
2009 and automatically import it.
2010 .El
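.Pp
For example, to split the last device of each mirror into a new pool and
import it under an alternate root (pool names and root path are
illustrative):
.Bd -literal
# zpool split -R /mnt tank newtank
.Ed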
2011 .It Xo
2012 .Nm
2013 .Cm status
2014 .Op Fl c Op Ar SCRIPT1 Ns Oo , Ns Ar SCRIPT2 Oc Ns ...
2015 .Op Fl gLPvxD
2016 .Op Fl T Sy u Ns | Ns Sy d
2017 .Oo Ar pool Oc Ns ...
2018 .Op Ar interval Op Ar count
2019 .Xc
2020 Displays the detailed health status for the given pools.
2021 If no
2022 .Ar pool
2023 is specified, then the status of each pool in the system is displayed.
2024 For more information on pool and device health, see the
2025 .Sx Device Failure and Recovery
2026 section.
2027 .Pp
2028 If a scrub or resilver is in progress, this command reports the percentage done
2029 and the estimated time to completion.
2030 Both of these are only approximate, because the amount of data in the pool and
2031 the other workloads on the system can change.
2032 .Bl -tag -width Ds
2033 .It Fl c Op Ar SCRIPT1 Ns Oo , Ns Ar SCRIPT2 Oc Ns ...
2034 Run a script (or scripts) on each vdev and include the output as a new column
2035 in the
2036 .Nm zpool Cm status
2037 output. See the
2038 .Fl c
2039 option of
2040 .Nm zpool Cm iostat
2041 for complete details.
2042 .It Fl g
2043 Display vdev GUIDs instead of the normal device names. These GUIDs
2044 can be used in place of device names for the zpool
2045 detach/offline/remove/replace commands.
2046 .It Fl L
2047 Display real paths for vdevs resolving all symbolic links. This can
2048 be used to look up the current block device name regardless of the
2049 .Pa /dev/disk/
2050 path used to open it.
2051 .It Fl P
2052 Display full paths for vdevs instead of only the last component of
2053 the path. This can be used in conjunction with the
2054 .Fl L
2055 flag.
2056 .It Fl D
2057 Display a histogram of deduplication statistics, showing the allocated
2058 .Pq physically present on disk
2059 and referenced
2060 .Pq logically referenced in the pool
2061 block counts and sizes by reference count.
2062 .It Fl T Sy u Ns | Ns Sy d
2063 Display a time stamp.
2064 Specify
.Sy u
2066 for a printed representation of the internal representation of time.
2067 See
2068 .Xr time 2 .
2069 Specify
.Sy d
2071 for standard date format.
2072 See
2073 .Xr date 1 .
2074 .It Fl v
2075 Displays verbose data error information, printing out a complete list of all
2076 data errors since the last complete pool scrub.
2077 .It Fl x
2078 Only display status for pools that are exhibiting errors or are otherwise
2079 unavailable.
2080 Warnings about pools not using the latest on-disk format will not be included.
2081 .El
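.Pp
For example, to report only pools with problems, along with verbose error
information:
.Bd -literal
# zpool status -xv
.Ed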
2082 .It Xo
2083 .Nm
2084 .Cm sync
2085 .Op Ar pool ...
2086 .Xc
This command forces all in-core dirty data to be written to the primary
pool storage and not the ZIL.
It will also update administrative information, including quota reporting.
Without arguments,
.Nm zpool Cm sync
will sync all pools on the system.
Otherwise, it will sync only the specified pool(s).
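.Pp
For example, to sync a single pool (the pool name is illustrative):
.Bd -literal
# zpool sync tank
.Ed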
2093 .It Xo
2094 .Nm
2095 .Cm upgrade
2096 .Xc
2097 Displays pools which do not have all supported features enabled and pools
2098 formatted using a legacy ZFS version number.
2099 These pools can continue to be used, but some features may not be available.
2100 Use
2101 .Nm zpool Cm upgrade Fl a
2102 to enable all features on all pools.
2103 .It Xo
2104 .Nm
2105 .Cm upgrade
2106 .Fl v
2107 .Xc
2108 Displays legacy ZFS versions supported by the current software.
2109 See
2110 .Xr zpool-features 5
for a description of the feature flags supported by the current software.
2112 .It Xo
2113 .Nm
2114 .Cm upgrade
2115 .Op Fl V Ar version
2116 .Fl a Ns | Ns Ar pool Ns ...
2117 .Xc
2118 Enables all supported features on the given pool.
2119 Once this is done, the pool will no longer be accessible on systems that do not
2120 support feature flags.
2121 See
.Xr zpool-features 5
2123 for details on compatibility with systems that support feature flags, but do not
2124 support all features enabled on the pool.
2125 .Bl -tag -width Ds
2126 .It Fl a
2127 Enables all supported features on all pools.
2128 .It Fl V Ar version
2129 Upgrade to the specified legacy version.
2130 If the
2131 .Fl V
2132 flag is specified, no features will be enabled on the pool.
2133 This option can only be used to increase the version number up to the last
2134 supported legacy version number.
2135 .El
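.Pp
For example, to enable all supported features on a single pool (the pool name
is illustrative):
.Bd -literal
# zpool upgrade tank
.Ed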
2136 .El
2137 .Sh EXIT STATUS
2138 The following exit values are returned:
2139 .Bl -tag -width Ds
2140 .It Sy 0
2141 Successful completion.
2142 .It Sy 1
2143 An error occurred.
2144 .It Sy 2
2145 Invalid command line options were specified.
2146 .El
2147 .Sh EXAMPLES
2148 .Bl -tag -width Ds
2149 .It Sy Example 1 No Creating a RAID-Z Storage Pool
2150 The following command creates a pool with a single raidz root vdev that
2151 consists of six disks.
2152 .Bd -literal
2153 # zpool create tank raidz sda sdb sdc sdd sde sdf
2154 .Ed
2155 .It Sy Example 2 No Creating a Mirrored Storage Pool
2156 The following command creates a pool with two mirrors, where each mirror
2157 contains two disks.
2158 .Bd -literal
2159 # zpool create tank mirror sda sdb mirror sdc sdd
2160 .Ed
2161 .It Sy Example 3 No Creating a ZFS Storage Pool by Using Partitions
2162 The following command creates an unmirrored pool using two disk partitions.
2163 .Bd -literal
2164 # zpool create tank sda1 sdb2
2165 .Ed
2166 .It Sy Example 4 No Creating a ZFS Storage Pool by Using Files
2167 The following command creates an unmirrored pool using files.
2168 While not recommended, a pool based on files can be useful for experimental
2169 purposes.
2170 .Bd -literal
2171 # zpool create tank /path/to/file/a /path/to/file/b
2172 .Ed
2173 .It Sy Example 5 No Adding a Mirror to a ZFS Storage Pool
2174 The following command adds two mirrored disks to the pool
2175 .Em tank ,
2176 assuming the pool is already made up of two-way mirrors.
2177 The additional space is immediately available to any datasets within the pool.
2178 .Bd -literal
2179 # zpool add tank mirror sda sdb
2180 .Ed
2181 .It Sy Example 6 No Listing Available ZFS Storage Pools
2182 The following command lists all available pools on the system.
2183 In this case, the pool
2184 .Em zion
2185 is faulted due to a missing device.
2186 The results from this command are similar to the following:
2187 .Bd -literal
2188 # zpool list
NAME    SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
rpool  19.9G  8.43G  11.4G         -    33%    42%  1.00x  ONLINE  -
tank   61.5G  20.0G  41.5G         -    48%    32%  1.00x  ONLINE  -
zion       -      -      -         -      -      -      -  FAULTED -
2193 .Ed
2194 .It Sy Example 7 No Destroying a ZFS Storage Pool
2195 The following command destroys the pool
2196 .Em tank
2197 and any datasets contained within.
2198 .Bd -literal
2199 # zpool destroy -f tank
2200 .Ed
2201 .It Sy Example 8 No Exporting a ZFS Storage Pool
2202 The following command exports the devices in pool
2203 .Em tank
2204 so that they can be relocated or later imported.
2205 .Bd -literal
2206 # zpool export tank
2207 .Ed
2208 .It Sy Example 9 No Importing a ZFS Storage Pool
2209 The following command displays available pools, and then imports the pool
2210 .Em tank
2211 for use on the system.
2212 The results from this command are similar to the following:
2213 .Bd -literal
2214 # zpool import
  pool: tank
    id: 15451357997522795478
 state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:

        tank        ONLINE
          mirror    ONLINE
            sda     ONLINE
            sdb     ONLINE
2225
2226 # zpool import tank
2227 .Ed
2228 .It Sy Example 10 No Upgrading All ZFS Storage Pools to the Current Version
The following command upgrades all ZFS storage pools to the current version of
the software.
2231 .Bd -literal
2232 # zpool upgrade -a
2233 This system is currently running ZFS version 2.
2234 .Ed
2235 .It Sy Example 11 No Managing Hot Spares
2236 The following command creates a new pool with an available hot spare:
2237 .Bd -literal
2238 # zpool create tank mirror sda sdb spare sdc
2239 .Ed
2240 .Pp
2241 If one of the disks were to fail, the pool would be reduced to the degraded
2242 state.
2243 The failed device can be replaced using the following command:
2244 .Bd -literal
2245 # zpool replace tank sda sdd
2246 .Ed
2247 .Pp
2248 Once the data has been resilvered, the spare is automatically removed and is
2249 made available for use should another device fail.
2250 The hot spare can be permanently removed from the pool using the following
2251 command:
2252 .Bd -literal
2253 # zpool remove tank sdc
2254 .Ed
2255 .It Sy Example 12 No Creating a ZFS Pool with Mirrored Separate Intent Logs
The following command creates a ZFS storage pool consisting of two two-way
mirrors and mirrored log devices:
2258 .Bd -literal
2259 # zpool create pool mirror sda sdb mirror sdc sdd log mirror \\
2260 sde sdf
2261 .Ed
2262 .It Sy Example 13 No Adding Cache Devices to a ZFS Pool
2263 The following command adds two disks for use as cache devices to a ZFS storage
2264 pool:
2265 .Bd -literal
2266 # zpool add pool cache sdc sdd
2267 .Ed
2268 .Pp
2269 Once added, the cache devices gradually fill with content from main memory.
2270 Depending on the size of your cache devices, it could take over an hour for
2271 them to fill.
Capacity and reads can be monitored using the
.Cm iostat
subcommand as follows:
2275 .Bd -literal
2276 # zpool iostat -v pool 5
2277 .Ed
.It Sy Example 14 No Removing a Mirrored Top-Level (Log or Data) Device
2279 The following commands remove the mirrored log device
2280 .Sy mirror-2
2281 and mirrored top-level data device
2282 .Sy mirror-1 .
2283 .Pp
2284 Given this configuration:
2285 .Bd -literal
  pool: tank
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            sda     ONLINE       0     0     0
            sdb     ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            sdc     ONLINE       0     0     0
            sdd     ONLINE       0     0     0
        logs
          mirror-2  ONLINE       0     0     0
            sde     ONLINE       0     0     0
            sdf     ONLINE       0     0     0
2303 .Ed
2304 .Pp
2305 The command to remove the mirrored log
2306 .Sy mirror-2
2307 is:
2308 .Bd -literal
2309 # zpool remove tank mirror-2
2310 .Ed
2311 .Pp
2312 The command to remove the mirrored data
2313 .Sy mirror-1
2314 is:
2315 .Bd -literal
2316 # zpool remove tank mirror-1
2317 .Ed
.It Sy Example 15 No Displaying Expanded Space on a Device
2319 The following command displays the detailed information for the pool
2320 .Em data .
This pool is composed of a single raidz vdev where one of its devices
increased its capacity by 10GB.
2323 In this example, the pool will not be able to utilize this extra capacity until
2324 all the devices under the raidz vdev have been expanded.
2325 .Bd -literal
2326 # zpool list -v data
NAME         SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
data        23.9G  14.6G  9.30G         -    48%    61%  1.00x  ONLINE  -
  raidz1    23.9G  14.6G  9.30G         -    48%
    sda         -      -      -         -      -
    sdb         -      -      -       10G      -
    sdc         -      -      -         -      -
2333 .Ed
2334 .It Sy Example 16 No Adding output columns
2335 Additional columns can be added to the
2336 .Nm zpool Cm status
2337 and
2338 .Nm zpool Cm iostat
output with the
.Fl c
option.
2342 .Bd -literal
2343 # zpool status -c vendor,model,size
NAME     STATE  READ WRITE CKSUM vendor  model        size
tank     ONLINE 0    0     0
mirror-0 ONLINE 0    0     0
U1       ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
U10      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
U11      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
U12      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
U13      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
U14      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
2353
2354 # zpool iostat -vc slaves
              capacity     operations     bandwidth
pool        alloc   free   read  write   read  write  slaves
----------  -----  -----  -----  -----  -----  -----  ---------
tank        20.4G  7.23T     26    152  20.7M  21.6M
  mirror    20.4G  7.23T     26    152  20.7M  21.6M
    U1          -      -      0     31  1.46K  20.6M  sdb sdff
    U10         -      -      0      1  3.77K  13.3K  sdas sdgw
    U11         -      -      0      1   288K  13.3K  sdat sdgx
    U12         -      -      0      1  78.4K  13.3K  sdau sdgy
    U13         -      -      0      1   128K  13.3K  sdav sdgz
    U14         -      -      0      1  63.2K  13.3K  sdfk sdg
2366 .Ed
2367 .El
2368 .Sh ENVIRONMENT VARIABLES
2369 .Bl -tag -width "ZFS_ABORT"
2370 .It Ev ZFS_ABORT
2371 Cause
2372 .Nm zpool
2373 to dump core on exit for the purposes of running
2374 .Sy ::findleaks .
2375 .El
2376 .Bl -tag -width "ZPOOL_IMPORT_PATH"
2377 .It Ev ZPOOL_IMPORT_PATH
The search path for devices or files to use with the pool.
This is a colon-separated list of directories in which
2379 .Nm zpool
2380 looks for device nodes and files.
2381 Similar to the
2382 .Fl d
2383 option in
2384 .Nm zpool import .
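.Pp
For example, to search only the persistent by-id names during import (the
path is illustrative):
.Bd -literal
# ZPOOL_IMPORT_PATH=/dev/disk/by-id zpool import
.Ed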
2385 .El
2386 .Bl -tag -width "ZPOOL_VDEV_NAME_GUID"
2387 .It Ev ZPOOL_VDEV_NAME_GUID
Cause
.Nm zpool
subcommands to output vdev GUIDs by default.
This behavior is identical to the
.Nm zpool status -g
command line option.
2393 .El
2394 .Bl -tag -width "ZPOOL_VDEV_NAME_FOLLOW_LINKS"
2395 .It Ev ZPOOL_VDEV_NAME_FOLLOW_LINKS
2396 Cause
2397 .Nm zpool
subcommands to follow links for vdev names by default.
This behavior is identical to the
2399 .Nm zpool status -L
2400 command line option.
2401 .El
2402 .Bl -tag -width "ZPOOL_VDEV_NAME_PATH"
2403 .It Ev ZPOOL_VDEV_NAME_PATH
2404 Cause
2405 .Nm zpool
subcommands to output full vdev path names by default.
This behavior is identical to the
.Nm zpool status -P
2409 command line option.
2410 .El
2411 .Bl -tag -width "ZFS_VDEV_DEVID_OPT_OUT"
2412 .It Ev ZFS_VDEV_DEVID_OPT_OUT
Older ZFS on Linux implementations had issues when attempting to display pool
config VDEV names if a
.Sy devid
NVP value was present in the pool's config.
.Pp
For example, a pool that originated on the illumos platform would have a devid
value in the config and
.Nm zpool status
would fail when listing the config.
This would also be true for future Linux-based pools.
2423 .Pp
2424 A pool can be stripped of any
2425 .Sy devid
2426 values on import or prevented from adding
2427 them on
2428 .Nm zpool create
2429 or
2430 .Nm zpool add
2431 by setting
2432 .Sy ZFS_VDEV_DEVID_OPT_OUT .
2433 .El
2434 .Bl -tag -width "ZPOOL_SCRIPTS_AS_ROOT"
2435 .It Ev ZPOOL_SCRIPTS_AS_ROOT
Allow a privileged user to run
.Nm zpool status/iostat
2438 with the
2439 .Fl c
2440 option. Normally, only unprivileged users are allowed to run
2441 .Fl c .
2442 .El
2443 .Bl -tag -width "ZPOOL_SCRIPTS_PATH"
2444 .It Ev ZPOOL_SCRIPTS_PATH
2445 The search path for scripts when running
2446 .Nm zpool status/iostat
2447 with the
2448 .Fl c
2449 option. This is a colon-separated list of directories and overrides the default
2450 .Pa ~/.zpool.d
2451 and
2452 .Pa /etc/zfs/zpool.d
2453 search paths.
2454 .El
2455 .Bl -tag -width "ZPOOL_SCRIPTS_ENABLED"
2456 .It Ev ZPOOL_SCRIPTS_ENABLED
2457 Allow a user to run
2458 .Nm zpool status/iostat
2459 with the
2460 .Fl c
2461 option. If
2462 .Sy ZPOOL_SCRIPTS_ENABLED
2463 is not set, it is assumed that the user is allowed to run
2464 .Nm zpool status/iostat -c .
2465 .El
2466 .Sh INTERFACE STABILITY
2467 .Sy Evolving
2468 .Sh SEE ALSO
2469 .Xr zfs-events 5 ,
2470 .Xr zfs-module-parameters 5 ,
2471 .Xr zpool-features 5 ,
2472 .Xr zed 8 ,
2473 .Xr zfs 8