1 .\"
2 .\" CDDL HEADER START
3 .\"
4 .\" The contents of this file are subject to the terms of the
5 .\" Common Development and Distribution License (the "License").
6 .\" You may not use this file except in compliance with the License.
7 .\"
8 .\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
9 .\" or http://www.opensolaris.org/os/licensing.
10 .\" See the License for the specific language governing permissions
11 .\" and limitations under the License.
12 .\"
13 .\" When distributing Covered Code, include this CDDL HEADER in each
14 .\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
15 .\" If applicable, add the following below this CDDL HEADER, with the
16 .\" fields enclosed by brackets "[]" replaced with your own identifying
17 .\" information: Portions Copyright [yyyy] [name of copyright owner]
18 .\"
19 .\" CDDL HEADER END
20 .\"
21 .\"
22 .\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
23 .\" Copyright (c) 2013 by Delphix. All rights reserved.
24 .\" Copyright 2016 Nexenta Systems, Inc.
25 .\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
26 .\" Copyright (c) 2017 Datto Inc.
27 .\" Copyright (c) 2017 George Melikov. All Rights Reserved.
28 .\" Copyright (c) 2017 Datto Inc.
29 .\"
30 .Dd June 28, 2017
31 .Dt ZPOOL 8 SMM
32 .Os Linux
33 .Sh NAME
34 .Nm zpool
35 .Nd configure ZFS storage pools
36 .Sh SYNOPSIS
37 .Nm
38 .Fl ?
39 .Nm
40 .Cm add
41 .Op Fl fgLnP
42 .Oo Fl o Ar property Ns = Ns Ar value Oc
43 .Ar pool vdev Ns ...
44 .Nm
45 .Cm attach
46 .Op Fl f
47 .Oo Fl o Ar property Ns = Ns Ar value Oc
48 .Ar pool device new_device
49 .Nm
50 .Cm clear
51 .Ar pool
52 .Op Ar device
53 .Nm
54 .Cm create
55 .Op Fl dfn
56 .Op Fl m Ar mountpoint
57 .Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Oo Fl o Ar feature@feature Ns = Ns Ar value Oc Ns ...
.Oo Fl O Ar file-system-property Ns = Ns Ar value Oc Ns ...
.Op Fl R Ar root
.Op Fl t Ar tname
61 .Ar pool vdev Ns ...
62 .Nm
63 .Cm destroy
64 .Op Fl f
65 .Ar pool
66 .Nm
67 .Cm detach
68 .Ar pool device
69 .Nm
70 .Cm events
71 .Op Fl vHfc
72 .Op Ar pool
73 .Nm
74 .Cm export
75 .Op Fl a
76 .Op Fl f
77 .Ar pool Ns ...
78 .Nm
79 .Cm get
80 .Op Fl Hp
81 .Op Fl o Ar field Ns Oo , Ns Ar field Oc Ns ...
82 .Sy all Ns | Ns Ar property Ns Oo , Ns Ar property Oc Ns ...
83 .Ar pool Ns ...
84 .Nm
85 .Cm history
86 .Op Fl il
87 .Oo Ar pool Oc Ns ...
88 .Nm
89 .Cm import
90 .Op Fl D
91 .Op Fl c Ar cachefile Ns | Ns Fl d Ar dir
92 .Nm
93 .Cm import
94 .Fl a
95 .Op Fl DfmN
96 .Op Fl F Oo Fl n Oc Oo Fl T Oc Oo Fl X Oc
97 .Op Fl c Ar cachefile Ns | Ns Fl d Ar dir
98 .Op Fl o Ar mntopts
99 .Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
100 .Op Fl R Ar root
101 .Nm
102 .Cm import
103 .Op Fl Dfm
104 .Op Fl F Oo Fl n Oc Oo Fl T Oc Oo Fl X Oc
105 .Op Fl c Ar cachefile Ns | Ns Fl d Ar dir
106 .Op Fl o Ar mntopts
107 .Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
108 .Op Fl R Ar root
109 .Op Fl s
110 .Ar pool Ns | Ns Ar id
111 .Op Ar newpool Oo Fl t Oc
112 .Nm
113 .Cm iostat
114 .Op Oo Oo Fl c Ar SCRIPT Oc Oo Fl lq Oc Oc Ns | Ns Fl rw
115 .Op Fl T Sy u Ns | Ns Sy d
116 .Op Fl ghHLpPvy
117 .Oo Oo Ar pool Ns ... Oc Ns | Ns Oo Ar pool vdev Ns ... Oc Ns | Ns Oo Ar vdev Ns ... Oc Oc
118 .Op Ar interval Op Ar count
119 .Nm
120 .Cm labelclear
121 .Op Fl f
122 .Ar device
123 .Nm
124 .Cm list
125 .Op Fl HgLpPv
126 .Op Fl o Ar property Ns Oo , Ns Ar property Oc Ns ...
127 .Op Fl T Sy u Ns | Ns Sy d
128 .Oo Ar pool Oc Ns ...
129 .Op Ar interval Op Ar count
130 .Nm
131 .Cm offline
132 .Op Fl f
133 .Op Fl t
134 .Ar pool Ar device Ns ...
135 .Nm
136 .Cm online
137 .Op Fl e
138 .Ar pool Ar device Ns ...
139 .Nm
140 .Cm reguid
141 .Ar pool
142 .Nm
143 .Cm reopen
144 .Ar pool
145 .Nm
146 .Cm remove
147 .Ar pool Ar device Ns ...
148 .Nm
149 .Cm replace
150 .Op Fl f
151 .Oo Fl o Ar property Ns = Ns Ar value Oc
152 .Ar pool Ar device Op Ar new_device
153 .Nm
154 .Cm scrub
155 .Op Fl s | Fl p
156 .Ar pool Ns ...
157 .Nm
158 .Cm set
159 .Ar property Ns = Ns Ar value
160 .Ar pool
161 .Nm
162 .Cm split
163 .Op Fl gLnP
164 .Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
165 .Op Fl R Ar root
166 .Ar pool newpool
167 .Oo Ar device Oc Ns ...
168 .Nm
169 .Cm status
170 .Oo Fl c Ar SCRIPT Oc
171 .Op Fl gLPvxD
172 .Op Fl T Sy u Ns | Ns Sy d
173 .Oo Ar pool Oc Ns ...
174 .Op Ar interval Op Ar count
175 .Nm
176 .Cm sync
177 .Oo Ar pool Oc Ns ...
178 .Nm
179 .Cm upgrade
180 .Nm
181 .Cm upgrade
182 .Fl v
183 .Nm
184 .Cm upgrade
185 .Op Fl V Ar version
186 .Fl a Ns | Ns Ar pool Ns ...
187 .Sh DESCRIPTION
188 The
189 .Nm
190 command configures ZFS storage pools.
191 A storage pool is a collection of devices that provides physical storage and
192 data replication for ZFS datasets.
193 All datasets within a storage pool share the same space.
194 See
195 .Xr zfs 8
196 for information on managing datasets.
197 .Ss Virtual Devices (vdevs)
198 A "virtual device" describes a single device or a collection of devices
199 organized according to certain performance and fault characteristics.
200 The following virtual devices are supported:
201 .Bl -tag -width Ds
202 .It Sy disk
203 A block device, typically located under
204 .Pa /dev .
205 ZFS can use individual slices or partitions, though the recommended mode of
206 operation is to use whole disks.
207 A disk can be specified by a full path, or it can be a shorthand name
208 .Po the relative portion of the path under
209 .Pa /dev
210 .Pc .
211 A whole disk can be specified by omitting the slice or partition designation.
212 For example,
213 .Pa sda
214 is equivalent to
215 .Pa /dev/sda .
216 When given a whole disk, ZFS automatically labels the disk, if necessary.
217 .It Sy file
218 A regular file.
219 The use of files as a backing store is strongly discouraged.
220 It is designed primarily for experimental purposes, as the fault tolerance of a
221 file is only as good as the file system of which it is a part.
222 A file must be specified by a full path.
223 .It Sy mirror
224 A mirror of two or more devices.
225 Data is replicated in an identical fashion across all components of a mirror.
226 A mirror with N disks of size X can hold X bytes and can withstand (N-1) devices
227 failing before data integrity is compromised.
228 .It Sy raidz , raidz1 , raidz2 , raidz3
229 A variation on RAID-5 that allows for better distribution of parity and
230 eliminates the RAID-5
231 .Qq write hole
232 .Pq in which data and parity become inconsistent after a power loss .
Data and parity are striped across all disks within a raidz group.
234 .Pp
235 A raidz group can have single-, double-, or triple-parity, meaning that the
236 raidz group can sustain one, two, or three failures, respectively, without
237 losing any data.
238 The
239 .Sy raidz1
240 vdev type specifies a single-parity raidz group; the
241 .Sy raidz2
242 vdev type specifies a double-parity raidz group; and the
243 .Sy raidz3
244 vdev type specifies a triple-parity raidz group.
245 The
246 .Sy raidz
247 vdev type is an alias for
248 .Sy raidz1 .
249 .Pp
250 A raidz group with N disks of size X with P parity disks can hold approximately
251 (N-P)*X bytes and can withstand P device(s) failing before data integrity is
252 compromised.
253 The minimum number of devices in a raidz group is one more than the number of
254 parity disks.
255 The recommended number is between 3 and 9 to help increase performance.
256 .It Sy spare
257 A special pseudo-vdev which keeps track of available hot spares for a pool.
258 For more information, see the
259 .Sx Hot Spares
260 section.
261 .It Sy log
262 A separate intent log device.
263 If more than one log device is specified, then writes are load-balanced between
264 devices.
265 Log devices can be mirrored.
266 However, raidz vdev types are not supported for the intent log.
267 For more information, see the
268 .Sx Intent Log
269 section.
270 .It Sy cache
271 A device used to cache storage pool data.
272 A cache device cannot be configured as a mirror or raidz group.
273 For more information, see the
274 .Sx Cache Devices
275 section.
276 .El
277 .Pp
278 Virtual devices cannot be nested, so a mirror or raidz virtual device can only
279 contain files or disks.
280 Mirrors of mirrors
281 .Pq or other combinations
282 are not allowed.
283 .Pp
284 A pool can have any number of virtual devices at the top of the configuration
285 .Po known as
286 .Qq root vdevs
287 .Pc .
288 Data is dynamically distributed across all top-level devices to balance data
289 among devices.
290 As new virtual devices are added, ZFS automatically places data on the newly
291 available devices.
292 .Pp
293 Virtual devices are specified one at a time on the command line, separated by
294 whitespace.
295 The keywords
296 .Sy mirror
297 and
298 .Sy raidz
299 are used to distinguish where a group ends and another begins.
300 For example, the following creates two root vdevs, each a mirror of two disks:
301 .Bd -literal
302 # zpool create mypool mirror sda sdb mirror sdc sdd
303 .Ed
304 .Ss Device Failure and Recovery
305 ZFS supports a rich set of mechanisms for handling device failure and data
306 corruption.
307 All metadata and data is checksummed, and ZFS automatically repairs bad data
308 from a good copy when corruption is detected.
309 .Pp
310 In order to take advantage of these features, a pool must make use of some form
311 of redundancy, using either mirrored or raidz groups.
312 While ZFS supports running in a non-redundant configuration, where each root
313 vdev is simply a disk or file, this is strongly discouraged.
314 A single case of bit corruption can render some or all of your data unavailable.
315 .Pp
316 A pool's health status is described by one of three states: online, degraded,
317 or faulted.
318 An online pool has all devices operating normally.
319 A degraded pool is one in which one or more devices have failed, but the data is
320 still available due to a redundant configuration.
321 A faulted pool has corrupted metadata, or one or more faulted devices, and
322 insufficient replicas to continue functioning.
323 .Pp
The health of a top-level vdev, such as a mirror or raidz device, is
325 potentially impacted by the state of its associated vdevs, or component
326 devices.
327 A top-level vdev or component device is in one of the following states:
328 .Bl -tag -width "DEGRADED"
329 .It Sy DEGRADED
330 One or more top-level vdevs is in the degraded state because one or more
331 component devices are offline.
332 Sufficient replicas exist to continue functioning.
333 .Pp
334 One or more component devices is in the degraded or faulted state, but
335 sufficient replicas exist to continue functioning.
336 The underlying conditions are as follows:
337 .Bl -bullet
338 .It
339 The number of checksum errors exceeds acceptable levels and the device is
340 degraded as an indication that something may be wrong.
341 ZFS continues to use the device as necessary.
342 .It
343 The number of I/O errors exceeds acceptable levels.
344 The device could not be marked as faulted because there are insufficient
345 replicas to continue functioning.
346 .El
347 .It Sy FAULTED
348 One or more top-level vdevs is in the faulted state because one or more
349 component devices are offline.
350 Insufficient replicas exist to continue functioning.
351 .Pp
352 One or more component devices is in the faulted state, and insufficient
353 replicas exist to continue functioning.
354 The underlying conditions are as follows:
355 .Bl -bullet
356 .It
357 The device could be opened, but the contents did not match expected values.
358 .It
359 The number of I/O errors exceeds acceptable levels and the device is faulted to
360 prevent further use of the device.
361 .El
362 .It Sy OFFLINE
363 The device was explicitly taken offline by the
364 .Nm zpool Cm offline
365 command.
366 .It Sy ONLINE
367 The device is online and functioning.
368 .It Sy REMOVED
369 The device was physically removed while the system was running.
370 Device removal detection is hardware-dependent and may not be supported on all
371 platforms.
372 .It Sy UNAVAIL
373 The device could not be opened.
374 If a pool is imported when a device was unavailable, then the device will be
375 identified by a unique identifier instead of its path since the path was never
376 correct in the first place.
377 .El
378 .Pp
379 If a device is removed and later re-attached to the system, ZFS attempts
380 to put the device online automatically.
381 Device attach detection is hardware-dependent and might not be supported on all
382 platforms.
383 .Ss Hot Spares
384 ZFS allows devices to be associated with pools as
385 .Qq hot spares .
386 These devices are not actively used in the pool, but when an active device
387 fails, it is automatically replaced by a hot spare.
388 To create a pool with hot spares, specify a
389 .Sy spare
390 vdev with any number of devices.
391 For example,
392 .Bd -literal
393 # zpool create pool mirror sda sdb spare sdc sdd
394 .Ed
395 .Pp
396 Spares can be shared across multiple pools, and can be added with the
397 .Nm zpool Cm add
398 command and removed with the
399 .Nm zpool Cm remove
400 command.
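For example, an additional spare can be added to and later removed from an
existing pool as follows
.Po
.Pa sde
is an illustrative device name
.Pc :
.Bd -literal
# zpool add pool spare sde
# zpool remove pool sde
.Ed
.Pp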
401 Once a spare replacement is initiated, a new
402 .Sy spare
403 vdev is created within the configuration that will remain there until the
404 original device is replaced.
405 At this point, the hot spare becomes available again if another device fails.
406 .Pp
If a pool has a shared spare that is currently being used, the pool cannot be
408 exported since other pools may use this shared spare, which may lead to
409 potential data corruption.
410 .Pp
411 An in-progress spare replacement can be canceled by detaching the hot spare.
412 If the original faulted device is detached, then the hot spare assumes its
413 place in the configuration, and is removed from the spare list of all active
414 pools.
415 .Pp
416 Spares cannot replace log devices.
417 .Ss Intent Log
418 The ZFS Intent Log (ZIL) satisfies POSIX requirements for synchronous
419 transactions.
420 For instance, databases often require their transactions to be on stable storage
421 devices when returning from a system call.
422 NFS and other applications can also use
423 .Xr fsync 2
424 to ensure data stability.
425 By default, the intent log is allocated from blocks within the main pool.
426 However, it might be possible to get better performance using separate intent
427 log devices such as NVRAM or a dedicated disk.
428 For example:
429 .Bd -literal
430 # zpool create pool sda sdb log sdc
431 .Ed
432 .Pp
433 Multiple log devices can also be specified, and they can be mirrored.
434 See the
435 .Sx EXAMPLES
436 section for an example of mirroring multiple log devices.
437 .Pp
438 Log devices can be added, replaced, attached, detached, and imported and
439 exported as part of the larger pool.
440 Mirrored log devices can be removed by specifying the top-level mirror for the
441 log.
442 .Ss Cache Devices
443 Devices can be added to a storage pool as
444 .Qq cache devices .
445 These devices provide an additional layer of caching between main memory and
446 disk.
447 For read-heavy workloads, where the working set size is much larger than what
can be cached in main memory, using cache devices allows much more of this
449 working set to be served from low latency media.
450 Using cache devices provides the greatest performance improvement for random
451 read-workloads of mostly static content.
452 .Pp
453 To create a pool with cache devices, specify a
454 .Sy cache
455 vdev with any number of devices.
456 For example:
457 .Bd -literal
458 # zpool create pool sda sdb cache sdc sdd
459 .Ed
460 .Pp
461 Cache devices cannot be mirrored or part of a raidz configuration.
462 If a read error is encountered on a cache device, that read I/O is reissued to
463 the original storage pool device, which might be part of a mirrored or raidz
464 configuration.
465 .Pp
466 The content of the cache devices is considered volatile, as is the case with
467 other system caches.
468 .Ss Properties
469 Each pool has several properties associated with it.
470 Some properties are read-only statistics while others are configurable and
471 change the behavior of the pool.
472 .Pp
473 The following are read-only properties:
474 .Bl -tag -width Ds
475 .It Sy available
476 Amount of storage available within the pool.
477 This property can also be referred to by its shortened column name,
478 .Sy avail .
479 .It Sy capacity
480 Percentage of pool space used.
481 This property can also be referred to by its shortened column name,
482 .Sy cap .
483 .It Sy expandsize
484 Amount of uninitialized space within the pool or device that can be used to
485 increase the total capacity of the pool.
486 Uninitialized space consists of any space on an EFI labeled vdev which has not
487 been brought online
.Po e.g., using
489 .Nm zpool Cm online Fl e
490 .Pc .
491 This space occurs when a LUN is dynamically expanded.
492 .It Sy fragmentation
493 The amount of fragmentation in the pool.
494 .It Sy free
495 The amount of free space available in the pool.
496 .It Sy freeing
497 After a file system or snapshot is destroyed, the space it was using is
498 returned to the pool asynchronously.
499 .Sy freeing
500 is the amount of space remaining to be reclaimed.
501 Over time
502 .Sy freeing
503 will decrease while
504 .Sy free
505 increases.
506 .It Sy health
507 The current health of the pool.
508 Health can be one of
.Sy ONLINE , DEGRADED , FAULTED , OFFLINE , REMOVED , UNAVAIL .
510 .It Sy guid
511 A unique identifier for the pool.
512 .It Sy size
513 Total size of the storage pool.
514 .It Sy unsupported@ Ns Em feature_guid
515 Information about unsupported features that are enabled on the pool.
516 See
517 .Xr zpool-features 5
518 for details.
519 .It Sy used
520 Amount of storage space used within the pool.
521 .El
522 .Pp
523 The space usage properties report actual physical space available to the
524 storage pool.
525 The physical space can be different from the total amount of space that any
526 contained datasets can actually use.
527 The amount of space used in a raidz configuration depends on the characteristics
528 of the data being written.
529 In addition, ZFS reserves some space for internal accounting that the
530 .Xr zfs 8
531 command takes into account, but the
532 .Nm
533 command does not.
534 For non-full pools of a reasonable size, these effects should be invisible.
535 For small pools, or pools that are close to being completely full, these
536 discrepancies may become more noticeable.
537 .Pp
538 The following property can be set at creation time and import time:
539 .Bl -tag -width Ds
540 .It Sy altroot
541 Alternate root directory.
542 If set, this directory is prepended to any mount points within the pool.
543 This can be used when examining an unknown pool where the mount points cannot be
544 trusted, or in an alternate boot environment, where the typical paths are not
545 valid.
546 .Sy altroot
547 is not a persistent property.
548 It is valid only while the system is up.
549 Setting
550 .Sy altroot
551 defaults to using
552 .Sy cachefile Ns = Ns Sy none ,
553 though this may be overridden using an explicit setting.
554 .El
555 .Pp
556 The following property can be set only at import time:
557 .Bl -tag -width Ds
558 .It Sy readonly Ns = Ns Sy on Ns | Ns Sy off
559 If set to
560 .Sy on ,
561 the pool will be imported in read-only mode.
562 This property can also be referred to by its shortened column name,
563 .Sy rdonly .
564 .El
565 .Pp
566 The following properties can be set at creation time and import time, and later
567 changed with the
568 .Nm zpool Cm set
569 command:
570 .Bl -tag -width Ds
571 .It Sy ashift Ns = Ns Sy ashift
572 Pool sector size exponent, to the power of
573 .Sy 2
574 (internally referred to as
575 .Sy ashift
576 ). Values from 9 to 16, inclusive, are valid; also, the special
577 value 0 (the default) means to auto-detect using the kernel's block
578 layer and a ZFS internal exception list. I/O operations will be aligned
579 to the specified size boundaries. Additionally, the minimum (disk)
580 write size will be set to the specified size, so this represents a
581 space vs. performance trade-off. For optimal performance, the pool
582 sector size should be greater than or equal to the sector size of the
583 underlying disks. The typical case for setting this property is when
584 performance is important and the underlying disks use 4KiB sectors but
585 report 512B sectors to the OS (for compatibility reasons); in that
586 case, set
587 .Sy ashift=12
588 (which is 1<<12 = 4096). When set, this property is
589 used as the default hint value in subsequent vdev operations (add,
590 attach and replace). Changing this value will not modify any existing
vdev, not even on disk replacement; however, it can be used, for
instance, to replace a dying 512B-sector disk with a newer 4KiB-sector
device: this will probably result in bad performance but at the same
time could prevent loss of data.
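For example, assuming
.Pa sda
and
.Pa sdb
use 4KiB sectors but report 512B sectors, a correctly aligned pool can be
created with:
.Bd -literal
# zpool create -o ashift=12 pool mirror sda sdb
.Ed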
595 .It Sy autoexpand Ns = Ns Sy on Ns | Ns Sy off
596 Controls automatic pool expansion when the underlying LUN is grown.
597 If set to
598 .Sy on ,
599 the pool will be resized according to the size of the expanded device.
600 If the device is part of a mirror or raidz then all devices within that
601 mirror/raidz group must be expanded before the new space is made available to
602 the pool.
603 The default behavior is
604 .Sy off .
605 This property can also be referred to by its shortened column name,
606 .Sy expand .
607 .It Sy autoreplace Ns = Ns Sy on Ns | Ns Sy off
608 Controls automatic device replacement.
609 If set to
610 .Sy off ,
611 device replacement must be initiated by the administrator by using the
612 .Nm zpool Cm replace
613 command.
614 If set to
615 .Sy on ,
616 any new device, found in the same physical location as a device that previously
617 belonged to the pool, is automatically formatted and replaced.
618 The default behavior is
619 .Sy off .
620 This property can also be referred to by its shortened column name,
621 .Sy replace .
622 Autoreplace can also be used with virtual disks (like device
mapper) provided that you use the /dev/disk/by-vdev paths set up by
624 vdev_id.conf. See the
625 .Xr vdev_id 8
626 man page for more details.
627 Autoreplace and autoonline require the ZFS Event Daemon be configured and
628 running. See the
629 .Xr zed 8
630 man page for more details.
631 .It Sy bootfs Ns = Ns Sy (unset) Ns | Ns Ar pool Ns / Ns Ar dataset
632 Identifies the default bootable dataset for the root pool. This property is
633 expected to be set mainly by the installation and upgrade programs.
634 Not all Linux distribution boot processes use the bootfs property.
635 .It Sy cachefile Ns = Ns Ar path Ns | Ns Sy none
636 Controls the location of where the pool configuration is cached.
637 Discovering all pools on system startup requires a cached copy of the
638 configuration data that is stored on the root file system.
639 All pools in this cache are automatically imported when the system boots.
640 Some environments, such as install and clustering, need to cache this
641 information in a different location so that pools are not automatically
642 imported.
643 Setting this property caches the pool configuration in a different location that
644 can later be imported with
645 .Nm zpool Cm import Fl c .
646 Setting it to the special value
647 .Sy none
648 creates a temporary pool that is never cached, and the special value
649 .Qq
650 .Pq empty string
651 uses the default location.
652 .Pp
653 Multiple pools can share the same cache file.
654 Because the kernel destroys and recreates this file when pools are added and
655 removed, care should be taken when attempting to access this file.
656 When the last pool using a
657 .Sy cachefile
658 is exported or destroyed, the file is removed.
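For example, to create a pool whose configuration is never cached:
.Bd -literal
# zpool create -o cachefile=none pool sda sdb
.Ed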
659 .It Sy comment Ns = Ns Ar text
660 A text string consisting of printable ASCII characters that will be stored
661 such that it is available even if the pool becomes faulted.
662 An administrator can provide additional information about a pool using this
663 property.
664 .It Sy dedupditto Ns = Ns Ar number
665 Threshold for the number of block ditto copies.
666 If the reference count for a deduplicated block increases above this number, a
667 new ditto copy of this block is automatically stored.
668 The default setting is
669 .Sy 0
670 which causes no ditto copies to be created for deduplicated blocks.
671 The minimum legal nonzero setting is
672 .Sy 100 .
673 .It Sy delegation Ns = Ns Sy on Ns | Ns Sy off
674 Controls whether a non-privileged user is granted access based on the dataset
675 permissions defined on the dataset.
676 See
677 .Xr zfs 8
678 for more information on ZFS delegated administration.
679 .It Sy failmode Ns = Ns Sy wait Ns | Ns Sy continue Ns | Ns Sy panic
680 Controls the system behavior in the event of catastrophic pool failure.
681 This condition is typically a result of a loss of connectivity to the underlying
682 storage device(s) or a failure of all devices within the pool.
683 The behavior of such an event is determined as follows:
684 .Bl -tag -width "continue"
685 .It Sy wait
686 Blocks all I/O access until the device connectivity is recovered and the errors
687 are cleared.
688 This is the default behavior.
689 .It Sy continue
690 Returns
691 .Er EIO
692 to any new write I/O requests but allows reads to any of the remaining healthy
693 devices.
694 Any write requests that have yet to be committed to disk would be blocked.
695 .It Sy panic
696 Prints out a message to the console and generates a system crash dump.
697 .El
698 .It Sy feature@ Ns Ar feature_name Ns = Ns Sy enabled
699 The value of this property is the current state of
700 .Ar feature_name .
701 The only valid value when setting this property is
702 .Sy enabled
703 which moves
704 .Ar feature_name
705 to the enabled state.
706 See
707 .Xr zpool-features 5
708 for details on feature states.
709 .It Sy listsnapshots Ns = Ns Sy on Ns | Ns Sy off
710 Controls whether information about snapshots associated with this pool is
711 output when
712 .Nm zfs Cm list
713 is run without the
714 .Fl t
715 option.
716 The default value is
717 .Sy off .
718 This property can also be referred to by its shortened name,
719 .Sy listsnaps .
720 .It Sy multihost Ns = Ns Sy on Ns | Ns Sy off
721 Controls whether a pool activity check should be performed during
722 .Nm zpool Cm import .
723 When a pool is determined to be active it cannot be imported, even with the
724 .Fl f
725 option. This property is intended to be used in failover configurations
726 where multiple hosts have access to a pool on shared storage. When this
727 property is on, periodic writes to storage occur to show the pool is in use.
728 See
729 .Sy zfs_multihost_interval
730 in the
731 .Xr zfs-module-parameters 5
732 man page. In order to enable this property each host must set a unique hostid.
733 See
734 .Xr genhostid 1
735 and
.Xr spl-module-parameters 5
737 for additional details. The default value is
738 .Sy off .
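For example, after a unique hostid has been generated with
.Xr genhostid 1 ,
the activity check can be enabled on an existing pool with:
.Bd -literal
# zpool set multihost=on pool
.Ed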
739 .It Sy version Ns = Ns Ar version
740 The current on-disk version of the pool.
741 This can be increased, but never decreased.
742 The preferred method of updating pools is with the
743 .Nm zpool Cm upgrade
744 command, though this property can be used when a specific version is needed for
745 backwards compatibility.
746 Once feature flags are enabled on a pool this property will no longer have a
747 value.
748 .El
749 .Ss Subcommands
750 All subcommands that modify state are logged persistently to the pool in their
751 original form.
752 .Pp
753 The
754 .Nm
755 command provides subcommands to create and destroy storage pools, add capacity
756 to storage pools, and provide information about the storage pools.
757 The following subcommands are supported:
758 .Bl -tag -width Ds
759 .It Xo
760 .Nm
761 .Fl ?
762 .Xc
763 Displays a help message.
764 .It Xo
765 .Nm
766 .Cm add
767 .Op Fl fgLnP
768 .Oo Fl o Ar property Ns = Ns Ar value Oc
769 .Ar pool vdev Ns ...
770 .Xc
771 Adds the specified virtual devices to the given pool.
772 The
773 .Ar vdev
774 specification is described in the
775 .Sx Virtual Devices
776 section.
777 The behavior of the
778 .Fl f
779 option, and the device checks performed are described in the
780 .Nm zpool Cm create
781 subcommand.
782 .Bl -tag -width Ds
783 .It Fl f
784 Forces use of
785 .Ar vdev Ns s ,
786 even if they appear in use or specify a conflicting replication level.
787 Not all devices can be overridden in this manner.
788 .It Fl g
Display
.Ar vdev
GUIDs instead of the normal device names.
These GUIDs can be used in place of device names for the zpool
detach/offline/remove/replace commands.
793 .It Fl L
794 Display real paths for
795 .Ar vdev Ns s
796 resolving all symbolic links. This can be used to look up the current block
797 device name regardless of the /dev/disk/ path used to open it.
798 .It Fl n
799 Displays the configuration that would be used without actually adding the
800 .Ar vdev Ns s .
801 The actual pool creation can still fail due to insufficient privileges or
802 device sharing.
803 .It Fl P
804 Display real paths for
805 .Ar vdev Ns s
806 instead of only the last component of the path. This can be used in
conjunction with the
.Fl L
flag.
808 .It Fl o Ar property Ns = Ns Ar value
809 Sets the given pool properties. See the
810 .Sx Properties
811 section for a list of valid properties that can be set. The only property
812 supported at the moment is ashift.
813 .El
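For example, the two-mirror pool created in the
.Sx Virtual Devices
section can be grown by a third mirror
.Po
.Pa sde
and
.Pa sdf
are illustrative
.Pc :
.Bd -literal
# zpool add mypool mirror sde sdf
.Ed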
814 .It Xo
815 .Nm
816 .Cm attach
817 .Op Fl f
818 .Oo Fl o Ar property Ns = Ns Ar value Oc
819 .Ar pool device new_device
820 .Xc
821 Attaches
822 .Ar new_device
823 to the existing
824 .Ar device .
825 The existing device cannot be part of a raidz configuration.
826 If
827 .Ar device
828 is not currently part of a mirrored configuration,
829 .Ar device
830 automatically transforms into a two-way mirror of
831 .Ar device
832 and
833 .Ar new_device .
834 If
835 .Ar device
836 is part of a two-way mirror, attaching
837 .Ar new_device
838 creates a three-way mirror, and so on.
839 In either case,
840 .Ar new_device
841 begins to resilver immediately.
842 .Bl -tag -width Ds
843 .It Fl f
844 Forces use of
845 .Ar new_device ,
even if it appears to be in use.
847 Not all devices can be overridden in this manner.
848 .It Fl o Ar property Ns = Ns Ar value
849 Sets the given pool properties. See the
850 .Sx Properties
851 section for a list of valid properties that can be set. The only property
852 supported at the moment is ashift.
853 .El
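For example, assuming
.Pa sda
is currently a single non-mirrored top-level device, it can be converted
into a two-way mirror with:
.Bd -literal
# zpool attach pool sda sdb
.Ed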
854 .It Xo
855 .Nm
856 .Cm clear
857 .Ar pool
858 .Op Ar device
859 .Xc
860 Clears device errors in a pool.
861 If no arguments are specified, all device errors within the pool are cleared.
862 If one or more devices is specified, only those errors associated with the
863 specified device or devices are cleared.
864 .It Xo
865 .Nm
866 .Cm create
867 .Op Fl dfn
868 .Op Fl m Ar mountpoint
869 .Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
870 .Oo Fl o Ar feature@feature Ns = Ns Ar value Oc Ns ...
871 .Oo Fl O Ar file-system-property Ns = Ns Ar value Oc Ns ...
872 .Op Fl R Ar root
873 .Op Fl t Ar tname
874 .Ar pool vdev Ns ...
875 .Xc
876 Creates a new storage pool containing the virtual devices specified on the
877 command line.
878 The pool name must begin with a letter, and can only contain
879 alphanumeric characters as well as underscore
880 .Pq Qq Sy _ ,
881 dash
.Pq Qq Sy \&- ,
colon
.Pq Qq Sy \&: ,
space
.Pq Qq Sy \&\  ,
887 and period
888 .Pq Qq Sy \&. .
889 The pool names
890 .Sy mirror ,
891 .Sy raidz ,
892 .Sy spare
893 and
894 .Sy log
895 are reserved, as are names beginning with the pattern
896 .Sy c[0-9] .
897 The
898 .Ar vdev
899 specification is described in the
900 .Sx Virtual Devices
901 section.
902 .Pp
903 The command verifies that each device specified is accessible and not currently
904 in use by another subsystem.
905 There are some uses, such as being currently mounted, or specified as the
dedicated dump device, that prevent a device from ever being used by ZFS.
907 Other uses, such as having a preexisting UFS file system, can be overridden with
908 the
909 .Fl f
910 option.
911 .Pp
912 The command also checks that the replication strategy for the pool is
913 consistent.
914 An attempt to combine redundant and non-redundant storage in a single pool, or
915 to mix disks and files, results in an error unless
916 .Fl f
917 is specified.
918 The use of differently sized devices within a single raidz or mirror group is
919 also flagged as an error unless
920 .Fl f
921 is specified.
922 .Pp
923 Unless the
924 .Fl R
925 option is specified, the default mount point is
926 .Pa / Ns Ar pool .
927 The mount point must not exist or must be empty, or else the root dataset
928 cannot be mounted.
929 This can be overridden with the
930 .Fl m
931 option.
932 .Pp
933 By default all supported features are enabled on the new pool unless the
934 .Fl d
935 option is specified.
936 .Bl -tag -width Ds
937 .It Fl d
938 Do not enable any features on the new pool.
939 Individual features can be enabled by setting their corresponding properties to
940 .Sy enabled
941 with the
942 .Fl o
943 option.
944 See
945 .Xr zpool-features 5
946 for details about feature properties.
947 .It Fl f
948 Forces use of
949 .Ar vdev Ns s ,
950 even if they appear in use or specify a conflicting replication level.
951 Not all devices can be overridden in this manner.
952 .It Fl m Ar mountpoint
953 Sets the mount point for the root dataset.
954 The default mount point is
955 .Pa /pool
956 or
957 .Pa altroot/pool
958 if
959 .Ar altroot
960 is specified.
961 The mount point must be an absolute path,
962 .Sy legacy ,
963 or
964 .Sy none .
965 For more information on dataset mount points, see
966 .Xr zfs 8 .
967 .It Fl n
968 Displays the configuration that would be used without actually creating the
969 pool.
970 The actual pool creation can still fail due to insufficient privileges or
971 device sharing.
972 .It Fl o Ar property Ns = Ns Ar value
973 Sets the given pool properties.
974 See the
975 .Sx Properties
976 section for a list of valid properties that can be set.
977 .It Fl o Ar feature@feature Ns = Ns Ar value
Sets the given pool feature.
See
.Xr zpool-features 5
for a list of valid features that can be set.
The value can be either
.Sy disabled
or
.Sy enabled .
982 .It Fl O Ar file-system-property Ns = Ns Ar value
983 Sets the given file system properties in the root file system of the pool.
984 See the
985 .Sx Properties
986 section of
987 .Xr zfs 8
988 for a list of valid properties that can be set.
989 .It Fl R Ar root
990 Equivalent to
991 .Fl o Sy cachefile Ns = Ns Sy none Fl o Sy altroot Ns = Ns Ar root
992 .It Fl t Ar tname
993 Sets the in-core pool name to
994 .Sy tname
995 while the on-disk name will be the name specified as the pool name
996 .Sy pool .
This will set the default cachefile property to none.
This is intended to handle namespace collisions when creating pools for
other systems, such as virtual machines or physical machines whose pools
live on network block devices.
1001 .El
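For example, to create a pool whose on-disk name is
.Sy pool
but which is imported under the temporary in-core name
.Sy tmppool
.Pq both names illustrative :
.Bd -literal
# zpool create -t tmppool pool mirror sda sdb
.Ed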
1002 .It Xo
1003 .Nm
1004 .Cm destroy
1005 .Op Fl f
1006 .Ar pool
1007 .Xc
1008 Destroys the given pool, freeing up any devices for other use.
1009 This command tries to unmount any active datasets before destroying the pool.
1010 .Bl -tag -width Ds
1011 .It Fl f
1012 Forces any active datasets contained within the pool to be unmounted.
1013 .El
1014 .It Xo
1015 .Nm
1016 .Cm detach
1017 .Ar pool device
1018 .Xc
1019 Detaches
1020 .Ar device
1021 from a mirror.
1022 The operation is refused if there are no other valid replicas of the data.
If the device may be re-added to the pool later, consider the
.Nm zpool Cm offline
1025 command instead.
1026 .It Xo
1027 .Nm
1028 .Cm events
1029 .Op Fl cfHv
1030 .Op Ar pool Ns ...
1031 .Xc
1032 Lists all recent events generated by the ZFS kernel modules. These events
are consumed by
1034 .Xr zed 8
1035 and used to automate administrative tasks such as replacing a failed device
1036 with a hot spare. For more information about the subclasses and event payloads
1037 that can be generated see the
1038 .Xr zfs-events 5
1039 man page.
1040 .Bl -tag -width Ds
1041 .It Fl c
1042 Clear all previous events.
1043 .It Fl f
1044 Follow mode.
1045 .It Fl H
1046 Scripted mode. Do not display headers, and separate fields by a
1047 single tab instead of arbitrary space.
1048 .It Fl v
1049 Print the entire payload for each event.
1050 .El
1051 .It Xo
1052 .Nm
1053 .Cm export
1054 .Op Fl a
1055 .Op Fl f
1056 .Ar pool Ns ...
1057 .Xc
1058 Exports the given pools from the system.
1059 All devices are marked as exported, but are still considered in use by other
1060 subsystems.
1061 The devices can be moved between systems
1062 .Pq even those of different endianness
1063 and imported as long as a sufficient number of devices are present.
1064 .Pp
1065 Before exporting the pool, all datasets within the pool are unmounted.
A pool cannot be exported if it has a shared spare that is currently being
1067 used.
1068 .Pp
1069 For pools to be portable, you must give the
1070 .Nm
1071 command whole disks, not just partitions, so that ZFS can label the disks with
1072 portable EFI labels.
1073 Otherwise, disk drivers on platforms of different endianness will not recognize
1074 the disks.
1075 .Bl -tag -width Ds
1076 .It Fl a
1077 Exports all pools imported on the system.
1078 .It Fl f
1079 Forcefully unmount all datasets, using the
.Nm zfs Cm unmount Fl f
1081 command.
1082 .Pp
1083 This command will forcefully export the pool even if it has a shared spare that
1084 is currently being used.
1085 This may lead to potential data corruption.
1086 .El
1087 .It Xo
1088 .Nm
1089 .Cm get
1090 .Op Fl Hp
1091 .Op Fl o Ar field Ns Oo , Ns Ar field Oc Ns ...
1092 .Sy all Ns | Ns Ar property Ns Oo , Ns Ar property Oc Ns ...
1093 .Ar pool Ns ...
1094 .Xc
1095 Retrieves the given list of properties
1096 .Po
1097 or all properties if
1098 .Sy all
1099 is used
1100 .Pc
1101 for the specified storage pool(s).
1102 These properties are displayed with the following fields:
1103 .Bd -literal
1104 name Name of storage pool
1105 property Property name
1106 value Property value
1107 source Property source, either 'default' or 'local'.
1108 .Ed
1109 .Pp
1110 See the
1111 .Sx Properties
1112 section for more information on the available pool properties.
1113 .Bl -tag -width Ds
1114 .It Fl H
1115 Scripted mode.
1116 Do not display headers, and separate fields by a single tab instead of arbitrary
1117 space.
1118 .It Fl o Ar field
1119 A comma-separated list of columns to display.
1120 .Sy name Ns , Ns Sy property Ns , Ns Sy value Ns , Ns Sy source
1121 is the default value.
1122 .It Fl p
1123 Display numbers in parsable (exact) values.
1124 .El
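For example, to print only the health of a pool in a form suitable for
scripts:
.Bd -literal
# zpool get -H -o value health pool
.Ed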
1125 .It Xo
1126 .Nm
1127 .Cm history
1128 .Op Fl il
1129 .Oo Ar pool Oc Ns ...
1130 .Xc
1131 Displays the command history of the specified pool(s) or all pools if no pool is
1132 specified.
1133 .Bl -tag -width Ds
1134 .It Fl i
1135 Displays internally logged ZFS events in addition to user initiated events.
1136 .It Fl l
1137 Displays log records in long format, which in addition to standard format
includes the user name, the hostname, and the zone in which the operation was
1139 performed.
1140 .El
1141 .It Xo
1142 .Nm
1143 .Cm import
1144 .Op Fl D
1145 .Op Fl c Ar cachefile Ns | Ns Fl d Ar dir
1146 .Xc
1147 Lists pools available to import.
1148 If the
1149 .Fl d
1150 option is not specified, this command searches for devices in
1151 .Pa /dev .
1152 The
1153 .Fl d
1154 option can be specified multiple times, and all directories are searched.
If the device appears to be part of an exported pool, this command displays a
summary of the pool with the name of the pool, a numeric identifier, the vdev
layout, and the current health of each device or file.
1158 Destroyed pools, pools that were previously destroyed with the
1159 .Nm zpool Cm destroy
1160 command, are not listed unless the
1161 .Fl D
1162 option is specified.
1163 .Pp
1164 The numeric identifier is unique, and can be used instead of the pool name when
1165 multiple exported pools of the same name are available.
1166 .Bl -tag -width Ds
1167 .It Fl c Ar cachefile
1168 Reads configuration from the given
1169 .Ar cachefile
1170 that was created with the
1171 .Sy cachefile
1172 pool property.
1173 This
1174 .Ar cachefile
1175 is used instead of searching for devices.
1176 .It Fl d Ar dir
1177 Searches for devices or files in
1178 .Ar dir .
1179 The
1180 .Fl d
1181 option can be specified multiple times.
1182 .It Fl D
1183 Lists destroyed pools only.
1184 .El
1185 .It Xo
1186 .Nm
1187 .Cm import
1188 .Fl a
1189 .Op Fl DfmN
1190 .Op Fl F Oo Fl n Oc Oo Fl T Oc Oo Fl X Oc
1191 .Op Fl c Ar cachefile Ns | Ns Fl d Ar dir
1192 .Op Fl o Ar mntopts
1193 .Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
1194 .Op Fl R Ar root
1195 .Op Fl s
1196 .Xc
1197 Imports all pools found in the search directories.
1198 Identical to the previous command, except that all pools with a sufficient
1199 number of devices available are imported.
1200 Destroyed pools, pools that were previously destroyed with the
1201 .Nm zpool Cm destroy
1202 command, will not be imported unless the
1203 .Fl D
1204 option is specified.
1205 .Bl -tag -width Ds
1206 .It Fl a
1207 Searches for and imports all pools found.
1208 .It Fl c Ar cachefile
1209 Reads configuration from the given
1210 .Ar cachefile
1211 that was created with the
1212 .Sy cachefile
1213 pool property.
1214 This
1215 .Ar cachefile
1216 is used instead of searching for devices.
1217 .It Fl d Ar dir
1218 Searches for devices or files in
1219 .Ar dir .
1220 The
1221 .Fl d
1222 option can be specified multiple times.
1223 This option is incompatible with the
1224 .Fl c
1225 option.
1226 .It Fl D
1227 Imports destroyed pools only.
1228 The
1229 .Fl f
1230 option is also required.
1231 .It Fl f
1232 Forces import, even if the pool appears to be potentially active.
1233 .It Fl F
1234 Recovery mode for a non-importable pool.
1235 Attempt to return the pool to an importable state by discarding the last few
1236 transactions.
1237 Not all damaged pools can be recovered by using this option.
1238 If successful, the data from the discarded transactions is irretrievably lost.
1239 This option is ignored if the pool is importable or already imported.
1240 .It Fl m
1241 Allows a pool to import when there is a missing log device.
1242 Recent transactions can be lost because the log device will be discarded.
1243 .It Fl n
1244 Used with the
1245 .Fl F
1246 recovery option.
1247 Determines whether a non-importable pool can be made importable again, but does
1248 not actually perform the pool recovery.
1249 For more details about pool recovery mode, see the
1250 .Fl F
1251 option, above.
1252 .It Fl N
1253 Import the pool without mounting any file systems.
1254 .It Fl o Ar mntopts
1255 Comma-separated list of mount options to use when mounting datasets within the
1256 pool.
1257 See
1258 .Xr zfs 8
1259 for a description of dataset properties and mount options.
1260 .It Fl o Ar property Ns = Ns Ar value
1261 Sets the specified property on the imported pool.
1262 See the
1263 .Sx Properties
1264 section for more information on the available pool properties.
1265 .It Fl R Ar root
1266 Sets the
1267 .Sy cachefile
1268 property to
1269 .Sy none
1270 and the
1271 .Sy altroot
1272 property to
1273 .Ar root .
1274 .It Fl s
Scan using the default search path; the libblkid cache will not be
consulted.
A custom search path may be specified by setting the
ZPOOL_IMPORT_PATH environment variable.
1278 .It Fl X
1279 Used with the
1280 .Fl F
1281 recovery option. Determines whether extreme
1282 measures to find a valid txg should take place. This allows the pool to
1283 be rolled back to a txg which is no longer guaranteed to be consistent.
1284 Pools imported at an inconsistent txg may contain uncorrectable
1285 checksum errors. For more details about pool recovery mode, see the
1286 .Fl F
1287 option, above. WARNING: This option can be extremely hazardous to the
1288 health of your pool and should only be used as a last resort.
1289 .It Fl T
1290 Specify the txg to use for rollback. Implies
1291 .Fl FX .
1292 For more details
1293 about pool recovery mode, see the
1294 .Fl X
1295 option, above. WARNING: This option can be extremely hazardous to the
1296 health of your pool and should only be used as a last resort.
1297 .El
1298 .It Xo
1299 .Nm
1300 .Cm import
1301 .Op Fl Dfm
.Op Fl F Oo Fl n Oc Oo Fl T Oc Oo Fl X Oc
1303 .Op Fl c Ar cachefile Ns | Ns Fl d Ar dir
1304 .Op Fl o Ar mntopts
1305 .Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
1306 .Op Fl R Ar root
1307 .Op Fl s
1308 .Ar pool Ns | Ns Ar id
.Op Ar newpool Oo Fl t Oc
1310 .Xc
1311 Imports a specific pool.
1312 A pool can be identified by its name or the numeric identifier.
1313 If
1314 .Ar newpool
1315 is specified, the pool is imported using the name
1316 .Ar newpool .
1317 Otherwise, it is imported with the same name as its exported name.
1318 .Pp
1319 If a device is removed from a system without running
1320 .Nm zpool Cm export
1321 first, the device appears as potentially active.
1322 It cannot be determined if this was a failed export, or whether the device is
1323 really in use from another host.
1324 To import a pool in this state, the
1325 .Fl f
1326 option is required.
1327 .Bl -tag -width Ds
1328 .It Fl c Ar cachefile
1329 Reads configuration from the given
1330 .Ar cachefile
1331 that was created with the
1332 .Sy cachefile
1333 pool property.
1334 This
1335 .Ar cachefile
1336 is used instead of searching for devices.
1337 .It Fl d Ar dir
1338 Searches for devices or files in
1339 .Ar dir .
1340 The
1341 .Fl d
1342 option can be specified multiple times.
1343 This option is incompatible with the
1344 .Fl c
1345 option.
1346 .It Fl D
Imports a destroyed pool.
1348 The
1349 .Fl f
1350 option is also required.
1351 .It Fl f
1352 Forces import, even if the pool appears to be potentially active.
1353 .It Fl F
1354 Recovery mode for a non-importable pool.
1355 Attempt to return the pool to an importable state by discarding the last few
1356 transactions.
1357 Not all damaged pools can be recovered by using this option.
1358 If successful, the data from the discarded transactions is irretrievably lost.
1359 This option is ignored if the pool is importable or already imported.
1360 .It Fl m
1361 Allows a pool to import when there is a missing log device.
1362 Recent transactions can be lost because the log device will be discarded.
1363 .It Fl n
1364 Used with the
1365 .Fl F
1366 recovery option.
1367 Determines whether a non-importable pool can be made importable again, but does
1368 not actually perform the pool recovery.
1369 For more details about pool recovery mode, see the
1370 .Fl F
1371 option, above.
1372 .It Fl o Ar mntopts
1373 Comma-separated list of mount options to use when mounting datasets within the
1374 pool.
1375 See
1376 .Xr zfs 8
1377 for a description of dataset properties and mount options.
1378 .It Fl o Ar property Ns = Ns Ar value
1379 Sets the specified property on the imported pool.
1380 See the
1381 .Sx Properties
1382 section for more information on the available pool properties.
1383 .It Fl R Ar root
1384 Sets the
1385 .Sy cachefile
1386 property to
1387 .Sy none
1388 and the
1389 .Sy altroot
1390 property to
1391 .Ar root .
1392 .It Fl s
Scan using the default search path; the libblkid cache will not be
consulted.
A custom search path may be specified by setting the
ZPOOL_IMPORT_PATH environment variable.
1396 .It Fl X
1397 Used with the
1398 .Fl F
1399 recovery option. Determines whether extreme
1400 measures to find a valid txg should take place. This allows the pool to
1401 be rolled back to a txg which is no longer guaranteed to be consistent.
1402 Pools imported at an inconsistent txg may contain uncorrectable
1403 checksum errors. For more details about pool recovery mode, see the
1404 .Fl F
1405 option, above. WARNING: This option can be extremely hazardous to the
1406 health of your pool and should only be used as a last resort.
1407 .It Fl T
1408 Specify the txg to use for rollback. Implies
1409 .Fl FX .
1410 For more details
1411 about pool recovery mode, see the
1412 .Fl X
1413 option, above. WARNING: This option can be extremely hazardous to the
1414 health of your pool and should only be used as a last resort.
.It Fl t
Used with
.Sy newpool .
Specifies that
.Sy newpool
is temporary.
Temporary pool names last until export.
Ensures that the original pool name will be used in all label updates and
therefore is retained upon export.
Will also set
.Fl o Sy cachefile Ns = Ns Sy none
when not explicitly specified.
1424 .El
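For example, to import a pool that was exported under the name
.Sy pool
and rename it to
.Sy newpool :
.Bd -literal
# zpool import pool newpool
.Ed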
1425 .It Xo
1426 .Nm
1427 .Cm iostat
1428 .Op Oo Oo Fl c Ar SCRIPT Oc Oo Fl lq Oc Oc Ns | Ns Fl rw
1429 .Op Fl T Sy u Ns | Ns Sy d
1430 .Op Fl ghHLpPvy
1431 .Oo Oo Ar pool Ns ... Oc Ns | Ns Oo Ar pool vdev Ns ... Oc Ns | Ns Oo Ar vdev Ns ... Oc Oc
1432 .Op Ar interval Op Ar count
1433 .Xc
1434 Displays I/O statistics for the given pools/vdevs. You can pass in a
1435 list of pools, a pool and list of vdevs in that pool, or a list of any
1436 vdevs from any pool. If no items are specified, statistics for every
1437 pool in the system are shown.
1438 When given an
1439 .Ar interval ,
1440 the statistics are printed every
1441 .Ar interval
seconds until ^C is pressed.
If
.Ar count
is specified, the command exits after
.Ar count
reports are printed.
The first report printed is always
1444 the statistics since boot regardless of whether
1445 .Ar interval
1446 and
1447 .Ar count
1448 are passed. However, this behavior can be suppressed with the
1449 .Fl y
1450 flag. Also note that the units of
1451 .Sy K ,
1452 .Sy M ,
1453 .Sy G ...
1454 that are printed in the report are in base 1024. To get the raw
1455 values, use the
1456 .Fl p
1457 flag.
1458 .Bl -tag -width Ds
1459 .It Fl c Op Ar SCRIPT1 Ns Oo , Ns Ar SCRIPT2 Oc Ns ...
1460 Run a script (or scripts) on each vdev and include the output as a new column
1461 in the
1462 .Nm zpool Cm iostat
1463 output. Users can run any script found in their
1464 .Pa ~/.zpool.d
1465 directory or from the system
1466 .Pa /etc/zfs/zpool.d
1467 directory. The default search path can be overridden by setting the
1468 ZPOOL_SCRIPTS_PATH environment variable. A privileged user can run
1469 .Fl c
1470 if they have the ZPOOL_SCRIPTS_AS_ROOT
1471 environment variable set. If a script requires the use of a privileged
1472 command, like
1473 .Xr smartctl 8 ,
1474 then it's recommended you allow the user access to it in
1475 .Pa /etc/sudoers
1476 or add the user to the
1477 .Pa /etc/sudoers.d/zfs
1478 file.
1479 .Pp
1480 If
1481 .Fl c
1482 is passed without a script name, it prints a list of all scripts.
1483 .Fl c
1484 also sets verbose mode
1485 .No ( Ns Fl v Ns No ).
1486 .Pp
1487 Script output should be in the form of "name=value". The column name is
1488 set to "name" and the value is set to "value". Multiple lines can be
1489 used to output multiple columns. The first line of output not in the
1490 "name=value" format is displayed without a column title, and no more
1491 output after that is displayed. This can be useful for printing error
1492 messages. Blank or NULL values are printed as a '-' to make output
1493 awk-able.
1494 .Pp
1495 The following environment variables are set before running each script:
1496 .Pp
1497 .Bl -tag -width "VDEV_PATH"
1498 .It Sy VDEV_PATH
1499 Full path to the vdev
1500 .El
1501 .Bl -tag -width "VDEV_UPATH"
1502 .It Sy VDEV_UPATH
1503 Underlying path to the vdev (/dev/sd*). For use with device mapper,
1504 multipath, or partitioned vdevs.
1505 .El
1506 .Bl -tag -width "VDEV_ENC_SYSFS_PATH"
1507 .It Sy VDEV_ENC_SYSFS_PATH
1508 The sysfs path to the enclosure for the vdev (if any).
1509 .El
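For example, assuming a script named
.Pa temp
.Pq illustrative
exists in one of the directories above and prints a
.Qq name=value
pair per vdev, it can be invoked with:
.Bd -literal
# zpool iostat -c temp
.Ed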
1510 .It Fl T Sy u Ns | Ns Sy d
1511 Display a time stamp.
1512 Specify
1513 .Sy u
1514 for a printed representation of the internal representation of time.
1515 See
1516 .Xr time 2 .
1517 Specify
1518 .Sy d
1519 for standard date format.
1520 See
1521 .Xr date 1 .
1522 .It Fl g
1523 Display vdev GUIDs instead of the normal device names. These GUIDs
1524 can be used in place of device names for the zpool
1525 detach/offline/remove/replace commands.
1526 .It Fl H
1527 Scripted mode. Do not display headers, and separate fields by a
1528 single tab instead of arbitrary space.
1529 .It Fl L
1530 Display real paths for vdevs resolving all symbolic links. This can
1531 be used to look up the current block device name regardless of the
1532 .Pa /dev/disk/
1533 path used to open it.
1534 .It Fl p
1535 Display numbers in parsable (exact) values. Time values are in
1536 nanoseconds.
1537 .It Fl P
1538 Display full paths for vdevs instead of only the last component of
1539 the path. This can be used in conjunction with the
1540 .Fl L
1541 flag.
1542 .It Fl r
1543 Print request size histograms for the leaf ZIOs. This includes
histograms of individual ZIOs
.Pq Ar ind
and aggregate ZIOs
.Pq Ar agg .
1548 These stats can be useful for seeing how well the ZFS IO aggregator is
1549 working. Do not confuse these request size stats with the block layer
1550 requests; it's possible ZIOs can be broken up before being sent to the
1551 block device.
1552 .It Fl v
Verbose statistics.
Reports usage statistics for individual vdevs within the pool, in addition
to the pool-wide statistics.
.It Fl y
Omit statistics since boot.
Normally the first line of output reports the statistics since boot.
This option suppresses that first line of output.
.It Fl w
Display latency histograms rather than average latencies.
The histograms cover the same
.Ar total_wait ,
.Ar disk_wait ,
.Ar syncq_wait ,
.Ar asyncq_wait ,
and
.Ar scrub
latencies described for the
.Fl l
option below.
1557 .It Fl l
1558 Include average latency statistics:
1559 .Pp
1560 .Ar total_wait :
1561 Average total IO time (queuing + disk IO time).
1562 .Ar disk_wait :
1563 Average disk IO time (time reading/writing the disk).
1564 .Ar syncq_wait :
1565 Average amount of time IO spent in synchronous priority queues. Does
1566 not include disk time.
1567 .Ar asyncq_wait :
1568 Average amount of time IO spent in asynchronous priority queues.
1569 Does not include disk time.
1570 .Ar scrub :
1571 Average queuing time in scrub queue. Does not include disk time.
1572 .It Fl q
Include active queue statistics.
Each priority queue has both pending
.Pq Ar pend
and active
.Pq Ar activ
IOs.
Pending IOs are waiting to
1579 be issued to the disk, and active IOs have been issued to disk and are
1580 waiting for completion. These stats are broken out by priority queue:
1581 .Pp
1582 .Ar syncq_read/write :
1583 Current number of entries in synchronous priority
1584 queues.
1585 .Ar asyncq_read/write :
1586 Current number of entries in asynchronous priority queues.
1587 .Ar scrubq_read :
1588 Current number of entries in scrub queue.
1589 .Pp
1590 All queue statistics are instantaneous measurements of the number of
1591 entries in the queues. If you specify an interval, the measurements
1592 will be sampled from the end of the interval.
1593 .El
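For example, to print per-vdev latency statistics every 5 seconds,
omitting the initial since-boot report:
.Bd -literal
# zpool iostat -vly 5
.Ed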
1594 .It Xo
1595 .Nm
1596 .Cm labelclear
1597 .Op Fl f
1598 .Ar device
1599 .Xc
1600 Removes ZFS label information from the specified
1601 .Ar device .
1602 The
1603 .Ar device
1604 must not be part of an active pool configuration.
1605 .Bl -tag -width Ds
1606 .It Fl f
1607 Treat exported or foreign devices as inactive.
1608 .El
1609 .It Xo
1610 .Nm
1611 .Cm list
1612 .Op Fl HgLpPv
1613 .Op Fl o Ar property Ns Oo , Ns Ar property Oc Ns ...
1614 .Op Fl T Sy u Ns | Ns Sy d
1615 .Oo Ar pool Oc Ns ...
1616 .Op Ar interval Op Ar count
1617 .Xc
1618 Lists the given pools along with a health status and space usage.
1619 If no
1620 .Ar pool Ns s
1621 are specified, all pools in the system are listed.
1622 When given an
1623 .Ar interval ,
1624 the information is printed every
1625 .Ar interval
1626 seconds until ^C is pressed.
1627 If
1628 .Ar count
1629 is specified, the command exits after
1630 .Ar count
1631 reports are printed.
1632 .Bl -tag -width Ds
1633 .It Fl g
1634 Display vdev GUIDs instead of the normal device names. These GUIDs
1635 can be used in place of device names for the zpool
1636 detach/offline/remove/replace commands.
1637 .It Fl H
1638 Scripted mode.
1639 Do not display headers, and separate fields by a single tab instead of arbitrary
1640 space.
1641 .It Fl o Ar property
1642 Comma-separated list of properties to display.
1643 See the
1644 .Sx Properties
1645 section for a list of valid properties.
1646 The default list is
1647 .Sy name, size, alloc, free, fragmentation, expandsize, capacity,
1648 .Sy dedupratio, health, altroot .
1649 .It Fl L
Display real paths for vdevs resolving all symbolic links. This can
be used to look up the current block device name regardless of the
.Pa /dev/disk/
path used to open it.
1653 .It Fl p
1654 Display numbers in parsable
1655 .Pq exact
1656 values.
1657 .It Fl P
1658 Display full paths for vdevs instead of only the last component of
1659 the path. This can be used in conjunction with the
.Fl L
flag.
1661 .It Fl T Sy u Ns | Ns Sy d
1662 Display a time stamp.
1663 Specify
1664 .Fl u
1665 for a printed representation of the internal representation of time.
1666 See
1667 .Xr time 2 .
1668 Specify
1669 .Fl d
1670 for standard date format.
1671 See
1672 .Xr date 1 .
1673 .It Fl v
1674 Verbose statistics.
1675 Reports usage statistics for individual vdevs within the pool, in addition to
the pool-wide statistics.
1677 .El
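.Pp
For example, the following prints a tab-separated, header-free report of
selected properties for every pool, suitable for scripting:
.Bd -literal
# zpool list -H -o name,size,capacity
.Ed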
1678 .It Xo
1679 .Nm
1680 .Cm offline
1681 .Op Fl f
1682 .Op Fl t
1683 .Ar pool Ar device Ns ...
1684 .Xc
1685 Takes the specified physical device offline.
1686 While the
1687 .Ar device
1688 is offline, no attempt is made to read or write to the device.
1689 This command is not applicable to spares.
1690 .Bl -tag -width Ds
1691 .It Fl f
1692 Force fault. Instead of offlining the disk, put it into a faulted
1693 state. The fault will persist across imports unless the
1694 .Fl t
1695 flag was specified.
1696 .It Fl t
1697 Temporary.
1698 Upon reboot, the specified physical device reverts to its previous state.
1699 .El
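.Pp
For example, the following takes the hypothetical device
.Pa sda
in pool
.Em tank
offline until the next reboot:
.Bd -literal
# zpool offline -t tank sda
.Ed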
1700 .It Xo
1701 .Nm
1702 .Cm online
1703 .Op Fl e
1704 .Ar pool Ar device Ns ...
1705 .Xc
1706 Brings the specified physical device online.
1707 This command is not applicable to spares or cache devices.
1708 .Bl -tag -width Ds
1709 .It Fl e
1710 Expand the device to use all available space.
1711 If the device is part of a mirror or raidz then all devices must be expanded
1712 before the new space will become available to the pool.
1713 .El
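.Pp
For example, the following brings the hypothetical device
.Pa sda
back online and expands it to use all available space:
.Bd -literal
# zpool online -e tank sda
.Ed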
1714 .It Xo
1715 .Nm
1716 .Cm reguid
1717 .Ar pool
1718 .Xc
1719 Generates a new unique identifier for the pool.
1720 You must ensure that all devices in this pool are online and healthy before
1721 performing this action.
1722 .It Xo
1723 .Nm
1724 .Cm reopen
1725 .Ar pool
1726 .Xc
1727 Reopen all the vdevs associated with the pool.
1728 .It Xo
1729 .Nm
1730 .Cm remove
1731 .Ar pool Ar device Ns ...
1732 .Xc
1733 Removes the specified device from the pool.
1734 This command currently only supports removing hot spares, cache, and log
1735 devices.
1736 A mirrored log device can be removed by specifying the top-level mirror for the
1737 log.
1738 Non-log devices that are part of a mirrored configuration can be removed using
1739 the
1740 .Nm zpool Cm detach
1741 command.
1742 Non-redundant and raidz devices cannot be removed from a pool.
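.Pp
For example, the following removes a hot spare or cache device, here the
hypothetical
.Pa sdc ,
from the pool
.Em tank :
.Bd -literal
# zpool remove tank sdc
.Ed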
1743 .It Xo
1744 .Nm
1745 .Cm replace
1746 .Op Fl f
1747 .Op Fl o Ar property Ns = Ns Ar value
1748 .Ar pool Ar device Op Ar new_device
1749 .Xc
1750 Replaces
.Ar device
1752 with
1753 .Ar new_device .
1754 This is equivalent to attaching
1755 .Ar new_device ,
1756 waiting for it to resilver, and then detaching
.Ar device .
1758 .Pp
1759 The size of
1760 .Ar new_device
1761 must be greater than or equal to the minimum size of all the devices in a mirror
1762 or raidz configuration.
1763 .Pp
1764 .Ar new_device
1765 is required if the pool is not redundant.
1766 If
1767 .Ar new_device
1768 is not specified, it defaults to
.Ar device .
1770 This form of replacement is useful after an existing disk has failed and has
1771 been physically replaced.
1772 In this case, the new disk may have the same
1773 .Pa /dev
1774 path as the old device, even though it is actually a different disk.
1775 ZFS recognizes this.
1776 .Bl -tag -width Ds
1777 .It Fl f
1778 Forces use of
1779 .Ar new_device ,
even if it appears to be in use.
1781 Not all devices can be overridden in this manner.
1782 .It Fl o Ar property Ns = Ns Ar value
1783 Sets the given pool properties. See the
1784 .Sx Properties
1785 section for a list of valid properties that can be set.
1786 The only property supported at the moment is
1787 .Sy ashift .
1788 .El
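.Pp
For example, the following replaces the hypothetical device
.Pa sda
with
.Pa sdb ,
setting an ashift of 12 for a disk with 4096 byte sectors:
.Bd -literal
# zpool replace -o ashift=12 tank sda sdb
.Ed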
1789 .It Xo
1790 .Nm
1791 .Cm scrub
1792 .Op Fl s | Fl p
1793 .Ar pool Ns ...
1794 .Xc
1795 Begins a scrub or resumes a paused scrub.
1796 The scrub examines all data in the specified pools to verify that it checksums
1797 correctly.
1798 For replicated
1799 .Pq mirror or raidz
1800 devices, ZFS automatically repairs any damage discovered during the scrub.
1801 The
1802 .Nm zpool Cm status
command reports the progress of the scrub and summarizes the results
upon completion.
1805 .Pp
1806 Scrubbing and resilvering are very similar operations.
1807 The difference is that resilvering only examines data that ZFS knows to be out
1808 of date
1809 .Po
1810 for example, when attaching a new device to a mirror or replacing an existing
1811 device
1812 .Pc ,
1813 whereas scrubbing examines all data to discover silent errors due to hardware
1814 faults or disk failure.
1815 .Pp
1816 Because scrubbing and resilvering are I/O-intensive operations, ZFS only allows
1817 one at a time.
1818 If a scrub is paused, the
1819 .Nm zpool Cm scrub
command resumes it.
1821 If a resilver is in progress, ZFS does not allow a scrub to be started until the
1822 resilver completes.
1823 .Bl -tag -width Ds
1824 .It Fl s
1825 Stop scrubbing.
1828 .It Fl p
1829 Pause scrubbing.
Scrub progress is periodically synced to disk, so if the system
is restarted or the pool is exported during a paused scrub, the scrub will
resume from the place where it was last checkpointed to disk.
To resume a paused scrub, issue
1834 .Nm zpool Cm scrub
1835 again.
1836 .El
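.Pp
For example, a scrub of the pool
.Em tank
can be started, paused, and later resumed as follows:
.Bd -literal
# zpool scrub tank
# zpool scrub -p tank
# zpool scrub tank
.Ed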
1837 .It Xo
1838 .Nm
1839 .Cm set
1840 .Ar property Ns = Ns Ar value
1841 .Ar pool
1842 .Xc
1843 Sets the given property on the specified pool.
1844 See the
1845 .Sx Properties
1846 section for more information on what properties can be set and acceptable
1847 values.
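.Pp
For example, the following enables the
.Sy autoexpand
property, chosen here purely as an illustration, on the pool
.Em tank :
.Bd -literal
# zpool set autoexpand=on tank
.Ed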
1848 .It Xo
1849 .Nm
1850 .Cm split
1851 .Op Fl gLnP
1852 .Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
1853 .Op Fl R Ar root
1854 .Ar pool newpool
1855 .Op Ar device ...
1856 .Xc
1857 Splits devices off
1858 .Ar pool
1859 creating
1860 .Ar newpool .
1861 All vdevs in
1862 .Ar pool
1863 must be mirrors and the pool must not be in the process of resilvering.
1864 At the time of the split,
1865 .Ar newpool
1866 will be a replica of
1867 .Ar pool .
1868 By default, the
1869 last device in each mirror is split from
1870 .Ar pool
1871 to create
1872 .Ar newpool .
1873 .Pp
The optional device specification causes the specified device(s) to be
included in the new
.Ar pool
and, should any devices remain unspecified,
the last device in each mirror is used, as it would be by default.
1879 .Bl -tag -width Ds
1880 .It Fl g
1881 Display vdev GUIDs instead of the normal device names. These GUIDs
1882 can be used in place of device names for the zpool
1883 detach/offline/remove/replace commands.
1884 .It Fl L
1885 Display real paths for vdevs resolving all symbolic links. This can
1886 be used to look up the current block device name regardless of the
1887 .Pa /dev/disk/
1888 path used to open it.
1889 .It Fl n
Do a dry run; do not actually perform the split.
1891 Print out the expected configuration of
1892 .Ar newpool .
1893 .It Fl P
1894 Display full paths for vdevs instead of only the last component of
1895 the path. This can be used in conjunction with the
.Fl L
flag.
1897 .It Fl o Ar property Ns = Ns Ar value
1898 Sets the specified property for
1899 .Ar newpool .
1900 See the
1901 .Sx Properties
1902 section for more information on the available pool properties.
1903 .It Fl R Ar root
1904 Set
1905 .Sy altroot
1906 for
1907 .Ar newpool
1908 to
1909 .Ar root
1910 and automatically import it.
1911 .El
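.Pp
For example, the following splits the mirrored pool
.Em tank
into a new pool
.Em tank2
and imports it under the alternate root
.Pa /mnt :
.Bd -literal
# zpool split -R /mnt tank tank2
.Ed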
1912 .It Xo
1913 .Nm
1914 .Cm status
1915 .Op Fl c Op Ar SCRIPT1 Ns Oo , Ns Ar SCRIPT2 Oc Ns ...
1916 .Op Fl gLPvxD
1917 .Op Fl T Sy u Ns | Ns Sy d
1918 .Oo Ar pool Oc Ns ...
1919 .Op Ar interval Op Ar count
1920 .Xc
1921 Displays the detailed health status for the given pools.
1922 If no
1923 .Ar pool
1924 is specified, then the status of each pool in the system is displayed.
1925 For more information on pool and device health, see the
1926 .Sx Device Failure and Recovery
1927 section.
1928 .Pp
1929 If a scrub or resilver is in progress, this command reports the percentage done
1930 and the estimated time to completion.
1931 Both of these are only approximate, because the amount of data in the pool and
1932 the other workloads on the system can change.
1933 .Bl -tag -width Ds
1934 .It Fl c Op Ar SCRIPT1 Ns Oo , Ns Ar SCRIPT2 Oc Ns ...
1935 Run a script (or scripts) on each vdev and include the output as a new column
1936 in the
1937 .Nm zpool Cm status
1938 output. See the
1939 .Fl c
1940 option of
1941 .Nm zpool Cm iostat
1942 for complete details.
1943 .It Fl g
1944 Display vdev GUIDs instead of the normal device names. These GUIDs
1945 can be used in place of device names for the zpool
1946 detach/offline/remove/replace commands.
1947 .It Fl L
1948 Display real paths for vdevs resolving all symbolic links. This can
1949 be used to look up the current block device name regardless of the
1950 .Pa /dev/disk/
1951 path used to open it.
.It Fl p
Display numbers in parsable
.Pq exact
values.
Time values are in nanoseconds.
1955 .It Fl D
1956 Display a histogram of deduplication statistics, showing the allocated
1957 .Pq physically present on disk
1958 and referenced
1959 .Pq logically referenced in the pool
1960 block counts and sizes by reference count.
1961 .It Fl T Sy u Ns | Ns Sy d
1962 Display a time stamp.
1963 Specify
1964 .Fl u
1965 for a printed representation of the internal representation of time.
1966 See
1967 .Xr time 2 .
1968 Specify
1969 .Fl d
1970 for standard date format.
1971 See
1972 .Xr date 1 .
1973 .It Fl v
1974 Displays verbose data error information, printing out a complete list of all
1975 data errors since the last complete pool scrub.
1976 .It Fl x
1977 Only display status for pools that are exhibiting errors or are otherwise
1978 unavailable.
1979 Warnings about pools not using the latest on-disk format will not be included.
1980 .El
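.Pp
For example, the following quickly checks whether any pool needs
attention; on a healthy system, output similar to the following is
expected:
.Bd -literal
# zpool status -x
all pools are healthy
.Ed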
1981 .It Xo
1982 .Nm
1983 .Cm sync
1984 .Op Ar pool ...
1985 .Xc
This command forces all in-core dirty data to be written to the primary
pool storage and not the ZIL.
It will also update administrative information, including quota reporting.
Without arguments,
.Nm zpool Cm sync
will sync all pools on the system.
Otherwise, it will sync only the specified pool(s).
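.Pp
For example, the following flushes dirty data for the pool
.Em tank
only:
.Bd -literal
# zpool sync tank
.Ed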
1992 .It Xo
1993 .Nm
1994 .Cm upgrade
1995 .Xc
1996 Displays pools which do not have all supported features enabled and pools
1997 formatted using a legacy ZFS version number.
1998 These pools can continue to be used, but some features may not be available.
1999 Use
2000 .Nm zpool Cm upgrade Fl a
2001 to enable all features on all pools.
2002 .It Xo
2003 .Nm
2004 .Cm upgrade
2005 .Fl v
2006 .Xc
2007 Displays legacy ZFS versions supported by the current software.
2008 See
2009 .Xr zpool-features 5
for a description of the feature flags supported by the current software.
2011 .It Xo
2012 .Nm
2013 .Cm upgrade
2014 .Op Fl V Ar version
2015 .Fl a Ns | Ns Ar pool Ns ...
2016 .Xc
2017 Enables all supported features on the given pool.
2018 Once this is done, the pool will no longer be accessible on systems that do not
2019 support feature flags.
2020 See
.Xr zpool-features 5
2022 for details on compatibility with systems that support feature flags, but do not
2023 support all features enabled on the pool.
2024 .Bl -tag -width Ds
2025 .It Fl a
2026 Enables all supported features on all pools.
2027 .It Fl V Ar version
2028 Upgrade to the specified legacy version.
2029 If the
2030 .Fl V
2031 flag is specified, no features will be enabled on the pool.
2032 This option can only be used to increase the version number up to the last
2033 supported legacy version number.
2034 .El
2035 .El
2036 .Sh EXIT STATUS
2037 The following exit values are returned:
2038 .Bl -tag -width Ds
2039 .It Sy 0
2040 Successful completion.
2041 .It Sy 1
2042 An error occurred.
2043 .It Sy 2
2044 Invalid command line options were specified.
2045 .El
2046 .Sh EXAMPLES
2047 .Bl -tag -width Ds
2048 .It Sy Example 1 No Creating a RAID-Z Storage Pool
2049 The following command creates a pool with a single raidz root vdev that
2050 consists of six disks.
2051 .Bd -literal
2052 # zpool create tank raidz sda sdb sdc sdd sde sdf
2053 .Ed
2054 .It Sy Example 2 No Creating a Mirrored Storage Pool
2055 The following command creates a pool with two mirrors, where each mirror
2056 contains two disks.
2057 .Bd -literal
2058 # zpool create tank mirror sda sdb mirror sdc sdd
2059 .Ed
2060 .It Sy Example 3 No Creating a ZFS Storage Pool by Using Partitions
2061 The following command creates an unmirrored pool using two disk partitions.
2062 .Bd -literal
2063 # zpool create tank sda1 sdb2
2064 .Ed
2065 .It Sy Example 4 No Creating a ZFS Storage Pool by Using Files
2066 The following command creates an unmirrored pool using files.
2067 While not recommended, a pool based on files can be useful for experimental
2068 purposes.
2069 .Bd -literal
2070 # zpool create tank /path/to/file/a /path/to/file/b
2071 .Ed
2072 .It Sy Example 5 No Adding a Mirror to a ZFS Storage Pool
2073 The following command adds two mirrored disks to the pool
2074 .Em tank ,
2075 assuming the pool is already made up of two-way mirrors.
2076 The additional space is immediately available to any datasets within the pool.
2077 .Bd -literal
2078 # zpool add tank mirror sda sdb
2079 .Ed
2080 .It Sy Example 6 No Listing Available ZFS Storage Pools
2081 The following command lists all available pools on the system.
2082 In this case, the pool
2083 .Em zion
2084 is faulted due to a missing device.
2085 The results from this command are similar to the following:
2086 .Bd -literal
2087 # zpool list
NAME    SIZE  ALLOC   FREE  FRAG  EXPANDSZ   CAP  DEDUP  HEALTH   ALTROOT
rpool  19.9G  8.43G  11.4G   33%         -   42%  1.00x  ONLINE   -
tank   61.5G  20.0G  41.5G   48%         -   32%  1.00x  ONLINE   -
zion       -      -      -     -         -     -      -  FAULTED  -
2092 .Ed
2093 .It Sy Example 7 No Destroying a ZFS Storage Pool
2094 The following command destroys the pool
2095 .Em tank
2096 and any datasets contained within.
2097 .Bd -literal
2098 # zpool destroy -f tank
2099 .Ed
2100 .It Sy Example 8 No Exporting a ZFS Storage Pool
2101 The following command exports the devices in pool
2102 .Em tank
2103 so that they can be relocated or later imported.
2104 .Bd -literal
2105 # zpool export tank
2106 .Ed
2107 .It Sy Example 9 No Importing a ZFS Storage Pool
2108 The following command displays available pools, and then imports the pool
2109 .Em tank
2110 for use on the system.
2111 The results from this command are similar to the following:
2112 .Bd -literal
2113 # zpool import
   pool: tank
     id: 15451357997522795478
  state: ONLINE
 action: The pool can be imported using its name or numeric identifier.
 config:

        tank        ONLINE
          mirror    ONLINE
            sda     ONLINE
            sdb     ONLINE

# zpool import tank
2126 .Ed
2127 .It Sy Example 10 No Upgrading All ZFS Storage Pools to the Current Version
2128 The following command upgrades all ZFS Storage pools to the current version of
2129 the software.
2130 .Bd -literal
2131 # zpool upgrade -a
2132 This system is currently running ZFS version 2.
2133 .Ed
2134 .It Sy Example 11 No Managing Hot Spares
2135 The following command creates a new pool with an available hot spare:
2136 .Bd -literal
2137 # zpool create tank mirror sda sdb spare sdc
2138 .Ed
2139 .Pp
2140 If one of the disks were to fail, the pool would be reduced to the degraded
2141 state.
2142 The failed device can be replaced using the following command:
2143 .Bd -literal
2144 # zpool replace tank sda sdd
2145 .Ed
2146 .Pp
2147 Once the data has been resilvered, the spare is automatically removed and is
made available for use should another device fail.
2149 The hot spare can be permanently removed from the pool using the following
2150 command:
2151 .Bd -literal
2152 # zpool remove tank sdc
2153 .Ed
2154 .It Sy Example 12 No Creating a ZFS Pool with Mirrored Separate Intent Logs
The following command creates a ZFS storage pool consisting of two two-way
2156 mirrors and mirrored log devices:
2157 .Bd -literal
2158 # zpool create pool mirror sda sdb mirror sdc sdd log mirror \\
    sde sdf
2160 .Ed
2161 .It Sy Example 13 No Adding Cache Devices to a ZFS Pool
2162 The following command adds two disks for use as cache devices to a ZFS storage
2163 pool:
2164 .Bd -literal
2165 # zpool add pool cache sdc sdd
2166 .Ed
2167 .Pp
2168 Once added, the cache devices gradually fill with content from main memory.
2169 Depending on the size of your cache devices, it could take over an hour for
2170 them to fill.
2171 Capacity and reads can be monitored using the
.Cm iostat
subcommand as follows:
2174 .Bd -literal
2175 # zpool iostat -v pool 5
2176 .Ed
2177 .It Sy Example 14 No Removing a Mirrored Log Device
2178 The following command removes the mirrored log device
2179 .Sy mirror-2 .
2180 Given this configuration:
2181 .Bd -literal
  pool: tank
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            sda     ONLINE       0     0     0
            sdb     ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            sdc     ONLINE       0     0     0
            sdd     ONLINE       0     0     0
        logs
          mirror-2  ONLINE       0     0     0
            sde     ONLINE       0     0     0
            sdf     ONLINE       0     0     0
2199 .Ed
2200 .Pp
2201 The command to remove the mirrored log
2202 .Sy mirror-2
2203 is:
2204 .Bd -literal
2205 # zpool remove tank mirror-2
2206 .Ed
2207 .It Sy Example 15 No Displaying expanded space on a device
2208 The following command displays the detailed information for the pool
2209 .Em data .
This pool is composed of a single raidz vdev where one of its devices
had its capacity increased by 10GB.
2212 In this example, the pool will not be able to utilize this extra capacity until
2213 all the devices under the raidz vdev have been expanded.
2214 .Bd -literal
2215 # zpool list -v data
NAME         SIZE  ALLOC   FREE  FRAG  EXPANDSZ   CAP  DEDUP  HEALTH  ALTROOT
data        23.9G  14.6G  9.30G   48%         -   61%  1.00x  ONLINE  -
  raidz1    23.9G  14.6G  9.30G   48%         -
    sda         -      -      -     -         -
    sdb         -      -      -     -       10G
    sdc         -      -      -     -         -
2222 .Ed
2223 .It Sy Example 16 No Adding output columns
2224 Additional columns can be added to the
2225 .Nm zpool Cm status
2226 and
2227 .Nm zpool Cm iostat
output with the
2229 .Fl c
2230 option.
2231 .Bd -literal
2232 # zpool status -c vendor,model,size
NAME      STATE   READ WRITE CKSUM  vendor   model         size
tank      ONLINE     0     0     0
mirror-0  ONLINE     0     0     0
U1        ONLINE     0     0     0  SEAGATE  ST8000NM0075  7.3T
U10       ONLINE     0     0     0  SEAGATE  ST8000NM0075  7.3T
U11       ONLINE     0     0     0  SEAGATE  ST8000NM0075  7.3T
U12       ONLINE     0     0     0  SEAGATE  ST8000NM0075  7.3T
U13       ONLINE     0     0     0  SEAGATE  ST8000NM0075  7.3T
U14       ONLINE     0     0     0  SEAGATE  ST8000NM0075  7.3T
2242
2243 # zpool iostat -vc slaves
              capacity     operations     bandwidth
pool        alloc   free   read  write   read  write  slaves
----------  -----  -----  -----  -----  -----  -----  ---------
tank        20.4G  7.23T     26    152  20.7M  21.6M
  mirror    20.4G  7.23T     26    152  20.7M  21.6M
    U1          -      -      0     31  1.46K  20.6M  sdb sdff
    U10         -      -      0      1  3.77K  13.3K  sdas sdgw
    U11         -      -      0      1   288K  13.3K  sdat sdgx
    U12         -      -      0      1  78.4K  13.3K  sdau sdgy
    U13         -      -      0      1   128K  13.3K  sdav sdgz
    U14         -      -      0      1  63.2K  13.3K  sdfk sdg
2255 .Ed
2256 .El
2257 .Sh ENVIRONMENT VARIABLES
2258 .Bl -tag -width "ZFS_ABORT"
2259 .It Ev ZFS_ABORT
2260 Cause
2261 .Nm zpool
2262 to dump core on exit for the purposes of running
.Sy ::findleaks .
2264 .El
2265 .Bl -tag -width "ZPOOL_IMPORT_PATH"
2266 .It Ev ZPOOL_IMPORT_PATH
The search path for devices or files to use with the pool.
This is a colon-separated list of directories in which
2268 .Nm zpool
2269 looks for device nodes and files.
2270 Similar to the
2271 .Fl d
2272 option in
2273 .Nm zpool import .
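.Pp
For example, the following restricts the device search during import to
two illustrative directories:
.Bd -literal
# ZPOOL_IMPORT_PATH=/dev/disk/by-vdev:/dev zpool import
.Ed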
2274 .El
2275 .Bl -tag -width "ZPOOL_VDEV_NAME_GUID"
2276 .It Ev ZPOOL_VDEV_NAME_GUID
2277 Cause
.Nm zpool
subcommands to output vdev GUIDs by default.
This behavior is identical to the
2280 .Nm zpool status -g
2281 command line option.
2282 .El
2283 .Bl -tag -width "ZPOOL_VDEV_NAME_FOLLOW_LINKS"
2284 .It Ev ZPOOL_VDEV_NAME_FOLLOW_LINKS
2285 Cause
2286 .Nm zpool
subcommands to follow links for vdev names by default.
This behavior is identical to the
2288 .Nm zpool status -L
2289 command line option.
2290 .El
2291 .Bl -tag -width "ZPOOL_VDEV_NAME_PATH"
2292 .It Ev ZPOOL_VDEV_NAME_PATH
2293 Cause
2294 .Nm zpool
subcommands to output full vdev path names by default.
This behavior is identical to the
.Nm zpool status -P
2298 command line option.
2299 .El
2300 .Bl -tag -width "ZFS_VDEV_DEVID_OPT_OUT"
2301 .It Ev ZFS_VDEV_DEVID_OPT_OUT
2302 Older ZFS on Linux implementations had issues when attempting to display pool
2303 config VDEV names if a
2304 .Sy devid
NVP value was present in the pool's config.
.Pp
For example, a pool that originated on the illumos platform would have a devid
value in the config and
.Nm zpool status
would fail when listing the config.
This would also be true for future Linux-based pools.
2312 .Pp
2313 A pool can be stripped of any
2314 .Sy devid
2315 values on import or prevented from adding
2316 them on
2317 .Nm zpool create
2318 or
2319 .Nm zpool add
2320 by setting
2321 .Sy ZFS_VDEV_DEVID_OPT_OUT .
2322 .El
2323 .Bl -tag -width "ZPOOL_SCRIPTS_AS_ROOT"
2324 .It Ev ZPOOL_SCRIPTS_AS_ROOT
Allow a privileged user to run
2326 .Nm zpool status/iostat
2327 with the
2328 .Fl c
2329 option. Normally, only unprivileged users are allowed to run
2330 .Fl c .
2331 .El
2332 .Bl -tag -width "ZPOOL_SCRIPTS_PATH"
2333 .It Ev ZPOOL_SCRIPTS_PATH
2334 The search path for scripts when running
2335 .Nm zpool status/iostat
2336 with the
2337 .Fl c
2338 option. This is a colon-separated list of directories and overrides the default
2339 .Pa ~/.zpool.d
2340 and
2341 .Pa /etc/zfs/zpool.d
2342 search paths.
2343 .El
2344 .Bl -tag -width "ZPOOL_SCRIPTS_ENABLED"
2345 .It Ev ZPOOL_SCRIPTS_ENABLED
2346 Allow a user to run
2347 .Nm zpool status/iostat
2348 with the
2349 .Fl c
2350 option. If
2351 .Sy ZPOOL_SCRIPTS_ENABLED
2352 is not set, it is assumed that the user is allowed to run
.Nm zpool status/iostat -c .
.El
2354 .Sh INTERFACE STABILITY
2355 .Sy Evolving
2356 .Sh SEE ALSO
2357 .Xr zed 8 ,
2358 .Xr zfs 8 ,
2359 .Xr zfs-events 5 ,
2360 .Xr zfs-module-parameters 5 ,
2361 .Xr zpool-features 5