1 .\"
2 .\" CDDL HEADER START
3 .\"
4 .\" The contents of this file are subject to the terms of the
5 .\" Common Development and Distribution License (the "License").
6 .\" You may not use this file except in compliance with the License.
7 .\"
8 .\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
9 .\" or http://www.opensolaris.org/os/licensing.
10 .\" See the License for the specific language governing permissions
11 .\" and limitations under the License.
12 .\"
13 .\" When distributing Covered Code, include this CDDL HEADER in each
14 .\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
15 .\" If applicable, add the following below this CDDL HEADER, with the
16 .\" fields enclosed by brackets "[]" replaced with your own identifying
17 .\" information: Portions Copyright [yyyy] [name of copyright owner]
18 .\"
19 .\" CDDL HEADER END
20 .\"
21 .\"
22 .\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
23 .\" Copyright (c) 2012, 2018 by Delphix. All rights reserved.
24 .\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
25 .\" Copyright (c) 2017 Datto Inc.
26 .\" Copyright (c) 2018 George Melikov. All Rights Reserved.
27 .\" Copyright 2017 Nexenta Systems, Inc.
28 .\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved.
29 .\"
30 .Dd April 27, 2018
31 .Dt ZPOOL 8 SMM
32 .Os Linux
33 .Sh NAME
34 .Nm zpool
35 .Nd configure ZFS storage pools
36 .Sh SYNOPSIS
37 .Nm
38 .Fl ?
39 .Nm
40 .Cm add
41 .Op Fl fgLnP
42 .Oo Fl o Ar property Ns = Ns Ar value Oc
43 .Ar pool vdev Ns ...
44 .Nm
45 .Cm attach
46 .Op Fl f
47 .Oo Fl o Ar property Ns = Ns Ar value Oc
48 .Ar pool device new_device
49 .Nm
50 .Cm checkpoint
51 .Op Fl d, -discard
52 .Ar pool
53 .Nm
54 .Cm clear
55 .Ar pool
56 .Op Ar device
57 .Nm
58 .Cm create
59 .Op Fl dfn
60 .Op Fl m Ar mountpoint
61 .Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Oo Fl o Ar feature@feature Ns = Ns Ar value Oc Ns ...
.Oo Fl O Ar file-system-property Ns = Ns Ar value Oc Ns ...
.Op Fl R Ar root
.Op Fl t Ar tname
.Ar pool vdev Ns ...
66 .Nm
67 .Cm destroy
68 .Op Fl f
69 .Ar pool
70 .Nm
71 .Cm detach
72 .Ar pool device
73 .Nm
74 .Cm events
75 .Op Fl vHf Oo Ar pool Oc | Fl c
76 .Nm
77 .Cm export
78 .Op Fl a
79 .Op Fl f
80 .Ar pool Ns ...
81 .Nm
82 .Cm get
83 .Op Fl Hp
84 .Op Fl o Ar field Ns Oo , Ns Ar field Oc Ns ...
85 .Sy all Ns | Ns Ar property Ns Oo , Ns Ar property Oc Ns ...
86 .Oo Ar pool Oc Ns ...
87 .Nm
88 .Cm history
89 .Op Fl il
90 .Oo Ar pool Oc Ns ...
91 .Nm
92 .Cm import
93 .Op Fl D
94 .Op Fl d Ar dir Ns | Ns device
95 .Nm
96 .Cm import
97 .Fl a
98 .Op Fl DflmN
99 .Op Fl F Oo Fl n Oc Oo Fl T Oc Oo Fl X Oc
100 .Op Fl -rewind-to-checkpoint
101 .Op Fl c Ar cachefile Ns | Ns Fl d Ar dir Ns | Ns device
102 .Op Fl o Ar mntopts
103 .Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
104 .Op Fl R Ar root
105 .Nm
106 .Cm import
107 .Op Fl Dflm
108 .Op Fl F Oo Fl n Oc Oo Fl T Oc Oo Fl X Oc
109 .Op Fl -rewind-to-checkpoint
110 .Op Fl c Ar cachefile Ns | Ns Fl d Ar dir Ns | Ns device
111 .Op Fl o Ar mntopts
112 .Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
113 .Op Fl R Ar root
114 .Op Fl s
115 .Ar pool Ns | Ns Ar id
116 .Op Ar newpool Oo Fl t Oc
117 .Nm
118 .Cm iostat
119 .Op Oo Oo Fl c Ar SCRIPT Oc Oo Fl lq Oc Oc Ns | Ns Fl rw
120 .Op Fl T Sy u Ns | Ns Sy d
121 .Op Fl ghHLpPvy
122 .Oo Oo Ar pool Ns ... Oc Ns | Ns Oo Ar pool vdev Ns ... Oc Ns | Ns Oo Ar vdev Ns ... Oc Oc
123 .Op Ar interval Op Ar count
124 .Nm
125 .Cm labelclear
126 .Op Fl f
127 .Ar device
128 .Nm
129 .Cm list
130 .Op Fl HgLpPv
131 .Op Fl o Ar property Ns Oo , Ns Ar property Oc Ns ...
132 .Op Fl T Sy u Ns | Ns Sy d
133 .Oo Ar pool Oc Ns ...
134 .Op Ar interval Op Ar count
135 .Nm
136 .Cm offline
137 .Op Fl f
138 .Op Fl t
139 .Ar pool Ar device Ns ...
140 .Nm
141 .Cm online
142 .Op Fl e
143 .Ar pool Ar device Ns ...
144 .Nm
145 .Cm reguid
146 .Ar pool
147 .Nm
148 .Cm reopen
149 .Op Fl n
150 .Ar pool
151 .Nm
152 .Cm remove
153 .Op Fl np
154 .Ar pool Ar device Ns ...
155 .Nm
156 .Cm remove
157 .Fl s
158 .Ar pool
159 .Nm
160 .Cm replace
161 .Op Fl f
162 .Oo Fl o Ar property Ns = Ns Ar value Oc
163 .Ar pool Ar device Op Ar new_device
164 .Nm
165 .Cm resilver
166 .Ar pool Ns ...
167 .Nm
168 .Cm scrub
169 .Op Fl s | Fl p
170 .Ar pool Ns ...
171 .Nm
172 .Cm set
173 .Ar property Ns = Ns Ar value
174 .Ar pool
175 .Nm
176 .Cm split
177 .Op Fl gLlnP
178 .Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
179 .Op Fl R Ar root
180 .Ar pool newpool
181 .Oo Ar device Oc Ns ...
182 .Nm
183 .Cm status
184 .Oo Fl c Ar SCRIPT Oc
185 .Op Fl gLPvxD
186 .Op Fl T Sy u Ns | Ns Sy d
187 .Oo Ar pool Oc Ns ...
188 .Op Ar interval Op Ar count
189 .Nm
190 .Cm sync
191 .Oo Ar pool Oc Ns ...
192 .Nm
193 .Cm upgrade
194 .Nm
195 .Cm upgrade
196 .Fl v
197 .Nm
198 .Cm upgrade
199 .Op Fl V Ar version
200 .Fl a Ns | Ns Ar pool Ns ...
201 .Sh DESCRIPTION
202 The
203 .Nm
204 command configures ZFS storage pools.
205 A storage pool is a collection of devices that provides physical storage and
206 data replication for ZFS datasets.
207 All datasets within a storage pool share the same space.
208 See
209 .Xr zfs 8
210 for information on managing datasets.
211 .Ss Virtual Devices (vdevs)
212 A "virtual device" describes a single device or a collection of devices
213 organized according to certain performance and fault characteristics.
214 The following virtual devices are supported:
215 .Bl -tag -width Ds
216 .It Sy disk
217 A block device, typically located under
218 .Pa /dev .
219 ZFS can use individual slices or partitions, though the recommended mode of
220 operation is to use whole disks.
221 A disk can be specified by a full path, or it can be a shorthand name
222 .Po the relative portion of the path under
223 .Pa /dev
224 .Pc .
225 A whole disk can be specified by omitting the slice or partition designation.
226 For example,
227 .Pa sda
228 is equivalent to
229 .Pa /dev/sda .
230 When given a whole disk, ZFS automatically labels the disk, if necessary.
231 .It Sy file
232 A regular file.
233 The use of files as a backing store is strongly discouraged.
234 It is designed primarily for experimental purposes, as the fault tolerance of a
235 file is only as good as the file system of which it is a part.
236 A file must be specified by a full path.
237 .It Sy mirror
238 A mirror of two or more devices.
239 Data is replicated in an identical fashion across all components of a mirror.
240 A mirror with N disks of size X can hold X bytes and can withstand (N-1) devices
241 failing before data integrity is compromised.
242 .It Sy raidz , raidz1 , raidz2 , raidz3
243 A variation on RAID-5 that allows for better distribution of parity and
244 eliminates the RAID-5
245 .Qq write hole
246 .Pq in which data and parity become inconsistent after a power loss .
Data and parity are striped across all disks within a raidz group.
248 .Pp
249 A raidz group can have single-, double-, or triple-parity, meaning that the
250 raidz group can sustain one, two, or three failures, respectively, without
251 losing any data.
252 The
253 .Sy raidz1
254 vdev type specifies a single-parity raidz group; the
255 .Sy raidz2
256 vdev type specifies a double-parity raidz group; and the
257 .Sy raidz3
258 vdev type specifies a triple-parity raidz group.
259 The
260 .Sy raidz
261 vdev type is an alias for
262 .Sy raidz1 .
263 .Pp
264 A raidz group with N disks of size X with P parity disks can hold approximately
265 (N-P)*X bytes and can withstand P device(s) failing before data integrity is
266 compromised.
267 The minimum number of devices in a raidz group is one more than the number of
268 parity disks.
269 The recommended number is between 3 and 9 to help increase performance.
270 .It Sy spare
271 A special pseudo-vdev which keeps track of available hot spares for a pool.
272 For more information, see the
273 .Sx Hot Spares
274 section.
275 .It Sy log
276 A separate intent log device.
277 If more than one log device is specified, then writes are load-balanced between
278 devices.
279 Log devices can be mirrored.
280 However, raidz vdev types are not supported for the intent log.
281 For more information, see the
282 .Sx Intent Log
283 section.
284 .It Sy dedup
285 A device dedicated solely for allocating dedup data.
286 The redundancy of this device should match the redundancy of the other normal
287 devices in the pool. If more than one dedup device is specified, then
288 allocations are load-balanced between devices.
289 .It Sy special
290 A device dedicated solely for allocating various kinds of internal metadata,
291 and optionally small file data.
292 The redundancy of this device should match the redundancy of the other normal
293 devices in the pool. If more than one special device is specified, then
294 allocations are load-balanced between devices.
295 .Pp
296 For more information on special allocations, see the
297 .Sx Special Allocation Class
298 section.
299 .It Sy cache
300 A device used to cache storage pool data.
301 A cache device cannot be configured as a mirror or raidz group.
302 For more information, see the
303 .Sx Cache Devices
304 section.
305 .El
306 .Pp
307 Virtual devices cannot be nested, so a mirror or raidz virtual device can only
308 contain files or disks.
309 Mirrors of mirrors
310 .Pq or other combinations
311 are not allowed.
312 .Pp
313 A pool can have any number of virtual devices at the top of the configuration
314 .Po known as
315 .Qq root vdevs
316 .Pc .
317 Data is dynamically distributed across all top-level devices to balance data
318 among devices.
319 As new virtual devices are added, ZFS automatically places data on the newly
320 available devices.
321 .Pp
322 Virtual devices are specified one at a time on the command line, separated by
323 whitespace.
324 The keywords
325 .Sy mirror
326 and
327 .Sy raidz
328 are used to distinguish where a group ends and another begins.
329 For example, the following creates two root vdevs, each a mirror of two disks:
330 .Bd -literal
331 # zpool create mypool mirror sda sdb mirror sdc sdd
332 .Ed
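.Pp
Similarly, the following sketch (disk names are illustrative) creates a pool
with a single raidz2 root vdev that can withstand the loss of any two of its
five disks:
.Bd -literal
# zpool create mypool raidz2 sda sdb sdc sdd sde
.Ed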
333 .Ss Device Failure and Recovery
334 ZFS supports a rich set of mechanisms for handling device failure and data
335 corruption.
336 All metadata and data is checksummed, and ZFS automatically repairs bad data
337 from a good copy when corruption is detected.
338 .Pp
339 In order to take advantage of these features, a pool must make use of some form
340 of redundancy, using either mirrored or raidz groups.
341 While ZFS supports running in a non-redundant configuration, where each root
342 vdev is simply a disk or file, this is strongly discouraged.
343 A single case of bit corruption can render some or all of your data unavailable.
344 .Pp
345 A pool's health status is described by one of three states: online, degraded,
346 or faulted.
347 An online pool has all devices operating normally.
348 A degraded pool is one in which one or more devices have failed, but the data is
349 still available due to a redundant configuration.
350 A faulted pool has corrupted metadata, or one or more faulted devices, and
351 insufficient replicas to continue functioning.
352 .Pp
353 The health of the top-level vdev, such as mirror or raidz device, is
354 potentially impacted by the state of its associated vdevs, or component
355 devices.
356 A top-level vdev or component device is in one of the following states:
357 .Bl -tag -width "DEGRADED"
358 .It Sy DEGRADED
359 One or more top-level vdevs is in the degraded state because one or more
360 component devices are offline.
361 Sufficient replicas exist to continue functioning.
362 .Pp
363 One or more component devices is in the degraded or faulted state, but
364 sufficient replicas exist to continue functioning.
365 The underlying conditions are as follows:
366 .Bl -bullet
367 .It
368 The number of checksum errors exceeds acceptable levels and the device is
369 degraded as an indication that something may be wrong.
370 ZFS continues to use the device as necessary.
371 .It
372 The number of I/O errors exceeds acceptable levels.
373 The device could not be marked as faulted because there are insufficient
374 replicas to continue functioning.
375 .El
376 .It Sy FAULTED
377 One or more top-level vdevs is in the faulted state because one or more
378 component devices are offline.
379 Insufficient replicas exist to continue functioning.
380 .Pp
381 One or more component devices is in the faulted state, and insufficient
382 replicas exist to continue functioning.
383 The underlying conditions are as follows:
384 .Bl -bullet
385 .It
386 The device could be opened, but the contents did not match expected values.
387 .It
388 The number of I/O errors exceeds acceptable levels and the device is faulted to
389 prevent further use of the device.
390 .El
391 .It Sy OFFLINE
392 The device was explicitly taken offline by the
393 .Nm zpool Cm offline
394 command.
395 .It Sy ONLINE
396 The device is online and functioning.
397 .It Sy REMOVED
398 The device was physically removed while the system was running.
399 Device removal detection is hardware-dependent and may not be supported on all
400 platforms.
401 .It Sy UNAVAIL
402 The device could not be opened.
403 If a pool is imported when a device was unavailable, then the device will be
404 identified by a unique identifier instead of its path since the path was never
405 correct in the first place.
406 .El
407 .Pp
408 If a device is removed and later re-attached to the system, ZFS attempts
409 to put the device online automatically.
410 Device attach detection is hardware-dependent and might not be supported on all
411 platforms.
412 .Ss Hot Spares
413 ZFS allows devices to be associated with pools as
414 .Qq hot spares .
415 These devices are not actively used in the pool, but when an active device
416 fails, it is automatically replaced by a hot spare.
417 To create a pool with hot spares, specify a
418 .Sy spare
419 vdev with any number of devices.
420 For example,
421 .Bd -literal
422 # zpool create pool mirror sda sdb spare sdc sdd
423 .Ed
424 .Pp
425 Spares can be shared across multiple pools, and can be added with the
426 .Nm zpool Cm add
427 command and removed with the
428 .Nm zpool Cm remove
429 command.
430 Once a spare replacement is initiated, a new
431 .Sy spare
432 vdev is created within the configuration that will remain there until the
433 original device is replaced.
434 At this point, the hot spare becomes available again if another device fails.
435 .Pp
436 If a pool has a shared spare that is currently being used, the pool can not be
437 exported since other pools may use this shared spare, which may lead to
438 potential data corruption.
439 .Pp
440 An in-progress spare replacement can be cancelled by detaching the hot spare.
441 If the original faulted device is detached, then the hot spare assumes its
442 place in the configuration, and is removed from the spare list of all active
443 pools.
444 .Pp
445 Spares cannot replace log devices.
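.Pp
As an illustration, a hot spare might be added to or removed from an existing
pool as follows (device names are placeholders):
.Bd -literal
# zpool add pool spare sde
# zpool remove pool sde
.Ed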
446 .Ss Intent Log
447 The ZFS Intent Log (ZIL) satisfies POSIX requirements for synchronous
448 transactions.
449 For instance, databases often require their transactions to be on stable storage
450 devices when returning from a system call.
451 NFS and other applications can also use
452 .Xr fsync 2
453 to ensure data stability.
454 By default, the intent log is allocated from blocks within the main pool.
455 However, it might be possible to get better performance using separate intent
456 log devices such as NVRAM or a dedicated disk.
457 For example:
458 .Bd -literal
459 # zpool create pool sda sdb log sdc
460 .Ed
461 .Pp
462 Multiple log devices can also be specified, and they can be mirrored.
463 See the
464 .Sx EXAMPLES
465 section for an example of mirroring multiple log devices.
466 .Pp
467 Log devices can be added, replaced, attached, detached and removed. In
468 addition, log devices are imported and exported as part of the pool
469 that contains them.
470 Mirrored devices can be removed by specifying the top-level mirror vdev.
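.Pp
For example, assuming
.Nm zpool Cm status
reports a mirrored log vdev named
.Sy mirror-1 ,
it could be removed with:
.Bd -literal
# zpool remove pool mirror-1
.Ed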
471 .Ss Cache Devices
472 Devices can be added to a storage pool as
473 .Qq cache devices .
474 These devices provide an additional layer of caching between main memory and
475 disk.
476 For read-heavy workloads, where the working set size is much larger than what
can be cached in main memory, using cache devices allows much more of this
working set to be served from low-latency media.
479 Using cache devices provides the greatest performance improvement for random
480 read-workloads of mostly static content.
481 .Pp
482 To create a pool with cache devices, specify a
483 .Sy cache
484 vdev with any number of devices.
485 For example:
486 .Bd -literal
487 # zpool create pool sda sdb cache sdc sdd
488 .Ed
489 .Pp
490 Cache devices cannot be mirrored or part of a raidz configuration.
491 If a read error is encountered on a cache device, that read I/O is reissued to
492 the original storage pool device, which might be part of a mirrored or raidz
493 configuration.
494 .Pp
495 The content of the cache devices is considered volatile, as is the case with
496 other system caches.
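.Pp
As a sketch, a cache device might be added to or removed from an existing pool
as follows (device names are placeholders):
.Bd -literal
# zpool add pool cache sde
# zpool remove pool sde
.Ed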
497 .Ss Pool checkpoint
Before starting critical procedures that include destructive actions (e.g.
.Nm zfs Cm destroy
), an administrator can checkpoint the pool's state and, in the case of a
mistake or failure, rewind the entire pool back to the checkpoint.
502 Otherwise, the checkpoint can be discarded when the procedure has completed
503 successfully.
504 .Pp
505 A pool checkpoint can be thought of as a pool-wide snapshot and should be used
506 with care as it contains every part of the pool's state, from properties to vdev
507 configuration.
Thus, while a pool has a checkpoint, certain operations are not allowed;
specifically: vdev removal/attach/detach, mirror splitting, and
changing the pool's guid.
511 Adding a new vdev is supported but in the case of a rewind it will have to be
512 added again.
513 Finally, users of this feature should keep in mind that scrubs in a pool that
514 has a checkpoint do not repair checkpointed data.
515 .Pp
516 To create a checkpoint for a pool:
517 .Bd -literal
518 # zpool checkpoint pool
519 .Ed
520 .Pp
521 To later rewind to its checkpointed state, you need to first export it and
522 then rewind it during import:
523 .Bd -literal
524 # zpool export pool
525 # zpool import --rewind-to-checkpoint pool
526 .Ed
527 .Pp
528 To discard the checkpoint from a pool:
529 .Bd -literal
530 # zpool checkpoint -d pool
531 .Ed
532 .Pp
533 Dataset reservations (controlled by the
534 .Nm reservation
535 or
536 .Nm refreservation
537 zfs properties) may be unenforceable while a checkpoint exists, because the
538 checkpoint is allowed to consume the dataset's reservation.
539 Finally, data that is part of the checkpoint but has been freed in the
540 current state of the pool won't be scanned during a scrub.
541 .Ss Special Allocation Class
542 The allocations in the special class are dedicated to specific block types.
543 By default this includes all metadata, the indirect blocks of user data, and
544 any dedup data. The class can also be provisioned to accept a limited
545 percentage of small file data blocks.
546 .Pp
A pool must always have at least one general (non-special) vdev before
548 other devices can be assigned to the special class. If the special class
549 becomes full, then allocations intended for it will spill back into the
550 normal class.
551 .Pp
552 Dedup data can be excluded from the special class by setting the
553 .Sy zfs_ddt_data_is_special
554 zfs module parameter to false (0).
555 .Pp
556 Inclusion of small file blocks in the special class is opt-in. Each dataset
557 can control the size of small file blocks allowed in the special class by
558 setting the
559 .Sy special_small_blocks
560 dataset property. It defaults to zero so you must opt-in by setting it to a
561 non-zero value. See
562 .Xr zfs 8
563 for more info on setting this property.
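.Pp
As an illustrative sketch, the following creates a pool with a mirrored special
vdev, then allows small file blocks of up to 32K in the special class for a
hypothetical dataset:
.Bd -literal
# zpool create pool raidz sda sdb sdc special mirror sdd sde
# zfs set special_small_blocks=32K pool/fs
.Ed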
564 .Ss Properties
565 Each pool has several properties associated with it.
566 Some properties are read-only statistics while others are configurable and
567 change the behavior of the pool.
568 .Pp
569 The following are read-only properties:
570 .Bl -tag -width Ds
.It Sy allocated
572 Amount of storage used within the pool.
573 .It Sy capacity
574 Percentage of pool space used.
575 This property can also be referred to by its shortened column name,
576 .Sy cap .
577 .It Sy expandsize
578 Amount of uninitialized space within the pool or device that can be used to
579 increase the total capacity of the pool.
580 Uninitialized space consists of any space on an EFI labeled vdev which has not
581 been brought online
.Po e.g., using
583 .Nm zpool Cm online Fl e
584 .Pc .
585 This space occurs when a LUN is dynamically expanded.
586 .It Sy fragmentation
587 The amount of fragmentation in the pool.
588 .It Sy free
589 The amount of free space available in the pool.
590 .It Sy freeing
591 After a file system or snapshot is destroyed, the space it was using is
592 returned to the pool asynchronously.
593 .Sy freeing
594 is the amount of space remaining to be reclaimed.
595 Over time
596 .Sy freeing
597 will decrease while
598 .Sy free
599 increases.
600 .It Sy health
601 The current health of the pool.
602 Health can be one of
.Sy ONLINE , DEGRADED , FAULTED , OFFLINE , REMOVED , UNAVAIL .
604 .It Sy guid
605 A unique identifier for the pool.
606 .It Sy load_guid
607 A unique identifier for the pool.
608 Unlike the
609 .Sy guid
property, this identifier is generated every time we load the pool (i.e. it does
not persist across imports/exports) and never changes while the pool is loaded
612 (even if a
613 .Sy reguid
614 operation takes place).
615 .It Sy size
616 Total size of the storage pool.
617 .It Sy unsupported@ Ns Em feature_guid
618 Information about unsupported features that are enabled on the pool.
619 See
620 .Xr zpool-features 5
621 for details.
622 .El
623 .Pp
624 The space usage properties report actual physical space available to the
625 storage pool.
626 The physical space can be different from the total amount of space that any
627 contained datasets can actually use.
628 The amount of space used in a raidz configuration depends on the characteristics
629 of the data being written.
630 In addition, ZFS reserves some space for internal accounting that the
631 .Xr zfs 8
632 command takes into account, but the
633 .Nm
634 command does not.
635 For non-full pools of a reasonable size, these effects should be invisible.
636 For small pools, or pools that are close to being completely full, these
637 discrepancies may become more noticeable.
638 .Pp
639 The following property can be set at creation time and import time:
640 .Bl -tag -width Ds
641 .It Sy altroot
642 Alternate root directory.
643 If set, this directory is prepended to any mount points within the pool.
644 This can be used when examining an unknown pool where the mount points cannot be
645 trusted, or in an alternate boot environment, where the typical paths are not
646 valid.
647 .Sy altroot
648 is not a persistent property.
649 It is valid only while the system is up.
650 Setting
651 .Sy altroot
652 defaults to using
653 .Sy cachefile Ns = Ns Sy none ,
654 though this may be overridden using an explicit setting.
655 .El
656 .Pp
657 The following property can be set only at import time:
658 .Bl -tag -width Ds
659 .It Sy readonly Ns = Ns Sy on Ns | Ns Sy off
660 If set to
661 .Sy on ,
662 the pool will be imported in read-only mode.
663 This property can also be referred to by its shortened column name,
664 .Sy rdonly .
665 .El
666 .Pp
667 The following properties can be set at creation time and import time, and later
668 changed with the
669 .Nm zpool Cm set
670 command:
671 .Bl -tag -width Ds
672 .It Sy ashift Ns = Ns Sy ashift
Pool sector size exponent (internally referred to as
.Sy ashift
); the pool's sector size is
.Sy 2
raised to this value. Values from 9 to 16, inclusive, are valid; also, the special
678 value 0 (the default) means to auto-detect using the kernel's block
679 layer and a ZFS internal exception list. I/O operations will be aligned
680 to the specified size boundaries. Additionally, the minimum (disk)
681 write size will be set to the specified size, so this represents a
682 space vs. performance trade-off. For optimal performance, the pool
683 sector size should be greater than or equal to the sector size of the
684 underlying disks. The typical case for setting this property is when
685 performance is important and the underlying disks use 4KiB sectors but
686 report 512B sectors to the OS (for compatibility reasons); in that
687 case, set
688 .Sy ashift=12
689 (which is 1<<12 = 4096). When set, this property is
690 used as the default hint value in subsequent vdev operations (add,
691 attach and replace). Changing this value will not modify any existing
692 vdev, not even on disk replacement; however it can be used, for
instance, to replace a dying 512B-sector disk with a newer 4KiB-sector
device: this will probably result in degraded performance but at the
same time could prevent loss of data.
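.Pp
For instance, a pool backed by disks with 4KiB physical sectors that report
512B sectors might be created with an explicit ashift (device names are
placeholders):
.Bd -literal
# zpool create -o ashift=12 tank mirror sda sdb
.Ed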
696 .It Sy autoexpand Ns = Ns Sy on Ns | Ns Sy off
697 Controls automatic pool expansion when the underlying LUN is grown.
698 If set to
699 .Sy on ,
700 the pool will be resized according to the size of the expanded device.
701 If the device is part of a mirror or raidz then all devices within that
702 mirror/raidz group must be expanded before the new space is made available to
703 the pool.
704 The default behavior is
705 .Sy off .
706 This property can also be referred to by its shortened column name,
707 .Sy expand .
708 .It Sy autoreplace Ns = Ns Sy on Ns | Ns Sy off
709 Controls automatic device replacement.
710 If set to
711 .Sy off ,
712 device replacement must be initiated by the administrator by using the
713 .Nm zpool Cm replace
714 command.
715 If set to
716 .Sy on ,
717 any new device, found in the same physical location as a device that previously
718 belonged to the pool, is automatically formatted and replaced.
719 The default behavior is
720 .Sy off .
721 This property can also be referred to by its shortened column name,
722 .Sy replace .
723 Autoreplace can also be used with virtual disks (like device
mapper) provided that you use the /dev/disk/by-vdev paths set up by
725 vdev_id.conf. See the
726 .Xr vdev_id 8
727 man page for more details.
728 Autoreplace and autoonline require the ZFS Event Daemon be configured and
729 running. See the
730 .Xr zed 8
731 man page for more details.
732 .It Sy bootfs Ns = Ns Sy (unset) Ns | Ns Ar pool Ns / Ns Ar dataset
733 Identifies the default bootable dataset for the root pool. This property is
734 expected to be set mainly by the installation and upgrade programs.
735 Not all Linux distribution boot processes use the bootfs property.
736 .It Sy cachefile Ns = Ns Ar path Ns | Ns Sy none
737 Controls the location of where the pool configuration is cached.
738 Discovering all pools on system startup requires a cached copy of the
739 configuration data that is stored on the root file system.
740 All pools in this cache are automatically imported when the system boots.
741 Some environments, such as install and clustering, need to cache this
742 information in a different location so that pools are not automatically
743 imported.
744 Setting this property caches the pool configuration in a different location that
745 can later be imported with
746 .Nm zpool Cm import Fl c .
747 Setting it to the special value
748 .Sy none
749 creates a temporary pool that is never cached, and the special value
750 .Qq
751 .Pq empty string
752 uses the default location.
753 .Pp
754 Multiple pools can share the same cache file.
755 Because the kernel destroys and recreates this file when pools are added and
756 removed, care should be taken when attempting to access this file.
757 When the last pool using a
758 .Sy cachefile
759 is exported or destroyed, the file will be empty.
760 .It Sy comment Ns = Ns Ar text
761 A text string consisting of printable ASCII characters that will be stored
762 such that it is available even if the pool becomes faulted.
763 An administrator can provide additional information about a pool using this
764 property.
765 .It Sy dedupditto Ns = Ns Ar number
766 Threshold for the number of block ditto copies.
767 If the reference count for a deduplicated block increases above this number, a
768 new ditto copy of this block is automatically stored.
769 The default setting is
770 .Sy 0
771 which causes no ditto copies to be created for deduplicated blocks.
772 The minimum legal nonzero setting is
773 .Sy 100 .
774 .It Sy delegation Ns = Ns Sy on Ns | Ns Sy off
775 Controls whether a non-privileged user is granted access based on the dataset
776 permissions defined on the dataset.
777 See
778 .Xr zfs 8
779 for more information on ZFS delegated administration.
780 .It Sy failmode Ns = Ns Sy wait Ns | Ns Sy continue Ns | Ns Sy panic
781 Controls the system behavior in the event of catastrophic pool failure.
782 This condition is typically a result of a loss of connectivity to the underlying
783 storage device(s) or a failure of all devices within the pool.
784 The behavior of such an event is determined as follows:
785 .Bl -tag -width "continue"
786 .It Sy wait
787 Blocks all I/O access until the device connectivity is recovered and the errors
788 are cleared.
789 This is the default behavior.
790 .It Sy continue
791 Returns
792 .Er EIO
793 to any new write I/O requests but allows reads to any of the remaining healthy
794 devices.
795 Any write requests that have yet to be committed to disk would be blocked.
796 .It Sy panic
797 Prints out a message to the console and generates a system crash dump.
798 .El
799 .It Sy feature@ Ns Ar feature_name Ns = Ns Sy enabled
800 The value of this property is the current state of
801 .Ar feature_name .
802 The only valid value when setting this property is
803 .Sy enabled
804 which moves
805 .Ar feature_name
806 to the enabled state.
807 See
808 .Xr zpool-features 5
809 for details on feature states.
810 .It Sy listsnapshots Ns = Ns Sy on Ns | Ns Sy off
811 Controls whether information about snapshots associated with this pool is
812 output when
813 .Nm zfs Cm list
814 is run without the
815 .Fl t
816 option.
817 The default value is
818 .Sy off .
819 This property can also be referred to by its shortened name,
820 .Sy listsnaps .
821 .It Sy multihost Ns = Ns Sy on Ns | Ns Sy off
822 Controls whether a pool activity check should be performed during
823 .Nm zpool Cm import .
824 When a pool is determined to be active it cannot be imported, even with the
825 .Fl f
826 option. This property is intended to be used in failover configurations
827 where multiple hosts have access to a pool on shared storage. When this
828 property is on, periodic writes to storage occur to show the pool is in use.
829 See
830 .Sy zfs_multihost_interval
831 in the
832 .Xr zfs-module-parameters 5
833 man page. In order to enable this property each host must set a unique hostid.
834 See
.Xr genhostid 1 ,
.Xr zgenhostid 8 ,
and
.Xr spl-module-parameters 5
838 for additional details. The default value is
839 .Sy off .
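.Pp
As a sketch, multihost might be enabled on a hypothetical pool after
generating a hostid:
.Bd -literal
# zgenhostid
# zpool set multihost=on tank
.Ed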
840 .It Sy version Ns = Ns Ar version
841 The current on-disk version of the pool.
842 This can be increased, but never decreased.
843 The preferred method of updating pools is with the
844 .Nm zpool Cm upgrade
845 command, though this property can be used when a specific version is needed for
846 backwards compatibility.
847 Once feature flags are enabled on a pool this property will no longer have a
848 value.
849 .El
850 .Ss Subcommands
851 All subcommands that modify state are logged persistently to the pool in their
852 original form.
853 .Pp
854 The
855 .Nm
856 command provides subcommands to create and destroy storage pools, add capacity
857 to storage pools, and provide information about the storage pools.
858 The following subcommands are supported:
859 .Bl -tag -width Ds
860 .It Xo
861 .Nm
862 .Fl ?
863 .Xc
864 Displays a help message.
865 .It Xo
866 .Nm
867 .Cm add
868 .Op Fl fgLnP
869 .Oo Fl o Ar property Ns = Ns Ar value Oc
870 .Ar pool vdev Ns ...
871 .Xc
872 Adds the specified virtual devices to the given pool.
873 The
874 .Ar vdev
875 specification is described in the
876 .Sx Virtual Devices
877 section.
878 The behavior of the
879 .Fl f
880 option, and the device checks performed are described in the
881 .Nm zpool Cm create
882 subcommand.
883 .Bl -tag -width Ds
884 .It Fl f
885 Forces use of
886 .Ar vdev Ns s ,
887 even if they appear in use or specify a conflicting replication level.
888 Not all devices can be overridden in this manner.
889 .It Fl g
890 Display
.Ar vdev
892 GUIDs instead of the normal device names. These GUIDs can be used in place of
893 device names for the zpool detach/offline/remove/replace commands.
894 .It Fl L
895 Display real paths for
896 .Ar vdev Ns s
897 resolving all symbolic links. This can be used to look up the current block
898 device name regardless of the /dev/disk/ path used to open it.
899 .It Fl n
900 Displays the configuration that would be used without actually adding the
901 .Ar vdev Ns s .
The actual addition of devices can still fail due to insufficient privileges or
903 device sharing.
904 .It Fl P
905 Display real paths for
906 .Ar vdev Ns s
907 instead of only the last component of the path. This can be used in
908 conjunction with the
909 .Fl L
910 flag.
911 .It Fl o Ar property Ns = Ns Ar value
912 Sets the given pool properties. See the
913 .Sx Properties
914 section for a list of valid properties that can be set. The only property
915 supported at the moment is ashift.
916 .El
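.Pp
For example, a mirrored pool might be grown by adding a second mirror (pool
and device names are placeholders):
.Bd -literal
# zpool add tank mirror sdc sdd
.Ed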
917 .It Xo
918 .Nm
919 .Cm attach
920 .Op Fl f
921 .Oo Fl o Ar property Ns = Ns Ar value Oc
922 .Ar pool device new_device
923 .Xc
924 Attaches
925 .Ar new_device
926 to the existing
927 .Ar device .
928 The existing device cannot be part of a raidz configuration.
929 If
930 .Ar device
931 is not currently part of a mirrored configuration,
932 .Ar device
933 automatically transforms into a two-way mirror of
934 .Ar device
935 and
936 .Ar new_device .
937 If
938 .Ar device
939 is part of a two-way mirror, attaching
940 .Ar new_device
941 creates a three-way mirror, and so on.
942 In either case,
943 .Ar new_device
944 begins to resilver immediately.
945 .Bl -tag -width Ds
946 .It Fl f
947 Forces use of
948 .Ar new_device ,
even if it appears to be in use.
950 Not all devices can be overridden in this manner.
951 .It Fl o Ar property Ns = Ns Ar value
952 Sets the given pool properties. See the
953 .Sx Properties
954 section for a list of valid properties that can be set. The only property
955 supported at the moment is ashift.
956 .El
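.Pp
For example, a single-disk pool might be converted to a two-way mirror as
follows (pool and device names are placeholders):
.Bd -literal
# zpool attach tank sda sdb
.Ed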
957 .It Xo
958 .Nm
959 .Cm checkpoint
960 .Op Fl d, -discard
961 .Ar pool
962 .Xc
963 Checkpoints the current state of
.Ar pool ,
which can later be restored by
966 .Nm zpool Cm import --rewind-to-checkpoint .
967 The existence of a checkpoint in a pool prohibits the following
968 .Nm zpool
969 commands:
970 .Cm remove ,
971 .Cm attach ,
972 .Cm detach ,
973 .Cm split ,
974 and
975 .Cm reguid .
976 In addition, it may break reservation boundaries if the pool lacks free
977 space.
978 The
979 .Nm zpool Cm status
980 command indicates the existence of a checkpoint or the progress of discarding a
981 checkpoint from a pool.
982 The
983 .Nm zpool Cm list
984 command reports how much space the checkpoint takes from the pool.
985 .Bl -tag -width Ds
986 .It Fl d, -discard
987 Discards an existing checkpoint from
988 .Ar pool .
989 .El
990 .It Xo
991 .Nm
992 .Cm clear
993 .Ar pool
994 .Op Ar device
995 .Xc
996 Clears device errors in a pool.
997 If no arguments are specified, all device errors within the pool are cleared.
998 If one or more devices is specified, only those errors associated with the
999 specified device or devices are cleared.
1000 .It Xo
1001 .Nm
1002 .Cm create
1003 .Op Fl dfn
1004 .Op Fl m Ar mountpoint
1005 .Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
1006 .Oo Fl o Ar feature@feature Ns = Ns Ar value Oc Ns ...
1007 .Oo Fl O Ar file-system-property Ns = Ns Ar value Oc Ns ...
1008 .Op Fl R Ar root
1009 .Op Fl t Ar tname
1010 .Ar pool vdev Ns ...
1011 .Xc
1012 Creates a new storage pool containing the virtual devices specified on the
1013 command line.
1014 The pool name must begin with a letter, and can only contain
1015 alphanumeric characters as well as underscore
1016 .Pq Qq Sy _ ,
1017 dash
1018 .Pq Qq Sy \&- ,
1019 colon
1020 .Pq Qq Sy \&: ,
1021 space
1022 .Pq Qq Sy \&\ ,
1023 and period
1024 .Pq Qq Sy \&. .
1025 The pool names
1026 .Sy mirror ,
1027 .Sy raidz ,
1028 .Sy spare
1029 and
1030 .Sy log
1031 are reserved, as are names beginning with
1032 .Sy mirror ,
1033 .Sy raidz ,
1034 .Sy spare ,
1035 and the pattern
1036 .Sy c[0-9] .
1037 The
1038 .Ar vdev
1039 specification is described in the
1040 .Sx Virtual Devices
1041 section.
1042 .Pp
1043 The command verifies that each device specified is accessible and not currently
1044 in use by another subsystem.
1045 There are some uses, such as being currently mounted, or specified as the
dedicated dump device, that prevent a device from ever being used by ZFS.
1047 Other uses, such as having a preexisting UFS file system, can be overridden with
1048 the
1049 .Fl f
1050 option.
1051 .Pp
1052 The command also checks that the replication strategy for the pool is
1053 consistent.
1054 An attempt to combine redundant and non-redundant storage in a single pool, or
1055 to mix disks and files, results in an error unless
1056 .Fl f
1057 is specified.
1058 The use of differently sized devices within a single raidz or mirror group is
1059 also flagged as an error unless
1060 .Fl f
1061 is specified.
1062 .Pp
1063 Unless the
1064 .Fl R
1065 option is specified, the default mount point is
1066 .Pa / Ns Ar pool .
1067 The mount point must not exist or must be empty, or else the root dataset
1068 cannot be mounted.
1069 This can be overridden with the
1070 .Fl m
1071 option.
1072 .Pp
1073 By default all supported features are enabled on the new pool unless the
1074 .Fl d
1075 option is specified.
1076 .Bl -tag -width Ds
1077 .It Fl d
1078 Do not enable any features on the new pool.
1079 Individual features can be enabled by setting their corresponding properties to
1080 .Sy enabled
1081 with the
1082 .Fl o
1083 option.
1084 See
1085 .Xr zpool-features 5
1086 for details about feature properties.
1087 .It Fl f
1088 Forces use of
1089 .Ar vdev Ns s ,
1090 even if they appear in use or specify a conflicting replication level.
1091 Not all devices can be overridden in this manner.
1092 .It Fl m Ar mountpoint
1093 Sets the mount point for the root dataset.
1094 The default mount point is
1095 .Pa /pool
1096 or
1097 .Pa altroot/pool
1098 if
1099 .Ar altroot
1100 is specified.
1101 The mount point must be an absolute path,
1102 .Sy legacy ,
1103 or
1104 .Sy none .
1105 For more information on dataset mount points, see
1106 .Xr zfs 8 .
1107 .It Fl n
1108 Displays the configuration that would be used without actually creating the
1109 pool.
1110 The actual pool creation can still fail due to insufficient privileges or
1111 device sharing.
1112 .It Fl o Ar property Ns = Ns Ar value
1113 Sets the given pool properties.
1114 See the
1115 .Sx Properties
1116 section for a list of valid properties that can be set.
1117 .It Fl o Ar feature@feature Ns = Ns Ar value
Sets the given pool feature.
See
.Xr zpool-features 5
for a list of valid features that can be set.
The value can be either
.Sy disabled
or
.Sy enabled .
1122 .It Fl O Ar file-system-property Ns = Ns Ar value
1123 Sets the given file system properties in the root file system of the pool.
1124 See the
1125 .Sx Properties
1126 section of
1127 .Xr zfs 8
1128 for a list of valid properties that can be set.
1129 .It Fl R Ar root
1130 Equivalent to
1131 .Fl o Sy cachefile Ns = Ns Sy none Fl o Sy altroot Ns = Ns Ar root
1132 .It Fl t Ar tname
1133 Sets the in-core pool name to
1134 .Sy tname
1135 while the on-disk name will be the name specified as the pool name
1136 .Sy pool .
1137 This will set the default cachefile property to none. This is intended
1138 to handle name space collisions when creating pools for other systems,
1139 such as virtual machines or physical machines whose pools live on network
1140 block devices.
1141 .El
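.Pp
For example, the following sketch creates a mirrored pool with a custom mount
point and a file system property set on its root dataset (names are
placeholders):
.Bd -literal
# zpool create -m /export/tank -O compression=on tank mirror sda sdb
.Ed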
1142 .It Xo
1143 .Nm
1144 .Cm destroy
1145 .Op Fl f
1146 .Ar pool
1147 .Xc
1148 Destroys the given pool, freeing up any devices for other use.
1149 This command tries to unmount any active datasets before destroying the pool.
1150 .Bl -tag -width Ds
1151 .It Fl f
1152 Forces any active datasets contained within the pool to be unmounted.
1153 .El
1154 .It Xo
1155 .Nm
1156 .Cm detach
1157 .Ar pool device
1158 .Xc
1159 Detaches
1160 .Ar device
1161 from a mirror.
1162 The operation is refused if there are no other valid replicas of the data.
If the device may be re-added to the pool later on, then consider the
.Nm zpool Cm offline
1165 command instead.
1166 .It Xo
1167 .Nm
1168 .Cm events
1169 .Op Fl vHf Oo Ar pool Oc | Fl c
1170 .Xc
1171 Lists all recent events generated by the ZFS kernel modules. These events
1172 are consumed by the
1173 .Xr zed 8
1174 and used to automate administrative tasks such as replacing a failed device
1175 with a hot spare. For more information about the subclasses and event payloads
1176 that can be generated see the
1177 .Xr zfs-events 5
1178 man page.
1179 .Bl -tag -width Ds
1180 .It Fl c
1181 Clear all previous events.
1182 .It Fl f
1183 Follow mode.
1184 .It Fl H
1185 Scripted mode. Do not display headers, and separate fields by a
1186 single tab instead of arbitrary space.
1187 .It Fl v
1188 Print the entire payload for each event.
1189 .El
1190 .It Xo
1191 .Nm
1192 .Cm export
1193 .Op Fl a
1194 .Op Fl f
1195 .Ar pool Ns ...
1196 .Xc
1197 Exports the given pools from the system.
1198 All devices are marked as exported, but are still considered in use by other
1199 subsystems.
1200 The devices can be moved between systems
1201 .Pq even those of different endianness
1202 and imported as long as a sufficient number of devices are present.
1203 .Pp
1204 Before exporting the pool, all datasets within the pool are unmounted.
1205 A pool can not be exported if it has a shared spare that is currently being
1206 used.
1207 .Pp
1208 For pools to be portable, you must give the
1209 .Nm
1210 command whole disks, not just partitions, so that ZFS can label the disks with
1211 portable EFI labels.
1212 Otherwise, disk drivers on platforms of different endianness will not recognize
1213 the disks.
1214 .Bl -tag -width Ds
1215 .It Fl a
1216 Exports all pools imported on the system.
1217 .It Fl f
1218 Forcefully unmount all datasets, using the
.Nm zfs Cm unmount Fl f
1220 command.
1221 .Pp
1222 This command will forcefully export the pool even if it has a shared spare that
1223 is currently being used.
1224 This may lead to potential data corruption.
1225 .El
1226 .It Xo
1227 .Nm
1228 .Cm get
1229 .Op Fl Hp
1230 .Op Fl o Ar field Ns Oo , Ns Ar field Oc Ns ...
1231 .Sy all Ns | Ns Ar property Ns Oo , Ns Ar property Oc Ns ...
1232 .Oo Ar pool Oc Ns ...
1233 .Xc
1234 Retrieves the given list of properties
1235 .Po
1236 or all properties if
1237 .Sy all
1238 is used
1239 .Pc
1240 for the specified storage pool(s).
1241 These properties are displayed with the following fields:
1242 .Bd -literal
1243 name Name of storage pool
1244 property Property name
1245 value Property value
1246 source Property source, either 'default' or 'local'.
1247 .Ed
1248 .Pp
1249 See the
1250 .Sx Properties
1251 section for more information on the available pool properties.
1252 .Bl -tag -width Ds
1253 .It Fl H
1254 Scripted mode.
1255 Do not display headers, and separate fields by a single tab instead of arbitrary
1256 space.
1257 .It Fl o Ar field
1258 A comma-separated list of columns to display.
1259 .Sy name Ns \&, Ns Sy property Ns \&, Ns Sy value Ns \&, Ns Sy source
1260 is the default value.
1261 .It Fl p
1262 Display numbers in parsable (exact) values.
1263 .El
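.Pp
For example, to print the
.Sy capacity
and
.Sy free
properties of a hypothetical pool in parsable form:
.Bd -literal
# zpool get -p capacity,free tank
.Ed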
1264 .It Xo
1265 .Nm
1266 .Cm history
1267 .Op Fl il
1268 .Oo Ar pool Oc Ns ...
1269 .Xc
1270 Displays the command history of the specified pool(s) or all pools if no pool is
1271 specified.
1272 .Bl -tag -width Ds
1273 .It Fl i
1274 Displays internally logged ZFS events in addition to user initiated events.
1275 .It Fl l
1276 Displays log records in long format, which in addition to standard format
1277 includes, the user name, the hostname, and the zone in which the operation was
1278 performed.
1279 .El
1280 .It Xo
1281 .Nm
1282 .Cm import
1283 .Op Fl D
1284 .Op Fl d Ar dir Ns | Ns device
1285 .Xc
1286 Lists pools available to import.
1287 If the
1288 .Fl d
1289 option is not specified, this command searches for devices in
1290 .Pa /dev .
1291 The
1292 .Fl d
1293 option can be specified multiple times, and all directories are searched.
If the device appears to be part of an exported pool, this command displays a
summary of the pool that includes the pool name, a numeric identifier, and the
vdev layout and current health of each device or file.
1297 Destroyed pools, pools that were previously destroyed with the
1298 .Nm zpool Cm destroy
1299 command, are not listed unless the
1300 .Fl D
1301 option is specified.
1302 .Pp
1303 The numeric identifier is unique, and can be used instead of the pool name when
1304 multiple exported pools of the same name are available.
1305 .Bl -tag -width Ds
1306 .It Fl c Ar cachefile
1307 Reads configuration from the given
1308 .Ar cachefile
1309 that was created with the
1310 .Sy cachefile
1311 pool property.
1312 This
1313 .Ar cachefile
1314 is used instead of searching for devices.
1315 .It Fl d Ar dir Ns | Ns Ar device
1316 Uses
1317 .Ar device
1318 or searches for devices or files in
1319 .Ar dir .
1320 The
1321 .Fl d
1322 option can be specified multiple times.
1323 .It Fl D
1324 Lists destroyed pools only.
1325 .El
1326 .It Xo
1327 .Nm
1328 .Cm import
1329 .Fl a
1330 .Op Fl DflmN
1331 .Op Fl F Oo Fl n Oc Oo Fl T Oc Oo Fl X Oc
1332 .Op Fl c Ar cachefile Ns | Ns Fl d Ar dir Ns | Ns device
1333 .Op Fl o Ar mntopts
1334 .Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
1335 .Op Fl R Ar root
1336 .Op Fl s
1337 .Xc
1338 Imports all pools found in the search directories.
1339 Identical to the previous command, except that all pools with a sufficient
1340 number of devices available are imported.
1341 Destroyed pools, pools that were previously destroyed with the
1342 .Nm zpool Cm destroy
1343 command, will not be imported unless the
1344 .Fl D
1345 option is specified.
1346 .Bl -tag -width Ds
1347 .It Fl a
1348 Searches for and imports all pools found.
1349 .It Fl c Ar cachefile
1350 Reads configuration from the given
1351 .Ar cachefile
1352 that was created with the
1353 .Sy cachefile
1354 pool property.
1355 This
1356 .Ar cachefile
1357 is used instead of searching for devices.
1358 .It Fl d Ar dir Ns | Ns Ar device
1359 Uses
1360 .Ar device
1361 or searches for devices or files in
1362 .Ar dir .
1363 The
1364 .Fl d
1365 option can be specified multiple times.
1366 This option is incompatible with the
1367 .Fl c
1368 option.
1369 .It Fl D
1370 Imports destroyed pools only.
1371 The
1372 .Fl f
1373 option is also required.
1374 .It Fl f
1375 Forces import, even if the pool appears to be potentially active.
1376 .It Fl F
1377 Recovery mode for a non-importable pool.
1378 Attempt to return the pool to an importable state by discarding the last few
1379 transactions.
1380 Not all damaged pools can be recovered by using this option.
1381 If successful, the data from the discarded transactions is irretrievably lost.
1382 This option is ignored if the pool is importable or already imported.
1383 .It Fl l
1384 Indicates that this command will request encryption keys for all encrypted
1385 datasets it attempts to mount as it is bringing the pool online. Note that if
1386 any datasets have a
1387 .Sy keylocation
1388 of
1389 .Sy prompt
1390 this command will block waiting for the keys to be entered. Without this flag
1391 encrypted datasets will be left unavailable until the keys are loaded.
1392 .It Fl m
1393 Allows a pool to import when there is a missing log device.
1394 Recent transactions can be lost because the log device will be discarded.
1395 .It Fl n
1396 Used with the
1397 .Fl F
1398 recovery option.
1399 Determines whether a non-importable pool can be made importable again, but does
1400 not actually perform the pool recovery.
1401 For more details about pool recovery mode, see the
1402 .Fl F
1403 option, above.
1404 .It Fl N
1405 Import the pool without mounting any file systems.
1406 .It Fl o Ar mntopts
1407 Comma-separated list of mount options to use when mounting datasets within the
1408 pool.
1409 See
1410 .Xr zfs 8
1411 for a description of dataset properties and mount options.
1412 .It Fl o Ar property Ns = Ns Ar value
1413 Sets the specified property on the imported pool.
1414 See the
1415 .Sx Properties
1416 section for more information on the available pool properties.
1417 .It Fl R Ar root
1418 Sets the
1419 .Sy cachefile
1420 property to
1421 .Sy none
1422 and the
1423 .Sy altroot
1424 property to
1425 .Ar root .
1426 .It Fl -rewind-to-checkpoint
1427 Rewinds pool to the checkpointed state.
1428 Once the pool is imported with this flag there is no way to undo the rewind.
1429 All changes and data that were written after the checkpoint are lost!
1430 The only exception is when the
1431 .Sy readonly
1432 mounting option is enabled.
1433 In this case, the checkpointed state of the pool is opened and an
administrator can see how the pool would look if they were
to fully rewind.
1436 .It Fl s
Scan using the default search path; the libblkid cache will not be
1438 consulted. A custom search path may be specified by setting the
1439 ZPOOL_IMPORT_PATH environment variable.
1440 .It Fl X
1441 Used with the
1442 .Fl F
1443 recovery option. Determines whether extreme
1444 measures to find a valid txg should take place. This allows the pool to
1445 be rolled back to a txg which is no longer guaranteed to be consistent.
1446 Pools imported at an inconsistent txg may contain uncorrectable
1447 checksum errors. For more details about pool recovery mode, see the
1448 .Fl F
1449 option, above. WARNING: This option can be extremely hazardous to the
1450 health of your pool and should only be used as a last resort.
1451 .It Fl T
1452 Specify the txg to use for rollback. Implies
1453 .Fl FX .
1454 For more details
1455 about pool recovery mode, see the
1456 .Fl X
1457 option, above. WARNING: This option can be extremely hazardous to the
1458 health of your pool and should only be used as a last resort.
1459 .El
1460 .It Xo
1461 .Nm
1462 .Cm import
1463 .Op Fl Dflm
.Op Fl F Oo Fl n Oc Oo Fl T Oc Oo Fl X Oc
1465 .Op Fl c Ar cachefile Ns | Ns Fl d Ar dir Ns | Ns device
1466 .Op Fl o Ar mntopts
1467 .Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
1468 .Op Fl R Ar root
1469 .Op Fl s
1470 .Ar pool Ns | Ns Ar id
.Op Ar newpool Oo Fl t Oc
1472 .Xc
1473 Imports a specific pool.
1474 A pool can be identified by its name or the numeric identifier.
1475 If
1476 .Ar newpool
1477 is specified, the pool is imported using the name
1478 .Ar newpool .
1479 Otherwise, it is imported with the same name as its exported name.
1480 .Pp
1481 If a device is removed from a system without running
1482 .Nm zpool Cm export
1483 first, the device appears as potentially active.
1484 It cannot be determined if this was a failed export, or whether the device is
1485 really in use from another host.
1486 To import a pool in this state, the
1487 .Fl f
1488 option is required.
1489 .Bl -tag -width Ds
1490 .It Fl c Ar cachefile
1491 Reads configuration from the given
1492 .Ar cachefile
1493 that was created with the
1494 .Sy cachefile
1495 pool property.
1496 This
1497 .Ar cachefile
1498 is used instead of searching for devices.
1499 .It Fl d Ar dir Ns | Ns Ar device
1500 Uses
1501 .Ar device
1502 or searches for devices or files in
1503 .Ar dir .
1504 The
1505 .Fl d
1506 option can be specified multiple times.
1507 This option is incompatible with the
1508 .Fl c
1509 option.
1510 .It Fl D
1511 Imports destroyed pool.
1512 The
1513 .Fl f
1514 option is also required.
1515 .It Fl f
1516 Forces import, even if the pool appears to be potentially active.
1517 .It Fl F
1518 Recovery mode for a non-importable pool.
1519 Attempt to return the pool to an importable state by discarding the last few
1520 transactions.
1521 Not all damaged pools can be recovered by using this option.
1522 If successful, the data from the discarded transactions is irretrievably lost.
1523 This option is ignored if the pool is importable or already imported.
1524 .It Fl l
1525 Indicates that this command will request encryption keys for all encrypted
1526 datasets it attempts to mount as it is bringing the pool online. Note that if
1527 any datasets have a
1528 .Sy keylocation
1529 of
1530 .Sy prompt
1531 this command will block waiting for the keys to be entered. Without this flag
1532 encrypted datasets will be left unavailable until the keys are loaded.
1533 .It Fl m
1534 Allows a pool to import when there is a missing log device.
1535 Recent transactions can be lost because the log device will be discarded.
1536 .It Fl n
1537 Used with the
1538 .Fl F
1539 recovery option.
1540 Determines whether a non-importable pool can be made importable again, but does
1541 not actually perform the pool recovery.
1542 For more details about pool recovery mode, see the
1543 .Fl F
1544 option, above.
1545 .It Fl o Ar mntopts
1546 Comma-separated list of mount options to use when mounting datasets within the
1547 pool.
1548 See
1549 .Xr zfs 8
1550 for a description of dataset properties and mount options.
1551 .It Fl o Ar property Ns = Ns Ar value
1552 Sets the specified property on the imported pool.
1553 See the
1554 .Sx Properties
1555 section for more information on the available pool properties.
1556 .It Fl R Ar root
1557 Sets the
1558 .Sy cachefile
1559 property to
1560 .Sy none
1561 and the
1562 .Sy altroot
1563 property to
1564 .Ar root .
1565 .It Fl s
Scan using the default search path; the libblkid cache will not be
1567 consulted. A custom search path may be specified by setting the
1568 ZPOOL_IMPORT_PATH environment variable.
1569 .It Fl X
1570 Used with the
1571 .Fl F
1572 recovery option. Determines whether extreme
1573 measures to find a valid txg should take place. This allows the pool to
1574 be rolled back to a txg which is no longer guaranteed to be consistent.
1575 Pools imported at an inconsistent txg may contain uncorrectable
1576 checksum errors. For more details about pool recovery mode, see the
1577 .Fl F
1578 option, above. WARNING: This option can be extremely hazardous to the
1579 health of your pool and should only be used as a last resort.
1580 .It Fl T
1581 Specify the txg to use for rollback. Implies
1582 .Fl FX .
1583 For more details
1584 about pool recovery mode, see the
1585 .Fl X
1586 option, above. WARNING: This option can be extremely hazardous to the
1587 health of your pool and should only be used as a last resort.
1588 .It Fl t
1589 Used with
1590 .Sy newpool .
1591 Specifies that
1592 .Sy newpool
is temporary.
Temporary pool names last until export.
This ensures that the original pool name will be used in all label updates and
therefore is retained upon export.
It will also set
.Sy cachefile Ns = Ns Sy none
when not explicitly specified.
1597 .El
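.Pp
For example, a previously exported pool might be imported by name, searching a
specific directory and using an alternate root (paths and names are
illustrative):
.Bd -literal
# zpool import -d /dev/disk/by-id -R /mnt tank
.Ed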
1598 .It Xo
1599 .Nm
1600 .Cm iostat
1601 .Op Oo Oo Fl c Ar SCRIPT Oc Oo Fl lq Oc Oc Ns | Ns Fl rw
1602 .Op Fl T Sy u Ns | Ns Sy d
1603 .Op Fl ghHLpPvy
1604 .Oo Oo Ar pool Ns ... Oc Ns | Ns Oo Ar pool vdev Ns ... Oc Ns | Ns Oo Ar vdev Ns ... Oc Oc
1605 .Op Ar interval Op Ar count
1606 .Xc
1607 Displays I/O statistics for the given pools/vdevs. You can pass in a
1608 list of pools, a pool and list of vdevs in that pool, or a list of any
1609 vdevs from any pool. If no items are specified, statistics for every
1610 pool in the system are shown.
1611 When given an
1612 .Ar interval ,
1613 the statistics are printed every
1614 .Ar interval
seconds until ^C is pressed.
If
.Ar count
is specified, the command exits after
.Ar count
reports are printed.
The first report printed is always
1617 the statistics since boot regardless of whether
1618 .Ar interval
1619 and
1620 .Ar count
1621 are passed. However, this behavior can be suppressed with the
1622 .Fl y
1623 flag. Also note that the units of
1624 .Sy K ,
1625 .Sy M ,
1626 .Sy G ...
1627 that are printed in the report are in base 1024. To get the raw
1628 values, use the
1629 .Fl p
1630 flag.
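.Pp
For example, the following prints exact (parsable) statistics every 5 seconds,
10 times, omitting the initial since-boot report (the pool name
.Em tank
is illustrative):
.Bd -literal
# zpool iostat -yp tank 5 10
.Ed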
1631 .Bl -tag -width Ds
1632 .It Fl c Op Ar SCRIPT1 Ns Oo , Ns Ar SCRIPT2 Oc Ns ...
1633 Run a script (or scripts) on each vdev and include the output as a new column
1634 in the
1635 .Nm zpool Cm iostat
1636 output. Users can run any script found in their
1637 .Pa ~/.zpool.d
1638 directory or from the system
1639 .Pa /etc/zfs/zpool.d
1640 directory. Script names containing the slash (/) character are not allowed.
1641 The default search path can be overridden by setting the
1642 ZPOOL_SCRIPTS_PATH environment variable. A privileged user can run
1643 .Fl c
1644 if they have the ZPOOL_SCRIPTS_AS_ROOT
1645 environment variable set. If a script requires the use of a privileged
1646 command, like
1647 .Xr smartctl 8 ,
1648 then it's recommended you allow the user access to it in
1649 .Pa /etc/sudoers
1650 or add the user to the
1651 .Pa /etc/sudoers.d/zfs
1652 file.
1653 .Pp
1654 If
1655 .Fl c
1656 is passed without a script name, it prints a list of all scripts.
1657 .Fl c
1658 also sets verbose mode
1659 .No \&( Ns Fl v Ns No \&).
1660 .Pp
1661 Script output should be in the form of "name=value". The column name is
1662 set to "name" and the value is set to "value". Multiple lines can be
1663 used to output multiple columns. The first line of output not in the
1664 "name=value" format is displayed without a column title, and no more
1665 output after that is displayed. This can be useful for printing error
1666 messages. Blank or NULL values are printed as a '-' to make output
1667 awk-able.
1668 .Pp
1669 The following environment variables are set before running each script:
.Bl -tag -width "VDEV_ENC_SYSFS_PATH"
.It Sy VDEV_PATH
Full path to the vdev.
.It Sy VDEV_UPATH
Underlying path to the vdev (/dev/sd*).
For use with device mapper, multipath, or partitioned vdevs.
.It Sy VDEV_ENC_SYSFS_PATH
The sysfs path to the enclosure for the vdev (if any).
.El
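.Pp
As an illustrative sketch (the script name and output column are
hypothetical), a script that adds a column showing each vdev's underlying
device might be:
.Bd -literal
#!/bin/sh
# upath: print the underlying device path for this vdev as a
# "name=value" pair; zpool iostat -c supplies the vdev through
# the environment variables listed above.
echo "upath=$VDEV_UPATH"
.Ed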
1683 .It Fl T Sy u Ns | Ns Sy d
1684 Display a time stamp.
1685 Specify
1686 .Sy u
1687 for a printed representation of the internal representation of time.
1688 See
1689 .Xr time 2 .
1690 Specify
1691 .Sy d
1692 for standard date format.
1693 See
1694 .Xr date 1 .
1695 .It Fl g
1696 Display vdev GUIDs instead of the normal device names. These GUIDs
1697 can be used in place of device names for the zpool
1698 detach/offline/remove/replace commands.
1699 .It Fl H
1700 Scripted mode. Do not display headers, and separate fields by a
1701 single tab instead of arbitrary space.
1702 .It Fl L
1703 Display real paths for vdevs resolving all symbolic links. This can
1704 be used to look up the current block device name regardless of the
1705 .Pa /dev/disk/
1706 path used to open it.
1707 .It Fl p
1708 Display numbers in parsable (exact) values. Time values are in
1709 nanoseconds.
1710 .It Fl P
1711 Display full paths for vdevs instead of only the last component of
1712 the path. This can be used in conjunction with the
1713 .Fl L
1714 flag.
1715 .It Fl r
Print request size histograms for the leaf ZIOs.
This includes histograms of individual ZIOs
.Pq Ar ind
and aggregate ZIOs
.Pq Ar agg .
1721 These stats can be useful for seeing how well the ZFS IO aggregator is
1722 working. Do not confuse these request size stats with the block layer
1723 requests; it's possible ZIOs can be broken up before being sent to the
1724 block device.
1725 .It Fl v
Verbose statistics.
Reports usage statistics for individual vdevs within the pool, in addition to
the pool-wide statistics.
1728 .It Fl y
1729 Omit statistics since boot.
1730 Normally the first line of output reports the statistics since boot.
1731 This option suppresses that first line of output.
1732 .It Fl w
1733 Display latency histograms:
1734 .Pp
1735 .Ar total_wait :
1736 Total IO time (queuing + disk IO time).
1737 .Ar disk_wait :
1738 Disk IO time (time reading/writing the disk).
1739 .Ar syncq_wait :
1740 Amount of time IO spent in synchronous priority queues. Does not include
1741 disk time.
1742 .Ar asyncq_wait :
1743 Amount of time IO spent in asynchronous priority queues. Does not include
1744 disk time.
1745 .Ar scrub :
1746 Amount of time IO spent in scrub queue. Does not include disk time.
1747 .It Fl l
1748 Include average latency statistics:
1749 .Pp
1750 .Ar total_wait :
1751 Average total IO time (queuing + disk IO time).
1752 .Ar disk_wait :
1753 Average disk IO time (time reading/writing the disk).
1754 .Ar syncq_wait :
1755 Average amount of time IO spent in synchronous priority queues. Does
1756 not include disk time.
1757 .Ar asyncq_wait :
1758 Average amount of time IO spent in asynchronous priority queues.
1759 Does not include disk time.
1760 .Ar scrub :
1761 Average queuing time in scrub queue. Does not include disk time.
1762 .It Fl q
Include active queue statistics.
Each priority queue has both pending
.Pq Ar pend
and active
.Pq Ar activ
IOs.
Pending IOs are waiting to
1769 be issued to the disk, and active IOs have been issued to disk and are
1770 waiting for completion. These stats are broken out by priority queue:
1771 .Pp
1772 .Ar syncq_read/write :
1773 Current number of entries in synchronous priority
1774 queues.
1775 .Ar asyncq_read/write :
1776 Current number of entries in asynchronous priority queues.
1777 .Ar scrubq_read :
1778 Current number of entries in scrub queue.
1779 .Pp
1780 All queue statistics are instantaneous measurements of the number of
1781 entries in the queues. If you specify an interval, the measurements
1782 will be sampled from the end of the interval.
1783 .El
1784 .It Xo
1785 .Nm
1786 .Cm labelclear
1787 .Op Fl f
1788 .Ar device
1789 .Xc
1790 Removes ZFS label information from the specified
1791 .Ar device .
1792 The
1793 .Ar device
1794 must not be part of an active pool configuration.
1795 .Bl -tag -width Ds
1796 .It Fl f
1797 Treat exported or foreign devices as inactive.
1798 .El
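.Pp
For example, stale label information can be cleared from a disk that was part
of an exported pool (the device name is illustrative):
.Bd -literal
# zpool labelclear -f /dev/sda
.Ed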
1799 .It Xo
1800 .Nm
1801 .Cm list
1802 .Op Fl HgLpPv
1803 .Op Fl o Ar property Ns Oo , Ns Ar property Oc Ns ...
1804 .Op Fl T Sy u Ns | Ns Sy d
1805 .Oo Ar pool Oc Ns ...
1806 .Op Ar interval Op Ar count
1807 .Xc
1808 Lists the given pools along with a health status and space usage.
1809 If no
1810 .Ar pool Ns s
1811 are specified, all pools in the system are listed.
1812 When given an
1813 .Ar interval ,
1814 the information is printed every
1815 .Ar interval
1816 seconds until ^C is pressed.
1817 If
1818 .Ar count
1819 is specified, the command exits after
1820 .Ar count
1821 reports are printed.
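.Pp
For example, the following lists selected properties for all pools in
parsable form (the property list shown is illustrative):
.Bd -literal
# zpool list -p -o name,size,capacity,health
.Ed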
1822 .Bl -tag -width Ds
1823 .It Fl g
1824 Display vdev GUIDs instead of the normal device names. These GUIDs
1825 can be used in place of device names for the zpool
1826 detach/offline/remove/replace commands.
1827 .It Fl H
1828 Scripted mode.
1829 Do not display headers, and separate fields by a single tab instead of arbitrary
1830 space.
1831 .It Fl o Ar property
1832 Comma-separated list of properties to display.
1833 See the
1834 .Sx Properties
1835 section for a list of valid properties.
1836 The default list is
.Cm name , size , allocated , free , checkpoint , expandsize , fragmentation ,
1838 .Cm capacity , dedupratio , health , altroot .
1839 .It Fl L
1840 Display real paths for vdevs resolving all symbolic links. This can
1841 be used to look up the current block device name regardless of the
.Pa /dev/disk/
path used to open it.
1843 .It Fl p
1844 Display numbers in parsable
1845 .Pq exact
1846 values.
1847 .It Fl P
1848 Display full paths for vdevs instead of only the last component of
1849 the path. This can be used in conjunction with the
1850 .Fl L
1851 flag.
1852 .It Fl T Sy u Ns | Ns Sy d
1853 Display a time stamp.
1854 Specify
.Sy u
1856 for a printed representation of the internal representation of time.
1857 See
1858 .Xr time 2 .
1859 Specify
.Sy d
1861 for standard date format.
1862 See
1863 .Xr date 1 .
1864 .It Fl v
1865 Verbose statistics.
1866 Reports usage statistics for individual vdevs within the pool, in addition to
the pool-wide statistics.
1868 .El
1869 .It Xo
1870 .Nm
1871 .Cm offline
1872 .Op Fl f
1873 .Op Fl t
1874 .Ar pool Ar device Ns ...
1875 .Xc
1876 Takes the specified physical device offline.
1877 While the
1878 .Ar device
1879 is offline, no attempt is made to read or write to the device.
1880 This command is not applicable to spares.
1881 .Bl -tag -width Ds
1882 .It Fl f
1883 Force fault. Instead of offlining the disk, put it into a faulted
1884 state. The fault will persist across imports unless the
1885 .Fl t
1886 flag was specified.
1887 .It Fl t
1888 Temporary.
1889 Upon reboot, the specified physical device reverts to its previous state.
1890 .El
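.Pp
For example, a disk can be taken offline until the next reboot (the pool and
device names are illustrative):
.Bd -literal
# zpool offline -t tank sda
.Ed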
1891 .It Xo
1892 .Nm
1893 .Cm online
1894 .Op Fl e
1895 .Ar pool Ar device Ns ...
1896 .Xc
1897 Brings the specified physical device online.
1898 This command is not applicable to spares.
1899 .Bl -tag -width Ds
1900 .It Fl e
1901 Expand the device to use all available space.
1902 If the device is part of a mirror or raidz then all devices must be expanded
1903 before the new space will become available to the pool.
1904 .El
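.Pp
For example, after every disk in a mirror has been replaced with a larger one,
the new space can be made available (the pool and device names are
illustrative):
.Bd -literal
# zpool online -e tank sda sdb
.Ed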
1905 .It Xo
1906 .Nm
1907 .Cm reguid
1908 .Ar pool
1909 .Xc
1910 Generates a new unique identifier for the pool.
1911 You must ensure that all devices in this pool are online and healthy before
1912 performing this action.
1913 .It Xo
1914 .Nm
1915 .Cm reopen
1916 .Op Fl n
1917 .Ar pool
1918 .Xc
1919 Reopen all the vdevs associated with the pool.
1920 .Bl -tag -width Ds
1921 .It Fl n
1922 Do not restart an in-progress scrub operation. This is not recommended and can
1923 result in partially resilvered devices unless a second scrub is performed.
1924 .El
1925 .It Xo
1926 .Nm
1927 .Cm remove
1928 .Op Fl np
1929 .Ar pool Ar device Ns ...
1930 .Xc
1931 Removes the specified device from the pool.
1932 This command supports removing hot spare, cache, log, and both mirrored and
1933 non-redundant primary top-level vdevs, including dedup and special vdevs.
When the primary pool storage includes a top-level raidz vdev, only hot spare,
cache, and log devices can be removed.
.Pp
1937 Removing a top-level vdev reduces the total amount of space in the storage pool.
1938 The specified device will be evacuated by copying all allocated space from it to
1939 the other devices in the pool.
1940 In this case, the
1941 .Nm zpool Cm remove
1942 command initiates the removal and returns, while the evacuation continues in
1943 the background.
1944 The removal progress can be monitored with
.Nm zpool Cm status .
1946 The
1947 .Sy device_removal
1948 feature flag must be enabled to remove a top-level vdev, see
1949 .Xr zpool-features 5 .
1950 .Pp
A mirrored top-level device (log or data) can be removed by specifying the
top-level mirror itself.
Individual devices that are part of a mirrored configuration can be removed
using the
.Nm zpool Cm detach
command.
1957 .Bl -tag -width Ds
1958 .It Fl n
1959 Do not actually perform the removal ("no-op").
1960 Instead, print the estimated amount of memory that will be used by the
1961 mapping table after the removal completes.
1962 This is nonzero only for top-level vdevs.
1965 .It Fl p
1966 Used in conjunction with the
1967 .Fl n
1968 flag, displays numbers as parsable (exact) values.
1969 .El
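.Pp
For example, the expected mapping-table memory cost of removing a top-level
mirror can be previewed without performing the removal (the pool and vdev
names are illustrative):
.Bd -literal
# zpool remove -np tank mirror-1
.Ed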
1970 .It Xo
1971 .Nm
1972 .Cm remove
1973 .Fl s
1974 .Ar pool
1975 .Xc
1976 Stops and cancels an in-progress removal of a top-level vdev.
1977 .It Xo
1978 .Nm
1979 .Cm replace
1980 .Op Fl f
1981 .Op Fl o Ar property Ns = Ns Ar value
1982 .Ar pool Ar device Op Ar new_device
1983 .Xc
1984 Replaces
.Ar device
1986 with
1987 .Ar new_device .
1988 This is equivalent to attaching
1989 .Ar new_device ,
1990 waiting for it to resilver, and then detaching
.Ar device .
1992 .Pp
1993 The size of
1994 .Ar new_device
1995 must be greater than or equal to the minimum size of all the devices in a mirror
1996 or raidz configuration.
1997 .Pp
1998 .Ar new_device
1999 is required if the pool is not redundant.
2000 If
2001 .Ar new_device
2002 is not specified, it defaults to
.Ar device .
2004 This form of replacement is useful after an existing disk has failed and has
2005 been physically replaced.
2006 In this case, the new disk may have the same
2007 .Pa /dev
2008 path as the old device, even though it is actually a different disk.
2009 ZFS recognizes this.
2010 .Bl -tag -width Ds
2011 .It Fl f
2012 Forces use of
2013 .Ar new_device ,
even if it appears to be in use.
2015 Not all devices can be overridden in this manner.
2016 .It Fl o Ar property Ns = Ns Ar value
2017 Sets the given pool properties. See the
2018 .Sx Properties
2019 section for a list of valid properties that can be set.
2020 The only property supported at the moment is
2021 .Sy ashift .
2022 .El
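.Pp
For example, a disk can be replaced by a new one while forcing a specific
sector-size alignment (the pool and device names and the
.Sy ashift
value are illustrative):
.Bd -literal
# zpool replace -o ashift=12 tank sda sdb
.Ed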
2023 .It Xo
2024 .Nm
2025 .Cm scrub
2026 .Op Fl s | Fl p
2027 .Ar pool Ns ...
2028 .Xc
2029 Begins a scrub or resumes a paused scrub.
2030 The scrub examines all data in the specified pools to verify that it checksums
2031 correctly.
2032 For replicated
2033 .Pq mirror or raidz
2034 devices, ZFS automatically repairs any damage discovered during the scrub.
2035 The
2036 .Nm zpool Cm status
2037 command reports the progress of the scrub and summarizes the results of the
2038 scrub upon completion.
2039 .Pp
2040 Scrubbing and resilvering are very similar operations.
2041 The difference is that resilvering only examines data that ZFS knows to be out
2042 of date
2043 .Po
2044 for example, when attaching a new device to a mirror or replacing an existing
2045 device
2046 .Pc ,
2047 whereas scrubbing examines all data to discover silent errors due to hardware
2048 faults or disk failure.
2049 .Pp
2050 Because scrubbing and resilvering are I/O-intensive operations, ZFS only allows
2051 one at a time.
2052 If a scrub is paused, the
2053 .Nm zpool Cm scrub
2054 resumes it.
2055 If a resilver is in progress, ZFS does not allow a scrub to be started until the
2056 resilver completes.
2057 .Bl -tag -width Ds
2058 .It Fl s
2059 Stop scrubbing.
2062 .It Fl p
2063 Pause scrubbing.
2064 Scrub pause state and progress are periodically synced to disk.
If the system is restarted or the pool is exported during a paused scrub,
the scrub remains paused, even after import, until it is resumed.
Once resumed, the scrub picks up from the place where it was last
checkpointed to disk.
To resume a paused scrub, issue
.Nm zpool Cm scrub
again.
2072 .El
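.Pp
For example, a scrub can be started, paused, and later resumed (the pool name
is illustrative):
.Bd -literal
# zpool scrub tank
# zpool scrub -p tank
# zpool scrub tank
.Ed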
2073 .It Xo
2074 .Nm
2075 .Cm resilver
2076 .Ar pool Ns ...
2077 .Xc
Starts a resilver.
If an existing resilver is already running, it will be restarted from the
beginning.
Any drives that were scheduled for a deferred resilver will be added to the
new one.
2081 .It Xo
2082 .Nm
2083 .Cm set
2084 .Ar property Ns = Ns Ar value
2085 .Ar pool
2086 .Xc
2087 Sets the given property on the specified pool.
2088 See the
2089 .Sx Properties
2090 section for more information on what properties can be set and acceptable
2091 values.
2092 .It Xo
2093 .Nm
2094 .Cm split
2095 .Op Fl gLlnP
2096 .Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
2097 .Op Fl R Ar root
2098 .Ar pool newpool
2099 .Op Ar device ...
2100 .Xc
Splits devices off
.Ar pool ,
creating
.Ar newpool .
2105 All vdevs in
2106 .Ar pool
2107 must be mirrors and the pool must not be in the process of resilvering.
2108 At the time of the split,
2109 .Ar newpool
2110 will be a replica of
2111 .Ar pool .
2112 By default, the
2113 last device in each mirror is split from
2114 .Ar pool
2115 to create
2116 .Ar newpool .
2117 .Pp
The optional device specification causes the specified device(s) to be
included in
.Ar newpool
and, should any devices remain unspecified,
the last device in each mirror is used as it would be by default.
2123 .Bl -tag -width Ds
2124 .It Fl g
2125 Display vdev GUIDs instead of the normal device names. These GUIDs
2126 can be used in place of device names for the zpool
2127 detach/offline/remove/replace commands.
2128 .It Fl L
2129 Display real paths for vdevs resolving all symbolic links. This can
2130 be used to look up the current block device name regardless of the
2131 .Pa /dev/disk/
2132 path used to open it.
2133 .It Fl l
2134 Indicates that this command will request encryption keys for all encrypted
2135 datasets it attempts to mount as it is bringing the new pool online. Note that
2136 if any datasets have a
2137 .Sy keylocation
2138 of
2139 .Sy prompt
this command will block waiting for the keys to be entered.
Without this flag, encrypted datasets will be left unavailable until the keys
are loaded.
2142 .It Fl n
Do a dry run; do not actually perform the split.
2144 Print out the expected configuration of
2145 .Ar newpool .
2146 .It Fl P
2147 Display full paths for vdevs instead of only the last component of
2148 the path. This can be used in conjunction with the
2149 .Fl L
2150 flag.
2151 .It Fl o Ar property Ns = Ns Ar value
2152 Sets the specified property for
2153 .Ar newpool .
2154 See the
2155 .Sx Properties
2156 section for more information on the available pool properties.
2157 .It Fl R Ar root
2158 Set
2159 .Sy altroot
2160 for
2161 .Ar newpool
2162 to
2163 .Ar root
2164 and automatically import it.
2165 .El
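.Pp
For example, a mirrored pool can be split and the new pool imported under an
alternate root (the pool names and path are illustrative):
.Bd -literal
# zpool split -R /mnt tank tank2
.Ed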
2166 .It Xo
2167 .Nm
2168 .Cm status
2169 .Op Fl c Op Ar SCRIPT1 Ns Oo , Ns Ar SCRIPT2 Oc Ns ...
2170 .Op Fl gLPvxD
2171 .Op Fl T Sy u Ns | Ns Sy d
2172 .Oo Ar pool Oc Ns ...
2173 .Op Ar interval Op Ar count
2174 .Xc
2175 Displays the detailed health status for the given pools.
2176 If no
2177 .Ar pool
2178 is specified, then the status of each pool in the system is displayed.
2179 For more information on pool and device health, see the
2180 .Sx Device Failure and Recovery
2181 section.
2182 .Pp
2183 If a scrub or resilver is in progress, this command reports the percentage done
2184 and the estimated time to completion.
2185 Both of these are only approximate, because the amount of data in the pool and
2186 the other workloads on the system can change.
2187 .Bl -tag -width Ds
2188 .It Fl c Op Ar SCRIPT1 Ns Oo , Ns Ar SCRIPT2 Oc Ns ...
2189 Run a script (or scripts) on each vdev and include the output as a new column
2190 in the
2191 .Nm zpool Cm status
2192 output. See the
2193 .Fl c
2194 option of
2195 .Nm zpool Cm iostat
2196 for complete details.
2197 .It Fl g
2198 Display vdev GUIDs instead of the normal device names. These GUIDs
2199 can be used in place of device names for the zpool
2200 detach/offline/remove/replace commands.
2201 .It Fl L
2202 Display real paths for vdevs resolving all symbolic links. This can
2203 be used to look up the current block device name regardless of the
2204 .Pa /dev/disk/
2205 path used to open it.
2206 .It Fl P
2207 Display full paths for vdevs instead of only the last component of
2208 the path. This can be used in conjunction with the
2209 .Fl L
2210 flag.
2211 .It Fl D
2212 Display a histogram of deduplication statistics, showing the allocated
2213 .Pq physically present on disk
2214 and referenced
2215 .Pq logically referenced in the pool
2216 block counts and sizes by reference count.
2217 .It Fl T Sy u Ns | Ns Sy d
2218 Display a time stamp.
2219 Specify
.Sy u
2221 for a printed representation of the internal representation of time.
2222 See
2223 .Xr time 2 .
2224 Specify
.Sy d
2226 for standard date format.
2227 See
2228 .Xr date 1 .
2229 .It Fl v
2230 Displays verbose data error information, printing out a complete list of all
2231 data errors since the last complete pool scrub.
2232 .It Fl x
2233 Only display status for pools that are exhibiting errors or are otherwise
2234 unavailable.
2235 Warnings about pools not using the latest on-disk format will not be included.
2236 .El
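.Pp
For example, when no pool is exhibiting errors,
.Nm zpool Cm status Fl x
reports only a summary line:
.Bd -literal
# zpool status -x
all pools are healthy
.Ed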
2237 .It Xo
2238 .Nm
2239 .Cm sync
2240 .Op Ar pool ...
2241 .Xc
2242 This command forces all in-core dirty data to be written to the primary
2243 pool storage and not the ZIL. It will also update administrative
2244 information including quota reporting. Without arguments,
.Nm zpool Cm sync
2246 will sync all pools on the system. Otherwise, it will sync only the
2247 specified pool(s).
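.Pp
For example (the pool name is illustrative):
.Bd -literal
# zpool sync tank
.Ed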
2248 .It Xo
2249 .Nm
2250 .Cm upgrade
2251 .Xc
2252 Displays pools which do not have all supported features enabled and pools
2253 formatted using a legacy ZFS version number.
2254 These pools can continue to be used, but some features may not be available.
2255 Use
2256 .Nm zpool Cm upgrade Fl a
2257 to enable all features on all pools.
2258 .It Xo
2259 .Nm
2260 .Cm upgrade
2261 .Fl v
2262 .Xc
2263 Displays legacy ZFS versions supported by the current software.
2264 See
2265 .Xr zpool-features 5
for a description of the feature flags supported by the current software.
2267 .It Xo
2268 .Nm
2269 .Cm upgrade
2270 .Op Fl V Ar version
2271 .Fl a Ns | Ns Ar pool Ns ...
2272 .Xc
2273 Enables all supported features on the given pool.
2274 Once this is done, the pool will no longer be accessible on systems that do not
2275 support feature flags.
2276 See
2277 .Xr zpool-features 5
2278 for details on compatibility with systems that support feature flags, but do not
2279 support all features enabled on the pool.
2280 .Bl -tag -width Ds
2281 .It Fl a
2282 Enables all supported features on all pools.
2283 .It Fl V Ar version
2284 Upgrade to the specified legacy version.
2285 If the
2286 .Fl V
2287 flag is specified, no features will be enabled on the pool.
2288 This option can only be used to increase the version number up to the last
2289 supported legacy version number.
2290 .El
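.Pp
For example, a pool still using a legacy version can be upgraded to a newer
legacy version without enabling feature flags (the version number and pool
name are illustrative):
.Bd -literal
# zpool upgrade -V 28 tank
.Ed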
2291 .El
2292 .Sh EXIT STATUS
2293 The following exit values are returned:
2294 .Bl -tag -width Ds
2295 .It Sy 0
2296 Successful completion.
2297 .It Sy 1
2298 An error occurred.
2299 .It Sy 2
2300 Invalid command line options were specified.
2301 .El
2302 .Sh EXAMPLES
2303 .Bl -tag -width Ds
2304 .It Sy Example 1 No Creating a RAID-Z Storage Pool
2305 The following command creates a pool with a single raidz root vdev that
2306 consists of six disks.
2307 .Bd -literal
2308 # zpool create tank raidz sda sdb sdc sdd sde sdf
2309 .Ed
2310 .It Sy Example 2 No Creating a Mirrored Storage Pool
2311 The following command creates a pool with two mirrors, where each mirror
2312 contains two disks.
2313 .Bd -literal
2314 # zpool create tank mirror sda sdb mirror sdc sdd
2315 .Ed
2316 .It Sy Example 3 No Creating a ZFS Storage Pool by Using Partitions
2317 The following command creates an unmirrored pool using two disk partitions.
2318 .Bd -literal
2319 # zpool create tank sda1 sdb2
2320 .Ed
2321 .It Sy Example 4 No Creating a ZFS Storage Pool by Using Files
2322 The following command creates an unmirrored pool using files.
2323 While not recommended, a pool based on files can be useful for experimental
2324 purposes.
2325 .Bd -literal
2326 # zpool create tank /path/to/file/a /path/to/file/b
2327 .Ed
2328 .It Sy Example 5 No Adding a Mirror to a ZFS Storage Pool
2329 The following command adds two mirrored disks to the pool
2330 .Em tank ,
2331 assuming the pool is already made up of two-way mirrors.
2332 The additional space is immediately available to any datasets within the pool.
2333 .Bd -literal
2334 # zpool add tank mirror sda sdb
2335 .Ed
2336 .It Sy Example 6 No Listing Available ZFS Storage Pools
2337 The following command lists all available pools on the system.
2338 In this case, the pool
2339 .Em zion
2340 is faulted due to a missing device.
2341 The results from this command are similar to the following:
2342 .Bd -literal
2343 # zpool list
NAME    SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
rpool  19.9G  8.43G  11.4G         -    33%    42%  1.00x  ONLINE  -
tank   61.5G  20.0G  41.5G         -    48%    32%  1.00x  ONLINE  -
zion       -      -      -         -      -      -      -  FAULTED -
2348 .Ed
2349 .It Sy Example 7 No Destroying a ZFS Storage Pool
2350 The following command destroys the pool
2351 .Em tank
2352 and any datasets contained within.
2353 .Bd -literal
2354 # zpool destroy -f tank
2355 .Ed
2356 .It Sy Example 8 No Exporting a ZFS Storage Pool
2357 The following command exports the devices in pool
2358 .Em tank
2359 so that they can be relocated or later imported.
2360 .Bd -literal
2361 # zpool export tank
2362 .Ed
2363 .It Sy Example 9 No Importing a ZFS Storage Pool
2364 The following command displays available pools, and then imports the pool
2365 .Em tank
2366 for use on the system.
2367 The results from this command are similar to the following:
2368 .Bd -literal
2369 # zpool import
  pool: tank
    id: 15451357997522795478
 state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:

        tank        ONLINE
          mirror    ONLINE
            sda     ONLINE
            sdb     ONLINE
2380
2381 # zpool import tank
2382 .Ed
2383 .It Sy Example 10 No Upgrading All ZFS Storage Pools to the Current Version
The following command upgrades all ZFS storage pools to the current version of
the software.
2386 .Bd -literal
2387 # zpool upgrade -a
2388 This system is currently running ZFS version 2.
2389 .Ed
2390 .It Sy Example 11 No Managing Hot Spares
2391 The following command creates a new pool with an available hot spare:
2392 .Bd -literal
2393 # zpool create tank mirror sda sdb spare sdc
2394 .Ed
2395 .Pp
2396 If one of the disks were to fail, the pool would be reduced to the degraded
2397 state.
2398 The failed device can be replaced using the following command:
2399 .Bd -literal
2400 # zpool replace tank sda sdd
2401 .Ed
2402 .Pp
2403 Once the data has been resilvered, the spare is automatically removed and is
2404 made available for use should another device fail.
2405 The hot spare can be permanently removed from the pool using the following
2406 command:
2407 .Bd -literal
2408 # zpool remove tank sdc
2409 .Ed
2410 .It Sy Example 12 No Creating a ZFS Pool with Mirrored Separate Intent Logs
The following command creates a ZFS storage pool consisting of two two-way
mirrors and mirrored log devices:
2413 .Bd -literal
# zpool create pool mirror sda sdb mirror sdc sdd log mirror \\
    sde sdf
2416 .Ed
2417 .It Sy Example 13 No Adding Cache Devices to a ZFS Pool
2418 The following command adds two disks for use as cache devices to a ZFS storage
2419 pool:
2420 .Bd -literal
2421 # zpool add pool cache sdc sdd
2422 .Ed
2423 .Pp
2424 Once added, the cache devices gradually fill with content from main memory.
2425 Depending on the size of your cache devices, it could take over an hour for
2426 them to fill.
Capacity and reads can be monitored using the
.Cm iostat
subcommand as follows:
2430 .Bd -literal
2431 # zpool iostat -v pool 5
2432 .Ed
2433 .It Sy Example 14 No Removing a Mirrored top-level (Log or Data) Device
2434 The following commands remove the mirrored log device
2435 .Sy mirror-2
2436 and mirrored top-level data device
2437 .Sy mirror-1 .
2438 .Pp
2439 Given this configuration:
2440 .Bd -literal
  pool: tank
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            sda     ONLINE       0     0     0
            sdb     ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            sdc     ONLINE       0     0     0
            sdd     ONLINE       0     0     0
        logs
          mirror-2  ONLINE       0     0     0
            sde     ONLINE       0     0     0
            sdf     ONLINE       0     0     0
2458 .Ed
2459 .Pp
2460 The command to remove the mirrored log
2461 .Sy mirror-2
2462 is:
2463 .Bd -literal
2464 # zpool remove tank mirror-2
2465 .Ed
2466 .Pp
2467 The command to remove the mirrored data
2468 .Sy mirror-1
2469 is:
2470 .Bd -literal
2471 # zpool remove tank mirror-1
2472 .Ed
2473 .It Sy Example 15 No Displaying expanded space on a device
2474 The following command displays the detailed information for the pool
2475 .Em data .
This pool is composed of a single raidz vdev where one of its devices
increased its capacity by 10 GB.
2478 In this example, the pool will not be able to utilize this extra capacity until
2479 all the devices under the raidz vdev have been expanded.
2480 .Bd -literal
2481 # zpool list -v data
NAME       SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
data      23.9G  14.6G  9.30G         -    48%    61%  1.00x  ONLINE  -
  raidz1  23.9G  14.6G  9.30G         -    48%
    sda       -      -      -         -      -
    sdb       -      -      -       10G      -
    sdc       -      -      -         -      -
2488 .Ed
2489 .It Sy Example 16 No Adding output columns
2490 Additional columns can be added to the
2491 .Nm zpool Cm status
2492 and
2493 .Nm zpool Cm iostat
output with the
2495 .Fl c
2496 option.
2497 .Bd -literal
2498 # zpool status -c vendor,model,size
   NAME     STATE  READ WRITE CKSUM vendor  model        size
   tank     ONLINE 0    0     0
   mirror-0 ONLINE 0    0     0
   U1       ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
   U10      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
   U11      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
   U12      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
   U13      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
   U14      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
2508
2509 # zpool iostat -vc slaves
             capacity     operations     bandwidth
pool        alloc   free   read  write   read  write  slaves
----------  -----  -----  -----  -----  -----  -----  ---------
tank        20.4G  7.23T     26    152  20.7M  21.6M
  mirror    20.4G  7.23T     26    152  20.7M  21.6M
    U1          -      -      0     31  1.46K  20.6M  sdb sdff
    U10         -      -      0      1  3.77K  13.3K  sdas sdgw
    U11         -      -      0      1   288K  13.3K  sdat sdgx
    U12         -      -      0      1  78.4K  13.3K  sdau sdgy
    U13         -      -      0      1   128K  13.3K  sdav sdgz
    U14         -      -      0      1  63.2K  13.3K  sdfk sdg
2521 .Ed
2522 .El
2523 .Sh ENVIRONMENT VARIABLES
.Bl -tag -width "ZPOOL_VDEV_NAME_FOLLOW_LINKS"
2525 .It Ev ZFS_ABORT
2526 Cause
2527 .Nm zpool
2528 to dump core on exit for the purposes of running
2529 .Sy ::findleaks .
2532 .It Ev ZPOOL_IMPORT_PATH
The search path for devices or files to use with the pool.
This is a colon-separated list of directories in which
2534 .Nm zpool
2535 looks for device nodes and files.
2536 Similar to the
2537 .Fl d
2538 option in
2539 .Nm zpool import .
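.Pp
For example (the directories shown are illustrative):
.Bd -literal
# ZPOOL_IMPORT_PATH=/dev/disk/by-id:/dev/disk/by-path zpool import -a
.Ed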
2542 .It Ev ZPOOL_VDEV_NAME_GUID
Cause
.Nm zpool
subcommands to output vdev GUIDs by default.
This behavior is identical to the
.Nm zpool status -g
command line option.
2550 .It Ev ZPOOL_VDEV_NAME_FOLLOW_LINKS
2551 Cause
2552 .Nm zpool
subcommands to follow links for vdev names by default.
This behavior is identical to the
2554 .Nm zpool status -L
2555 command line option.
2558 .It Ev ZPOOL_VDEV_NAME_PATH
2559 Cause
2560 .Nm zpool
subcommands to output full vdev path names by default.
This behavior is identical to the
.Nm zpool status -P
2564 command line option.
2567 .It Ev ZFS_VDEV_DEVID_OPT_OUT
Older ZFS on Linux implementations had issues when attempting to display pool
config vdev names if a
.Sy devid
NVP value was present in the pool's config.
.Pp
For example, a pool that originated on the illumos platform would have a
.Sy devid
value in the config and
.Nm zpool status
would fail when listing the config.
This would also be true for future Linux-based pools.
2578 .Pp
2579 A pool can be stripped of any
2580 .Sy devid
2581 values on import or prevented from adding
2582 them on
2583 .Nm zpool create
2584 or
2585 .Nm zpool add
2586 by setting
2587 .Sy ZFS_VDEV_DEVID_OPT_OUT .
2590 .It Ev ZPOOL_SCRIPTS_AS_ROOT
Allow a privileged user to run
2592 .Nm zpool status/iostat
2593 with the
2594 .Fl c
2595 option. Normally, only unprivileged users are allowed to run
2596 .Fl c .
2599 .It Ev ZPOOL_SCRIPTS_PATH
2600 The search path for scripts when running
2601 .Nm zpool status/iostat
2602 with the
2603 .Fl c
2604 option. This is a colon-separated list of directories and overrides the default
2605 .Pa ~/.zpool.d
2606 and
2607 .Pa /etc/zfs/zpool.d
2608 search paths.
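.Pp
For example (the directory and script name are illustrative):
.Bd -literal
# ZPOOL_SCRIPTS_PATH=/usr/local/zpool.d zpool iostat -c myscript
.Ed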
2611 .It Ev ZPOOL_SCRIPTS_ENABLED
2612 Allow a user to run
2613 .Nm zpool status/iostat
2614 with the
2615 .Fl c
2616 option. If
2617 .Sy ZPOOL_SCRIPTS_ENABLED
2618 is not set, it is assumed that the user is allowed to run
2619 .Nm zpool status/iostat -c .
2620 .El
2621 .Sh INTERFACE STABILITY
2622 .Sy Evolving
2623 .Sh SEE ALSO
2624 .Xr zfs-events 5 ,
2625 .Xr zfs-module-parameters 5 ,
2626 .Xr zpool-features 5 ,
2627 .Xr zed 8 ,
2628 .Xr zfs 8