1 .\"
2 .\" CDDL HEADER START
3 .\"
4 .\" The contents of this file are subject to the terms of the
5 .\" Common Development and Distribution License (the "License").
6 .\" You may not use this file except in compliance with the License.
7 .\"
8 .\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
9 .\" or http://www.opensolaris.org/os/licensing.
10 .\" See the License for the specific language governing permissions
11 .\" and limitations under the License.
12 .\"
13 .\" When distributing Covered Code, include this CDDL HEADER in each
14 .\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
15 .\" If applicable, add the following below this CDDL HEADER, with the
16 .\" fields enclosed by brackets "[]" replaced with your own identifying
17 .\" information: Portions Copyright [yyyy] [name of copyright owner]
18 .\"
19 .\" CDDL HEADER END
20 .\"
21 .\"
22 .\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
23 .\" Copyright (c) 2012, 2017 by Delphix. All rights reserved.
24 .\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
25 .\" Copyright (c) 2017 Datto Inc.
26 .\" Copyright (c) 2018 George Melikov. All Rights Reserved.
27 .\" Copyright 2017 Nexenta Systems, Inc.
28 .\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved.
29 .\"
30 .Dd April 27, 2018
31 .Dt ZPOOL 8 SMM
32 .Os Linux
33 .Sh NAME
34 .Nm zpool
35 .Nd configure ZFS storage pools
36 .Sh SYNOPSIS
37 .Nm
38 .Fl ?
39 .Nm
40 .Cm add
41 .Op Fl fgLnP
42 .Oo Fl o Ar property Ns = Ns Ar value Oc
43 .Ar pool vdev Ns ...
44 .Nm
45 .Cm attach
46 .Op Fl f
47 .Oo Fl o Ar property Ns = Ns Ar value Oc
48 .Ar pool device new_device
49 .Nm
50 .Cm checkpoint
51 .Op Fl d, -discard
52 .Ar pool
53 .Nm
54 .Cm clear
55 .Ar pool
56 .Op Ar device
57 .Nm
58 .Cm create
59 .Op Fl dfn
60 .Op Fl m Ar mountpoint
61 .Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Oo Fl o Ar feature@feature Ns = Ns Ar value Oc Ns ...
.Oo Fl O Ar file-system-property Ns = Ns Ar value Oc Ns ...
.Op Fl R Ar root
.Op Fl t Ar tname
.Ar pool vdev Ns ...
.Nm
.Cm destroy
.Op Fl f
.Ar pool
.Nm
.Cm detach
.Ar pool device
.Nm
.Cm events
.Op Fl vHf Oo Ar pool Oc | Fl c
.Nm
.Cm export
.Op Fl a
.Op Fl f
.Ar pool Ns ...
.Nm
.Cm get
.Op Fl Hp
.Op Fl o Ar field Ns Oo , Ns Ar field Oc Ns ...
.Sy all Ns | Ns Ar property Ns Oo , Ns Ar property Oc Ns ...
.Ar pool Ns ...
.Nm
.Cm history
.Op Fl il
.Oo Ar pool Oc Ns ...
.Nm
.Cm import
.Op Fl D
.Op Fl d Ar dir Ns | Ns device
.Nm
.Cm import
.Fl a
.Op Fl DflmN
.Op Fl F Oo Fl n Oc Oo Fl T Oc Oo Fl X Oc
.Op Fl -rewind-to-checkpoint
.Op Fl c Ar cachefile Ns | Ns Fl d Ar dir Ns | Ns device
.Op Fl o Ar mntopts
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Op Fl R Ar root
.Nm
.Cm import
.Op Fl Dflm
.Op Fl F Oo Fl n Oc Oo Fl T Oc Oo Fl X Oc
.Op Fl -rewind-to-checkpoint
.Op Fl c Ar cachefile Ns | Ns Fl d Ar dir Ns | Ns device
.Op Fl o Ar mntopts
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Op Fl R Ar root
.Op Fl s
.Ar pool Ns | Ns Ar id
.Op Ar newpool Oo Fl t Oc
.Nm
.Cm iostat
.Op Oo Oo Fl c Ar SCRIPT Oc Oo Fl lq Oc Oc Ns | Ns Fl rw
.Op Fl T Sy u Ns | Ns Sy d
.Op Fl ghHLpPvy
.Oo Oo Ar pool Ns ... Oc Ns | Ns Oo Ar pool vdev Ns ... Oc Ns | Ns Oo Ar vdev Ns ... Oc Oc
.Op Ar interval Op Ar count
.Nm
.Cm labelclear
.Op Fl f
.Ar device
.Nm
.Cm list
.Op Fl HgLpPv
.Op Fl o Ar property Ns Oo , Ns Ar property Oc Ns ...
.Op Fl T Sy u Ns | Ns Sy d
.Oo Ar pool Oc Ns ...
.Op Ar interval Op Ar count
.Nm
.Cm offline
.Op Fl f
.Op Fl t
.Ar pool Ar device Ns ...
.Nm
.Cm online
.Op Fl e
.Ar pool Ar device Ns ...
.Nm
.Cm reguid
.Ar pool
.Nm
.Cm reopen
.Op Fl n
.Ar pool
.Nm
.Cm remove
.Op Fl np
.Ar pool Ar device Ns ...
.Nm
.Cm remove
.Fl s
.Ar pool
.Nm
.Cm replace
.Op Fl f
.Oo Fl o Ar property Ns = Ns Ar value Oc
.Ar pool Ar device Op Ar new_device
.Nm
.Cm scrub
.Op Fl s | Fl p
.Ar pool Ns ...
.Nm
.Cm set
.Ar property Ns = Ns Ar value
.Ar pool
.Nm
.Cm split
.Op Fl gLlnP
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Op Fl R Ar root
.Ar pool newpool
.Oo Ar device Oc Ns ...
.Nm
.Cm status
.Oo Fl c Ar SCRIPT Oc
.Op Fl gLPvxD
.Op Fl T Sy u Ns | Ns Sy d
.Oo Ar pool Oc Ns ...
.Op Ar interval Op Ar count
.Nm
.Cm sync
.Oo Ar pool Oc Ns ...
.Nm
.Cm upgrade
.Nm
.Cm upgrade
.Fl v
.Nm
.Cm upgrade
.Op Fl V Ar version
.Fl a Ns | Ns Ar pool Ns ...
.Sh DESCRIPTION
The
.Nm
command configures ZFS storage pools.
A storage pool is a collection of devices that provides physical storage and
data replication for ZFS datasets.
All datasets within a storage pool share the same space.
See
.Xr zfs 8
for information on managing datasets.
.Ss Virtual Devices (vdevs)
A "virtual device" describes a single device or a collection of devices
organized according to certain performance and fault characteristics.
The following virtual devices are supported:
.Bl -tag -width Ds
.It Sy disk
A block device, typically located under
.Pa /dev .
ZFS can use individual slices or partitions, though the recommended mode of
operation is to use whole disks.
A disk can be specified by a full path, or it can be a shorthand name
.Po the relative portion of the path under
.Pa /dev
.Pc .
A whole disk can be specified by omitting the slice or partition designation.
For example,
.Pa sda
is equivalent to
.Pa /dev/sda .
When given a whole disk, ZFS automatically labels the disk, if necessary.
.It Sy file
A regular file.
The use of files as a backing store is strongly discouraged.
It is designed primarily for experimental purposes, as the fault tolerance of a
file is only as good as the file system of which it is a part.
A file must be specified by a full path.
.It Sy mirror
A mirror of two or more devices.
Data is replicated in an identical fashion across all components of a mirror.
A mirror with N disks of size X can hold X bytes and can withstand (N-1) devices
failing before data integrity is compromised.
.It Sy raidz , raidz1 , raidz2 , raidz3
A variation on RAID-5 that allows for better distribution of parity and
eliminates the RAID-5
.Qq write hole
.Pq in which data and parity become inconsistent after a power loss .
Data and parity are striped across all disks within a raidz group.
.Pp
A raidz group can have single-, double-, or triple-parity, meaning that the
raidz group can sustain one, two, or three failures, respectively, without
losing any data.
The
.Sy raidz1
vdev type specifies a single-parity raidz group; the
.Sy raidz2
vdev type specifies a double-parity raidz group; and the
.Sy raidz3
vdev type specifies a triple-parity raidz group.
The
.Sy raidz
vdev type is an alias for
.Sy raidz1 .
.Pp
A raidz group with N disks of size X with P parity disks can hold approximately
(N-P)*X bytes and can withstand P device(s) failing before data integrity is
compromised.
The minimum number of devices in a raidz group is one more than the number of
parity disks.
The recommended number is between 3 and 9 to help increase performance.
.It Sy spare
A special pseudo-vdev which keeps track of available hot spares for a pool.
For more information, see the
.Sx Hot Spares
section.
.It Sy log
A separate intent log device.
If more than one log device is specified, then writes are load-balanced between
devices.
Log devices can be mirrored.
However, raidz vdev types are not supported for the intent log.
For more information, see the
.Sx Intent Log
section.
.It Sy cache
A device used to cache storage pool data.
A cache device cannot be configured as a mirror or raidz group.
For more information, see the
.Sx Cache Devices
section.
.El
.Pp
Virtual devices cannot be nested, so a mirror or raidz virtual device can only
contain files or disks.
Mirrors of mirrors
.Pq or other combinations
are not allowed.
.Pp
A pool can have any number of virtual devices at the top of the configuration
.Po known as
.Qq root vdevs
.Pc .
Data is dynamically distributed across all top-level devices to balance data
among devices.
As new virtual devices are added, ZFS automatically places data on the newly
available devices.
.Pp
Virtual devices are specified one at a time on the command line, separated by
whitespace.
The keywords
.Sy mirror
and
.Sy raidz
are used to distinguish where a group ends and another begins.
For example, the following creates two root vdevs, each a mirror of two disks:
.Bd -literal
# zpool create mypool mirror sda sdb mirror sdc sdd
.Ed
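.Pp
The same syntax applies to the other grouping keywords; for example, the
following sketch creates a single double-parity raidz group (device names are
illustrative):
.Bd -literal
# zpool create mypool raidz2 sda sdb sdc sdd sde sdf
.Ed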
.Ss Device Failure and Recovery
ZFS supports a rich set of mechanisms for handling device failure and data
corruption.
All metadata and data is checksummed, and ZFS automatically repairs bad data
from a good copy when corruption is detected.
.Pp
In order to take advantage of these features, a pool must make use of some form
of redundancy, using either mirrored or raidz groups.
While ZFS supports running in a non-redundant configuration, where each root
vdev is simply a disk or file, this is strongly discouraged.
A single case of bit corruption can render some or all of your data unavailable.
.Pp
A pool's health status is described by one of three states: online, degraded,
or faulted.
An online pool has all devices operating normally.
A degraded pool is one in which one or more devices have failed, but the data is
still available due to a redundant configuration.
A faulted pool has corrupted metadata, or one or more faulted devices, and
insufficient replicas to continue functioning.
.Pp
The health of the top-level vdev, such as mirror or raidz device, is
potentially impacted by the state of its associated vdevs, or component
devices.
A top-level vdev or component device is in one of the following states:
.Bl -tag -width "DEGRADED"
.It Sy DEGRADED
One or more top-level vdevs is in the degraded state because one or more
component devices are offline.
Sufficient replicas exist to continue functioning.
.Pp
One or more component devices is in the degraded or faulted state, but
sufficient replicas exist to continue functioning.
The underlying conditions are as follows:
.Bl -bullet
.It
The number of checksum errors exceeds acceptable levels and the device is
degraded as an indication that something may be wrong.
ZFS continues to use the device as necessary.
.It
The number of I/O errors exceeds acceptable levels.
The device could not be marked as faulted because there are insufficient
replicas to continue functioning.
.El
.It Sy FAULTED
One or more top-level vdevs is in the faulted state because one or more
component devices are offline.
Insufficient replicas exist to continue functioning.
.Pp
One or more component devices is in the faulted state, and insufficient
replicas exist to continue functioning.
The underlying conditions are as follows:
.Bl -bullet
.It
The device could be opened, but the contents did not match expected values.
.It
The number of I/O errors exceeds acceptable levels and the device is faulted to
prevent further use of the device.
.El
.It Sy OFFLINE
The device was explicitly taken offline by the
.Nm zpool Cm offline
command.
.It Sy ONLINE
The device is online and functioning.
.It Sy REMOVED
The device was physically removed while the system was running.
Device removal detection is hardware-dependent and may not be supported on all
platforms.
.It Sy UNAVAIL
The device could not be opened.
If a pool is imported while a device is unavailable, then the device will be
identified by a unique identifier instead of its path, since the path was
never correct in the first place.
.El
.Pp
If a device is removed and later re-attached to the system, ZFS attempts
to put the device online automatically.
Device attach detection is hardware-dependent and might not be supported on all
platforms.
.Ss Hot Spares
ZFS allows devices to be associated with pools as
.Qq hot spares .
These devices are not actively used in the pool, but when an active device
fails, it is automatically replaced by a hot spare.
To create a pool with hot spares, specify a
.Sy spare
vdev with any number of devices.
For example,
.Bd -literal
# zpool create pool mirror sda sdb spare sdc sdd
.Ed
.Pp
Spares can be shared across multiple pools, and can be added with the
.Nm zpool Cm add
command and removed with the
.Nm zpool Cm remove
command.
Once a spare replacement is initiated, a new
.Sy spare
vdev is created within the configuration that will remain there until the
original device is replaced.
At this point, the hot spare becomes available again if another device fails.
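.Pp
For example, a spare can be added to and later removed from an existing pool
as follows (device name is illustrative):
.Bd -literal
# zpool add pool spare sde
# zpool remove pool sde
.Ed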
.Pp
If a pool has a shared spare that is currently being used, the pool cannot be
exported, since other pools may use this shared spare, which may lead to
potential data corruption.
.Pp
An in-progress spare replacement can be cancelled by detaching the hot spare.
If the original faulted device is detached, then the hot spare assumes its
place in the configuration, and is removed from the spare list of all active
pools.
.Pp
Spares cannot replace log devices.
.Ss Intent Log
The ZFS Intent Log (ZIL) satisfies POSIX requirements for synchronous
transactions.
For instance, databases often require their transactions to be on stable storage
devices when returning from a system call.
NFS and other applications can also use
.Xr fsync 2
to ensure data stability.
By default, the intent log is allocated from blocks within the main pool.
However, it might be possible to get better performance using separate intent
log devices such as NVRAM or a dedicated disk.
For example:
.Bd -literal
# zpool create pool sda sdb log sdc
.Ed
.Pp
Multiple log devices can also be specified, and they can be mirrored.
See the
.Sx EXAMPLES
section for an example of mirroring multiple log devices.
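.Pp
For instance, a sketch of creating a pool with a mirrored log (device names
are illustrative):
.Bd -literal
# zpool create pool mirror sda sdb log mirror sdc sdd
.Ed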
.Pp
Log devices can be added, replaced, attached, detached and removed. In
addition, log devices are imported and exported as part of the pool
that contains them.
Mirrored devices can be removed by specifying the top-level mirror vdev.
.Ss Cache Devices
Devices can be added to a storage pool as
.Qq cache devices .
These devices provide an additional layer of caching between main memory and
disk.
For read-heavy workloads, where the working set size is much larger than what
can be cached in main memory, using cache devices allows much more of this
working set to be served from low latency media.
Using cache devices provides the greatest performance improvement for random
read-workloads of mostly static content.
.Pp
To create a pool with cache devices, specify a
.Sy cache
vdev with any number of devices.
For example:
.Bd -literal
# zpool create pool sda sdb cache sdc sdd
.Ed
.Pp
Cache devices cannot be mirrored or part of a raidz configuration.
If a read error is encountered on a cache device, that read I/O is reissued to
the original storage pool device, which might be part of a mirrored or raidz
configuration.
.Pp
The content of the cache devices is considered volatile, as is the case with
other system caches.
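.Pp
Cache devices can likewise be added to an existing pool (device name is
illustrative):
.Bd -literal
# zpool add pool cache sde
.Ed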
.Ss Pool checkpoint
Before starting critical procedures that include destructive actions (e.g.
.Nm zfs Cm destroy
), an administrator can checkpoint the pool's state and, in the case of a
mistake or failure, rewind the entire pool back to the checkpoint.
Otherwise, the checkpoint can be discarded when the procedure has completed
successfully.
.Pp
A pool checkpoint can be thought of as a pool-wide snapshot and should be used
with care as it contains every part of the pool's state, from properties to vdev
configuration.
Thus, while a pool has a checkpoint, certain operations are not allowed;
specifically: vdev removal/attach/detach, mirror splitting, and
changing the pool's guid.
Adding a new vdev is supported, but in the case of a rewind it will have to be
added again.
Finally, users of this feature should keep in mind that scrubs in a pool that
has a checkpoint do not repair checkpointed data.
.Pp
To create a checkpoint for a pool:
.Bd -literal
# zpool checkpoint pool
.Ed
.Pp
To later rewind to its checkpointed state, first export the pool and then
rewind it during import:
.Bd -literal
# zpool export pool
# zpool import --rewind-to-checkpoint pool
.Ed
.Pp
To discard the checkpoint from a pool:
.Bd -literal
# zpool checkpoint -d pool
.Ed
.Pp
Dataset reservations (controlled by the
.Nm reservation
or
.Nm refreservation
zfs properties) may be unenforceable while a checkpoint exists, because the
checkpoint is allowed to consume the dataset's reservation.
Finally, data that is part of the checkpoint but has been freed in the
current state of the pool won't be scanned during a scrub.
.Ss Properties
Each pool has several properties associated with it.
Some properties are read-only statistics while others are configurable and
change the behavior of the pool.
.Pp
The following are read-only properties:
.Bl -tag -width Ds
.It Sy allocated
Amount of storage used within the pool.
.It Sy capacity
Percentage of pool space used.
This property can also be referred to by its shortened column name,
.Sy cap .
.It Sy expandsize
Amount of uninitialized space within the pool or device that can be used to
increase the total capacity of the pool.
Uninitialized space consists of any space on an EFI labeled vdev which has not
been brought online
.Po e.g., using
.Nm zpool Cm online Fl e
.Pc .
This space occurs when a LUN is dynamically expanded.
.It Sy fragmentation
The amount of fragmentation in the pool.
.It Sy free
The amount of free space available in the pool.
.It Sy freeing
After a file system or snapshot is destroyed, the space it was using is
returned to the pool asynchronously.
.Sy freeing
is the amount of space remaining to be reclaimed.
Over time
.Sy freeing
will decrease while
.Sy free
increases.
.It Sy health
The current health of the pool.
Health can be one of
.Sy ONLINE , DEGRADED , FAULTED , OFFLINE , REMOVED , UNAVAIL .
.It Sy guid
A unique identifier for the pool.
.It Sy size
Total size of the storage pool.
.It Sy unsupported@ Ns Em feature_guid
Information about unsupported features that are enabled on the pool.
See
.Xr zpool-features 5
for details.
.El
.Pp
The space usage properties report actual physical space available to the
storage pool.
The physical space can be different from the total amount of space that any
contained datasets can actually use.
The amount of space used in a raidz configuration depends on the characteristics
of the data being written.
In addition, ZFS reserves some space for internal accounting that the
.Xr zfs 8
command takes into account, but the
.Nm
command does not.
For non-full pools of a reasonable size, these effects should be invisible.
For small pools, or pools that are close to being completely full, these
discrepancies may become more noticeable.
.Pp
The following property can be set at creation time and import time:
.Bl -tag -width Ds
.It Sy altroot
Alternate root directory.
If set, this directory is prepended to any mount points within the pool.
This can be used when examining an unknown pool where the mount points cannot be
trusted, or in an alternate boot environment, where the typical paths are not
valid.
.Sy altroot
is not a persistent property.
It is valid only while the system is up.
Setting
.Sy altroot
defaults to using
.Sy cachefile Ns = Ns Sy none ,
though this may be overridden using an explicit setting.
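.Pp
For example, a pool can be imported for inspection with an alternate root, so
that its mount points resolve under /mnt (paths are illustrative):
.Bd -literal
# zpool import -R /mnt pool
.Ed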
.El
.Pp
The following property can be set only at import time:
.Bl -tag -width Ds
.It Sy readonly Ns = Ns Sy on Ns | Ns Sy off
If set to
.Sy on ,
the pool will be imported in read-only mode.
This property can also be referred to by its shortened column name,
.Sy rdonly .
.El
.Pp
The following properties can be set at creation time and import time, and later
changed with the
.Nm zpool Cm set
command:
.Bl -tag -width Ds
.It Sy ashift Ns = Ns Sy ashift
Pool sector size exponent, to the power of
.Sy 2
(internally referred to as
.Sy ashift
). Values from 9 to 16, inclusive, are valid; also, the special
value 0 (the default) means to auto-detect using the kernel's block
layer and a ZFS internal exception list. I/O operations will be aligned
to the specified size boundaries. Additionally, the minimum (disk)
write size will be set to the specified size, so this represents a
space vs. performance trade-off. For optimal performance, the pool
sector size should be greater than or equal to the sector size of the
underlying disks. The typical case for setting this property is when
performance is important and the underlying disks use 4KiB sectors but
report 512B sectors to the OS (for compatibility reasons); in that
case, set
.Sy ashift=12
(which is 1<<12 = 4096). When set, this property is
used as the default hint value in subsequent vdev operations (add,
attach and replace). Changing this value will not modify any existing
vdev, not even on disk replacement; however, it can be used, for
instance, to replace a dying 512B-sector disk with a newer 4KiB-sector
device: this will probably result in bad performance but at the same
time could prevent loss of data.
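.Pp
For example, a sketch of creating a pool aligned for 4KiB-sector disks
(device names are illustrative):
.Bd -literal
# zpool create -o ashift=12 tank mirror sda sdb
.Ed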
.It Sy autoexpand Ns = Ns Sy on Ns | Ns Sy off
Controls automatic pool expansion when the underlying LUN is grown.
If set to
.Sy on ,
the pool will be resized according to the size of the expanded device.
If the device is part of a mirror or raidz then all devices within that
mirror/raidz group must be expanded before the new space is made available to
the pool.
The default behavior is
.Sy off .
This property can also be referred to by its shortened column name,
.Sy expand .
.It Sy autoreplace Ns = Ns Sy on Ns | Ns Sy off
Controls automatic device replacement.
If set to
.Sy off ,
device replacement must be initiated by the administrator by using the
.Nm zpool Cm replace
command.
If set to
.Sy on ,
any new device, found in the same physical location as a device that previously
belonged to the pool, is automatically formatted and replaced.
The default behavior is
.Sy off .
This property can also be referred to by its shortened column name,
.Sy replace .
Autoreplace can also be used with virtual disks (like device
mapper) provided that you use the /dev/disk/by-vdev paths set up by
vdev_id.conf. See the
.Xr vdev_id 8
man page for more details.
Autoreplace and autoonline require the ZFS Event Daemon be configured and
running. See the
.Xr zed 8
man page for more details.
.It Sy bootfs Ns = Ns Sy (unset) Ns | Ns Ar pool Ns / Ns Ar dataset
Identifies the default bootable dataset for the root pool. This property is
expected to be set mainly by the installation and upgrade programs.
Not all Linux distribution boot processes use the bootfs property.
.It Sy cachefile Ns = Ns Ar path Ns | Ns Sy none
Controls the location of where the pool configuration is cached.
Discovering all pools on system startup requires a cached copy of the
configuration data that is stored on the root file system.
All pools in this cache are automatically imported when the system boots.
Some environments, such as install and clustering, need to cache this
information in a different location so that pools are not automatically
imported.
Setting this property caches the pool configuration in a different location that
can later be imported with
.Nm zpool Cm import Fl c .
Setting it to the special value
.Sy none
creates a temporary pool that is never cached, and the special value
.Qq
.Pq empty string
uses the default location.
.Pp
Multiple pools can share the same cache file.
Because the kernel destroys and recreates this file when pools are added and
removed, care should be taken when attempting to access this file.
When the last pool using a
.Sy cachefile
is exported or destroyed, the file will be empty.
.It Sy comment Ns = Ns Ar text
A text string consisting of printable ASCII characters that will be stored
such that it is available even if the pool becomes faulted.
An administrator can provide additional information about a pool using this
property.
.It Sy dedupditto Ns = Ns Ar number
Threshold for the number of block ditto copies.
If the reference count for a deduplicated block increases above this number, a
new ditto copy of this block is automatically stored.
The default setting is
.Sy 0
which causes no ditto copies to be created for deduplicated blocks.
The minimum legal nonzero setting is
.Sy 100 .
.It Sy delegation Ns = Ns Sy on Ns | Ns Sy off
Controls whether a non-privileged user is granted access based on the dataset
permissions defined on the dataset.
See
.Xr zfs 8
for more information on ZFS delegated administration.
.It Sy failmode Ns = Ns Sy wait Ns | Ns Sy continue Ns | Ns Sy panic
Controls the system behavior in the event of catastrophic pool failure.
This condition is typically a result of a loss of connectivity to the underlying
storage device(s) or a failure of all devices within the pool.
The behavior of such an event is determined as follows:
.Bl -tag -width "continue"
.It Sy wait
Blocks all I/O access until the device connectivity is recovered and the errors
are cleared.
This is the default behavior.
.It Sy continue
Returns
.Er EIO
to any new write I/O requests but allows reads to any of the remaining healthy
devices.
Any write requests that have yet to be committed to disk would be blocked.
.It Sy panic
Prints out a message to the console and generates a system crash dump.
.El
.It Sy feature@ Ns Ar feature_name Ns = Ns Sy enabled
The value of this property is the current state of
.Ar feature_name .
The only valid value when setting this property is
.Sy enabled
which moves
.Ar feature_name
to the enabled state.
See
.Xr zpool-features 5
for details on feature states.
.It Sy listsnapshots Ns = Ns Sy on Ns | Ns Sy off
Controls whether information about snapshots associated with this pool is
output when
.Nm zfs Cm list
is run without the
.Fl t
option.
The default value is
.Sy off .
This property can also be referred to by its shortened name,
.Sy listsnaps .
.It Sy multihost Ns = Ns Sy on Ns | Ns Sy off
Controls whether a pool activity check should be performed during
.Nm zpool Cm import .
When a pool is determined to be active it cannot be imported, even with the
.Fl f
option. This property is intended to be used in failover configurations
where multiple hosts have access to a pool on shared storage. When this
property is on, periodic writes to storage occur to show the pool is in use.
See
.Sy zfs_multihost_interval
in the
.Xr zfs-module-parameters 5
man page. In order to enable this property, each host must set a unique
hostid.
See
.Xr genhostid 1 ,
.Xr zgenhostid 8 ,
and
.Xr spl-module-parameters 5
for additional details. The default value is
.Sy off .
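.Pp
A minimal sketch of enabling it on a host that does not yet have a hostid
set (pool name is illustrative):
.Bd -literal
# zgenhostid
# zpool set multihost=on tank
.Ed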
.It Sy version Ns = Ns Ar version
The current on-disk version of the pool.
This can be increased, but never decreased.
The preferred method of updating pools is with the
.Nm zpool Cm upgrade
command, though this property can be used when a specific version is needed for
backwards compatibility.
Once feature flags are enabled on a pool this property will no longer have a
value.
.El
.Ss Subcommands
All subcommands that modify state are logged persistently to the pool in their
original form.
.Pp
The
.Nm
command provides subcommands to create and destroy storage pools, add capacity
to storage pools, and provide information about the storage pools.
The following subcommands are supported:
.Bl -tag -width Ds
.It Xo
.Nm
.Fl ?
.Xc
Displays a help message.
.It Xo
.Nm
.Cm add
.Op Fl fgLnP
.Oo Fl o Ar property Ns = Ns Ar value Oc
.Ar pool vdev Ns ...
.Xc
Adds the specified virtual devices to the given pool.
The
.Ar vdev
specification is described in the
.Sx Virtual Devices
section.
The behavior of the
.Fl f
option, and the device checks performed are described in the
.Nm zpool Cm create
subcommand.
.Bl -tag -width Ds
.It Fl f
Forces use of
.Ar vdev Ns s ,
even if they appear in use or specify a conflicting replication level.
Not all devices can be overridden in this manner.
.It Fl g
Display
.Ar vdev
GUIDs instead of the normal device names. These GUIDs can be used in place of
device names for the zpool detach/offline/remove/replace commands.
.It Fl L
Display real paths for
.Ar vdev Ns s
resolving all symbolic links. This can be used to look up the current block
device name regardless of the /dev/disk/ path used to open it.
.It Fl n
Displays the configuration that would be used without actually adding the
.Ar vdev Ns s .
The actual pool creation can still fail due to insufficient privileges or
device sharing.
.It Fl P
Display real paths for
.Ar vdev Ns s
instead of only the last component of the path. This can be used in
conjunction with the
.Fl L
flag.
.It Fl o Ar property Ns = Ns Ar value
Sets the given pool properties. See the
.Sx Properties
section for a list of valid properties that can be set. The only property
supported at the moment is ashift.
.El
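.Pp
For example, a dry run of adding a mirrored pair to an existing pool (device
names are illustrative):
.Bd -literal
# zpool add -n tank mirror sde sdf
.Ed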
.It Xo
.Nm
.Cm attach
.Op Fl f
.Oo Fl o Ar property Ns = Ns Ar value Oc
.Ar pool device new_device
.Xc
Attaches
.Ar new_device
to the existing
.Ar device .
The existing device cannot be part of a raidz configuration.
If
.Ar device
is not currently part of a mirrored configuration,
.Ar device
automatically transforms into a two-way mirror of
.Ar device
and
.Ar new_device .
If
.Ar device
is part of a two-way mirror, attaching
.Ar new_device
creates a three-way mirror, and so on.
In either case,
.Ar new_device
begins to resilver immediately.
.Bl -tag -width Ds
.It Fl f
Forces use of
.Ar new_device ,
even if it appears to be in use.
Not all devices can be overridden in this manner.
.It Fl o Ar property Ns = Ns Ar value
Sets the given pool properties. See the
.Sx Properties
section for a list of valid properties that can be set. The only property
supported at the moment is ashift.
.El
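.Pp
For example, to convert a single-disk pool into a two-way mirror (device
names are illustrative):
.Bd -literal
# zpool attach tank sda sdb
.Ed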
.It Xo
.Nm
.Cm checkpoint
.Op Fl d, -discard
.Ar pool
.Xc
Checkpoints the current state of
.Ar pool ,
which can later be restored by
.Nm zpool Cm import --rewind-to-checkpoint .
The existence of a checkpoint in a pool prohibits the following
.Nm zpool
commands:
.Cm remove ,
.Cm attach ,
.Cm detach ,
.Cm split ,
and
.Cm reguid .
In addition, it may break reservation boundaries if the pool lacks free
space.
The
.Nm zpool Cm status
command indicates the existence of a checkpoint or the progress of discarding a
checkpoint from a pool.
The
.Nm zpool Cm list
command reports how much space the checkpoint takes from the pool.
.Bl -tag -width Ds
.It Fl d, -discard
Discards an existing checkpoint from
.Ar pool .
.El
.It Xo
.Nm
.Cm clear
.Ar pool
.Op Ar device
.Xc
Clears device errors in a pool.
If no arguments are specified, all device errors within the pool are cleared.
If one or more devices is specified, only those errors associated with the
specified device or devices are cleared.
.It Xo
.Nm
.Cm create
.Op Fl dfn
.Op Fl m Ar mountpoint
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Oo Fl o Ar feature@feature Ns = Ns Ar value Oc Ns ...
.Oo Fl O Ar file-system-property Ns = Ns Ar value Oc Ns ...
.Op Fl R Ar root
.Op Fl t Ar tname
.Ar pool vdev Ns ...
.Xc
Creates a new storage pool containing the virtual devices specified on the
command line.
The pool name must begin with a letter, and can only contain
alphanumeric characters as well as underscore
.Pq Qq Sy _ ,
dash
.Pq Qq Sy \&- ,
colon
.Pq Qq Sy \&: ,
space
.Pq Qq Sy \&\ ,
and period
.Pq Qq Sy \&. .
The pool names
.Sy mirror ,
.Sy raidz ,
.Sy spare
and
.Sy log
are reserved, as are names beginning with
.Sy mirror ,
.Sy raidz ,
.Sy spare ,
and the pattern
.Sy c[0-9] .
The
.Ar vdev
specification is described in the
.Sx Virtual Devices
section.
.Pp
The command verifies that each device specified is accessible and not currently
in use by another subsystem.
There are some uses, such as being currently mounted, or specified as the
dedicated dump device, that prevent a device from ever being used by ZFS.
Other uses, such as having a preexisting UFS file system, can be overridden with
the
.Fl f
option.
.Pp
The command also checks that the replication strategy for the pool is
consistent.
An attempt to combine redundant and non-redundant storage in a single pool, or
to mix disks and files, results in an error unless
.Fl f
is specified.
The use of differently sized devices within a single raidz or mirror group is
also flagged as an error unless
.Fl f
is specified.
.Pp
Unless the
.Fl R
option is specified, the default mount point is
.Pa / Ns Ar pool .
The mount point must not exist or must be empty, or else the root dataset
cannot be mounted.
This can be overridden with the
.Fl m
option.
.Pp
By default all supported features are enabled on the new pool unless the
.Fl d
option is specified.
.Bl -tag -width Ds
.It Fl d
Do not enable any features on the new pool.
Individual features can be enabled by setting their corresponding properties to
.Sy enabled
with the
.Fl o
option.
See
.Xr zpool-features 5
for details about feature properties.
.It Fl f
Forces use of
.Ar vdev Ns s ,
even if they appear in use or specify a conflicting replication level.
Not all devices can be overridden in this manner.
.It Fl m Ar mountpoint
Sets the mount point for the root dataset.
The default mount point is
.Pa /pool
or
.Pa altroot/pool
if
.Ar altroot
is specified.
The mount point must be an absolute path,
.Sy legacy ,
or
.Sy none .
For more information on dataset mount points, see
.Xr zfs 8 .
.It Fl n
Displays the configuration that would be used without actually creating the
pool.
The actual pool creation can still fail due to insufficient privileges or
device sharing.
.It Fl o Ar property Ns = Ns Ar value
Sets the given pool properties.
See the
.Sx Properties
section for a list of valid properties that can be set.
.It Fl o Ar feature@feature Ns = Ns Ar value
Sets the given pool feature. See
.Xr zpool-features 5
for a list of valid features that can be set.
The value can be either
.Sy disabled
or
.Sy enabled .
.It Fl O Ar file-system-property Ns = Ns Ar value
Sets the given file system properties in the root file system of the pool.
See the
.Sx Properties
section of
.Xr zfs 8
for a list of valid properties that can be set.
.It Fl R Ar root
Equivalent to
.Fl o Sy cachefile Ns = Ns Sy none Fl o Sy altroot Ns = Ns Ar root
.It Fl t Ar tname
Sets the in-core pool name to
.Sy tname
while the on-disk name will be the name specified as the pool name
.Sy pool .
This will set the default cachefile property to none. This is intended
to handle name space collisions when creating pools for other systems,
such as virtual machines or physical machines whose pools live on network
block devices.
.El
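.Pp
For example, a sketch that sets a mount point and a root file system property
at creation time (names and values are illustrative):
.Bd -literal
# zpool create -m /export/tank -O compression=on tank mirror sda sdb
.Ed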
.It Xo
.Nm
.Cm destroy
.Op Fl f
.Ar pool
.Xc
Destroys the given pool, freeing up any devices for other use.
This command tries to unmount any active datasets before destroying the pool.
.Bl -tag -width Ds
.It Fl f
Forces any active datasets contained within the pool to be unmounted.
.El
.It Xo
.Nm
.Cm detach
.Ar pool device
.Xc
Detaches
.Ar device
from a mirror.
The operation is refused if there are no other valid replicas of the data.
If the device may be re-added to the pool later on, consider the
.Nm zpool Cm offline
command instead.
.It Xo
.Nm
.Cm events
.Op Fl vHf Oo Ar pool Oc | Fl c
.Xc
Lists all recent events generated by the ZFS kernel modules. These events
are consumed by the
.Xr zed 8
daemon and used to automate administrative tasks such as replacing a failed
device with a hot spare. For more information about the subclasses and event
payloads that can be generated, see the
.Xr zfs-events 5
man page.
.Bl -tag -width Ds
.It Fl c
Clear all previous events.
.It Fl f
Follow mode.
.It Fl H
Scripted mode. Do not display headers, and separate fields by a
single tab instead of arbitrary space.
.It Fl v
Print the entire payload for each event.
.El
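.Pp
For example, to follow new events as they are generated:
.Bd -literal
# zpool events -f
.Ed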
.It Xo
.Nm
.Cm export
.Op Fl a
.Op Fl f
.Ar pool Ns ...
.Xc
Exports the given pools from the system.
All devices are marked as exported, but are still considered in use by other
subsystems.
The devices can be moved between systems
.Pq even those of different endianness
and imported as long as a sufficient number of devices are present.
.Pp
Before exporting the pool, all datasets within the pool are unmounted.
A pool cannot be exported if it has a shared spare that is currently being
used.
.Pp
For pools to be portable, you must give the
.Nm
command whole disks, not just partitions, so that ZFS can label the disks with
portable EFI labels.
Otherwise, disk drivers on platforms of different endianness will not recognize
the disks.
.Bl -tag -width Ds
.It Fl a
Exports all pools imported on the system.
.It Fl f
Forcefully unmount all datasets, using the
.Nm unmount Fl f
command.
.Pp
This command will forcefully export the pool even if it has a shared spare that
is currently being used.
This may lead to potential data corruption.
.El
.It Xo
.Nm
.Cm get
.Op Fl Hp
.Op Fl o Ar field Ns Oo , Ns Ar field Oc Ns ...
.Sy all Ns | Ns Ar property Ns Oo , Ns Ar property Oc Ns ...
.Ar pool Ns ...
.Xc
Retrieves the given list of properties
.Po
or all properties if
.Sy all
is used
.Pc
for the specified storage pool(s).
These properties are displayed with the following fields:
.Bd -literal
        name          Name of storage pool
        property      Property name
        value         Property value
        source        Property source, either 'default' or 'local'.
.Ed
.Pp
See the
.Sx Properties
section for more information on the available pool properties.
.Bl -tag -width Ds
.It Fl H
Scripted mode.
Do not display headers, and separate fields by a single tab instead of arbitrary
space.
.It Fl o Ar field
A comma-separated list of columns to display.
.Sy name Ns \&, Ns Sy property Ns \&, Ns Sy value Ns \&, Ns Sy source
is the default value.
.It Fl p
Display numbers in parsable (exact) values.
.El
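.Pp
For example, a scripted query of a single property value (pool name is
illustrative):
.Bd -literal
# zpool get -H -o value health tank
.Ed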
.It Xo
.Nm
.Cm history
.Op Fl il
.Oo Ar pool Oc Ns ...
.Xc
Displays the command history of the specified pool(s) or all pools if no pool is
specified.
.Bl -tag -width Ds
.It Fl i
Displays internally logged ZFS events in addition to user initiated events.
.It Fl l
Displays log records in long format, which in addition to standard format
includes the user name, the hostname, and the zone in which the operation was
performed.
.El
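.Pp
For example, to display long-format history, including internal events:
.Bd -literal
# zpool history -il tank
.Ed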
.It Xo
.Nm
.Cm import
.Op Fl D
.Op Fl d Ar dir Ns | Ns device
.Xc
Lists pools available to import.
If the
.Fl d
option is not specified, this command searches for devices in
.Pa /dev .
The
.Fl d
option can be specified multiple times, and all directories are searched.
If the device appears to be part of an exported pool, this command displays a
summary of the pool with the name of the pool, a numeric identifier, as well as
the vdev layout and current health of the device for each device or file.
Destroyed pools, pools that were previously destroyed with the
.Nm zpool Cm destroy
command, are not listed unless the
.Fl D
option is specified.
.Pp
The numeric identifier is unique, and can be used instead of the pool name when
multiple exported pools of the same name are available.
.Bl -tag -width Ds
.It Fl c Ar cachefile
Reads configuration from the given
.Ar cachefile
that was created with the
.Sy cachefile
pool property.
This
.Ar cachefile
is used instead of searching for devices.
.It Fl d Ar dir Ns | Ns Ar device
Uses
.Ar device
or searches for devices or files in
.Ar dir .
The
.Fl d
option can be specified multiple times.
.It Fl D
Lists destroyed pools only.
.El
.It Xo
.Nm
.Cm import
.Fl a
.Op Fl DflmN
.Op Fl F Oo Fl n Oc Oo Fl T Oc Oo Fl X Oc
.Op Fl c Ar cachefile Ns | Ns Fl d Ar dir Ns | Ns device
.Op Fl o Ar mntopts
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Op Fl R Ar root
.Op Fl s
.Xc
Imports all pools found in the search directories.
Identical to the previous command, except that all pools with a sufficient
number of devices available are imported.
Destroyed pools, pools that were previously destroyed with the
.Nm zpool Cm destroy
command, will not be imported unless the
.Fl D
option is specified.
.Bl -tag -width Ds
.It Fl a
Searches for and imports all pools found.
.It Fl c Ar cachefile
Reads configuration from the given
.Ar cachefile
that was created with the
.Sy cachefile
pool property.
This
.Ar cachefile
is used instead of searching for devices.
.It Fl d Ar dir Ns | Ns Ar device
Uses
.Ar device
or searches for devices or files in
.Ar dir .
The
.Fl d
option can be specified multiple times.
This option is incompatible with the
.Fl c
option.
.It Fl D
Imports destroyed pools only.
The
.Fl f
option is also required.
.It Fl f
Forces import, even if the pool appears to be potentially active.
.It Fl F
Recovery mode for a non-importable pool.
Attempt to return the pool to an importable state by discarding the last few
transactions.
Not all damaged pools can be recovered by using this option.
If successful, the data from the discarded transactions is irretrievably lost.
This option is ignored if the pool is importable or already imported.
.It Fl l
Indicates that this command will request encryption keys for all encrypted
datasets it attempts to mount as it is bringing the pool online. Note that if
any datasets have a
.Sy keylocation
of
.Sy prompt
this command will block waiting for the keys to be entered. Without this flag
encrypted datasets will be left unavailable until the keys are loaded.
.It Fl m
Allows a pool to import when there is a missing log device.
Recent transactions can be lost because the log device will be discarded.
.It Fl n
Used with the
.Fl F
recovery option.
Determines whether a non-importable pool can be made importable again, but does
not actually perform the pool recovery.
For more details about pool recovery mode, see the
.Fl F
option, above.
.It Fl N
Import the pool without mounting any file systems.
.It Fl o Ar mntopts
Comma-separated list of mount options to use when mounting datasets within the
pool.
See
.Xr zfs 8
for a description of dataset properties and mount options.
.It Fl o Ar property Ns = Ns Ar value
Sets the specified property on the imported pool.
See the
.Sx Properties
section for more information on the available pool properties.
.It Fl R Ar root
Sets the
.Sy cachefile
property to
.Sy none
and the
.Sy altroot
property to
.Ar root .
.It Fl -rewind-to-checkpoint
Rewinds the pool to the checkpointed state.
Once the pool is imported with this flag there is no way to undo the rewind.
All changes and data that were written after the checkpoint are lost!
The only exception is when the
.Sy readonly
mounting option is enabled.
In this case, the checkpointed state of the pool is opened and an
administrator can see what the pool would look like if they were
to fully rewind.
.It Fl s
Scan using the default search path; the libblkid cache will not be
consulted. A custom search path may be specified by setting the
ZPOOL_IMPORT_PATH environment variable.
.It Fl X
Used with the
.Fl F
recovery option. Determines whether extreme
measures to find a valid txg should take place. This allows the pool to
be rolled back to a txg which is no longer guaranteed to be consistent.
Pools imported at an inconsistent txg may contain uncorrectable
checksum errors. For more details about pool recovery mode, see the
.Fl F
option, above. WARNING: This option can be extremely hazardous to the
health of your pool and should only be used as a last resort.
.It Fl T
Specify the txg to use for rollback. Implies
.Fl FX .
For more details
about pool recovery mode, see the
.Fl X
option, above. WARNING: This option can be extremely hazardous to the
health of your pool and should only be used as a last resort.
.El
.It Xo
.Nm
.Cm import
.Op Fl Dflm
.Op Fl F Oo Fl n Oc Oo Fl t Oc Oo Fl T Oc Oo Fl X Oc
.Op Fl c Ar cachefile Ns | Ns Fl d Ar dir Ns | Ns device
.Op Fl o Ar mntopts
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Op Fl R Ar root
.Op Fl s
.Ar pool Ns | Ns Ar id
.Op Ar newpool
.Xc
Imports a specific pool.
A pool can be identified by its name or the numeric identifier.
If
.Ar newpool
is specified, the pool is imported using the name
.Ar newpool .
Otherwise, it is imported with the same name as its exported name.
.Pp
If a device is removed from a system without running
.Nm zpool Cm export
first, the device appears as potentially active.
It cannot be determined if this was a failed export, or whether the device is
really in use from another host.
To import a pool in this state, the
.Fl f
option is required.
.Bl -tag -width Ds
.It Fl c Ar cachefile
Reads configuration from the given
.Ar cachefile
that was created with the
.Sy cachefile
pool property.
This
.Ar cachefile
is used instead of searching for devices.
.It Fl d Ar dir Ns | Ns Ar device
Uses
.Ar device
or searches for devices or files in
.Ar dir .
The
.Fl d
option can be specified multiple times.
This option is incompatible with the
.Fl c
option.
.It Fl D
Imports a destroyed pool.
The
.Fl f
option is also required.
.It Fl f
Forces import, even if the pool appears to be potentially active.
.It Fl F
Recovery mode for a non-importable pool.
Attempt to return the pool to an importable state by discarding the last few
transactions.
Not all damaged pools can be recovered by using this option.
If successful, the data from the discarded transactions is irretrievably lost.
This option is ignored if the pool is importable or already imported.
.It Fl l
Indicates that this command will request encryption keys for all encrypted
datasets it attempts to mount as it is bringing the pool online. Note that if
any datasets have a
.Sy keylocation
of
.Sy prompt
this command will block waiting for the keys to be entered. Without this flag
encrypted datasets will be left unavailable until the keys are loaded.
.It Fl m
Allows a pool to import when there is a missing log device.
Recent transactions can be lost because the log device will be discarded.
.It Fl n
Used with the
.Fl F
recovery option.
Determines whether a non-importable pool can be made importable again, but does
not actually perform the pool recovery.
For more details about pool recovery mode, see the
.Fl F
option, above.
.It Fl o Ar mntopts
Comma-separated list of mount options to use when mounting datasets within the
pool.
See
.Xr zfs 8
for a description of dataset properties and mount options.
.It Fl o Ar property Ns = Ns Ar value
Sets the specified property on the imported pool.
See the
.Sx Properties
section for more information on the available pool properties.
.It Fl R Ar root
Sets the
.Sy cachefile
property to
.Sy none
and the
.Sy altroot
property to
.Ar root .
.It Fl s
Scan using the default search path; the libblkid cache will not be
consulted. A custom search path may be specified by setting the
ZPOOL_IMPORT_PATH environment variable.
.It Fl X
Used with the
.Fl F
recovery option. Determines whether extreme
measures to find a valid txg should take place. This allows the pool to
be rolled back to a txg which is no longer guaranteed to be consistent.
Pools imported at an inconsistent txg may contain uncorrectable
checksum errors. For more details about pool recovery mode, see the
.Fl F
option, above. WARNING: This option can be extremely hazardous to the
health of your pool and should only be used as a last resort.
.It Fl T
Specify the txg to use for rollback. Implies
.Fl FX .
For more details
about pool recovery mode, see the
.Fl X
option, above. WARNING: This option can be extremely hazardous to the
health of your pool and should only be used as a last resort.
.It Fl t
Used with
.Sy newpool .
Specifies that
.Sy newpool
is temporary. Temporary pool names last until export. Ensures that
the original pool name will be used in all label updates and therefore
is retained upon export.
Will also set
.Fl o Sy cachefile Ns = Ns Sy none
when not explicitly specified.
.El
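.Pp
For example, to import an exported pool under a new name, searching a
specific directory (paths and names are illustrative):
.Bd -literal
# zpool import -d /dev/disk/by-id tank newtank
.Ed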
.It Xo
.Nm
.Cm iostat
.Op Oo Oo Fl c Ar SCRIPT Oc Oo Fl lq Oc Oc Ns | Ns Fl rw
.Op Fl T Sy u Ns | Ns Sy d
.Op Fl ghHLpPvy
.Oo Oo Ar pool Ns ... Oc Ns | Ns Oo Ar pool vdev Ns ... Oc Ns | Ns Oo Ar vdev Ns ... Oc Oc
.Op Ar interval Op Ar count
.Xc
Displays I/O statistics for the given pools/vdevs. You can pass in a
list of pools, a pool and list of vdevs in that pool, or a list of any
vdevs from any pool. If no items are specified, statistics for every
pool in the system are shown.
When given an
.Ar interval ,
the statistics are printed every
.Ar interval
seconds until ^C is pressed. If
.Ar count
is specified, the command exits after
.Ar count
reports are printed. The first report printed is always
the statistics since boot regardless of whether
.Ar interval
and
.Ar count
are passed. However, this behavior can be suppressed with the
.Fl y
flag. Also note that the units of
.Sy K ,
.Sy M ,
.Sy G ...
that are printed in the report are in base 1024. To get the raw
values, use the
.Fl p
flag.
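.Pp
For example, to print per-vdev statistics for a single pool every five
seconds (pool name is illustrative):
.Bd -literal
# zpool iostat -v tank 5
.Ed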
1581 .Bl -tag -width Ds
1582 .It Fl c Op Ar SCRIPT1 Ns Oo , Ns Ar SCRIPT2 Oc Ns ...
1583 Run a script (or scripts) on each vdev and include the output as a new column
1584 in the
1585 .Nm zpool Cm iostat
1586 output. Users can run any script found in their
1587 .Pa ~/.zpool.d
1588 directory or from the system
1589 .Pa /etc/zfs/zpool.d
1590 directory. Script names containing the slash (/) character are not allowed.
1591 The default search path can be overridden by setting the
1592 ZPOOL_SCRIPTS_PATH environment variable. A privileged user can run
1593 .Fl c
1594 if they have the ZPOOL_SCRIPTS_AS_ROOT
1595 environment variable set. If a script requires the use of a privileged
1596 command, like
1597 .Xr smartctl 8 ,
1598 then it's recommended you allow the user access to it in
1599 .Pa /etc/sudoers
1600 or add the user to the
1601 .Pa /etc/sudoers.d/zfs
1602 file.
1603 .Pp
1604 If
1605 .Fl c
1606 is passed without a script name, it prints a list of all scripts.
1607 .Fl c
1608 also sets verbose mode
1609 .No \&( Ns Fl v Ns No \&).
1610 .Pp
1611 Script output should be in the form of "name=value". The column name is
1612 set to "name" and the value is set to "value". Multiple lines can be
1613 used to output multiple columns. The first line of output not in the
1614 "name=value" format is displayed without a column title, and no more
1615 output after that is displayed. This can be useful for printing error
1616 messages. Blank or NULL values are printed as a '-' to make output
1617 awk-able.
1618 .Pp
1619 The following environment variables are set before running each script:
1620 .Bl -tag -width "VDEV_PATH"
1621 .It Sy VDEV_PATH
1622 Full path to the vdev
1623 .El
1624 .Bl -tag -width "VDEV_UPATH"
1625 .It Sy VDEV_UPATH
1626 Underlying path to the vdev (/dev/sd*). For use with device mapper,
1627 multipath, or partitioned vdevs.
1628 .El
1629 .Bl -tag -width "VDEV_ENC_SYSFS_PATH"
1630 .It Sy VDEV_ENC_SYSFS_PATH
1631 The sysfs path to the enclosure for the vdev (if any).
1632 .El
1633 .It Fl T Sy u Ns | Ns Sy d
1634 Display a time stamp.
1635 Specify
1636 .Sy u
1637 for a printed representation of the internal representation of time.
1638 See
1639 .Xr time 2 .
1640 Specify
1641 .Sy d
1642 for standard date format.
1643 See
1644 .Xr date 1 .
1645 .It Fl g
1646 Display vdev GUIDs instead of the normal device names. These GUIDs
1647 can be used in place of device names for the zpool
1648 detach/offline/remove/replace commands.
1649 .It Fl H
1650 Scripted mode. Do not display headers, and separate fields by a
1651 single tab instead of arbitrary space.
1652 .It Fl L
1653 Display real paths for vdevs resolving all symbolic links. This can
1654 be used to look up the current block device name regardless of the
1655 .Pa /dev/disk/
1656 path used to open it.
1657 .It Fl p
1658 Display numbers in parsable (exact) values. Time values are in
1659 nanoseconds.
1660 .It Fl P
1661 Display full paths for vdevs instead of only the last component of
1662 the path. This can be used in conjunction with the
1663 .Fl L
1664 flag.
.It Fl r
Print request size histograms for the leaf ZIOs.
This includes histograms of individual ZIOs
.Pq Ar ind
and aggregate ZIOs
.Pq Ar agg .
These stats can be useful for seeing how well the ZFS IO aggregator is
working.
Do not confuse these request size stats with the block layer requests; ZIOs
may be broken up before being sent to the block device.
.It Fl v
Verbose statistics.
Reports usage statistics for individual vdevs within the pool, in addition to
the pool-wide statistics.
1678 .It Fl y
1679 Omit statistics since boot.
1680 Normally the first line of output reports the statistics since boot.
1681 This option suppresses that first line of output.
1682 .It Fl w
1683 Display latency histograms:
1684 .Pp
1685 .Ar total_wait :
1686 Total IO time (queuing + disk IO time).
1687 .Ar disk_wait :
1688 Disk IO time (time reading/writing the disk).
1689 .Ar syncq_wait :
1690 Amount of time IO spent in synchronous priority queues. Does not include
1691 disk time.
1692 .Ar asyncq_wait :
1693 Amount of time IO spent in asynchronous priority queues. Does not include
1694 disk time.
1695 .Ar scrub :
1696 Amount of time IO spent in scrub queue. Does not include disk time.
1697 .It Fl l
1698 Include average latency statistics:
1699 .Pp
1700 .Ar total_wait :
1701 Average total IO time (queuing + disk IO time).
1702 .Ar disk_wait :
1703 Average disk IO time (time reading/writing the disk).
1704 .Ar syncq_wait :
1705 Average amount of time IO spent in synchronous priority queues. Does
1706 not include disk time.
1707 .Ar asyncq_wait :
1708 Average amount of time IO spent in asynchronous priority queues.
1709 Does not include disk time.
1710 .Ar scrub :
1711 Average queuing time in scrub queue. Does not include disk time.
1712 .It Fl q
Include active queue statistics.
Each priority queue has both pending
.Pq Ar pend
and active
.Pq Ar activ
IOs.
Pending IOs are waiting to
1719 be issued to the disk, and active IOs have been issued to disk and are
1720 waiting for completion. These stats are broken out by priority queue:
1721 .Pp
1722 .Ar syncq_read/write :
1723 Current number of entries in synchronous priority
1724 queues.
1725 .Ar asyncq_read/write :
1726 Current number of entries in asynchronous priority queues.
1727 .Ar scrubq_read :
1728 Current number of entries in scrub queue.
1729 .Pp
1730 All queue statistics are instantaneous measurements of the number of
1731 entries in the queues. If you specify an interval, the measurements
1732 will be sampled from the end of the interval.
1733 .El
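.Pp
For example, the following would print parsable per-vdev statistics for a
hypothetical pool
.Em tank
every 5 seconds, 10 times, omitting the since-boot report:
.Bd -literal
# zpool iostat -ypv tank 5 10
.Ed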
1734 .It Xo
1735 .Nm
1736 .Cm labelclear
1737 .Op Fl f
1738 .Ar device
1739 .Xc
1740 Removes ZFS label information from the specified
1741 .Ar device .
1742 The
1743 .Ar device
1744 must not be part of an active pool configuration.
1745 .Bl -tag -width Ds
1746 .It Fl f
1747 Treat exported or foreign devices as inactive.
1748 .El
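.Pp
For example, the following would clear the stale label from a disk that was
once part of an exported pool (the device name is illustrative):
.Bd -literal
# zpool labelclear -f /dev/sdc
.Ed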
1749 .It Xo
1750 .Nm
1751 .Cm list
1752 .Op Fl HgLpPv
1753 .Op Fl o Ar property Ns Oo , Ns Ar property Oc Ns ...
1754 .Op Fl T Sy u Ns | Ns Sy d
1755 .Oo Ar pool Oc Ns ...
1756 .Op Ar interval Op Ar count
1757 .Xc
1758 Lists the given pools along with a health status and space usage.
1759 If no
1760 .Ar pool Ns s
1761 are specified, all pools in the system are listed.
1762 When given an
1763 .Ar interval ,
1764 the information is printed every
1765 .Ar interval
1766 seconds until ^C is pressed.
1767 If
1768 .Ar count
1769 is specified, the command exits after
1770 .Ar count
1771 reports are printed.
1772 .Bl -tag -width Ds
1773 .It Fl g
1774 Display vdev GUIDs instead of the normal device names. These GUIDs
1775 can be used in place of device names for the zpool
1776 detach/offline/remove/replace commands.
1777 .It Fl H
1778 Scripted mode.
1779 Do not display headers, and separate fields by a single tab instead of arbitrary
1780 space.
1781 .It Fl o Ar property
1782 Comma-separated list of properties to display.
1783 See the
1784 .Sx Properties
1785 section for a list of valid properties.
1786 The default list is
1787 .Cm name , size , allocated , free , expandsize , fragmentation , capacity ,
1788 .Cm dedupratio , health , altroot .
.It Fl L
Display real paths for vdevs resolving all symbolic links.
This can be used to look up the current block device name regardless of the
.Pa /dev/disk/
path used to open it.
1793 .It Fl p
1794 Display numbers in parsable
1795 .Pq exact
1796 values.
1797 .It Fl P
1798 Display full paths for vdevs instead of only the last component of
1799 the path. This can be used in conjunction with the
1800 .Fl L
1801 flag.
1802 .It Fl T Sy u Ns | Ns Sy d
1803 Display a time stamp.
Specify
.Sy u
for a printed representation of the internal representation of time.
See
.Xr time 2 .
Specify
.Sy d
1811 for standard date format.
1812 See
1813 .Xr date 1 .
.It Fl v
Verbose statistics.
Reports usage statistics for individual vdevs within the pool, in addition to
the pool-wide statistics.
1818 .El
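.Pp
For example, the following would list only the name, size, and capacity of a
hypothetical pool
.Em tank
in scripted mode:
.Bd -literal
# zpool list -H -o name,size,capacity tank
.Ed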
1819 .It Xo
1820 .Nm
1821 .Cm offline
1822 .Op Fl f
1823 .Op Fl t
1824 .Ar pool Ar device Ns ...
1825 .Xc
1826 Takes the specified physical device offline.
1827 While the
1828 .Ar device
1829 is offline, no attempt is made to read or write to the device.
1830 This command is not applicable to spares.
1831 .Bl -tag -width Ds
1832 .It Fl f
1833 Force fault. Instead of offlining the disk, put it into a faulted
1834 state. The fault will persist across imports unless the
1835 .Fl t
1836 flag was specified.
1837 .It Fl t
1838 Temporary.
1839 Upon reboot, the specified physical device reverts to its previous state.
1840 .El
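.Pp
For example, the following would temporarily take the disk
.Pa sda
offline in a hypothetical pool
.Em tank :
.Bd -literal
# zpool offline -t tank sda
.Ed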
1841 .It Xo
1842 .Nm
1843 .Cm online
1844 .Op Fl e
1845 .Ar pool Ar device Ns ...
1846 .Xc
1847 Brings the specified physical device online.
1848 This command is not applicable to spares.
1849 .Bl -tag -width Ds
1850 .It Fl e
1851 Expand the device to use all available space.
1852 If the device is part of a mirror or raidz then all devices must be expanded
1853 before the new space will become available to the pool.
1854 .El
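.Pp
For example, the following would bring the disk
.Pa sda
back online in a hypothetical pool
.Em tank
and expand it to use all available space:
.Bd -literal
# zpool online -e tank sda
.Ed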
1855 .It Xo
1856 .Nm
1857 .Cm reguid
1858 .Ar pool
1859 .Xc
1860 Generates a new unique identifier for the pool.
1861 You must ensure that all devices in this pool are online and healthy before
1862 performing this action.
1863 .It Xo
1864 .Nm
1865 .Cm reopen
1866 .Op Fl n
1867 .Ar pool
1868 .Xc
1869 Reopen all the vdevs associated with the pool.
1870 .Bl -tag -width Ds
1871 .It Fl n
1872 Do not restart an in-progress scrub operation. This is not recommended and can
1873 result in partially resilvered devices unless a second scrub is performed.
1874 .El
1875 .It Xo
1876 .Nm
1877 .Cm remove
1878 .Op Fl np
1879 .Ar pool Ar device Ns ...
1880 .Xc
1881 Removes the specified device from the pool.
This command currently only supports removing hot spares, cache devices, log
devices, and mirrored top-level vdevs (mirrors of leaf devices); raidz vdevs
cannot be removed.
.Pp
1885 Removing a top-level vdev reduces the total amount of space in the storage pool.
1886 The specified device will be evacuated by copying all allocated space from it to
1887 the other devices in the pool.
1888 In this case, the
1889 .Nm zpool Cm remove
1890 command initiates the removal and returns, while the evacuation continues in
1891 the background.
1892 The removal progress can be monitored with
.Nm zpool Cm status .
This feature must be enabled to be used; see
.Xr zpool-features 5 .
1896 .Pp
A mirrored top-level device (log or data) can be removed by specifying its
top-level mirror.
Devices that are part of a mirrored configuration can instead be removed
using the
.Nm zpool Cm detach
command.
1903 .Bl -tag -width Ds
1904 .It Fl n
1905 Do not actually perform the removal ("no-op").
1906 Instead, print the estimated amount of memory that will be used by the
1907 mapping table after the removal completes.
1908 This is nonzero only for top-level vdevs.
1911 .It Fl p
1912 Used in conjunction with the
1913 .Fl n
1914 flag, displays numbers as parsable (exact) values.
1915 .El
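.Pp
For example, the following would print the estimated mapping-table memory
cost of removing the hypothetical top-level vdev
.Sy mirror-1
from pool
.Em tank ,
without performing the removal:
.Bd -literal
# zpool remove -np tank mirror-1
.Ed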
1916 .It Xo
1917 .Nm
1918 .Cm remove
1919 .Fl s
1920 .Ar pool
1921 .Xc
1922 Stops and cancels an in-progress removal of a top-level vdev.
1923 .It Xo
1924 .Nm
1925 .Cm replace
1926 .Op Fl f
1927 .Op Fl o Ar property Ns = Ns Ar value
.Ar pool Ar old_device Op Ar new_device
1929 .Xc
1930 Replaces
1931 .Ar old_device
1932 with
1933 .Ar new_device .
1934 This is equivalent to attaching
1935 .Ar new_device ,
1936 waiting for it to resilver, and then detaching
1937 .Ar old_device .
1938 .Pp
1939 The size of
1940 .Ar new_device
1941 must be greater than or equal to the minimum size of all the devices in a mirror
1942 or raidz configuration.
1943 .Pp
1944 .Ar new_device
1945 is required if the pool is not redundant.
1946 If
1947 .Ar new_device
1948 is not specified, it defaults to
1949 .Ar old_device .
1950 This form of replacement is useful after an existing disk has failed and has
1951 been physically replaced.
1952 In this case, the new disk may have the same
1953 .Pa /dev
1954 path as the old device, even though it is actually a different disk.
1955 ZFS recognizes this.
1956 .Bl -tag -width Ds
1957 .It Fl f
1958 Forces use of
1959 .Ar new_device ,
even if it appears to be in use.
1961 Not all devices can be overridden in this manner.
1962 .It Fl o Ar property Ns = Ns Ar value
Sets the given pool property.
See the
1964 .Sx Properties
1965 section for a list of valid properties that can be set.
1966 The only property supported at the moment is
1967 .Sy ashift .
1968 .El
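.Pp
For example, the following would replace the failed disk
.Pa sda
with
.Pa sdd
in a hypothetical pool
.Em tank ,
forcing an ashift of 12 on the new device (names and values are illustrative):
.Bd -literal
# zpool replace -o ashift=12 tank sda sdd
.Ed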
1969 .It Xo
1970 .Nm
1971 .Cm scrub
1972 .Op Fl s | Fl p
1973 .Ar pool Ns ...
1974 .Xc
1975 Begins a scrub or resumes a paused scrub.
1976 The scrub examines all data in the specified pools to verify that it checksums
1977 correctly.
1978 For replicated
1979 .Pq mirror or raidz
1980 devices, ZFS automatically repairs any damage discovered during the scrub.
1981 The
1982 .Nm zpool Cm status
1983 command reports the progress of the scrub and summarizes the results of the
1984 scrub upon completion.
1985 .Pp
1986 Scrubbing and resilvering are very similar operations.
1987 The difference is that resilvering only examines data that ZFS knows to be out
1988 of date
1989 .Po
1990 for example, when attaching a new device to a mirror or replacing an existing
1991 device
1992 .Pc ,
1993 whereas scrubbing examines all data to discover silent errors due to hardware
1994 faults or disk failure.
1995 .Pp
1996 Because scrubbing and resilvering are I/O-intensive operations, ZFS only allows
1997 one at a time.
If a scrub is paused, invoking
.Nm zpool Cm scrub
again resumes it.
2001 If a resilver is in progress, ZFS does not allow a scrub to be started until the
2002 resilver completes.
2003 .Bl -tag -width Ds
2004 .It Fl s
2005 Stop scrubbing.
2008 .It Fl p
2009 Pause scrubbing.
Scrub pause state and progress are periodically synced to disk.
If the system is restarted or the pool is exported during a paused scrub,
the scrub remains paused after import until it is resumed.
Once resumed, the scrub picks up from the place where it was last
checkpointed to disk.
2015 To resume a paused scrub issue
2016 .Nm zpool Cm scrub
2017 again.
2018 .El
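.Pp
For example, the following would start a scrub of a hypothetical pool
.Em tank ,
pause it, and later resume it:
.Bd -literal
# zpool scrub tank
# zpool scrub -p tank
# zpool scrub tank
.Ed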
2019 .It Xo
2020 .Nm
2021 .Cm set
2022 .Ar property Ns = Ns Ar value
2023 .Ar pool
2024 .Xc
2025 Sets the given property on the specified pool.
2026 See the
2027 .Sx Properties
2028 section for more information on what properties can be set and acceptable
2029 values.
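.Pp
For example, the following would enable automatic expansion on a hypothetical
pool
.Em tank :
.Bd -literal
# zpool set autoexpand=on tank
.Ed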
2030 .It Xo
2031 .Nm
2032 .Cm split
2033 .Op Fl gLlnP
2034 .Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
2035 .Op Fl R Ar root
2036 .Ar pool newpool
2037 .Op Ar device ...
2038 .Xc
2039 Splits devices off
2040 .Ar pool
2041 creating
2042 .Ar newpool .
2043 All vdevs in
2044 .Ar pool
2045 must be mirrors and the pool must not be in the process of resilvering.
2046 At the time of the split,
2047 .Ar newpool
2048 will be a replica of
2049 .Ar pool .
2050 By default, the
2051 last device in each mirror is split from
2052 .Ar pool
2053 to create
2054 .Ar newpool .
2055 .Pp
The optional device specification causes the specified device(s) to be
included in
.Ar newpool ;
for any mirrors left unspecified, the last device in each is used,
as it would be by default.
2061 .Bl -tag -width Ds
2062 .It Fl g
2063 Display vdev GUIDs instead of the normal device names. These GUIDs
2064 can be used in place of device names for the zpool
2065 detach/offline/remove/replace commands.
2066 .It Fl L
2067 Display real paths for vdevs resolving all symbolic links. This can
2068 be used to look up the current block device name regardless of the
2069 .Pa /dev/disk/
2070 path used to open it.
2071 .It Fl l
2072 Indicates that this command will request encryption keys for all encrypted
2073 datasets it attempts to mount as it is bringing the new pool online. Note that
2074 if any datasets have a
2075 .Sy keylocation
2076 of
2077 .Sy prompt
this command will block waiting for the keys to be entered.
Without this flag, encrypted datasets will be left unavailable until the keys
are loaded.
2080 .It Fl n
Do a dry run; do not actually perform the split.
2082 Print out the expected configuration of
2083 .Ar newpool .
2084 .It Fl P
2085 Display full paths for vdevs instead of only the last component of
2086 the path. This can be used in conjunction with the
2087 .Fl L
2088 flag.
2089 .It Fl o Ar property Ns = Ns Ar value
2090 Sets the specified property for
2091 .Ar newpool .
2092 See the
2093 .Sx Properties
2094 section for more information on the available pool properties.
2095 .It Fl R Ar root
2096 Set
2097 .Sy altroot
2098 for
2099 .Ar newpool
2100 to
2101 .Ar root
2102 and automatically import it.
2103 .El
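.Pp
For example, the following would preview and then perform a split of a
hypothetical pool
.Em tank
into a new pool
.Em tank2 :
.Bd -literal
# zpool split -n tank tank2
# zpool split tank tank2
.Ed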
2104 .It Xo
2105 .Nm
2106 .Cm status
2107 .Op Fl c Op Ar SCRIPT1 Ns Oo , Ns Ar SCRIPT2 Oc Ns ...
2108 .Op Fl gLPvxD
2109 .Op Fl T Sy u Ns | Ns Sy d
2110 .Oo Ar pool Oc Ns ...
2111 .Op Ar interval Op Ar count
2112 .Xc
2113 Displays the detailed health status for the given pools.
2114 If no
2115 .Ar pool
2116 is specified, then the status of each pool in the system is displayed.
2117 For more information on pool and device health, see the
2118 .Sx Device Failure and Recovery
2119 section.
2120 .Pp
2121 If a scrub or resilver is in progress, this command reports the percentage done
2122 and the estimated time to completion.
2123 Both of these are only approximate, because the amount of data in the pool and
2124 the other workloads on the system can change.
2125 .Bl -tag -width Ds
2126 .It Fl c Op Ar SCRIPT1 Ns Oo , Ns Ar SCRIPT2 Oc Ns ...
2127 Run a script (or scripts) on each vdev and include the output as a new column
2128 in the
2129 .Nm zpool Cm status
2130 output. See the
2131 .Fl c
2132 option of
2133 .Nm zpool Cm iostat
2134 for complete details.
2135 .It Fl g
2136 Display vdev GUIDs instead of the normal device names. These GUIDs
2137 can be used in place of device names for the zpool
2138 detach/offline/remove/replace commands.
2139 .It Fl L
2140 Display real paths for vdevs resolving all symbolic links. This can
2141 be used to look up the current block device name regardless of the
2142 .Pa /dev/disk/
2143 path used to open it.
2144 .It Fl P
2145 Display full paths for vdevs instead of only the last component of
2146 the path. This can be used in conjunction with the
2147 .Fl L
2148 flag.
2149 .It Fl D
2150 Display a histogram of deduplication statistics, showing the allocated
2151 .Pq physically present on disk
2152 and referenced
2153 .Pq logically referenced in the pool
2154 block counts and sizes by reference count.
2155 .It Fl T Sy u Ns | Ns Sy d
2156 Display a time stamp.
Specify
.Sy u
for a printed representation of the internal representation of time.
See
.Xr time 2 .
Specify
.Sy d
2164 for standard date format.
2165 See
2166 .Xr date 1 .
2167 .It Fl v
2168 Displays verbose data error information, printing out a complete list of all
2169 data errors since the last complete pool scrub.
2170 .It Fl x
2171 Only display status for pools that are exhibiting errors or are otherwise
2172 unavailable.
2173 Warnings about pools not using the latest on-disk format will not be included.
2174 .El
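.Pp
For example, the following would display verbose error information only for
pools that are exhibiting errors or are otherwise unavailable:
.Bd -literal
# zpool status -vx
.Ed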
2175 .It Xo
2176 .Nm
2177 .Cm sync
2178 .Op Ar pool ...
2179 .Xc
This command forces all in-core dirty data to be written to the primary
pool storage and not the ZIL.
It will also update administrative information including quota reporting.
Without arguments,
.Nm zpool Cm sync
will sync all pools on the system.
Otherwise, it will sync only the specified pool(s).
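.Pp
For example, the following would force dirty data for a hypothetical pool
.Em tank
out to stable pool storage:
.Bd -literal
# zpool sync tank
.Ed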
2186 .It Xo
2187 .Nm
2188 .Cm upgrade
2189 .Xc
2190 Displays pools which do not have all supported features enabled and pools
2191 formatted using a legacy ZFS version number.
2192 These pools can continue to be used, but some features may not be available.
2193 Use
2194 .Nm zpool Cm upgrade Fl a
2195 to enable all features on all pools.
2196 .It Xo
2197 .Nm
2198 .Cm upgrade
2199 .Fl v
2200 .Xc
2201 Displays legacy ZFS versions supported by the current software.
2202 See
2203 .Xr zpool-features 5
for a description of the feature flags supported by the current software.
2205 .It Xo
2206 .Nm
2207 .Cm upgrade
2208 .Op Fl V Ar version
2209 .Fl a Ns | Ns Ar pool Ns ...
2210 .Xc
2211 Enables all supported features on the given pool.
2212 Once this is done, the pool will no longer be accessible on systems that do not
2213 support feature flags.
2214 See
.Xr zpool-features 5
2216 for details on compatibility with systems that support feature flags, but do not
2217 support all features enabled on the pool.
2218 .Bl -tag -width Ds
2219 .It Fl a
2220 Enables all supported features on all pools.
2221 .It Fl V Ar version
2222 Upgrade to the specified legacy version.
2223 If the
2224 .Fl V
2225 flag is specified, no features will be enabled on the pool.
2226 This option can only be used to increase the version number up to the last
2227 supported legacy version number.
2228 .El
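.Pp
For example, the following would enable all supported features on a
hypothetical pool
.Em tank
alone:
.Bd -literal
# zpool upgrade tank
.Ed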
2229 .El
2230 .Sh EXIT STATUS
2231 The following exit values are returned:
2232 .Bl -tag -width Ds
2233 .It Sy 0
2234 Successful completion.
2235 .It Sy 1
2236 An error occurred.
2237 .It Sy 2
2238 Invalid command line options were specified.
2239 .El
2240 .Sh EXAMPLES
2241 .Bl -tag -width Ds
2242 .It Sy Example 1 No Creating a RAID-Z Storage Pool
2243 The following command creates a pool with a single raidz root vdev that
2244 consists of six disks.
2245 .Bd -literal
2246 # zpool create tank raidz sda sdb sdc sdd sde sdf
2247 .Ed
2248 .It Sy Example 2 No Creating a Mirrored Storage Pool
2249 The following command creates a pool with two mirrors, where each mirror
2250 contains two disks.
2251 .Bd -literal
2252 # zpool create tank mirror sda sdb mirror sdc sdd
2253 .Ed
2254 .It Sy Example 3 No Creating a ZFS Storage Pool by Using Partitions
2255 The following command creates an unmirrored pool using two disk partitions.
2256 .Bd -literal
2257 # zpool create tank sda1 sdb2
2258 .Ed
2259 .It Sy Example 4 No Creating a ZFS Storage Pool by Using Files
2260 The following command creates an unmirrored pool using files.
2261 While not recommended, a pool based on files can be useful for experimental
2262 purposes.
2263 .Bd -literal
2264 # zpool create tank /path/to/file/a /path/to/file/b
2265 .Ed
2266 .It Sy Example 5 No Adding a Mirror to a ZFS Storage Pool
2267 The following command adds two mirrored disks to the pool
2268 .Em tank ,
2269 assuming the pool is already made up of two-way mirrors.
2270 The additional space is immediately available to any datasets within the pool.
2271 .Bd -literal
2272 # zpool add tank mirror sda sdb
2273 .Ed
2274 .It Sy Example 6 No Listing Available ZFS Storage Pools
2275 The following command lists all available pools on the system.
2276 In this case, the pool
2277 .Em zion
2278 is faulted due to a missing device.
2279 The results from this command are similar to the following:
2280 .Bd -literal
2281 # zpool list
NAME    SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
rpool  19.9G  8.43G  11.4G         -    33%    42%  1.00x  ONLINE  -
tank   61.5G  20.0G  41.5G         -    48%    32%  1.00x  ONLINE  -
zion       -      -      -         -      -      -      -  FAULTED -
2286 .Ed
2287 .It Sy Example 7 No Destroying a ZFS Storage Pool
2288 The following command destroys the pool
2289 .Em tank
2290 and any datasets contained within.
2291 .Bd -literal
2292 # zpool destroy -f tank
2293 .Ed
2294 .It Sy Example 8 No Exporting a ZFS Storage Pool
2295 The following command exports the devices in pool
2296 .Em tank
2297 so that they can be relocated or later imported.
2298 .Bd -literal
2299 # zpool export tank
2300 .Ed
2301 .It Sy Example 9 No Importing a ZFS Storage Pool
2302 The following command displays available pools, and then imports the pool
2303 .Em tank
2304 for use on the system.
2305 The results from this command are similar to the following:
2306 .Bd -literal
# zpool import
  pool: tank
    id: 15451357997522795478
 state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:

        tank        ONLINE
          mirror    ONLINE
            sda     ONLINE
            sdb     ONLINE

# zpool import tank
2320 .Ed
2321 .It Sy Example 10 No Upgrading All ZFS Storage Pools to the Current Version
2322 The following command upgrades all ZFS Storage pools to the current version of
2323 the software.
2324 .Bd -literal
2325 # zpool upgrade -a
2326 This system is currently running ZFS version 2.
2327 .Ed
2328 .It Sy Example 11 No Managing Hot Spares
2329 The following command creates a new pool with an available hot spare:
2330 .Bd -literal
2331 # zpool create tank mirror sda sdb spare sdc
2332 .Ed
2333 .Pp
2334 If one of the disks were to fail, the pool would be reduced to the degraded
2335 state.
2336 The failed device can be replaced using the following command:
2337 .Bd -literal
2338 # zpool replace tank sda sdd
2339 .Ed
2340 .Pp
2341 Once the data has been resilvered, the spare is automatically removed and is
2342 made available for use should another device fail.
2343 The hot spare can be permanently removed from the pool using the following
2344 command:
2345 .Bd -literal
2346 # zpool remove tank sdc
2347 .Ed
2348 .It Sy Example 12 No Creating a ZFS Pool with Mirrored Separate Intent Logs
The following command creates a ZFS storage pool consisting of two two-way
mirrors and mirrored log devices:
2351 .Bd -literal
2352 # zpool create pool mirror sda sdb mirror sdc sdd log mirror \\
2353 sde sdf
2354 .Ed
2355 .It Sy Example 13 No Adding Cache Devices to a ZFS Pool
2356 The following command adds two disks for use as cache devices to a ZFS storage
2357 pool:
2358 .Bd -literal
2359 # zpool add pool cache sdc sdd
2360 .Ed
2361 .Pp
2362 Once added, the cache devices gradually fill with content from main memory.
2363 Depending on the size of your cache devices, it could take over an hour for
2364 them to fill.
2365 Capacity and reads can be monitored using the
2366 .Cm iostat
2367 option as follows:
2368 .Bd -literal
2369 # zpool iostat -v pool 5
2370 .Ed
2371 .It Sy Example 14 No Removing a Mirrored top-level (Log or Data) Device
2372 The following commands remove the mirrored log device
2373 .Sy mirror-2
2374 and mirrored top-level data device
2375 .Sy mirror-1 .
2376 .Pp
2377 Given this configuration:
2378 .Bd -literal
  pool: tank
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            sda     ONLINE       0     0     0
            sdb     ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            sdc     ONLINE       0     0     0
            sdd     ONLINE       0     0     0
        logs
          mirror-2  ONLINE       0     0     0
            sde     ONLINE       0     0     0
            sdf     ONLINE       0     0     0
2396 .Ed
2397 .Pp
2398 The command to remove the mirrored log
2399 .Sy mirror-2
2400 is:
2401 .Bd -literal
2402 # zpool remove tank mirror-2
2403 .Ed
2404 .Pp
2405 The command to remove the mirrored data
2406 .Sy mirror-1
2407 is:
2408 .Bd -literal
2409 # zpool remove tank mirror-1
2410 .Ed
2411 .It Sy Example 15 No Displaying expanded space on a device
2412 The following command displays the detailed information for the pool
2413 .Em data .
This pool is composed of a single raidz vdev where one of its devices
increased its capacity by 10GB.
2416 In this example, the pool will not be able to utilize this extra capacity until
2417 all the devices under the raidz vdev have been expanded.
2418 .Bd -literal
# zpool list -v data
NAME      SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
data     23.9G  14.6G  9.30G         -    48%    61%  1.00x  ONLINE  -
  raidz1 23.9G  14.6G  9.30G         -    48%
    sda      -      -      -         -      -
    sdb      -      -      -       10G      -
    sdc      -      -      -         -      -
2426 .Ed
2427 .It Sy Example 16 No Adding output columns
2428 Additional columns can be added to the
2429 .Nm zpool Cm status
2430 and
2431 .Nm zpool Cm iostat
output with the
.Fl c
option.
2435 .Bd -literal
2436 # zpool status -c vendor,model,size
NAME        STATE     READ WRITE CKSUM  vendor   model         size
tank        ONLINE       0     0     0
  mirror-0  ONLINE       0     0     0
    U1      ONLINE       0     0     0  SEAGATE  ST8000NM0075  7.3T
    U10     ONLINE       0     0     0  SEAGATE  ST8000NM0075  7.3T
    U11     ONLINE       0     0     0  SEAGATE  ST8000NM0075  7.3T
    U12     ONLINE       0     0     0  SEAGATE  ST8000NM0075  7.3T
    U13     ONLINE       0     0     0  SEAGATE  ST8000NM0075  7.3T
    U14     ONLINE       0     0     0  SEAGATE  ST8000NM0075  7.3T

# zpool iostat -vc slaves
              capacity     operations     bandwidth
pool        alloc   free   read  write   read  write  slaves
----------  -----  -----  -----  -----  -----  -----  ---------
tank        20.4G  7.23T     26    152  20.7M  21.6M
  mirror    20.4G  7.23T     26    152  20.7M  21.6M
    U1          -      -      0     31  1.46K  20.6M  sdb sdff
    U10         -      -      0      1  3.77K  13.3K  sdas sdgw
    U11         -      -      0      1   288K  13.3K  sdat sdgx
    U12         -      -      0      1  78.4K  13.3K  sdau sdgy
    U13         -      -      0      1   128K  13.3K  sdav sdgz
    U14         -      -      0      1  63.2K  13.3K  sdfk sdg
2459 .Ed
2460 .El
2461 .Sh ENVIRONMENT VARIABLES
2462 .Bl -tag -width "ZFS_ABORT"
2463 .It Ev ZFS_ABORT
2464 Cause
2465 .Nm zpool
2466 to dump core on exit for the purposes of running
2467 .Sy ::findleaks .
2468 .El
2469 .Bl -tag -width "ZPOOL_IMPORT_PATH"
2470 .It Ev ZPOOL_IMPORT_PATH
The search path for devices or files to use with the pool.
This is a colon-separated list of directories in which
2472 .Nm zpool
2473 looks for device nodes and files.
2474 Similar to the
2475 .Fl d
2476 option in
2477 .Nm zpool import .
2478 .El
2479 .Bl -tag -width "ZPOOL_VDEV_NAME_GUID"
2480 .It Ev ZPOOL_VDEV_NAME_GUID
Cause
.Nm zpool
subcommands to output vdev GUIDs by default.
This behavior
2483 is identical to the
2484 .Nm zpool status -g
2485 command line option.
2486 .El
2487 .Bl -tag -width "ZPOOL_VDEV_NAME_FOLLOW_LINKS"
2488 .It Ev ZPOOL_VDEV_NAME_FOLLOW_LINKS
2489 Cause
2490 .Nm zpool
2491 subcommands to follow links for vdev names by default. This behavior is identical to the
2492 .Nm zpool status -L
2493 command line option.
2494 .El
2495 .Bl -tag -width "ZPOOL_VDEV_NAME_PATH"
2496 .It Ev ZPOOL_VDEV_NAME_PATH
2497 Cause
2498 .Nm zpool
2499 subcommands to output full vdev path names by default. This
2500 behavior is identical to the
.Nm zpool status -P
2502 command line option.
2503 .El
2504 .Bl -tag -width "ZFS_VDEV_DEVID_OPT_OUT"
2505 .It Ev ZFS_VDEV_DEVID_OPT_OUT
2506 Older ZFS on Linux implementations had issues when attempting to display pool
config VDEV names if a
.Sy devid
NVP value was present in the pool's config.
.Pp
For example, a pool that originated on an illumos platform would have a devid
value in the config and
.Nm zpool status
would fail when listing the config.
This would also be true for future Linux-based pools.
2516 .Pp
2517 A pool can be stripped of any
2518 .Sy devid
2519 values on import or prevented from adding
2520 them on
2521 .Nm zpool create
2522 or
2523 .Nm zpool add
2524 by setting
2525 .Sy ZFS_VDEV_DEVID_OPT_OUT .
2526 .El
2527 .Bl -tag -width "ZPOOL_SCRIPTS_AS_ROOT"
2528 .It Ev ZPOOL_SCRIPTS_AS_ROOT
Allow a privileged user to run
.Nm zpool status/iostat
2531 with the
2532 .Fl c
2533 option. Normally, only unprivileged users are allowed to run
2534 .Fl c .
2535 .El
2536 .Bl -tag -width "ZPOOL_SCRIPTS_PATH"
2537 .It Ev ZPOOL_SCRIPTS_PATH
2538 The search path for scripts when running
2539 .Nm zpool status/iostat
2540 with the
2541 .Fl c
2542 option. This is a colon-separated list of directories and overrides the default
2543 .Pa ~/.zpool.d
2544 and
2545 .Pa /etc/zfs/zpool.d
2546 search paths.
2547 .El
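.Pp
For example, the following would look for
.Fl c
scripts only in a custom directory (the path and script name are
illustrative):
.Bd -literal
# ZPOOL_SCRIPTS_PATH=/opt/zpool-scripts zpool iostat -c mystat
.Ed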
2548 .Bl -tag -width "ZPOOL_SCRIPTS_ENABLED"
2549 .It Ev ZPOOL_SCRIPTS_ENABLED
2550 Allow a user to run
2551 .Nm zpool status/iostat
2552 with the
2553 .Fl c
2554 option. If
2555 .Sy ZPOOL_SCRIPTS_ENABLED
2556 is not set, it is assumed that the user is allowed to run
2557 .Nm zpool status/iostat -c .
2558 .El
2559 .Sh INTERFACE STABILITY
2560 .Sy Evolving
2561 .Sh SEE ALSO
2562 .Xr zfs-events 5 ,
2563 .Xr zfs-module-parameters 5 ,
2564 .Xr zpool-features 5 ,
2565 .Xr zed 8 ,
2566 .Xr zfs 8